AI Chatbots Misfire: Alarming Study Unveils Distortion of News by Popular Assistants

Major International Study Finds AI Chatbots Frequently Misrepresent News

October 22, 2025 — Twenty-two public broadcasters worldwide, including Deutsche Welle, have tested four popular AI chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity AI. The study shows that these assistants routinely misrepresent news, distorting facts, blurring the line between fact and opinion, and attributing information to the wrong sources. Such errors erode public trust and undermine open debate.

AI Chatbots Often Fail to Deliver Accurate News

The researchers evaluated roughly 3,000 responses from the four assistants to clear, commonly asked news questions. Journalists from the BBC, NPR, and DW, among others, reviewed each response, checking whether the facts were accurate and the sources clearly and correctly attributed.

The study found that 45% of the responses contained at least one significant problem. In 31% of cases the sourcing was faulty, and 20% contained major factual errors. In DW's own sample, 53% of responses had significant issues, and 29% contained facts that did not match reality.

Examples of such errors include:
• Olaf Scholz was named as Germany’s chancellor even though Friedrich Merz had already taken office.
• Jens Stoltenberg was identified as NATO’s secretary general, although Mark Rutte had succeeded him in the role.

Widespread Problems Across Languages and Regions

The errors appeared across languages and regions. Jean Philip De Tender, Media Director of the European Broadcasting Union, said the failings are systemic and cross borders. He warned that they can erode public trust and, when people no longer know what to trust, discourage participation in democratic debate.

Recent audience research shows that 7% of news consumers use AI chatbots to get their news, and among those under 25 the figure rises to 15%. That reach makes inaccurate news a real concern.

Comparison to Earlier Studies Shows Little Change

The tests repeated the methodology of a February 2025 BBC study, which had likewise found widespread errors. Google Gemini, for example, showed sourcing problems in 72% of its answers, a rate comparable to the earlier findings.

Peter Archer, the BBC's programme director for generative AI, said the broadcaster sees promise in AI for news and wants to bring its benefits to audiences, but stressed that people must be able to trust what they read: despite modest progress in some tests, many problems remain.

Calls for Action from Governments and AI Developers

Media organizations are urging governments to enforce laws that keep news accurate and clearly attributed, and the EBU is calling on regulators to monitor AI digital services more strictly.

The EBU and its partners have also launched a campaign called "Facts In: Facts Out." Its premise is simple: if facts go into an AI system, facts should come out. The campaign calls on AI companies to take responsibility for the news their products reproduce.

A representative from OpenAI said the company helps users find clear summaries with proper attribution; the statement did not address the errors identified in the study.

Conclusion

AI chatbots now deliver news to a large and growing audience, but inaccurate or distorted answers can damage trust and civic life. The study underscores the need for ongoing scrutiny, stronger rules, and closer cooperation among news organizations, governments, and AI developers to keep news accurate and clearly sourced.


For more details, read the full study on the Deutsche Welle and European Broadcasting Union websites.