Artificial intelligence assistants distorted or misrepresented news content in almost half of their responses, a new European Broadcasting Union (EBU) and BBC study revealed on Wednesday.
Study Covers 3,000 AI-Generated Answers
Researchers reviewed 3,000 news-related responses from major AI tools, including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity. They assessed factual accuracy, source attribution, and the ability to separate fact from opinion.
Findings Reveal Global Consistency Issues
Covering 14 languages, the study found widespread inconsistencies in AI-generated news content. The results raised alarms among media regulators and journalists already concerned about misinformation spread through generative AI platforms.
Transparency and Accountability Urged
The EBU and BBC emphasized the need for transparency in how AI assistants process and present news. They warned that unchecked AI use could blur lines between verified journalism and synthetic information.
High Error Rates Identified
According to the study, 45% of AI-generated responses contained at least one major issue, while 81% displayed some kind of problem. Researchers cited these findings as evidence that users face significant risks when relying on AI for news consumption.
Companies Respond to Findings
Reuters reached out to OpenAI, Microsoft, Google, and Perplexity for comment. Google said Gemini welcomes user feedback to help improve accuracy. OpenAI and Microsoft have acknowledged hallucinations as a known issue they continue to address. Perplexity claimed its "Deep Research" mode achieves 93.9% factual accuracy.
Sourcing Errors a Major Concern
A third of all responses showed serious sourcing issues, including missing or misleading attributions. Gemini recorded the highest rate, with sourcing errors in 72% of its answers, far above the other assistants, which all remained below 25%.
Accuracy Problems Across Platforms
About 20% of responses contained factual inaccuracies, such as outdated or incorrect information. For instance, Gemini incorrectly reported changes to a law on disposable vapes, while ChatGPT described Pope Francis as the current pope months after his death.
Study Involved 22 Media Organisations
The research included 22 public-service media outlets from 18 countries, including France, Germany, Spain, Ukraine, Britain, and the United States.
Public Trust at Stake
The EBU warned that as AI assistants increasingly replace traditional search engines, public trust in news may erode. “When people don’t know what to trust, they end up trusting nothing at all,” said EBU Media Director Jean Philip De Tender.
Young Users Rely Heavily on AI for News
According to the Reuters Institute’s Digital News Report 2025, about 7% of online news consumers and 15% of people under 25 use AI assistants for news updates.
Call for Stronger Regulation
The EBU-BBC report urged AI developers to improve accuracy and accept accountability for misleading or incorrect information in news responses.

