- Almost half of all AI assistant responses about the news contain significant errors, a large international study has found
- The factual, sourcing and contextual issues were evident across 14 languages and 18 countries
- Gemini fared worst, with twice as many significant problems as its competitors
When you ask an AI assistant about news and current events, you can expect a confident, authoritative answer. But according to a wide-ranging international study led by the BBC and coordinated by the European Broadcasting Union (EBU), almost half of the time those answers are wrong, misleading or simply made up (anyone who has dealt with the nonsense of Apple’s AI-written headlines can relate).
The study dug into how ChatGPT, Microsoft Copilot, Google Gemini and Perplexity handle news queries across 14 languages in 18 countries, analyzing over 3,000 individual responses from the AI tools. Professional journalists from 22 public media outlets rated each answer for accuracy, sourcing and how well it distinguished news from opinion.
The results were grim for anyone who relies on AI for news. The report found that 45% of all responses had at least one significant problem, 31% had sourcing issues and 20% were simply inaccurate. This isn’t just a matter of one or two embarrassing mistakes, like mistaking Belgium’s prime minister for the frontman of a Belgian pop group. The research found deep, structural problems with how these assistants process and deliver news, regardless of language, country or platform.
In some languages, the assistants hallucinated details outright. In others, they attributed claims to outlets that had never published anything close to what was being cited. Context was often lacking, too, with the assistants sometimes offering simplified or misleading overviews instead of the crucial nuance. In the worst cases, that can change the meaning of an entire news story.
Not all assistants were equally problematic. Gemini failed in a staggering 76% of its answers, mostly due to missing or poor sourcing.
Unlike a Google search, which lets users sift through a dozen sources, a chatbot’s answer often feels definitive. It reads with authority and clarity, giving the impression that it has been fact-checked and edited, when in reality it may be little more than a fuzzy collage of half-remembered summaries.
That’s part of the reason the stakes are so high, and why even partnerships like the one between OpenAI and The Washington Post can’t solve the problem completely.
AI news literacy
The problem is pressing, especially given how quickly AI assistants are becoming the preferred interface for news. The study cites the Reuters Institute’s 2025 Digital News Report, which estimates that 7% of all online news consumers now use an AI assistant to get their news, rising to 15% among those under 25. People are already asking AI to explain the world to them, and AI is getting the world wrong at an alarming rate.
If you’ve ever asked ChatGPT, Gemini, or Copilot to summarize a news event, you’ve probably seen one of these flawed responses in action. ChatGPT’s difficulty with searching for news is well documented at this point. But maybe you didn’t even notice, and that’s part of the problem: these tools are often wrong with such fluency that nothing feels like a red flag. This is why media literacy and ongoing scrutiny are essential.
To try to improve the situation, the EBU and its partners released a “News Integrity in AI Assistants Toolkit”, which acts as an AI literacy starter kit for developers and journalists alike. It outlines both what makes a good AI response and what kinds of errors users and media watchdogs should be looking for.
Even as companies like OpenAI and Google push ahead with faster, smarter versions of their assistants, reports like this show why transparency and accountability are so important. That doesn’t mean AI can’t be useful, even for curating the endless firehose of news. It does mean that, for now, it should come with a disclaimer. And even if it doesn’t, don’t assume the assistant knows best: check your sources and stick to the most reliable ones, like TechRadar.