AI-powered chatbots, once seen as the future of fast and efficient news delivery, are now facing serious scrutiny after a BBC study found they frequently distort facts.
The investigation analyzed the accuracy of leading AI models—ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity—when summarizing news articles, uncovering significant errors in more than half of their responses.
This alarming discovery raises concerns about misinformation, AI ethics, and the broader implications for the media industry and society.
AI’s Battle With Truth: How Big Is the Problem?
The BBC study tasked these AI chatbots with summarizing 100 of its news articles, and the results were deeply concerning.
A staggering 51% of AI-generated responses contained major inaccuracies. Even more troubling, 19% of responses citing the BBC included incorrect numbers, dates, or facts, while 13% contained manipulated or completely fabricated quotes.
One standout error came from Google’s Gemini, which falsely claimed that the UK’s National Health Service (NHS) advises against vaping to quit smoking.
In reality, the NHS actively recommends vaping as a harm-reduction method for smokers trying to quit. In another striking case, ChatGPT stated in December 2024 that Ismail Haniyeh was still part of Hamas's leadership, despite his assassination in July 2024.
Gemini was the worst performer in the study, with 46% of its summaries flagged for major accuracy concerns. These findings highlight a significant problem—AI chatbots, increasingly relied upon for information, are misleading users at an alarming rate.
From News to Misinformation: The Dangers of AI Distortions
The impact of AI-generated misinformation is far-reaching. As more people turn to AI chatbots for quick news updates, false or misleading information can shape public perceptions, fuel conspiracy theories, and even influence elections or policy decisions.
Unlike human journalists, AI lacks critical thinking and editorial judgment, meaning errors can spread unchecked.
Deborah Turness, CEO of BBC News and Current Affairs, warned about the real-world consequences.
“We live in troubled times, and how long will it be before an AI-distorted headline causes significant real-world harm?” she asked, calling on tech companies to take immediate action to address these accuracy issues.
A History of Flawed Reporting
This isn’t the first time AI has faltered in news reporting. Last year, Apple had to pause its AI-generated news notifications after it was caught inaccurately rewriting BBC headlines.
OpenAI, Google, and Microsoft have also been criticized for AI hallucinations—where AI generates false or misleading information as though it were fact.
The BBC’s latest findings add to growing concerns over the reliability of AI in journalism. While AI technology can enhance efficiency and accessibility, it also introduces significant risks, particularly when it comes to disseminating information on crucial topics.
What’s Next? Will AI News Ever Be Trustworthy?
Tech companies now face immense pressure to fix these glaring issues before AI-generated misinformation spirals out of control.
Regulatory bodies may soon introduce stricter guidelines on AI-generated content, particularly in journalism, to ensure transparency and accountability.
As AI technology advances, developers must shift their focus from speed to accuracy. Possible solutions include clearer disclaimers on AI-generated content, improved fact-checking mechanisms, and stricter training data controls to prevent misinformation.
How You Can Protect Yourself From AI Misinformation
Here’s how you can stay informed and safeguard yourself against misleading AI-generated news:
Verify Information – Always cross-check news from multiple reputable sources before believing or sharing it.
Be Wary of AI Summaries – If a news summary appears questionable, find the original article for confirmation.
Demand AI Accountability – Hold tech companies responsible for improving AI accuracy and transparency.
Stay Informed on AI Policies – Follow developments in AI regulation and ethical debates surrounding AI-generated content.
Support Reliable Journalism – Subscribe to and support credible news organizations that prioritize fact-checking and responsible reporting.
Key Takeaways
- AI chatbots distort news: more than half of the AI-generated summaries studied contained major errors.
- Google’s Gemini performed worst, with nearly half of its summaries flagged for accuracy concerns.
- Misinformation spreads fast: false AI-generated news can mislead the public and influence decisions.
- AI journalism remains unreliable, with past failures showing the risks are ongoing.
- Regulation is needed: experts are calling for stricter oversight to ensure AI accuracy.
Dileep Thekkethil
Dileep Thekkethil is the Director of Marketing at Stan Ventures, where he applies over 15 years of SEO and digital marketing expertise to drive growth and authority. A former journalist with six years of experience, he combines strategic storytelling with technical know-how to help brands navigate the shift toward AI-driven search and generative engines. Dileep is a strong advocate for Google’s EEAT standards, regularly sharing real-world use cases and scenarios to demystify complex marketing trends. He is an avid gardener of tropical fruits, a motor enthusiast, and a dedicated caretaker of his pair of cockatiels.