Google’s AI Overviews are putting people at risk of harm by surfacing false and misleading health information at the top of search results, according to an investigation by The Guardian.
The findings challenge Google’s claims that its AI-generated summaries are reliable, with experts warning that inaccurate advice could lead to delayed treatment, misdiagnosis and potentially life-threatening outcomes.
The investigation reveals multiple cases in which Google’s generative AI summaries provided incorrect medical guidance on cancer, liver disease and mental health, all areas where accuracy and context are critical.

What Are Google AI Overviews and Why Are They Under Scrutiny?
Google AI Overviews are generative AI summaries that appear at the top of search results, designed to give users quick snapshots of “essential” information.
Powered by large language models, these summaries pull content from multiple sources and present it as a single, authoritative answer.
While Google has repeatedly described AI Overviews as “helpful” and “reliable,” the Guardian’s investigation suggests that the format may be particularly dangerous when applied to health-related searches, where nuance, context and individual variation are essential.
Unlike traditional search results, AI Overviews compress complex medical guidance into simplified statements that users may treat as definitive advice.
How Did the Guardian Identify Harmful Health Information?
The Guardian’s health team reviewed AI Overviews triggered by common medical searches after concerns were raised by charities, clinicians and patient advocacy groups.
The investigation found several examples where summaries contained errors that experts described as “alarming,” “completely wrong,” and “really dangerous.”
Because AI Overviews appear before organic results, users may see and trust these summaries without clicking through to authoritative medical sources or consulting healthcare professionals.
Why Was the Pancreatic Cancer Advice Described as “Really Dangerous”?
One of the most serious examples involved pancreatic cancer nutrition advice. Google’s AI Overview incorrectly advised patients to avoid high-fat foods.
Experts from Pancreatic Cancer UK said this guidance was the opposite of what patients should follow.
People with pancreatic cancer often struggle to absorb nutrients and require high-calorie, high-fat diets to maintain weight and tolerate treatment.
Anna Jewell, the charity’s director of support, research and influencing, warned that following the AI advice could leave patients undernourished, unable to tolerate chemotherapy, or too weak for potentially life-saving surgery.
How Did AI Overviews Mislead Users About Liver Disease?
Another concerning case involved searches for “normal” liver blood test ranges.
Google’s AI Overview produced a list of numbers without adequate explanation or context, failing to account for factors such as age, sex, ethnicity, or nationality.
Pamela Healy, chief executive of the British Liver Trust, said the summaries were dangerous because many people with liver disease show no symptoms until advanced stages.
Incorrect reassurance could lead patients to skip follow-up appointments or ignore serious conditions.
She warned that AI-generated “normal” ranges varied significantly from medically accepted standards, increasing the risk of false reassurance.
What Errors Were Found in Women’s Cancer Searches?
Searches related to women’s cancers also produced misleading results.
A query for “vaginal cancer symptoms and tests” returned an AI summary stating that a Pap test was used to detect vaginal cancer.
Athena Lamnisos, chief executive of The Eve Appeal, said this was completely incorrect.
Pap tests screen for cervical cancer, not vaginal cancer, and relying on this misinformation could delay diagnosis.
She also raised concerns about inconsistency, noting that repeating the same search produced different AI summaries pulling from different sources.
“People are getting a different answer depending on when they search,” she said, calling the situation unacceptable.
Are Mental Health Searches Also Affected?
The Guardian found that AI Overviews also produced misleading or harmful summaries for mental health conditions such as psychosis and eating disorders.
Stephen Buckley, head of information at Mind, said some of the advice surfaced by AI Overviews was “very dangerous” and could discourage people from seeking professional help.
He added that AI summaries often strip away crucial nuance and can reinforce stereotypes or stigma, especially when drawing from poorly contextualised sources.
Why Is AI Health Misinformation Especially Risky?
Health information is uniquely sensitive. People often search online during moments of fear, pain, or uncertainty, and they may treat the first answer they see as authoritative.
Sophie Randall, director of the Patient Information Forum, said the investigation showed how AI Overviews can elevate inaccurate health information above trusted sources, creating real-world risk.
Stephanie Parker, director of digital at Marie Curie, echoed those concerns, noting that people searching in crisis may not question what an AI summary tells them.
How Did Google Respond to the Findings?
Google disputed aspects of the investigation, stating that some examples were based on incomplete screenshots and that, where assessable, AI Overviews linked to reputable sources and encouraged users to seek expert advice.
A Google spokesperson said the company invests heavily in improving AI Overview quality, particularly for health topics, and claimed that the accuracy rate is comparable to long-established features like featured snippets.
Google also said it would take action when AI Overviews misinterpret content or miss context, in line with its policies.
Why Are Critics Still Unconvinced?
Health experts argue that even occasional inaccuracies are unacceptable when AI summaries are presented as authoritative medical guidance.
Unlike featured snippets, which quote a single source, AI Overviews synthesise information and can collapse disagreement, uncertainty, or nuance into a confident-sounding answer. That design choice, critics say, increases the risk of harm when errors occur.
The Guardian’s investigation also highlights inconsistency: AI Overviews can change from one search to the next, meaning users may receive different medical advice for the same question.
Is This Part of a Broader AI Trust Problem?
The findings add to growing evidence that generative AI systems struggle with high-stakes factual reliability.
Previous studies have shown AI chatbots providing incorrect financial advice and inaccurate summaries of news events.
Health information amplifies the stakes. A misleading answer is not just wrong; it can influence treatment decisions, delay care, or discourage people from seeking help altogether.
What Are Experts Calling for Now?
Health charities and patient advocates are urging greater caution in deploying generative AI for medical information.
Many argue that AI summaries should be restricted, heavily qualified, or removed entirely for sensitive health queries.
They also stress the importance of directing users toward qualified healthcare professionals rather than presenting simplified AI-generated guidance as sufficient.
What Does This Mean for the Future of AI in Search?
The controversy underscores a central tension in AI-powered search: speed and convenience versus accuracy and safety.
While AI Overviews can make information more accessible, the Guardian’s findings suggest that current safeguards may be insufficient for complex, high-risk topics like health.
As regulators, clinicians, and patient groups scrutinise AI’s role in public information, Google and other platforms may face increasing pressure to rethink how and where generative AI is used.
Key Takeaway
The Guardian’s investigation raises serious questions about whether AI Overviews are ready to handle medical information responsibly.
For many users, AI summaries are replacing traditional search results as the first, and sometimes only, source of information.
When those summaries are wrong, misleading, or inconsistent, the risk extends far beyond misinformation into real-world harm.
Until accuracy, transparency, and accountability improve, experts warn that AI-generated health advice should be treated with caution, not confidence.
About the Author
Dipti Arora is a Senior Content Writer with over seven years of experience creating impactful content across Digital Marketing, SEO, technology, and business domains. She has a strong background in managing news verticals and delivering editorial excellence. Dipti has contributed to leading publications such as The Times of India and CEO News, where her research-driven storytelling and ability to simplify complex subjects have consistently stood out. She is passionate about crafting content that informs, engages, and drives meaningful results.