SEO strategist Lily Ray sparked a heated debate after discovering that Perplexity AI fabricated a Google algorithm update and cited fake sources to back it up. The incident shows how generative AI can spread misinformation with confidence and why that matters far beyond the SEO industry.
Lily Ray, a veteran SEO voice with a reputation for sharp fact-checking, asked Perplexity AI for updates on search and AI news.
Instead of a useful summary, the platform confidently told her about a brand-new Google algorithm update.

The truth? That update never happened.
Even worse, the tool “supported” its claim with two citations: AI-written articles sitting on marketing agency blogs. These weren’t breaking reports from industry leaders; they were fabricated posts repeating an update that didn’t exist.


She took to X, saying: “The snake is eating its own tail.”
This is so bad – I asked @perplexity_ai about recent SEO/AI news and it responded with a information about a non-existent, made-up algorithm update
Perplexity cited two AI-generated articles on digital marketing agency domains that made up information about an algorithm update… pic.twitter.com/A30ml152rq
— Lily Ray 😏 (@lilyraynyc) September 26, 2025
The Thread That Caught Fire
Ray’s post struck a nerve. Replies rolled in quickly, and each one added a different layer of concern.
“Always ask your AI to double check the facts,” wrote Ryan (@sonicshifts), suggesting the tool could have caught its own mistake. Ray shot back: “That did not work.”
Then came the broader warnings. Katherine Argent (@effthealgorithm) looked beyond SEO: “And now it’s spawning across the web as sites treat it like fact. This is how a civilization crumbles.” Dramatic, yes, but the point landed. Once misinformation slips into circulation, it multiplies.
Some replies raised technical questions.
Pushpendra Singh asked: “But I have seen a blog that quoted this, so is it the blog that is spreading false information, or what?” It was a fair question. Was the AI fabricating from scratch, or recycling something already floating online? Either way, the loop was closed: fake content feeding fake citations.
Others voiced frustration that felt heavier.
Gaetano DiNardi commented that “Most people do not understand how flawed these AI models are. They are great at answering timeless generic questions but unbelievably inaccurate for most other things.”
By this point one thing was obvious: AI hallucinations happen all the time, but when they come wrapped in such certainty, they become far more harmful.
Why SEO Was the Perfect Test Case
To someone outside the industry, a fake Google update might sound like a minor mix-up. But in SEO, it hits like a sudden stock market crash.
Google’s algorithm updates determine which sites rise or fall in search rankings.
When a major change is announced, agencies shift entire strategies.
Some clients see a traffic spike overnight. Others lose a third of their visitors in a week. Millions of dollars can hinge on these updates.
So when an AI tool invents an update that never happened, the consequences aren’t abstract. Companies might waste time and money fixing problems that don’t exist. Analysts could send clients misleading reports. Teams might make staffing or budget decisions based on nothing but a lie.
This is a window into how misinformation can warp decision-making in fields where every click counts.
From Harmless Hallucination to Costly Error
AI researchers call these mistakes “hallucinations,” but that word makes them sound whimsical.
What Ray uncovered was no daydream. Perplexity didn’t say “maybe” or “rumor has it.” It stated the update as fact, then supplied links that looked legitimate but weren’t. To anyone skimming, those sources would have seemed convincing.
And that’s the deeper issue: confidence.
This is where trust fractures. If AI systems can fabricate “facts” this seamlessly, and if even professionals have to dig to uncover the lie, how are casual users supposed to tell the difference?
An Old Problem, Supercharged
Misinformation in SEO is nothing new.
The industry has buzzed with rumors about hidden signals, secret penalties, and unconfirmed updates for years. Google rarely explains its algorithm changes in detail, which leaves professionals piecing together patterns and guessing at causes.
But those guesses used to be human. They were flawed, but grounded in data. Now, AI has added a new layer of noise. It generates rumors, cites other AI-written content, and creates the illusion of authority without a human in sight.
The result is an echo chamber where falsehoods are harder to separate from reality.
A Familiar Pattern, A Faster Machine
We’ve seen this cycle before. Social media supercharged rumor mills in the 2010s, spreading conspiracy theories and fake stories faster than corrections could catch up.
AI takes that same dynamic and accelerates it. Instead of waiting for a person to misinterpret or exaggerate, the system generates the rumor itself. Instead of needing others to repeat it, it cites its own inventions. The speed and scale of error multiply.
The Trust Problem for AI Search
Perplexity AI has garnered attention as a potential alternative to Google. Investors like it. Early adopters love it. But search tools aren’t judged on novelty; they’re judged on trust.
Ray’s test made one thing painfully clear: Perplexity has a trust problem. And in search, that’s the only problem that matters.
Where This Leaves the Rest of Us
Should we abandon AI tools? No.
They’re fast, useful, and deeply integrated into how we work today. But using them blindly is dangerous.
Here are a few lessons worth keeping in mind:
- Check the big claims. If an AI tool says Google updated its algorithm, look at Google’s official channels, such as the Search Status Dashboard, before reacting.
- Interrogate the sources. A link doesn’t equal credibility. Who published it? Does it actually confirm the claim? (A quick check is sketched after this list.)
- Use AI for brainstorming, not final answers. It’s a helper, not an authority.
- Follow experts. Trusted voices who verify information are still the best compass.
- Call it out. Transparency matters. Ray’s post helped prevent the rumor from spreading further.
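To make the second point concrete, here is a minimal Python sketch of that kind of source check. It is purely illustrative, not a tool Ray used or anything Perplexity runs: it fetches each cited URL and tests whether the page actually mentions the claim. The URLs and keywords below are hypothetical placeholders.

```python
import requests

# Hypothetical citations and claim keywords, for illustration only.
CITED_URLS = ["https://example.com/blog/new-google-update"]
CLAIM_KEYWORDS = ["algorithm update", "september 2025"]

def source_confirms_claim(url: str, keywords: list[str]) -> bool:
    """Return True only if the cited page actually mentions every keyword."""
    try:
        page = requests.get(url, timeout=10)
        page.raise_for_status()
    except requests.RequestException:
        return False  # an unreachable citation confirms nothing
    text = page.text.lower()
    return all(kw.lower() in text for kw in keywords)

for url in CITED_URLS:
    verdict = "mentions" if source_confirms_claim(url, CLAIM_KEYWORDS) else "does not mention"
    print(f"{url} {verdict} the claimed update")
```

A keyword match is a weak signal, of course; it only tells you the page repeats the claim, not that the claim is true. That is exactly why the fake citations in Ray’s case were so dangerous: they would have passed this test while still being fabricated.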
What AI Companies Must Fix
The responsibility doesn’t end with users. Platforms like Perplexity have to recognize that polished hallucinations are worse than clumsy errors. They need guardrails:
- Fact-checking layers to cross-reference important claims against official channels (a rough sketch follows below).
- Filters that detect AI-generated content in citations.
- Clearer disclosures about how answers are generated.
Without those fixes, the loop will only tighten. Each hallucination feeds the next, and trust continues to erode.
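To illustrate the first of those guardrails, here is a rough sketch of what cross-referencing an update claim against official channels could look like. It assumes the two Google URLs below are the right places to check and uses naive string matching; a production system would parse structured status feeds instead.

```python
import requests

# Official Google channels that announce ranking changes. Treat this list
# as an assumption for the sketch, not a definitive registry.
OFFICIAL_SOURCES = [
    "https://status.search.google.com/",          # Google Search Status Dashboard
    "https://developers.google.com/search/blog",  # Google Search Central Blog
]

def officially_confirmed(update_name: str) -> bool:
    """Return True if any official source mentions the claimed update."""
    for url in OFFICIAL_SOURCES:
        try:
            page = requests.get(url, timeout=10)
            page.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable sources rather than treating them as confirmation
        if update_name.lower() in page.text.lower():
            return True
    return False

# Hypothetical claim surfaced by an AI answer.
claim = "September 2025 core update"
if not officially_confirmed(claim):
    print(f"Flag or withhold the answer: no official source mentions '{claim}'")
```

Even a naive check like this would have caught the update Ray found, because no official channel ever mentioned it.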
Why Ray’s Warning Resonates
Ray exposed how fragile the line between fact and fiction becomes when machines cite each other unchecked.
Her post got a lot of attention, with responses ranging from practical advice on checking AI-generated information to warnings about how easily misinformation can spread.
AI tools are powerful, but power without trust is useless. If we don’t build systems that prioritize truth, the snake won’t just eat its tail; it’ll swallow our confidence in information itself.
Key Takeaways
- Lily Ray exposed Perplexity AI fabricating a Google update and citing AI-generated blogs.
- False algorithm updates can cost businesses time, money, and credibility.
- Industry reactions highlighted growing frustration with AI hallucinations.
- Users must verify AI claims, especially in high-stakes fields like SEO.
- AI platforms need stricter safeguards to maintain trust.
Dileep Thekkethil
Dileep Thekkethil is the Director of Marketing at Stan Ventures, where he applies over 15 years of SEO and digital marketing expertise to drive growth and authority. A former journalist with six years of experience, he combines strategic storytelling with technical know-how to help brands navigate the shift toward AI-driven search and generative engines. Dileep is a strong advocate for Google’s EEAT standards, regularly sharing real-world use cases and scenarios to demystify complex marketing trends. He is an avid gardener of tropical fruits, a motor enthusiast, and a dedicated caretaker of his pair of cockatiels.