Search analyst Glenn Gabe warns that as platforms like ChatGPT, Perplexity, and Claude evolve, their biggest challenge won’t be innovation; it will be credibility. Without strong indexing, site-quality evaluation, and anti-spam systems, AI Search could repeat the mistakes that once undermined traditional search.

Glenn Gabe has spent years studying how Google ranks and filters content. His latest analysis focuses on AI search.
In a detailed post, Gabe explains how companies are attempting to manipulate their visibility in AI-generated results. Some are producing scaled content designed to catch large language models’ attention. Others are experimenting with tactics that may confuse or mislead these systems.
The concern, he says, is that this could create long-term damage both for brands and for AI platforms still finding their footing.
“It works until it doesn’t,” Gabe notes, referring to the same pattern that played out years ago in SEO when shortcuts led to penalties and loss of visibility.
AI Search, he suggests, is now at a similar turning point.
Why AI Search Needs Its Own Infrastructure
Many AI tools (including ChatGPT and Perplexity) still depend on existing web indexes built by search engines such as Google and Bing. That dependency provides access to enormous data sets, but it also exposes them to restrictions and policy changes.
Recent reports revealed that both ChatGPT and Perplexity were scraping Google’s search results, prompting Google to clamp down by disabling the “num=100” parameter that allowed mass scraping.
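For context, "num" was a simple query-string parameter on Google result URLs; setting it to 100 returned up to 100 organic results in one page, cutting the cost of bulk scraping by roughly an order of magnitude. A minimal sketch of how such a URL was constructed (illustrative only; Google no longer honors num=100, and the helper name is hypothetical):

```python
from urllib.parse import urlencode

# Illustrative only: before Google disabled it, appending num=100 to a
# results URL returned up to 100 organic results in a single response,
# making mass scraping far cheaper (1 request instead of 10).
def build_serp_url(query: str, num: int = 100) -> str:
    params = urlencode({"q": query, "num": num})
    return f"https://www.google.com/search?{params}"

url = build_serp_url("ai search visibility")
print(url)  # https://www.google.com/search?q=ai+search+visibility&num=100
```

Disabling the parameter forces scrapers back to paginated, 10-result requests, which are easier for Google to rate-limit and detect.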
To avoid such dependencies, AI Search platforms are now building their own indexes. Perplexity recently announced that it had amassed hundreds of billions of documents in its proprietary index, indicating a significant change in how AI systems will collect and prioritize information.
“With an index covering hundreds of billions of webpages.” Well, now we know the size of their search index -> Introducing the Perplexity Search API https://t.co/EpVzQ6wfqF
— Glenn Gabe (@glenngabe) September 26, 2025
Gabe believes this direction is inevitable. AI companies won’t want to rely indefinitely on competitors’ data. But independence comes with responsibility: they must now learn to separate trustworthy content from manipulation at scale.
The Spam Problem Is Already Emerging
AI-generated search responses are still new, yet signs of gaming are everywhere.
Gabe lists several tactics making the rounds in digital circles: hidden text meant only for AI crawlers, cloaked pages, and mass-produced content targeting every possible query within a topic.
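Cloaking, for example, means serving one version of a page to crawlers and a different one to human visitors. A naive detector (a sketch, not any platform’s actual system) might fetch the same URL under two identities and flag pages where the two versions diverge sharply:

```python
from difflib import SequenceMatcher

# Sketch of cloaking detection (not any platform's actual system):
# fetch the same URL as a regular browser and as an AI crawler,
# then flag the page if the two versions differ substantially.
def cloaking_suspected(browser_html: str, crawler_html: str,
                       threshold: float = 0.8) -> bool:
    ratio = SequenceMatcher(None, browser_html, crawler_html).ratio()
    return ratio < threshold  # low similarity -> different content served

honest = "<p>Our honest product page.</p>"
stuffed = "<p>best ai tool best ai tool best ai tool buy now</p>"
print(cloaking_suspected(honest, honest))   # False
print(cloaking_suspected(honest, stuffed))  # True
```

Production systems are far more sophisticated (rendering JavaScript, rotating IPs, comparing structure rather than raw text), but the principle is the same: the page should not change based on who is asking.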
He describes this as a short-sighted strategy.
As AI Search platforms develop spam filters and detection systems similar to Google’s SpamBrain, sites using these methods could face penalties or be excluded entirely.
The problem isn’t new. Google and Bing have spent decades refining their anti-spam tools. But for AI platforms, this is uncharted territory. As Gabe points out, models like ChatGPT and Perplexity are still in early stages of moderation.
Lily Ray has also highlighted the risks Gabe describes. She has emphasized the danger of hastily adopted tactics that boost short-term visibility but reduce long-term credibility.
“My bets on what the next Google core algorithm update will devalue, based on my experience working closely on Google algo updates for 15+ years:
– an influx of thin, robotic pages meant to influence LLMs
– repetitive ‘best’ listicle articles, especially self-serving ones
– …”
— Lily Ray (@lilyraynyc) October 11, 2025
Rand Fishkin has also spoken publicly about how current large language models do not yet have the sophistication to detect manipulation reliably.
Learning From Google’s Experiences
Gabe compares current AI Search developments with Google’s historic algorithm updates, like Panda and Penguin, which quickly penalized low-quality sites.
He predicts similar patterns ahead. Once AI Search platforms start implementing site-level quality scoring and authority systems, low-quality or manipulative sites will lose visibility fast.
This change will prompt AI companies to take credibility more seriously.
As Gabe points out, if unreliable or misleading sources keep showing up in AI responses, users’ trust will decline. Trust remains the only genuine currency for AI platforms at present.
Authority, Links, and the Missing Signal
Another significant challenge is authority. Traditional search engines assess credibility using the “link graph”, the network of links connecting trustworthy sites. Google’s PageRank relies on this approach. AI platforms, however, lack a comparable system.
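The intuition behind PageRank fits in a few lines: a page’s authority is the long-run probability that a “random surfer” following links lands on it. A toy power-iteration sketch (illustrative only; nothing like Google’s production system, which layers hundreds of other signals on top):

```python
# Toy PageRank power iteration (illustrative; not Google's production system).
# links[page] = list of pages that page links out to.
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Each page keeps a small baseline, plus shares passed along links.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # b — it is linked from both "a" and "c"
```

The key property is recursive: a link from an already-authoritative page transfers more weight than a link from an obscure one, which is exactly the signal AI platforms currently lack a native equivalent for.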
Gabe references insights from Mark Williams-Cook, who explained that without a link graph, AI tools struggle to identify authoritative sources.
Some platforms reportedly purchase third-party link data, but Gabe considers this a temporary fix. The gap matters most for topics Google classifies as YMYL (“Your Money or Your Life”), such as finance, health, and law. In these fields credibility is essential, because relying on low-quality sources can mislead users or spread harmful advice.
What Happens Next
Gabe predicts that AI Search platforms will soon release major updates aimed at combating manipulation, similar to Panda or Penguin, but tailored for AI.
When these updates roll out, industry visibility could undergo significant shifts.
Unlike traditional SEO, though, there’s a lack of reliable tools to track visibility in AI outputs.
Current analytics platforms aren’t designed to measure how often or where a brand appears in AI-generated responses, making it hard for businesses to understand when or why their visibility fluctuates.
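Lacking dedicated tooling, some teams roll their own logging. A sketch of the idea (hypothetical helper; it assumes you already collect AI answer texts yourself, since no standard measurement API exists): count brand mentions and cited domains across stored responses.

```python
import re
from collections import Counter

# Hypothetical sketch: given stored AI answer texts, count how often a
# brand is mentioned and which domains the answers cite. Assumes you
# collect the response texts yourself; there is no official metrics API.
def visibility_report(responses: list[str], brand: str) -> dict:
    mentions = sum(len(re.findall(re.escape(brand), r, re.IGNORECASE))
                   for r in responses)
    domains = Counter(d.lower() for r in responses
                      for d in re.findall(r"https?://([\w.-]+)", r))
    return {"mentions": mentions, "top_domains": domains.most_common(3)}

answers = [
    "According to Example Corp (https://example.com/report), ...",
    "example corp is one option; see https://example.com/pricing",
]
print(visibility_report(answers, "Example Corp"))
# {'mentions': 2, 'top_domains': [('example.com', 2)]}
```

Even crude counts like these at least reveal trend direction, which is what businesses currently cannot see when their AI visibility fluctuates.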
What Companies Should Do Now
While AI Search is still taking shape, there are smart steps companies can take to stay on the right side of progress:
- Prioritize credibility. Publish content created by real experts with verifiable credentials.
- Avoid manipulation. Don’t use hidden text, keyword stuffing, or automated spam tactics.
- Strengthen your site reputation. Keep pages updated, accurate, and consistent in tone and authority.
- Track AI exposure where possible. Experiment with available visibility tools and monitor how your brand appears in AI responses.
- Be ready to adapt. When anti-spam updates roll out, honest content will endure while shortcuts fail.
Key Takeaways
- AI Search platforms are building independent indexes, moving away from reliance on Google and Bing.
- Spam tactics are increasing, but anti-spam systems will soon catch up.
- Site-level trust and authority will define visibility in the next phase of AI Search.
- Massive volatility is likely once AI platforms roll out quality-focused updates.
- Long-term success depends on expertise and authenticity, not manipulation.
About the author: Zulekha is an emerging leader in the content marketing industry from India. She began her career in 2019 as a freelancer and, with over five years of experience, has made a significant impact in content writing. Recognized for her innovative approaches, deep knowledge of SEO, and exceptional storytelling skills, she continues to set new standards in the field. Her keen interest in news and current events, which started during an internship with The New Indian Express, further enriches her content. As an author and continuous learner, she has transformed numerous websites and digital marketing companies with customized content writing and marketing strategies.