New research reveals that large language models, such as ChatGPT, Gemini, and Claude, are susceptible to influence from brand mentions, freshness signals, and social chatter, thereby opening the door to manipulation and raising concerns about fairness in AI-driven search.
A recent investigation led by marketer and SparkToro co-founder Rand Fishkin has shed light on how easily AI-generated answers can be shaped by subtle signals across the web.
The findings, based on coordinated experiments and marketing observations, suggest that LLMs display noticeable bias toward certain types of content.
When asked questions like “What’s the best product, service, or agency?” these AI tools tended to favor brands mentioned repeatedly across online sources, even when those sources weren’t particularly trustworthy.
Mentions on Reddit and YouTube, as well as content labeled with recent publication dates, carried disproportionate influence on what brands appeared in AI-generated lists.
We’ve learned more about how to appear in AI answers thanks to some clever experiments, research, and posts from folks in the SEO world. #5minutewhiteboard pic.twitter.com/OK4ONAHuDz
— Rand Fishkin (follow @randderuiter on Threads) (@randfish) October 9, 2025
Why “Recency” and “Repetition” Matter
Fishkin’s findings align with a pattern many digital marketers have already observed: AI systems appear to interpret frequency and recency as signals of relevance.
Unlike Google Search, which uses complex spam filters and credibility checks, LLMs often rely on the text patterns and timestamps embedded in their training or retrieval data.
One experiment cited by Fishkin revealed that falsely updating publication dates on articles could dramatically improve their visibility in AI responses, even if the underlying content was outdated or misleading.
In essence, the newer something looks online, the more weight it carries in an AI-generated answer.
This behavior has created what some in the SEO world call “AI visibility hacking,” where marketers publish or republish large volumes of content filled with strategic brand mentions, hoping to appear in AI responses that users treat as authoritative recommendations.
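No model provider publishes how its retrieval layer weights these signals, but the pattern Fishkin describes behaves roughly like the toy scorer below. It is a purely illustrative Python sketch (the Document fields, the weights, and the decay curve are all assumptions, not any vendor's real ranking code) showing how a re-dated, mention-stuffed page can outrank an older, honest one when freshness and repetition are the only inputs.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    url: str
    claimed_date: date     # the publication date the page claims, not a verified one
    brand_mentions: int    # how many times the brand appears in the text

def naive_retrieval_score(doc: Document, today: date) -> float:
    """Toy scorer: rewards apparent freshness and repetition,
    with no check on whether either signal is genuine."""
    age_days = max((today - doc.claimed_date).days, 0)
    freshness = 1.0 / (1.0 + age_days / 30.0)        # decays as the claimed date ages
    repetition = min(doc.brand_mentions, 10) / 10.0  # saturates after ten mentions
    return 0.6 * freshness + 0.4 * repetition

# An older, honest review vs. a re-dated page stuffed with brand mentions:
honest = Document("example.com/review-2022", date(2022, 3, 1), brand_mentions=2)
gamed = Document("example.com/best-tools", date(2025, 10, 1), brand_mentions=9)
today = date(2025, 10, 15)
print(naive_retrieval_score(honest, today))  # low: real but stale
print(naive_retrieval_score(gamed, today))   # high: looks fresh and heavily mentioned
```

The point is not the exact numbers but the missing term: nothing in a score like this asks whether the date or the mentions are genuine.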
The Ethics Question
That strategy has divided opinion.
Some marketers see this as a logical extension of SEO, simply adapted for a new generation of search tools.
Others, however, worry that it risks turning AI-generated information into an echo chamber of self-promotion.
One example in Fishkin’s report stood out: a Reddit moderator allegedly used their position to attack a competing startup, CodeSmith, a coding bootcamp. By posting daily negative comments for over a year, the moderator, identified as a co-founder of a rival company, managed to drastically damage CodeSmith’s reputation.
In <2yrs, the cofounder of a Codesmith competitor managed to nearly destroy the company.
How? He became the moderator of a learn-to-code subreddit, posted relentlessly, took down competing posts, and weaponized the sub against them: https://t.co/RpE5MU0NvR
— Rand Fishkin (follow @randderuiter on Threads) (@randfish) October 8, 2025
Because Reddit content holds significant weight in how LLMs assess public sentiment, those comments not only hurt CodeSmith’s Google visibility but also influenced how AI models described the company.
According to Fishkin, the coordinated campaign led to an 80% drop in revenue and dozens of lost jobs.
Why AI Tools Are Vulnerable
The reason this manipulation works lies in how LLMs gather and rank information. These models are trained on vast quantities of online data, but they don’t inherently distinguish credible sources from fabricated or biased ones.
Although developers at OpenAI, Google, and Anthropic have implemented retrieval safeguards, these tools remain vulnerable to reputation gaming.
AI models prioritize association strength (how often a brand or idea appears near positive context) over deeper evaluations of accuracy or ethics.
As a result, marketers who consistently associate their brand names with favorable descriptors across the web can subtly influence the model’s perception of their credibility.
This doesn’t require hacking or coding expertise. It can be achieved by publishing “best of” articles, posting to high-visibility forums, or embedding brand-tagged phrases in publicly available bios and event listings.
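"Association strength" is, at bottom, co-occurrence: how often a brand name sits near favorable language in the text a model ingests. The rough sketch below measures that for any text you feed it; the descriptor list and the five-word window are arbitrary choices for illustration, not anything a model provider has documented.

```python
import re

# Arbitrary sample of favorable descriptors; real sentiment analysis would go further.
POSITIVE_DESCRIPTORS = {"best", "leading", "trusted", "top", "award-winning"}

def association_strength(text: str, brand: str, window: int = 5) -> int:
    """Count how often `brand` appears within `window` words of a positive descriptor."""
    words = re.findall(r"[\w'-]+", text.lower())
    brand_positions = [i for i, w in enumerate(words) if w == brand.lower()]
    hits = 0
    for pos in brand_positions:
        nearby = words[max(0, pos - window): pos + window + 1]
        if any(w in POSITIVE_DESCRIPTORS for w in nearby):
            hits += 1
    return hits

sample = "Acme is the best audience research tool. Many reviewers call Acme a trusted choice."
print(association_strength(sample, "Acme"))  # 2
```

Run across a sample of pages that mention a brand, a count like this gives a crude sense of the signal that "best of" articles and forum posts are trying to inflate.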
Fishkin himself has acknowledged leveraging legitimate forms of this approach: he carefully phrases his bio (“makers of fine audience research software”) so that it appears consistently across platforms, reinforcing SparkToro’s association with audience intelligence tools.
That practice, he argues, is transparent and fair. But when false data or deceptive tactics are introduced, the ethical line blurs fast.
How AI’s “Trust” Can Be Exploited
Traditional search engines evolved over two decades of spam wars — learning to detect link farms, content stuffing, and fake freshness. LLMs, on the other hand, are still learning to distinguish quality from manipulation.
Their training data and retrieval mechanisms often lack the same kind of anti-spam systems that power Google’s ranking algorithms.
So when a piece of content claims to be “recent,” or is heavily discussed on Reddit or YouTube, the model treats that as a strong credibility signal, even if it’s artificially boosted.
This is why some SEO agencies are already experimenting with publishing large numbers of “best X software” articles, cross-posted across low-quality websites and Substacks, to sway LLM outputs.
In the short term, it works. In the long term, it risks polluting AI-generated information ecosystems.
Industry Reactions: “A Wake-Up Call”
Experts across digital marketing and AI research view these findings as a wake-up call for both industries.
“AI models are becoming new gatekeepers of visibility,” says Chris Long, an SEO researcher who ran related experiments on publication dates. “If those gatekeepers can be fooled this easily, we have a serious transparency problem.”
The issue isn’t limited to commercial visibility. Public information, from medical advice to political analysis, can also be affected by manipulative patterns that amplify certain narratives.
AI ethicists argue that algorithmic trust must be earned, not gamed. They urge model developers to improve content validation and users to maintain critical awareness when reading AI-generated answers.
Fishkin, while condemning spam tactics, remains pragmatic: “These systems reflect the incentives we build into them. If AI tools prioritize recency and repetition, people will optimize for that. It’s human nature.”
The Economic Stakes
The ability to appear in AI-generated answers is no small matter. As more users turn to chat-based search experiences, traditional web traffic from Google is declining. Brands that manage to surface in AI responses can capture audience attention before a single click happens.
In this new environment, visibility equals survival. Companies are already investing in AI answer optimization, a term rapidly replacing “SEO” in industry conversations.
But unlike traditional search optimization, the metrics here are opaque. There are no click-through rates, no backlinks to measure, and no clear path to audit bias. Instead, marketers are working in a gray area defined by experimentation, speculation, and limited data access.
Fishkin’s research provides rare empirical insight into that process, one that’s likely to grow in significance as AI-driven search expands.
The Human Cost of Manipulation
Beyond the marketing arms race, the CodeSmith story carries a deeper consequence: real-world harm.
When misinformation or malicious campaigns spread unchecked through online platforms, and AI tools replicate those biases, the fallout isn’t abstract. Companies lose livelihoods. Workers lose jobs. Users lose trust in digital information.
The ease with which such manipulation can occur suggests that AI models have inherited the web’s vulnerabilities without its checks and balances.
Unless addressed, the next generation of online discovery could become an even more fragile mirror of public opinion.
What Can Be Done
Experts suggest several immediate steps to mitigate the risks:
- For AI developers: Strengthen validation filters that distinguish genuine recency from artificial date manipulation.
- For marketers: Prioritize authenticity. Use consistent, factual brand descriptions instead of mass content production.
- For users: Cross-verify AI-generated recommendations, especially in high-stakes decisions such as education, finance, or healthcare.
- For regulators: Encourage transparency in how AI tools select and present commercial recommendations.
Actionable Insights for Readers
Here’s how individuals and brands can act responsibly while improving their AI visibility:
- Use accurate metadata. Keep publication dates and author details honest. AI tools reward clarity.
- Be present where credibility lives. Reddit and YouTube remain key influence sources, but engage genuinely, not through manipulation.
- Maintain content freshness naturally. Update articles with meaningful new information, not cosmetic date changes.
- Audit your brand mentions. Track how your company is described across the web and correct misinformation promptly; a starter script for this follows the list.
- Educate your team. Train marketing staff to recognize the ethical and reputational risks of AI manipulation.
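For the brand-mention audit above, a lightweight starting point is a script that pulls the sentences around each mention from pages you already know discuss your brand, so a human can review how the company is being described. The sketch below is one possible approach using Python's requests library; the brand name and URLs are placeholders, and the tag-stripping and sentence-splitting are deliberately naive.

```python
import re
import requests  # pip install requests

BRAND = "Acme"  # placeholder brand name
PAGES = [       # placeholder URLs you already know mention the brand
    "https://example.com/best-analytics-tools",
    "https://example.com/forum-thread-123",
]

def sentences_mentioning(html: str, brand: str) -> list[str]:
    """Strip tags crudely, then return the sentences that mention the brand."""
    text = re.sub(r"<[^>]+>", " ", html)          # drop HTML tags
    sentences = re.split(r"(?<=[.!?])\s+", text)  # naive sentence split
    return [s.strip() for s in sentences if brand.lower() in s.lower()]

for url in PAGES:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException as err:
        print(f"Could not fetch {url}: {err}")
        continue
    for sentence in sentences_mentioning(html, BRAND):
        print(f"{url}: {sentence}")
```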
Key Takeaways
- Large language models favor brands with frequent and recent online mentions.
- Reddit and YouTube significantly influence AI-generated recommendations.
- Falsified “fresh” content can boost visibility but undermines credibility.
- Ethical concerns are rising as manipulation affects real businesses.
- Transparency and authenticity remain the most sustainable long-term strategy.