A controlled test of 1,000 queries across five competitive sectors found that brands on Google’s first page appeared in ChatGPT answers only about 62% of the time. The overlap was uneven across industries and nearly unchanged when the model was allowed to browse the web.
A research team led by Pavel Israelsky ran a head-to-head comparison to test whether strong Google rankings actually translate into a brand being mentioned by AI.
The dataset covered 1,000 matched queries in the U.S., across car insurance, credit cards, hotel booking, online courses, and web hosting.
Researchers used GPT-5 and issued each prompt twice, with web access on and off, to measure whether brands that sit on Google’s first page show up in ChatGPT’s synthesized responses.
The headline result is important for anyone who relies on SEO to drive awareness: appearing on page one of Google does not guarantee a spot in an AI answer.

A Closer Look at What the Numbers Show
The researchers wanted to know if brands on Google’s first page would also appear in ChatGPT results, and whether their positions lined up across both platforms.
Out of 1,000 queries, brands overlapped just 62% of the time when browsing was on and 61% when it was off.
In nearly 4 out of 10 cases, Google and ChatGPT disagreed on which brands mattered for the same question.
The overlap was uneven across brands. Coursera led all companies with an 86 to 87% match between its Google rankings and ChatGPT mentions.
GoDaddy followed at 83%. Hostinger sat at the bottom with just 32 to 34% overlap. edX performed poorly as well, with 47 to 48%.
When results are grouped by category, the pattern holds. Online courses had the strongest alignment at 65% on average.
Hotel booking performed worst at 58%. Car insurance stood at 60%, credit cards at 61%, and web hosting at 63%.
Across the entire dataset, the overall overlap averaged between 61 and 62%.
Position correlation gave an even starker signal. When a brand appeared in both places, its Google position rarely predicted its place in ChatGPT’s answer list.
Most correlation coefficients were close to zero. With browsing enabled, the rank-to-mention correlation was approximately 0.034, and with browsing off, it was about 0.022.
A few brands had modest positive readings, but others showed negative values, which means a higher Google rank sometimes corresponded to a later mention in ChatGPT.
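To make that metric concrete, here is a minimal sketch of how a rank-to-mention correlation can be computed. A Spearman rank correlation is a natural fit for ordinal positions, though the write-up does not specify the exact statistic the team used, and the (rank, position) pairs below are invented for illustration:

```python
# Minimal sketch: correlate a brand's Google page-one position with its
# mention order in a ChatGPT answer. The pairs are hypothetical; the
# study's raw data and exact statistic are not reproduced in this article.
from scipy.stats import spearmanr

# Each tuple: (Google position, mention order in the AI answer) for
# queries where the brand appeared in both places.
pairs = [(1, 4), (2, 1), (3, 5), (1, 2), (4, 3), (2, 6), (5, 1), (3, 3)]

google_ranks = [g for g, _ in pairs]
answer_positions = [a for _, a in pairs]

rho, p_value = spearmanr(google_ranks, answer_positions)
print(f"rank-to-mention correlation: {rho:.3f} (p = {p_value:.2f})")
# A coefficient near zero, like the study's 0.022-0.034, means Google
# position tells you almost nothing about where the brand lands.
```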
That suggests the two systems assess “authority” differently. One relies on links, content signals and ranking algorithms tied to URLs. The other synthesizes answers based on language patterns and model-internal associations informed by training data and reasoning processes.
How the Test Was Run
The researchers picked five high-traffic, high-competition categories where brands invest heavily in search optimization. They then used Ahrefs and Semrush to identify the top three domains per vertical by non-branded organic traffic. That limited the test to brands that already had strong SEO programs in place.
Next, they assembled a 1,000-query dataset by selecting 200 overlapping keywords per category and converting each keyword into a natural-language prompt that reflected how people ask questions in chat.
To simulate real users, they ran each prompt through persona-based profiles and a U.S.-based proxy.
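The article does not reproduce the study’s prompt templates, but the keyword-to-prompt step is easy to picture. Here is a minimal sketch with invented personas and an invented keyword, not the study’s actual wording:

```python
# Hypothetical keyword-to-prompt conversion. The personas and phrasing
# are placeholders, not the templates the researchers used.
PERSONAS = {
    "budget_shopper": "I'm trying to keep costs down. {question}",
    "first_timer": "I've never shopped for this before. {question}",
}

def keyword_to_prompt(keyword: str, persona: str) -> str:
    """Turn a search keyword into a conversational, persona-framed question."""
    question = f"What are the best options for {keyword}, and which would you recommend?"
    return PERSONAS[persona].format(question=question)

print(keyword_to_prompt("cheap car insurance for new drivers", "budget_shopper"))
# I'm trying to keep costs down. What are the best options for cheap car
# insurance for new drivers, and which would you recommend?
```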
For each prompt, they recorded both whether a brand was mentioned in ChatGPT’s answer and the order in which brands appeared. They repeated every test with browsing on and off and standardized ChatGPT output formatting to allow position-to-position comparison. The full experimental design and results are in the study.
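The mention-and-order bookkeeping is simple to sketch. Everything below, from the brand list to the sample answer, is hypothetical, and the study’s exact matching rules are not published in this article:

```python
# Hypothetical scoring step: which page-one brands appear in an AI answer,
# and in what order? Matching here is naive case-insensitive substring
# search; the researchers' actual rules may differ.
import re

def mention_order(answer: str, page_one_brands: list[str]) -> dict[str, int]:
    """Map each mentioned brand to its 1-based order of first appearance."""
    first_seen = {}
    for brand in page_one_brands:
        match = re.search(re.escape(brand), answer, flags=re.IGNORECASE)
        if match:
            first_seen[brand] = match.start()
    ranked = sorted(first_seen, key=first_seen.get)
    return {brand: i + 1 for i, brand in enumerate(ranked)}

answer = "For most trips, Booking.com is the obvious pick, though Expedia bundles flights well."
brands = ["Booking.com", "Expedia", "Hotels.com"]
print(mention_order(answer, brands))  # {'Booking.com': 1, 'Expedia': 2}
# Overlap for this one query: 2 of 3 page-one brands were mentioned.
```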
Two details are worth flagging.
First, the team filtered out branded SERP signals to focus on generic, competitive queries. That choice prevents brand-specific campaigns from skewing the results.
Second, they used GPT-5 as the test model. OpenAI recently introduced GPT-5 as its flagship model, and it is currently the default in ChatGPT for logged-in users. That matters because model architecture and training data shape what the model knows and what it chooses to say.
What Probably Drives the Gap Between Search and Chat Mentions
The study cannot prove exactly why the model picks some names and ignores others, but the experimental setup and the public facts about how these systems are trained point to likely causes.
Training data coverage matters. Brands that appear frequently in explanatory, consensus-building contexts will register more strongly inside a language model. Simple, consistent naming across public pages helps the model map a product or company to a prompt. The phrasing of the prompt and the persona behind it matter as well.
Finally, models synthesize answers from patterns and probabilities rather than ranking URLs by links. Those differences mean that link-based authority and language-based association are separate signals. The research highlights this difference without claiming to have identified a single cause.
A Surprising Detail About Browsing Mode
Letting the model fetch the web in real time barely moved the needle. Allowing ChatGPT to browse increased overlap by roughly one percentage point. That finding matters because it shows that, at least in this dataset, what the model already “knows” and the semantic associations it draws on matter far more than real-time crawl coverage.
For marketers, this means fresh crawling and a clean index status on the open web are necessary but not sufficient to win a mention in an AI answer.
Where This Study Is Solid, and Where to Apply Caution
This research is careful in its setup and frank about its limits. It looks at five categories, 15 brands, and 1,000 prompts. That gives useful, actionable insight for competitive U.S. markets, but it is not a universal law.
The persona-based prompting is clever because real users are not anonymous keyword blobs. At the same time, actual users vary in more ways than any persona set can capture, and models update constantly.
The researchers tested both GPT-3 and GPT-5 and found similar results, which strengthens their claim that the patterns are stable across recent model versions. Still, expanding the categories, adding international targets, or testing other model families could change the outcomes.
Practical Moves You Can Make Today
If you read the results and want to act, here are practical steps that adapt classic SEO to AI mention goals.
- Make your brand easy for models to identify. Use consistent naming across pages and structured data that emits the canonical brand name (a markup sketch follows this list). That improves the chance an automated synthesis process will map your brand to a query.
- Produce high-quality explanatory content that shows your offerings in context. AI answers often favor sources that explain rather than only list features. Aim for clear, authoritative pages that describe who the product is for, how it works, and what typical users should expect.
- Claim and optimize your knowledge panels and public profiles. AI synthesis does not cite URLs the way a results page does, but it leans on the same public facts that feed knowledge graphs. Keep public listings accurate and current.
- Test prompts to see how conversational agents talk about your space. Running a sample set of persona-based prompts can reveal how often your brand appears and what language tends to trigger mentions (a sample script follows this list). Use that insight to tune on-page wording and content headings.
- Monitor both SERPs and AI mentions. Treat Google rankings and AI visibility as distinct but related KPIs. Tracking both will help you find where efforts overlap and where you need dedicated work.
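On the structured-data point above, a minimal Organization markup sketch might look like the following. Every value is a placeholder; the useful part is the pattern of stating one canonical name and tying the variants back to it:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Hosting",
  "alternateName": ["Acme", "AcmeHosting"],
  "url": "https://www.example.com",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example",
    "https://www.linkedin.com/company/example"
  ]
}
</script>
```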
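And for prompt testing, a small pilot script is enough to start. This sketch assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name, brand, and prompts are placeholders rather than anything from the study:

```python
# Hypothetical prompt-sampling pilot. Brand, prompts, and model name are
# placeholders; swap in your own vertical's questions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
BRAND = "Acme Hosting"
PROMPTS = [
    "I'm launching my first website. Which web host should I pick, and why?",
    "What's the most reliable web hosting for a small online store?",
]

mentions = 0
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-5",  # assumption: substitute a model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    if BRAND.lower() in answer.lower():
        mentions += 1

print(f"{BRAND} mentioned in {mentions} of {len(PROMPTS)} answers")
```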
What Success Might Look Like Soon
If you treat AI mention rate as its own remit, you will find practical wins.
Better structured content, consistent naming, and clear explanatory resources will increase the chances that a model will pick your brand as an example. Those fixes also help search, so the work has cross-platform benefit.
Expect early wins to come from low-effort changes and pilot content that answers common decision queries.
The study’s data suggest changes to on-page language can move the needle more quickly than link building when your goal is to be named in an AI response.
Limits and Ethical Questions Worth Watching
The findings are valuable, but they are not carved in stone. A model update tomorrow could shift which brands appear and which vanish. I have seen this myself while testing prompts over time. The same question can suddenly produce a different answer after an update, even if nothing changed on the brand’s side. That volatility makes it risky to treat AI visibility as something you can fully control.
There’s also the issue of fairness and transparency.
If a conversational model decides to show just one synthesized answer instead of a list of links, users lose the ability to check the evidence behind it. Companies lose exposure, and audiences lose the trail back to original sources.
From my perspective, that is not a small concern. We all need to know why a brand gets mentioned or ignored, and right now the reasoning is mostly hidden inside the model.
The researchers themselves acknowledge this. They present the data as a snapshot of September 2025, not a prediction of what will happen next year. That honesty matters because it forces us to see the limits: the study shows what the model did during this test, not what it will always do.
Key Takeaways
- Across the 1,000-query test, the match between Google page-one brands and ChatGPT mentions averaged about 61–62%.
- Brand overlap varied widely. Coursera and GoDaddy showed the highest match rates. Hostinger and edX were among the lowest.
- Rank position on Google is a poor predictor of where a brand appears in an AI answer. Correlation numbers hovered near zero.
- Allowing web browsing in ChatGPT moved the match by about one percentage point. The model’s internal knowledge dominated.
- Treat AI mention rate as its own KPI and run small prompt experiments to learn what language triggers your brand.
Dileep Thekkethil
Dileep Thekkethil is the Director of Marketing at Stan Ventures and a Semrush-certified SEO expert. With over a decade of experience in digital marketing, Dileep has played a pivotal role in helping global brands and agencies enhance their online visibility. His work has been featured in leading industry platforms such as MarketingProfs, Search Engine Roundtable, and CMSWire, and his expert insights have been cited in Google Videos. Known for turning complex SEO strategies into actionable solutions, Dileep continues to be a trusted authority in the SEO community, sharing knowledge that drives meaningful results.