Classic SEO was largely a winner-take-all affair. One brand ranked first, another ranked tenth or, worse, was buried on page 2 of Google, and the gap between them defined who got the click and the subsequent sale.
That model is rapidly eroding. What has replaced it is fundamentally different — and most brands have not caught up yet.
Large language models do not rank. They reach consensus. By consensus, I mean the process by which an LLM triangulates agreement across independent sources to determine what is credible. When a user asks ChatGPT “is this brand worth it?” or “what’s the best option for X?”, the model pulls from 30 to 40 different sources and synthesizes an answer. The brand with the most presence across those sources shapes the answer. The brand absent from those sources often gets ignored, regardless of what its own website says (an important, but diminishing, source of truth).
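As a toy illustration of that triangulation (my own sketch of the mental model, not a description of any model's actual internals), consensus can be thought of as a tally of agreement across independent sources:

```python
from collections import Counter

def consensus_answer(source_opinions):
    """Toy model: each independent third-party source 'votes' for the
    brands it mentions. The brand mentioned most often across sources
    shapes the synthesized answer. A brand absent from every source
    can never surface, no matter what its own site says."""
    votes = Counter()
    for source, brands in source_opinions.items():
        for brand in brands:
            votes[brand] += 1
    return votes.most_common()

# Hypothetical example: four of five sources mention Brand A.
opinions = {
    "reddit_thread": ["Brand A", "Brand B"],
    "editorial_review": ["Brand A"],
    "youtube_walkthrough": ["Brand A", "Brand C"],
    "wikipedia": ["Brand A"],
    "forum_post": ["Brand B"],
}
print(consensus_answer(opinions))
# Brand A leads with 4 mentions; a Brand D with a perfect website
# but zero third-party presence does not appear at all.
```

The point of the sketch is the asymmetry: owned content never enters the tally, only what independent sources say.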
This is what has shifted, and what it means for brands trying to build durable visibility in AI-driven search.
Who am I? I am a media operator and recovering SEO, with 12+ years on the front lines of search. I currently run ReddVisible and ScaleVisible (an organic Reddit marketing agency and an AI visibility agency, respectively), focused on third-party context execution.
Why This Shift Matters: A Note on Sources
Most brands discovered traditional SEO through clean, trackable metrics. AI visibility is murkier. The statistics cited in this piece come from a mix of market research firms, published academic work, and industry studies — some more rigorously sourced than others. Where specific figures are contested or evolving, I’ve noted it. The directional picture is consistent even where exact numbers vary.
From Rankings to Consensus: A Fundamental Shift
In traditional search, the top three positions captured roughly 60% of all clicks. SEO was a zero-sum game. You fought for position one, and position eight was essentially invisible.
In an LLM environment, that math collapses. ChatGPT has captured the dominant share of the AI chatbot market, with some estimates placing it above 80% of AI assistant usage (Statista, 2024). Those users rarely click through to websites: research from SparkToro and Datos suggests that the vast majority of AI search sessions end without a website visit — a dynamic Rand Fishkin has described as the natural endpoint of a decade-long zero-click trend that Google itself accelerated. And yet, brands cited in those sessions are influencing purchase decisions at conversion rates that appear to substantially outperform traditional search clicks, according to early ecommerce attribution studies — though this remains an area where measurement is genuinely difficult and the numbers vary widely by study.
The implication is direct. Visibility in AI is not about owning one position. It is about being present across enough third-party sources that the model consistently includes you when constructing its answer.
Brands with simultaneous presence on Wikipedia, Reddit, and review platforms show meaningfully higher rates of being cited by both ChatGPT and Perplexity than brands concentrated on a single channel (BrightEdge, 2025). The signal that matters is cross-platform consistency, not ranking position.
The Three Principles of Building Consensus in LLMs
1. Third-Party Sources Carry More Weight Than Owned Content
LLMs are trained to be skeptical of what brands say about themselves. They weight third-party context heavily. Studies of AI-generated brand responses suggest that the large majority of brand mentions come from third-party pages rather than owned domains — a figure that several practitioners I’ve spoken with put even higher in practice.
This is a hard adjustment for most marketing teams. The instinct is to optimize the website, update the blog, polish the product pages. Those are still worth doing, but they are table stakes. The real leverage is in what others say about you.
Reddit is the clearest example. Multiple citation analyses have identified Reddit as one of the most frequently cited domains in ChatGPT responses. These are real conversations, not brand content. When someone asks “is this company legit?” the answer is being shaped by Reddit threads, not press releases.
“We were doing everything right on our own site,” one DTC brand founder told me. “Then we realized we had almost no footprint anywhere else. From the model’s perspective, we basically didn’t exist.”
The brands that understand this are proactively building presence in third-party spaces — creating the context that AI models draw from when constructing answers about them.
2. Consensus Requires Breadth Across Channels
A single citation is not consensus. LLMs establish credibility by finding alignment across multiple independent sources. A brand mentioned positively in a Reddit thread, a niche editorial review, and a YouTube walkthrough is treated as more authoritative than a brand with an excellent website and nothing else.
This is a meaningful shift in how content strategy needs to work. The old model was to dominate one channel. The new model is to build a presence across several simultaneously, each reinforcing the others.
Research on content freshness suggests that recently updated material earns meaningfully more citations than older content — a signal that aligns with how LLMs are trained to weigh recency. A multi-channel approach where Reddit, YouTube, and editorial content work together creates what one client conversation described as a “moat.” Each channel protects the others from the performance fluctuations any single platform will experience.
The practical framing: if your brand is only visible on your own domain, you are operating as a ghost in the LLM world. Building from ghost to challenger to authority requires presence across channels the model actually trusts.
3. The Quality of Source Matters More Than Volume
Not all third-party citations are created equal. LLMs favor content that is structured to answer questions, dense with specific information, and consistently aligned with what other trusted sources say.
There is a temptation to flood AI models with volume — using AI-generated content at scale to grab citation share quickly. That approach has a shelf life. AI companies are actively developing spam detection, and models are already showing a preference for content that provides genuine context rather than keyword-stuffed articles designed to game the system. Ethan Mollick has written extensively on how LLMs evaluate source quality, and the consistent finding is that human-grade specificity and evidential density outperform volume at scale.
The more durable strategy is to create content that is genuinely useful and built for the way LLMs consume information. That means answering specific user questions, including data points and citations, and updating content regularly. Analysis of high-visibility AI citations consistently shows that pages with statistics, quotes, and source references outperform thinner content — which is one reason this article is trying to practice what it preaches.
What Makes This Different From Traditional SEO
The consensus model is similar enough to SEO to fool many practitioners, and different enough to cause real problems if they treat it the same way.
Traditional SEO rewarded technical optimization, backlinks, and keyword density. LLM visibility rewards breadth, freshness, and third-party corroboration. A brand can have perfect on-site SEO and be nearly invisible in AI responses because the model finds no external context to corroborate what the brand says about itself.
The measurement challenge compounds this. Traditional SEO gave you position tracking, click-through rates, and deterministic attribution. AI visibility does not. There is no analytics dashboard from ChatGPT. Brands have to piece together share of voice across platforms, AI referral traffic in their own analytics, and directional signals from post-purchase surveys. It is less clean, but the signal is there for brands willing to look for it.
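One directional signal that is easy to start collecting today is AI referral traffic in your own analytics. A minimal sketch of referrer classification follows; the hostname list is my own, partial, and will go stale, so check it against your actual referrer logs:

```python
from urllib.parse import urlparse

# Known AI-assistant referrer hostnames. This is an assumption-laden,
# partial list; audit your own logs for the hostnames you actually see.
AI_REFERRERS = {
    "chat.openai.com", "chatgpt.com",
    "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def is_ai_referral(referrer_url):
    """Classify a session's referrer URL as AI-driven or not."""
    host = urlparse(referrer_url).netloc.lower()
    return host in AI_REFERRERS

# Hypothetical session referrers:
sessions = [
    "https://chatgpt.com/",
    "https://www.google.com/search?q=brand",
    "https://perplexity.ai/search/some-query",
]
ai_share = sum(is_ai_referral(s) for s in sessions) / len(sessions)
print(f"AI referral share: {ai_share:.0%}")
```

It is a proxy, not attribution: AI sessions that end without a click (the majority, per the research above) never show up here at all, which is exactly why this number understates AI influence.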
“We’re in the fog of war right now,” one brand strategist in the fintech space told me. “If you have a little more knowledge than your competitor, that’s an advantage. We just don’t have the same luxury we used to.”
That captures the current moment accurately. The brands building measurement infrastructure now will have meaningful advantages as the data ecosystem matures.
Why Third-Party Source Quality Is Everyone’s Problem
There is a broader challenge underneath all of this, worth naming directly — and it connects back to the quality principle above.
AI models are trained on the open web. The open web is maintained by publishers, journalists, independent creators, and communities like Reddit. As AI models answer more questions directly, fewer users click through to those sources. Revenue for publishers drops. Quality content becomes harder to sustain. The Reuters Institute’s Digital News Report has tracked publisher revenue declines in detail, and the trend predates AI but is sharply accelerated by it. The models, over time, risk training on lower quality data — a feedback loop nobody wants but everyone is contributing to.
This is the tragedy of the commons playing out in real time. Nobody is explicitly in charge of maintaining the quality of the sources that AI models depend on.
New models are emerging to address this: pay-per-citation structures, brand licensing partnerships, direct compensation from brands to publishers for AI-readable context. None are fully formed yet. But the brands paying attention to this now are positioning themselves well for what comes next — both in terms of visibility and in terms of the partnerships that will define the space.
This matters for the quality principle, too. Brands that actively support the ecosystem they benefit from, by commissioning original editorial coverage, funding community platforms, and partnering with independent creators, are not just doing good. They are building the kind of third-party presence that AI models will keep citing for years.
My Key Takeaways
- LLMs use consensus, not rankings. Presence across multiple trusted third-party sources shapes AI-generated answers.
- Most brand mentions in AI responses come from third-party pages. Owned content is table stakes, not the strategy.
- Reddit, YouTube, and blogs dominate AI citations. Brands without meaningful presence on community platforms and “tier 2” media are largely invisible to LLMs.
- Multi-channel presence builds a moat. Each channel reinforces the others and reduces dependence on any single platform.
- Quality beats volume. Human-grade, question-answering content with real citations outperforms AI-generated content at scale over any meaningful time horizon.
- Measurement is still early. Directional signals from visibility tracking, referral traffic, and customer surveys are today’s proxy for the clean attribution data that does not yet exist.
- The ecosystem matters. Brands that support the publishers and communities they benefit from are making a strategic investment, not just a moral one.
The Final Drop
The brands that win in AI search will not necessarily be the ones with the best websites. They will be the ones that understood, early, that LLMs operate on consensus rather than rankings — and built their presence accordingly.
The opportunity is real. A challenger brand today can achieve parity with a dominant incumbent in AI-generated results because incumbents are slow to adapt. I see this all the time with our clients. That window will not stay open indefinitely.
Which of these principles is hardest to act on inside your organization? The gap between understanding the consensus model and actually executing against it is where most brands are stuck right now. I’d genuinely like to hear where you’re running into friction — drop it in the comments.
Key sources referenced: Statista AI chatbot market share estimates (2024); SparkToro / Datos zero-click research; BrightEdge generative AI visibility study (2025)