Duane Forrester Decodes

AI is rewriting how brands get found, cited, and trusted. Duane Forrester breaks down what that means for practitioners and marketing leaders who can't afford to get it wrong. duaneforresterdecodes.substack.com

Episodes

  1. Search Engines Were Built to Send Traffic Out. LLMs Were Built to Answer You, Keep You In.

    2 days ago

    Search engines were designed to do several things at once: rank a field of options, route the user to one of them, and keep the human inside the decision so the engine never owned the choice. That last part was not an accident. It was the liability architecture. LLMs were built without any of it. They were built to answer the question directly, which is a different job entirely, and the design choices that follow from it change what visibility looks like, what risk looks like, and what the word ROI can honestly mean when the thing sending you traffic was never built to send traffic in the first place.

    Two Systems, Two Jobs

    A search engine’s job description is long. It crawls the web, indexes it, ranks a pool of candidate results against a query, presents them as a ranked list, and then waits for the human to make a click decision. The SERP itself has been drifting toward retention for years now, with galleries, rich snippets, answer boxes, local maps, video carousels, and AI Overviews all layering in features that keep the user on the page longer and route fewer of them to third-party sites. But the underlying contract was always the same. The engine offers options. The user selects one. The user owns the choice.

    An LLM does not offer options. It produces an answer. The citation, when it appears, is not functioning as a routing instrument. It is closer to a grounding artifact produced by a retrieval pipeline, or in some framings a confidence hedge, or both at the same time. Whichever read you prefer, none of them describe a system designed to send traffic somewhere else. The system was designed to resolve the question in place.

    That distinction sits beneath every metric conversation in this space. When practitioners ask what the LLM referral rate is, what the attributed traffic number looks like, what the click-through from an AI answer is, they are asking questions that assume a routing mechanism that is not actually part of the architecture. Whatever traffic does come through is a byproduct, not a design goal, and confusing the two is the first mistake in almost every conversation about AI visibility ROI.

    The Liability Surface Moved

    The human in the click decision was the SERP’s shield. If the link the user selected led somewhere harmful, misleading, or defamatory, the engine could point to the list of options and the user’s own agency in choosing one. The engine had not published the claim. It had surfaced ten candidate sources, the user had chosen one, and whatever happened next was not the engine’s editorial output. That is not a small feature. That is the reason Section 230 protections were structured the way they were, and why algorithmic ranking has traditionally been treated as something other than direct speech.

    LLMs have no equivalent shield to stand behind. The system is producing the answer directly, in its own voice, without a field of options or a user-selected source. The liability surface that the SERP was designed to offload sits with the model producing the output, and the cases that have already moved through courts are starting to sketch the edges of that surface. Walters v. OpenAI was dismissed on summary judgment in May 2025, and the decision leaned heavily on OpenAI’s disclaimers and a sophisticated reader who reasonably knew the chatbot could hallucinate. That reading protects general-purpose consumer chatbots in a very specific kind of case. It does not protect every product that uses a language model.
    In a separate matter, Air Canada was held liable for its customer service chatbot’s false statements about its own bereavement fare policy, because a customer could reasonably rely on an airline’s branded support agent for accurate information about that airline’s policies. Reasonable reliance is the key legal term, and the more specialized and authoritative the chatbot appears, the harder the disclaimer defense becomes to run.

    The active litigation is still mapping the frontier. OpenAI is currently facing multiple lawsuits tied to allegations that ChatGPT drove users toward suicide or harmful delusions, several involving minors. The New York Times copyright case against OpenAI was allowed to proceed by a federal judge in March 2025, and Anthropic settled with book authors in August 2025 for a reported sum well into the billions. European GDPR complaints continue to move through Noyb. Battle v. Microsoft is still live. None of these outcomes are settled, and some will be dismissed on the same disclaimer grounds that resolved Walters. The point is not that LLM operators will lose every case. The point is that the liability surface now sits with the system producing the output, whether the individual plaintiff wins or loses, and every brand building against an LLM inherits some version of that surface when it uses the system’s output in its own customer-facing work.

    The Denominator Problem

    The most common argument against investing in AI visibility work sounds decisive until you look closely at what it is measuring. The argument runs roughly: ChatGPT and the others send a tiny sliver of referral traffic, somewhere in the low single digits of total inbound, so why reallocate budget toward a channel that barely moves the needle? Conductor’s research pegs the combined AI referral share at about 1% of publisher traffic. That number is real. At first read, it seems to close the ROI question cleanly.

    It closes nothing. The problem is the denominator. While the AI share of traffic holds roughly steady, the absolute volume of search-driven traffic has collapsed across most publisher categories. Similarweb data shows organic traffic to news publishers fell from about 2.3 billion monthly visits in mid-2024 to under 1.7 billion by May 2025, a loss of more than 600 million visits in under a year. Business Insider’s search traffic dropped 55% between April 2022 and April 2025, HuffPost lost roughly half of its search referrals, and The New York Times saw search’s share of its desktop and mobile traffic slide from 44% to 37%. Zero-click searches climbed from 56% to 69% between May 2024 and May 2025 as AI Overviews expanded across the SERP. A Reuters Institute survey of 280 media leaders in late 2025 found they expect another 43% decline on average over the next three years.

    Read against that backdrop, a stable percentage share of a shrinking pie is not stable. It is a loss. The skeptics who point at the 1% number are measuring relative share of a traffic base that is contracting underneath them, and they are treating a falling absolute number as if it were a steady state. The real question is not whether LLMs are sending meaningful traffic yet. The real question is whether the channel that used to send meaningful traffic is still doing what it used to do, and the answer is visibly no. The denominator is moving, and any ROI calculation anchored to the old denominator is a calculation of the previous environment, not the current one.
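    To see why a flat share can hide a real loss, here is a minimal back-of-envelope sketch in Python. The monthly totals and the roughly 1% AI referral share are the approximate figures cited above; holding that share constant across both periods is an illustrative assumption, not a reported number.

```python
# Back-of-envelope sketch of the denominator problem.
# Totals approximate the Similarweb figures cited above; the ~1% AI referral
# share approximates the Conductor figure. Holding the share flat across both
# periods is an assumption for illustration only.

periods = {
    "mid-2024": 2_300_000_000,   # ~2.3B monthly organic visits to news publishers
    "May 2025": 1_700_000_000,   # under 1.7B less than a year later
}
AI_REFERRAL_SHARE = 0.01         # ~1% of publisher traffic

for label, total in periods.items():
    ai_visits = total * AI_REFERRAL_SHARE
    print(f"{label}: total {total:,} | AI referrals {ai_visits:,.0f} "
          f"({AI_REFERRAL_SHARE:.0%} share)")

lost = periods["mid-2024"] - periods["May 2025"]
print(f"Search-driven base shrank by roughly {lost:,} monthly visits.")

# The share column reads 1% in both rows, yet every absolute number falls.
# An ROI model anchored to the old denominator reports "no change" while the
# channel underneath it sheds hundreds of millions of visits.
```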
    What The Billions Say

    If the design-intent, liability, and denominator arguments still leave room for doubt, the last place to look is revealed preference. What are the companies with the most complete internal data on user behavior actually doing with their capital? The answer is unambiguous. The five largest US cloud and AI infrastructure providers have committed between 660 and 690 billion dollars in 2026 capital expenditure, nearly doubling 2025 levels. Alphabet alone is guiding to between 175 and 185 billion for 2026, more than doubling its 2025 spend of 91 billion. Microsoft, Amazon, Meta, and Oracle are all running similarly aggressive curves.

    The number that matters most, and that defuses the usual counter-argument, comes from Bank of America credit strategists who estimate AI capex will reach 94% of operating cash flows in 2025 and 2026, up from 76% in 2024. That is not the shape of a defensive hedge. A hedge is a fraction of the cash flow, deployed to avoid being caught flat-footed if a competitor’s bet pays off. Companies do not put 94% of operating cash flow into a category for two consecutive years unless the leadership genuinely believes the category is the business.

    And those leadership teams have access to data that the rest of us do not. They can see inside their own products, their own user behavior shifts, their own cohort analyses, their own enterprise pipeline conversations. They are legally bound to deploy shareholder capital in a way that reflects what they actually see, and what they are deploying it toward is the architecture that produces direct answers rather than ranked lists of options. To believe search-as-we-knew-it remains the gold standard, you have to believe that dozens of CEOs, boards, and senior leadership teams with decades of internal-only data are reading their own numbers wrong, while an external industry with none of that data is reading the market correctly. That does not pencil.

    The human-behavior side of the equation makes the same point in a different register. Every labor-saving technology that has ever been introduced has reshaped the status quo faster than its skeptics predicted, because cognitive efficiency is not a preference. It is a survival behavior, wired in through long periods when calories were scarce and shortcuts mattered. When a new tool appears that makes some task meaningfully easier, adoption is not a matter of whether. It is a matter of how fast and along what curve. ChatGPT is now at roughly 900 million weekly active users, up from 200 million eighteen months earlier, and the full category is past a billion active users across platforms. The behavior has already shifted. The money has already shifted. The only thing that has not fully shifted is the measurement frame most practitioners are still using to evaluate the channel.

    Which brings the question back to the one that is actually worth asking. What do you do if there is no ROI by the old definition, and you still cannot ignore the…

    16 min
  2. AI Gives You the Vocabulary. It Doesn’t Give You the Expertise.

    APR 26

    Hiring managers are watching something uncomfortable happen in interview rooms right now. Candidates arrive with the right credentials, the right vocabulary, the right tool stack on their résumés, and then someone asks them to reason through a problem out loud, and the room goes quiet in the wrong way. Not in the thoughtful kind of way, but the empty kind that tells you the person across the table has never actually had to think through a hard problem on their own. And research is converging on the same conclusion. Microsoft, the Swiss Business School, and TestGorilla have all documented the same pattern independently: heavy AI reliance correlates directly with declining critical thinking, and the effect is strongest in younger, less experienced practitioners. This isn’t a technology story so much as a cognition story, and the SEO industry is living a version of it in slow motion. What none of those studies name is the specific mechanism: the three-layer architecture of expertise where AI commands the retrieval layer completely, and the judgment layers underneath it are more exposed than they’ve ever been. That architecture is what this piece is about.

    The debate is framed on the wrong axis

    Every conversation about AI and critical thinking eventually lands in the same place: humans versus machines, organic thinking versus generated output, authentic expertise versus artificial fluency. It’s a compelling frame and also the wrong one. The real fracture line isn’t human versus AI. It’s retrieval versus judgment, and those are not the same cognitive act, even though AI has made them feel interchangeable in ways that should concern anyone serious about their craft.

    Retrieval is access. It’s the ability to surface relevant information, synthesize patterns across a body of knowledge, and produce fluent output that maps to the shape of expertise. Large language models are extraordinary at this, genuinely and structurally superior to any individual human at the retrieval layer, and getting better at speed. Fighting that reality is not a strategy.

    Judgment, however, is different. Judgment is knowing which question is actually the right question given this specific context, the ability to recognize when something that looks correct is wrong for this situation in ways that aren’t in any training data, the accumulated weight of having been wrong in consequential situations, learning why, and recalibrating. You cannot retrieve your way to judgment. You build it through deliberate practice under real conditions, over time, with skin in the game that a model structurally cannot have.

    The problem isn’t that AI handles retrieval well. The problem is that retrieval output now sounds so much like judgment output that the gap between them has become nearly invisible, especially to people who haven’t yet built enough judgment to know the difference.

    The Judgment Stack

    Think about expertise as a stack, not a spectrum. Layer 1 is retrieval: synthesis, pattern vocabulary, volume processing, surface recognition. This is AI territory, and handing work in this area over to an AI is not weakness but correct resource allocation. The practitioner who uses an LLM to compress a competitive analysis that would have taken three hours into forty minutes isn’t cutting corners; they’re buying back time to do the work that actually compounds.

    Layer 2 is the interface layer: hypothesis formation, question quality, contextual filtering, knowing which output to trust and which to interrogate.
    This is where the leverage actually lives, and it’s fundamentally human-plus-AI territory. Your prompt quality is a direct proxy for your judgment quality. Two practitioners can feed the same LLM the same general problem and get outputs that are miles apart in usefulness, because one of them knows what a good answer looks like before they ask the question, and that foreknowledge doesn’t come from the model but from Layer 3 working backward.

    Layer 3 is consequence and context: the ability to recognize when a pattern that has always worked is about to break, to assess novel situations that don’t map cleanly to anything in the training data, to hold strategic framing steady under pressure when the data is ambiguous. This is human territory, not because AI couldn’t theoretically develop something like it, but because it requires something a deployed model structurally cannot have: skin in the game, real consequence, the accumulated scar tissue of being wrong when it mattered and having to carry that forward.

    The critical thinking crisis everyone is diagnosing right now is not, at its root, an AI problem but a Layer 2 collapse. People skip directly from Layer 1 retrieval to Layer 3 claims, bypassing the judgment infrastructure entirely. Layer 1 output is fluent, confident, and often correct enough to pass casual scrutiny, which keeps the gap invisible right up until someone asks a follow-up the model didn’t anticipate, and the person has no independent footing to stand on.

    What SEO is actually revealing

    SEO is a useful diagnostic here because the industry has always been an early signal for how the broader marketing world processes technological disruption. We were the first to chase algorithmic shortcuts at scale. We were the first to industrialize content in ways that traded quality for volume. And right now we are watching two distinct practitioner populations diverge in real time, with the gap between them widening faster than most people have noticed.

    The first population is using LLMs as answer machines: feed the problem in, take the output out, ship it. Ask the model what’s wrong with a site’s rankings. Ask it to write the content strategy. Ask it to explain why traffic dropped. This isn’t entirely without value, since Layer 1 retrieval has genuine utility even here, but the practitioners operating purely at this layer are making a trade they may not fully understand yet. They are outsourcing the only part of the job that compounds in value over time. Every hard problem they hand off to a model without first attempting to reason through it themselves is a training repetition they didn’t take, a weight they didn’t lift, and those repetitions are how Layer 3 gets built. You want the muscle? You have to do the work.

    The second population is using LLMs as reasoning partners. They come to the model with a hypothesis already formed, a question already sharpened by their own thinking, and they use the output to pressure-test their reasoning, surface considerations they may have missed, and accelerate the parts of the work that don’t require their hard-won judgment, which frees them to apply that judgment more deliberately where it matters. These practitioners are getting faster and better simultaneously, because the model is amplifying something that already exists. The difference between these two groups has nothing to do with tool access, since they are using the same tools, and everything to do with what each practitioner brings to the model before they open it.
    The leveling lie

    The argument for AI as a leveling tool is not wrong; it’s just incomplete, and that incompleteness is where the damage happens. A junior practitioner today has access to a compression of the field’s knowledge that would have been unimaginable five years ago. Ask an LLM about crawl budget allocation, entity relationships, structured data implementation, or the mechanics of how retrieval-augmented systems weight freshness signals, and you will get a coherent, usually accurate answer in seconds. That is a genuine democratization of Layer 1, and dismissing it as illusory is its own form of gatekeeping.

    But Layer 1 access is not expertise. It is the vocabulary of expertise, and there is a specific kind of danger in having the vocabulary before you have the understanding, because fluency masks the gap. You can discuss the concepts. You can deploy the terminology correctly. You can produce output that looks like the work of someone with deep experience, and you can do all of that while having no independent capacity to evaluate whether what you just produced is actually right for the situation in front of you. This is not a character flaw but a metacognitive failure, the condition of not knowing what you don’t yet know.

    The junior practitioner using an LLM to accelerate their access to field knowledge isn’t being lazy. In many cases they are working hard and genuinely trying to develop. The problem is that Layer 1 fluency generates a confidence signal that isn’t calibrated to actual capability. The model doesn’t tell you when you’ve hit the edge of what it knows. It doesn’t flag the situations where the standard answer breaks down. It doesn’t know what it doesn’t know either, and neither do you yet, and that combination is where well-intentioned work quietly goes wrong.

    The leveling effect is real, but the ceiling on it is lower than most people assume. What gets leveled is access to the knowledge layer. What doesn’t get leveled (what cannot be compressed or transferred through any tool) is the judgment architecture that determines what you do with that knowledge when the situation doesn’t follow the pattern. The practitioners who understand this distinction will use AI to accelerate their development. The ones who don’t will use it to feel further along than they are, right up until the moment a genuinely novel problem requires something they haven’t built yet.

    Where the abdication actually happens

    Let’s be precise about this, because the accusation of abdication usually gets thrown around in ways that are more emotional than useful. Using AI at Layer 1 is not abdication. Letting a model handle competitive analysis synthesis, first-draft content frameworks, technical audit pattern recognition, or structured data generation is correct delegation, since these are retrievable tasks and doing them manually when a better t…

    19 min
