Somewhere in the last eighteen months, Google quietly stopped being a search engine for a large chunk of queries and started being an answer engine. The shift is called AI Overviews, and it is doing to your traffic what YouTube did to cable: not killing it, exactly, but rearranging the room until the furniture you relied on is on the other side. Here's the plain version of what is happening, why it matters, and how to be one of the sources instead of one of the casualties.
What an AI Overview actually is
A Google AI Overview is a synthesized answer generated by Gemini that appears at the top of the results page for eligible queries. It composes a short written answer from multiple sources and cites three to five of them inline. AI Overviews now appear on approximately 45% of Google searches, with higher density on informational and how-to queries and lower density on commercial or sensitive queries. For a majority of top-of-funnel searches, the first thing a user reads is no longer a headline from one of your pages — it's a paragraph Google wrote using your page, or, if you're not in shape, somebody else's.
What triggers one
AI Overviews lean heavily informational. The most reliable triggers are questions — "what is X", "how do I X", "why does X", "X vs Y", "best X for Y". Definitional queries trigger them almost by default. How-to queries with any complexity do too. Comparison queries are prime territory, because the model loves a table and the table loves to be extracted. Transactional queries — "buy X", "X pricing", "X near me" — are less consistent. Sensitive verticals (medical, financial, legal) get suppressed or more conservatively sourced to avoid risk. The practical read: almost every piece of top-of-funnel content your brand produces is now competing in the answer layer before it competes anywhere else.
Who wins the citation
Citations cluster around three properties, and they are boringly learnable. Extractability first: the page contains self-contained passages, 40 to 60 words, that answer the query as complete statements without needing the surrounding context. Authority second: domain-level trust signals, cited sources inside the content, expert quotes, datestamps, real author bios. Consistency third: the entity — brand, product, person — is described the same way across your site, your schema, your LinkedIn, your Crunchbase, your reviews and any third-party write-ups. Models don't trust blur. They trust repetition.
Content types punch above their weight too. Comparison articles account for roughly a third of AI citations in analyses I trust. Definitive guides and original data each clear double digits. Thin product pages, generic blog posts without structure and anything gated behind a form tend to be invisible — the model can't read them, can't extract them, can't cite them.
A concrete before / after
Here's the shape of a rewrite we did recently, anonymized. The page targets the query "what is generative engine optimization". Same information in both. Different probability of citation.
Before:

In today's fast-paced digital landscape, businesses are increasingly turning to new approaches to stay ahead. At our agency, we've been exploring how next-generation AI search is reshaping the way brands connect with customers, and we believe every forward-thinking company should be paying attention to this exciting new frontier.

After:

Generative Engine Optimization (GEO) is the practice of structuring a brand's content, authority signals and third-party presence so AI systems like ChatGPT, Perplexity and Google AI Overviews cite that brand inside their synthesized answers. Where SEO optimizes for position in a list of links, GEO optimizes for citation inside an answer.
The "before" is fluent, pleasant, completely useless to an extractor. It defines nothing, cites nothing, entity-tags nothing. The "after" is 57 words, leads with the definition, names the category, names competing systems and contrasts itself with SEO. If a model needs a sentence about GEO, the second paragraph is an easy pick. The first is a candidate for paraphrase, not citation.
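The 40-60 word window is trivial to audit at scale. A minimal sketch (the window is this article's heuristic, not a hard rule, and exact counts vary by how you tokenize):

```python
AFTER = (
    "Generative Engine Optimization (GEO) is the practice of structuring "
    "a brand's content, authority signals and third-party presence so AI "
    "systems like ChatGPT, Perplexity and Google AI Overviews cite that "
    "brand inside their synthesized answers. Where SEO optimizes for "
    "position in a list of links, GEO optimizes for citation inside an answer."
)

def in_extractable_window(paragraph: str, lo: int = 40, hi: int = 60) -> bool:
    """Check whether a lead paragraph falls inside the 40-60 word band."""
    return lo <= len(paragraph.split()) <= hi

print(in_extractable_window(AFTER))  # True
```

Point it at the first paragraph of each of your top pages and you have a pass/fail list before you read a single one.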
Your real metric is no longer rank. It's citation rate.
The structural checklist
Apply this to your top ten pages. It is the plainest version of the work:
- Answer first. The first paragraph of any page competing in the answer layer is a clean definition or answer, 40–60 words, no preamble.
- Headings mirror queries. Use H2s and H3s phrased as questions or claims users actually type. "What is X", "How to X", "X vs Y", "Is X worth it".
- Statistics with sources. Numbers move AI visibility by roughly +37% when paired with a source, according to Princeton's GEO study. Invent nothing; link everything.
- Tables beat prose. For any comparison — your product vs alternatives, plan vs plan, option A vs option B — structure it as a table. It is trivially extractable.
- FAQ block at the bottom. Three to six real questions with 40–60 word answers. Add FAQPage schema. This is the single highest-leverage add for most brand pages.
- Schema, not sprinkles. Article or BlogPosting on posts, Organization sitewide with sameAs pointing to every canonical profile, Person schema for the author, Product where relevant.
- Author attribution. Real name, real credentials, real photo. Anonymous content gets cited less — the model treats lack of attribution as lack of authority.
- Freshness signal. Visible "last updated" date. Rewrite high-value pages on a cadence. Stale content loses to dated content.
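The FAQ-block and schema items above combine naturally. A minimal sketch that emits a FAQPage JSON-LD snippet from question/answer pairs (the example copy is a placeholder, not real page content):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build a FAQPage JSON-LD <script> block from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

html = faq_jsonld([
    ("What is GEO?", "Generative Engine Optimization is ..."),  # placeholder
])
print(html)
```

Drop the output into the page head or near the FAQ block itself; the answers in the schema should match the visible 40-60 word answers word for word.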
The third-party side nobody does well
If your entire GEO plan lives on your own domain, you're playing with one hand tied behind your back. Brands are roughly 6.5 times more likely to be cited via third-party sources than from their own site. Translation: a mention on a respected industry publication, a YouTube explainer that ranks, a Reddit thread with genuine depth, a Wikipedia entry if you qualify for one, a detailed review on G2 or Capterra — any of these will pull citations your blog cannot. The move isn't to spam. It is to show up, authentically and frequently, where your category is discussed, so the web's conversation about you becomes the material the model works with.
FAQ
Do AI Overviews kill SEO?

They reshape it rather than kill it. Independent measurements suggest AI Overviews can reduce informational click-through by as much as 58%, but the remaining clicks are higher-intent — the users who still click are closer to action. Meanwhile, being cited inside the Overview gives brand-level authority even when no click follows. The answer is not to retreat from SEO but to add a GEO layer so you show up inside the answer instead of below it.
How do I get cited by ChatGPT?

ChatGPT's citation behavior leans on third-party sources first, then extractable on-domain content. Prioritize authentic presence on Wikipedia (if you qualify), Reddit, industry publications and YouTube for your category queries, in parallel with the on-page work above. Make sure GPTBot and ChatGPT-User are allowed in your robots.txt — a surprising number of brands are blocked from ChatGPT because a WordPress plugin added a Disallow line nobody reviewed.
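Checking for that stray Disallow line takes five lines with Python's standard-library robots.txt parser (the example rules here are hypothetical, the kind a plugin might generate without review):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a plugin blocked GPTBot sitewide, nobody noticed.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for bot in ("GPTBot", "ChatGPT-User", "PerplexityBot"):
    print(bot, "allowed:", rp.can_fetch(bot, "https://example.com/blog/geo-guide"))
```

In production you'd call `rp.set_url("https://yourdomain.com/robots.txt")` and `rp.read()` instead of parsing a string, then run the same `can_fetch` checks against your key URLs.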
How do I measure AI visibility?

Run a 20-query audit once a month. Pick your twenty most important queries. Run each through Google AI Overviews, ChatGPT, Perplexity and Gemini. Record the cited sources in a spreadsheet. Track month-over-month. For scale, tools like Otterly AI, Peec AI, ZipTie and LLMrefs automate share-of-AI-voice across platforms, but the manual audit is where most of the insight actually comes from.
Should we block AI crawlers?

Probably not. Blocking GPTBot, ChatGPT-User, PerplexityBot, ClaudeBot, anthropic-ai, Google-Extended or Bingbot removes you from the corresponding answer engine's universe of possible citations. For most boutique brands and service businesses, the reach lost is bigger than the IP protected. A compromise is to block pure training crawlers (for example CCBot) while allowing the search-facing bots — but even that is usually a net loss in 2026.
What should we report to leadership instead of rankings?

A share-of-AI-voice dashboard. For your 20 priority queries, the metric is: on what percentage of them is your brand cited, across ChatGPT, Perplexity, Google AI Overviews and Gemini. Track absolute citations, relative share vs competitors, and sentiment of the surrounding sentence. Keyword rankings still belong in the deck — they feed the model — but citation rate is the KPI that tells you whether buyers are actually meeting your brand inside the answer.
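The dashboard's core numbers are a few lines of arithmetic over the audit spreadsheet. A sketch with invented sample records (brand and platform names are placeholders):

```python
from collections import defaultdict

# One row per (query, platform) audit run; "cited" lists the brands cited.
# Sample data is invented purely for illustration.
audit = [
    {"query": "what is geo", "platform": "ChatGPT",    "cited": ["YourBrand", "RivalCo"]},
    {"query": "what is geo", "platform": "Perplexity", "cited": ["RivalCo"]},
    {"query": "geo vs seo",  "platform": "ChatGPT",    "cited": ["YourBrand"]},
    {"query": "geo vs seo",  "platform": "Gemini",     "cited": []},
]

def citation_rate(rows, brand):
    """Share of audited (query, platform) checks where the brand is cited."""
    return sum(brand in row["cited"] for row in rows) / len(rows)

def per_platform(rows, brand):
    """Citation rate broken out by answer engine."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["platform"]].append(brand in row["cited"])
    return {p: sum(v) / len(v) for p, v in buckets.items()}

print(f"overall: {citation_rate(audit, 'YourBrand'):.0%}")  # 2 of 4 checks
print(per_platform(audit, "YourBrand"))
```

Run the same computation for each competitor and you have relative share; the month-over-month delta on these numbers is the slide that matters.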
If you optimized for "Google" in 2016 and it paid off, optimizing for "the answer" in 2026 is the same bet — just made earlier than most of your competitors are willing to make it.
Want your 20-query audit?
We run the 20-query AI visibility audit for every new client in the first week. If you'd like to see yours — no commitment, no pitch deck — send us the five queries your buyers are searching and we'll bring the other fifteen.
Start my audit →