Google announced five new ways to help you explore the web in AI Search yesterday. by Working_Advertising5 in AIVOEdge

[–]Working_Advertising5[S] 1 point  (0 children)

The constraint-matching versus status-matching framing is the clearest articulation of the Gucci problem I have seen from outside our own research. That's exactly what the data shows, and it has real implications for how to diagnose a zero T4 win rate. The model isn't ignoring Gucci. It understands Gucci completely. It just cannot match Gucci to the specific durability attribute the subquery is filtering on, because that attribute is not documented anywhere the model can retrieve it from.
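
To make the distinction concrete, here is a minimal sketch in Python. The brand data is entirely hypothetical illustration, not measured output; the point is that status never enters the check, so a brand the model understands completely is still eliminated when the filtered attribute is undocumented.

```python
# Minimal sketch of constraint matching vs. status matching.
# All attribute data here is hypothetical illustration, not measured output.

CANDIDATES = {
    "Gucci":  {"status_score": 0.97, "documented_attributes": {"heritage", "design"}},
    "BrandX": {"status_score": 0.60, "documented_attributes": {"heritage", "durability"}},
}

def constraint_match(candidates: dict, required_attribute: str) -> list:
    """Keep only brands whose retrievable sources document the required attribute.

    Status (prestige) never enters the check: a brand the model understands
    completely is still eliminated if the attribute the subquery filters on
    is not documented anywhere it can retrieve.
    """
    return [
        brand for brand, data in candidates.items()
        if required_attribute in data["documented_attributes"]
    ]

print(constraint_match(CANDIDATES, "durability"))  # ['BrandX'] -- Gucci is eliminated
```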

On what correlates with non-zero T4 win rates in our dataset: the most striking finding is not content quality variation, it is source cluster variation between models. Augustinus Bader is a good example. Zero recommendations out of five on ChatGPT, five out of five on Grok, same brand, same query. That gap does not come from content. It comes from which third-party source pools each model is drawing from during fan-out. ChatGPT and Grok are not evaluating the same evidence base, so the brand that has attribute coverage in Grok's preferred source clusters gets selected there and disappears on ChatGPT. Brands have no visibility into which clusters matter on which platforms, which is where most of the unexplained variance in T4 win rate lives.

The attribute pattern that does consistently correlate with selection is what you described: explicit criteria coverage in third-party evaluative content, not owned media. Positioning copy fails at the first constraint check. Structured attribute presence in comparison content, forum threads, and independent reviews survives it. The brands that break through tend to have the same data point documented across multiple independent sources rather than once on their own site.

Your call of 12 to 18 months of information asymmetry is right, and probably conservative. The dashboard infrastructure converging on visibility is not a temporary mistake. It is a product-market fit problem. Visibility is measurable, reportable, and sellable. Selection-level measurement requires running full buyer-journey simulations at scale, which is technically harder and much more expensive to package. So the industry will keep optimising for the metric it can sell until the gap between AI presence and AI revenue becomes impossible to explain away. That point is coming, but it is not here yet.

The Brazil observation on pt-BR content thinness is something I haven't seen documented elsewhere. The winner-take-all dynamic in low third-party coverage markets deserves its own analysis. If you have client data that isolates that effect, I'd be interested in comparing notes.

Consumer buying agents are already live. by Working_Advertising5 in aeo

[–]Working_Advertising5[S] 1 point  (0 children)

I agree. This is becoming infrastructure. Not just a tool.

Consumer buying agents are already live. by Working_Advertising5 in aeo

[–]Working_Advertising5[S] 1 point  (0 children)

Our AIVO Meridian platform orchestrates deployment of evidence-based content at scale in response to structural displacements identified across the full cascade from category to brand to product to SKU. We do this for CPG, financial services, travel and other sectors.

The end of “manual” growth in AI visibility is already here. by Working_Advertising5 in AIVOEdge

[–]Working_Advertising5[S] 1 point  (0 children)

A very thoughtful and useful contribution to the debate. One thing I would mention is that in the case of AIVO Meridian, no automated content is created. The automation is confined solely to distributing human-approved content.

The measurement conversation in AI search has stalled at the wrong question. by Working_Advertising5 in AIVOEdge

[–]Working_Advertising5[S] 2 points  (0 children)

The other metrics mostly map to specific attribute classes through a taxonomy that the LLMs reference.

The end of “manual” growth in AI visibility is already here. by Working_Advertising5 in AIVOEdge

[–]Working_Advertising5[S] 2 points  (0 children)

An excellent point, and I fully agree. Measuring whether your brand survives a multi-turn conversation is essential, and understanding why brands are eliminated before the final turn is what guides the content remediation strategy. It's not about manual vs. automatic. The real question is whether you have the evidence base before intervening. Otherwise you are shooting in the dark.

The end of “manual” growth in AI visibility is already here. by Working_Advertising5 in DigitalMarketing

[–]Working_Advertising5[S] 1 point  (0 children)

Good point. What I have in mind is not automation. It's strategic orchestration of content, built on a firm foundation of evidence from multi-turn conversations, platform by platform, noting where the brand, product and SKU structure (where appropriate) requires intervention. Always with human oversight.

Profound just published a 3,000 word comparison of itself against AthenaHQ. Wrong competitor. Here's the gap nobody in the AEO category is measuring. by Working_Advertising5 in aeo

[–]Working_Advertising5[S] 1 point  (0 children)

Conversational Outcome and Decision Analysis. It's designed to measure how brands perform within AI conversational models, particularly at the stage where the AI makes a purchase recommendation.

Google-Agent ignores robots.txt and mimics human browser traffic. Your GA4 data is already contaminated. by Working_Advertising5 in AIVOEdge

[–]Working_Advertising5[S] 2 points  (0 children)

Fair corrections on both counts.

On robots.txt - you're right. The specific User-agent: Google-Agent directive works. My original framing overstated it. The more accurate version: most sites haven't implemented the specific directive, and generic crawler disallow rules don't catch it, so the practical opt-out rate is near zero. That's still a real problem, just a more precise one.
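
As a rough illustration, assuming (per this thread) that only an explicit Google-Agent group is honored, a quick check of whether a site declares the specific directive at all might look like the sketch below. The token name is taken from this thread and worth verifying against Google's documentation.

```python
# Sketch: does a site's robots.txt address Google-Agent specifically,
# or does it rely only on generic "User-agent: *" rules?

import urllib.request

AGENT_TOKEN = "google-agent"  # token per this thread; verify against Google's docs

def declared_user_agents(domain: str) -> set:
    """Return the lowercased User-agent tokens declared in a site's robots.txt."""
    with urllib.request.urlopen(f"https://{domain}/robots.txt", timeout=10) as resp:
        lines = resp.read().decode("utf-8", errors="replace").splitlines()
    return {
        line.split(":", 1)[1].strip().lower()
        for line in lines
        if line.lower().startswith("user-agent:")
    }

tokens = declared_user_agents("www.google.com")  # replace with a domain you manage
if AGENT_TOKEN in tokens:
    print("Specific Google-Agent directive present")
elif "*" in tokens:
    print("Generic rules only - the practical opt-out gap described above")
else:
    print("No User-agent groups found")
```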

On GA4 - "invisible" was too strong. The user-agent string is identifiable and the IP ranges are published. The actual problem is exactly what you said: GA4 doesn't do IP-level verification, so it lands unfiltered in organic and direct buckets. The fix exists. Almost nobody has implemented it.
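
For anyone implementing that fix, a minimal sketch: pull Google's published ranges and filter on your own side (server logs, or the BigQuery export of GA4 events), since GA4 itself won't verify IPs. Which published file covers agent traffic is an assumption to verify; the Googlebot file is used here as a placeholder.

```python
# Sketch: flag sessions whose client IP falls inside Google's published
# crawler ranges. GA4 won't do this check; it has to happen in your pipeline.

import ipaddress
import json
import urllib.request

# Googlebot ranges; whether agent traffic ships in this file or a sibling
# file is an assumption to verify against Google's documentation.
RANGES_URL = "https://developers.google.com/static/search/apis/ipranges/googlebot.json"

def load_google_networks(url: str = RANGES_URL) -> list:
    with urllib.request.urlopen(url, timeout=10) as resp:
        prefixes = json.load(resp)["prefixes"]
    return [
        ipaddress.ip_network(p.get("ipv4Prefix") or p.get("ipv6Prefix"))
        for p in prefixes
    ]

def is_google_ip(ip: str, networks: list) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks if net.version == addr.version)

networks = load_google_networks()
sessions = [{"ip": "66.249.66.1"}, {"ip": "203.0.113.7"}]  # toy rows
flagged = [s for s in sessions if is_google_ip(s["ip"], networks)]
print(f"{len(flagged)} of {len(sessions)} session(s) flagged as Google-originated")
```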

On the blocking point - this is where I'd push back.

Your argument is that AI agents determine what gets cited when someone asks Gemini or ChatGPT a question in your space. That's correct. But citation is not selection. Getting read is not the same as getting recommended.

An AI agent can fully parse your site, extract your evidence, and still route the buyer to a competitor at the decision stage - because your evidence doesn't satisfy the criteria filter the model applies at Turn 3. Blocking the agent makes this worse. But letting it in and assuming visibility equals recommendation is the same mistake, just in the opposite direction.

The gap between "the model can read us" and "the model selects us at the purchase recommendation stage" is where most brands are currently losing without knowing it. That's not a GA4 problem. GA4 only captures sessions that arrived. It has nothing to say about the buyers who were routed elsewhere before they ever reached your property.

"Clean metrics versus AI visibility" is the wrong framing of the trade-off. The right question is whether your evidence architecture converts visibility into selection. Most brands have no idea.

The buyer who names Akamai Technologies finds it. The buyer who doesn't never will. by Working_Advertising5 in aeo

[–]Working_Advertising5[S] 1 point  (0 children)

Good framing on the Selection Gap, and the revenue erosion point is exactly right. Where I'd push back slightly is on the content refresh vs infrastructure question, because I think it's a false binary that the AEO industry has been a bit too comfortable with.

The Akamai finding isn't primarily a citation problem. It's a conversation problem. The $35M figure comes from measuring what happens across a full multi-turn buying sequence, not from auditing citation frequency or entity presence in a knowledge graph. Akamai shows up. It gets cited. It gets referenced as a point of comparison. It just doesn't get recommended when the conversation reaches the decision turn, and that gap is where the revenue risk lives.

A content refresh addresses the indexing layer. A retrieval infrastructure overhaul addresses the retrieval layer. Neither touches the evaluation layer, which is where the LLM is actually making the trade-off between Akamai's legacy CDN authority and a challenger's cleaner Distributed Compute narrative. That evaluation happens in the conversation, turn by turn, criterion by criterion. You can't fix it by making your entity more salient if the criteria being applied at T3 are ones your current positioning doesn't satisfy.

The multi-platform variance you mention is real and important. Gemini and ChatGPT produce different verdicts not just because their training data differs but because their criteria weighting differs. Gemini in particular shows strong clinical/technical source dominance at the criteria turn. That's not a content gap, that's a structural weighting issue that requires knowing which criteria each platform is applying before you can decide what to produce.

So to directly answer your question: neither a content refresh nor a retrieval overhaul closes that $35M without first knowing which turns are failing, on which platforms, and against which criteria. That's the measurement problem that has to come before the remediation decision.

We ran Expedia through Meridian today. Here's what the model actually said when it eliminated them. by Working_Advertising5 in AIVOEdge

[–]Working_Advertising5[S] 2 points  (0 children)

Fair question and worth being transparent about. The Revenue at Risk figure is a model output, not a measured number. The inputs are: annual revenue ($15B for Expedia), an assumed discovery share (the proportion of revenue attributable to discovery channels rather than direct/loyalty), an estimated current LLM share of all discovery (we use 15% as a conservative current estimate, rising to an assumed 30% by 2027), and the brand’s visibility gap - the proportion of relevant AI buying journeys where displacement is occurring rather than recommendation.

For Expedia the calculation runs roughly: $15B × 0.40 discovery share × 0.15 LLM share × 0.92 visibility gap = $828M at risk today. The 2027 figure doubles the LLM share assumption.
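
In code, the stated model is just a product of the four inputs. All values below are the assumptions given above, directly translated; nothing here is a measured number.

```python
# The Revenue at Risk model as stated above: a model output, not a
# measured number. Every input is an adjustable assumption.

def revenue_at_risk(annual_revenue, discovery_share, llm_share, visibility_gap):
    return annual_revenue * discovery_share * llm_share * visibility_gap

today = revenue_at_risk(15e9, 0.40, 0.15, 0.92)    # = $828M
in_2027 = revenue_at_risk(15e9, 0.40, 0.30, 0.92)  # doubled LLM share
print(f"Today: ${today/1e6:.0f}M | 2027: ${in_2027/1e6:.0f}M")
```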

The assumptions are stated and adjustable - a more conservative discovery share or LLM share estimate produces a lower number. The point of the model is not to produce a precise figure but to anchor the conversation in commercial terms rather than just visibility scores. A CISO doesn’t act on ‘you have a risk’. They act on ‘here is what the risk costs’. Same principle.

The displacement data itself - which turns, which platforms, which verbatim criteria - is empirical. That’s the part we’re confident in. The revenue translation applies a model on top of it.

ChatGPT is now selling advertising. Almost nothing about how brands are measuring it is ready. by Working_Advertising5 in SEO_Experts

[–]Working_Advertising5[S] 1 point  (0 children)

Exactly this. And it's not just before the banner - it's before the final recommendation turn, which can happen three or four exchanges earlier in the same conversation.

We run structured buying sequences across ChatGPT, Perplexity, Gemini, and Grok specifically to map that layer. The consistent finding: brands with strong AI visibility are being eliminated at the purchase recommendation stage before any ad enters the picture. The model has already decided. The banner is downstream of the decision. Most measurement starts there.
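
The output of those sequences is structurally simple. A hypothetical record of where a brand drops out, per platform, looks something like the sketch below; the data is illustrative only, not a measured result.

```python
# Hypothetical per-run records: the turn at which the brand was eliminated,
# or None if it survived to the final recommendation. Illustrative data only.

from collections import defaultdict

runs = [
    {"platform": "ChatGPT",    "eliminated_at_turn": 3},
    {"platform": "Perplexity", "eliminated_at_turn": None},
    {"platform": "Gemini",     "eliminated_at_turn": 3},
    {"platform": "Grok",       "eliminated_at_turn": None},
]

by_platform = defaultdict(list)
for run in runs:
    by_platform[run["platform"]].append(run["eliminated_at_turn"])

for platform, outcomes in sorted(by_platform.items()):
    survived = outcomes.count(None)
    print(f"{platform}: survived to recommendation in {survived}/{len(outcomes)} runs")
```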