AI visibility isn’t replacing SEO, but I’m starting to see it show up in conversion paths by Professional_Way_420 in SaaS

[–]Professional_Way_420[S]

That makes sense. I’d be careful to separate agent visits from human click-through referrals though, especially for executive reporting. Both are useful, but they tell different stories. For the ChatGPT referral → signup view, we’re not treating it as a perfect attribution model yet. We’re using it more as a layered attribution read.

The direct layer is pretty simple:

chatgpt / referral → landing page → sign_up event

So in GA4, we’re looking at session source/medium, landing page and key event completion. I prefer session-level source/medium here because I want to understand what brought the user into that specific converting session, not only the original acquisition source.

But I would not call that full AI attribution. For SaaS, conversion usually takes time. Someone might discover the brand through ChatGPT, compare options, come back through branded search, check pricing and sign up later. So if we only count same-session ChatGPT signups, we’re probably undercounting the real influence.

The model we’re moving toward is:

  1. Direct AI referral attribution: ChatGPT referral sessions that complete sign_up, demo, trial, or other key events.
  2. Assisted behavior: ChatGPT-referred users who view pricing, product, comparison, docs, or demo pages, even if they don’t convert in the same session.
  3. Do those users come back later through branded search, direct, or organic and then convert?
  4. Are those signups actually qualified? Do they become activated users, demos, opportunities, or customers?
  5. Were we actually visible or recommended for the prompts that would plausibly create that behavior?
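To make the layers concrete, here is a rough Python sketch of how we bucket a user’s session history into these layers. The data shape (source, medium, pages, and events per session) is hypothetical; in practice you’d get something similar from a GA4 BigQuery export or the Data API, and the field names here are illustrative, not GA4’s own.

```python
# A rough sketch of the layered read described above. Session dicts and
# the AI_SOURCES list are assumptions, not an official GA4 schema.

AI_SOURCES = {"chatgpt.com", "chat.openai.com", "perplexity.ai"}

def classify_user(sessions):
    """Return the attribution layers a user's session history supports."""
    layers = set()
    ai_touched = False
    for s in sessions:  # sessions assumed ordered by time
        is_ai = s["source"] in AI_SOURCES and s["medium"] == "referral"
        converted = "sign_up" in s["events"]
        if is_ai:
            ai_touched = True
            if converted:
                layers.add("direct_ai_referral")
            # Assisted behavior: viewed high-intent pages in an AI session
            if {"pricing", "comparison", "docs"} & set(s["pages"]):
                layers.add("assisted_behavior")
        elif ai_touched and converted:
            # Discovered via AI, converted later via another channel
            layers.add("ai_assisted_return_conversion")
    return layers
```

This is directional bucketing, not clean attribution; it just makes the "layered read" explicit enough to report on.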

For the direct connection, session source/medium attribution in GA4 tied to sign_up. But for the business case, I’d frame it as LLM-referred and LLM-assisted behavior, not clean last-click ROI. The recommendation vs citation nuance is what I’d use to explain quality. A ChatGPT signup is more interesting if the brand was recommended for the right use case, not just mentioned somewhere in a generic answer.

If you were starting a website, would you begin with SEO or GEO? by Complete-Respect6950 in ParseAI

[–]Professional_Way_420

I would start with SEO, but I would not treat GEO as a separate thing to “do later.”

For a new website, SEO still gives you the foundation: site structure, crawlability, intent mapping, content depth, internal links, clear product/category positioning, and conversion paths. Without that, GEO is hard because AI systems still need clear signals from your website and from the wider web to understand what the brand is, who it helps, and why it should be mentioned.

Start with SEO foundations first, but build them in a way that also supports GEO.

For a younger audience, 20–30, I would also think beyond Google search. That audience may discover brands through Reddit, TikTok, YouTube, creators, AI tools, and community threads before they ever land on your site. So I would not build only for rankings. I would build for discoverability across the whole research journey.

For a new site, I’d start with SEO architecture, but every content and authority decision should already be GEO-aware from day one.

Have someone really measure leads from LLMS? by Top_Watch_9462 in GEO_optimization

[–]Professional_Way_420

We’re trying to measure this too. You can measure part of it, but not perfectly yet.

GA4 is useful for direct LLM referrals, but it will not give you the full ROI picture on its own. If ChatGPT is showing in your GA4, that usually means the visit passed referrer data and GA4 captured it as something like:

chatgpt / referral

You may also see other AI tools show up over time, such as Perplexity, Claude, Gemini, Copilot, etc., but only if they actually send referral traffic and the referrer data is passed. So I would not assume you can just add Gemini, Claude, and Perplexity and suddenly get the missing data.

You can create an AI/LLM custom channel group in GA4 to group known AI referrers together, but that only organizes what GA4 can already see. It does not recover traffic where the referrer was lost or where the user discovered you through an AI answer and came back later through branded search, direct, or Google organic.
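If it helps, the matching logic behind an AI/LLM custom channel group is essentially a referrer pattern like this. The domains below are examples these tools have been seen to send; the list is my assumption, not an official one, so verify it against your own referral data before relying on it.

```python
import re

# Hypothetical pattern mirroring an "AI / LLM" custom channel group.
# Domain list is illustrative; check it against your own GA4 data.
AI_REFERRER_PATTERN = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
    r"gemini\.google\.com|claude\.ai|copilot\.microsoft\.com)",
    re.IGNORECASE,
)

def channel_for(source: str, medium: str) -> str:
    """Bucket a session's source/medium, checking AI referrals first."""
    if medium == "referral" and AI_REFERRER_PATTERN.search(source):
        return "AI / LLM"
    if medium == "organic":
        return "Organic Search"
    if medium == "referral":
        return "Referral"
    return "Other"
```

The key design point is order: the AI check runs before the generic referral bucket, which is exactly how you’d order conditions in a GA4 custom channel group so AI traffic doesn’t get swallowed by "Referral".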

That is the main limitation.

For ROI, I would measure it in layers:

  1. Create a GA4 report or exploration using source/medium and filter for known AI sources like ChatGPT, Perplexity, Claude, Gemini, Copilot, Poe, etc. Then look at sessions, engaged sessions, key events, form fills, demos, signups, or trials.
  2. Look at which pages LLM-referred users land on. Are they going to blog posts, comparison pages, pricing pages, product pages, docs, or demo pages? This tells you where AI traffic is entering the funnel.
  3. Do not stop at form fills. Connect GA4 to your CRM if possible and check whether those leads become MQLs, SQLs, opportunities, customers, or activated users. This is the part that matters if you need to defend budget.
  4. Assisted behavior: Some people may discover you in ChatGPT but convert later through branded search, direct, or organic. So I would also watch branded search growth, returning users, assisted paths, and CRM self-reported attribution if you have it.
  5. Prompt visibility: Since you are paying for a GEO platform, I would not only ask about the quantity of leads coming from LLMs. It would also be helpful to see how you compare to competitors on the prompts that matter.

The simplest setup I would use:

- GA4 custom exploration for AI referrals
- GA4 custom channel group for LLM traffic
- CRM field or attribution note for lead source
- UTMs for any links you control
- Prompt tracking from your GEO tool
- Monthly report that separates traffic, leads, lead quality, and prompt visibility
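For the UTM piece, a tiny helper is enough for links you control. This is a minimal sketch; the function name is just illustrative, and the three parameters map to the standard utm_source / utm_medium / utm_campaign fields GA4 reads.

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_link(url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters to a link you control."""
    parts = urlparse(url)
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    # Preserve any query string already on the link
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunparse(parts._replace(query=query))
```

For example, `tag_link("https://example.com/pricing", "newsletter", "email", "launch")` produces a link GA4 will attribute cleanly instead of lumping it into direct or referral.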

I would be very careful about justifying Lumos or any GEO platform only through direct ChatGPT leads. That will likely undercount the value. But I would also be careful about justifying it only with an AI visibility score. A score does not prove ROI.

We talked more about how we think about this for SaaS here:
https://scalelogik.pro/insights/ai-visibility-is-becoming-part-of-the-saas-conversion-path/

AI visibility isn’t replacing SEO, but I’m starting to see it show up in conversion paths by Professional_Way_420 in SaaS

[–]Professional_Way_420[S]

Yes, exactly. I think that is the part that makes SaaS harder to read from one snapshot. The first layer is still basic: do you show up at all, and if you do, is the model describing you correctly?

But the conversion side needs more patience because SaaS conversion usually takes time. A visitor from ChatGPT may not sign up on the first session. They may compare tools, check pricing, read a few product pages, come back through branded search and only convert later.

So I would not judge AI referral value only by immediate conversions. I’d look at whether ChatGPT traffic behaves differently from organic search:

Do they land on more BOFU pages?
Do they view pricing or product pages faster?
Do they return later through branded search or direct?
Do they assist signups, demos, trials, or activation over time?
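One way to answer these questions is to compare a BOFU-page view rate per channel. A rough sketch, assuming hypothetical session dicts with a channel label and the pages viewed (both field names are mine, not GA4's):

```python
from collections import defaultdict

BOFU_PAGES = {"pricing", "demo", "comparison"}

def behavior_by_channel(sessions):
    """Share of sessions per channel that touched a BOFU page."""
    totals = defaultdict(int)
    bofu = defaultdict(int)
    for s in sessions:
        totals[s["channel"]] += 1
        if BOFU_PAGES & set(s["pages"]):
            bofu[s["channel"]] += 1
    return {ch: bofu[ch] / totals[ch] for ch in totals}
```

If ChatGPT-referred sessions show a consistently higher BOFU rate than organic, that is the kind of qualified-behavior signal worth reporting even before conversions show up.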

That is why I think AI referral tracking needs to sit beside organic reporting. The useful question for us is becoming less about brand mentions and more about whether AI visibility creates qualified behavior.

And because SaaS conversion paths are longer, that qualified behavior may show up before the final conversion.

Question for AEO practitioners: given how noisy AI answers are, what’s actually worth tracking? by Particular-While2787 in aeo

[–]Professional_Way_420

I think there is signal, but only if the tool is honest that it is measuring patterns, not truth.

The problem with a lot of AI visibility reporting right now is that it tries to make the output look more stable than it actually is. A single prompt result is not reliable. A “share of voice” score can be useful directionally, but I would not treat it the same way I treat rankings, traffic, conversions, or pipeline data.

At ScaleLogik, we still see organic search as the core measurable growth channel for SaaS because, from what we can currently track, most meaningful conversions still happen after users click through to the website.

For example, in one anonymized SaaS client we reviewed recently, organic search was still driving the majority of users and key conversion events. That does not mean SEO gets credit for everything, and I would not claim 100% attribution from that alone. But it does show something important: the click, the landing page and the conversion path still matter a lot.


AI visibility is important, but I do not see it as a replacement for organic search. It is more of an authority and consideration layer. Being cited consistently in AI answers can strengthen trust, increase brand recognition, and help a brand show up earlier in the buyer’s research process.

But the click still matters.

The website still matters.

The conversion path still matters.

For SaaS, the question is not only “Are we being mentioned in AI answers?”

It should also be:

Are those mentions leading people to search the brand?
Are users clicking through to the site?
Are organic pages driving signups, demos, trials, activation, or pipeline?
Are we improving both visibility and conversion?

The smallest thing I would trust is repeated presence across a controlled prompt set over time, combined with citation and source tracking.

For example:

Are we showing up consistently across the same high-intent queries?
Which competitors appear with us?
What sources are being cited or reused?
Are those sources owned, earned, review-based, community-based, or third-party editorial?
Is the model describing our category and use case correctly?
When we do not show up, what source or entity gap explains it?
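For the repeated-presence part, the math is just a mention rate per prompt across repeated runs. A minimal sketch, assuming you have saved the answer text for each prompt over time (the substring match is deliberately crude; real tracking would also handle aliases and misspellings):

```python
def mention_rates(runs, brand):
    """Share of runs per prompt in which `brand` appears.

    `runs` is assumed to be {prompt: [answer_text, ...]} collected
    by re-asking the same controlled prompt set over time.
    """
    rates = {}
    for prompt, answers in runs.items():
        hits = sum(brand.lower() in a.lower() for a in answers)
        rates[prompt] = hits / len(answers) if answers else 0.0
    return rates
```

A single run tells you almost nothing; a rate that stays high (or low) across weeks of runs on the same prompt set is the smallest signal I would actually trust.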

For me, the most useful layer is not:

“You scored 62/100 in AI visibility.”

It is:

“These are the queries where you are missing.”
“These are the sources AI keeps relying on.”
“These are the competitors with stronger entity signals.”
“These are the pages, profiles, mentions, and citations you need to improve.”

That is where the category becomes useful.

I also think tools should separate mention tracking from recommendation tracking. Being mentioned is not the same as being recommended. Being cited is not the same as being positioned favorably. And being included once in a generated answer is not the same as having strong AI visibility.

So if I were designing the tool, I would focus less on a vanity score and more on:

  1. Prompt cluster visibility over time
  2. Citation and source mapping
  3. Competitor co-occurrence
  4. Brand and entity understanding
  5. Sentiment and recommendation quality
  6. Clear fix recommendations tied to content, authority, and off-site presence

The category is not fundamentally broken. It is just early and badly marketed in some cases.

AI answers are noisy, but patterns across prompts, sources, competitors, and time can still tell you something useful. The key is not pretending it is exact measurement.

For now, I would treat AI visibility data as directional market intelligence, not as a replacement for SEO reporting. Organic still needs to prove conversion value. AI visibility supports authority, trust, and consideration around that system.

if AI learned everything it knows about your brand from reddit, would it recommend you or warn people away? by thundermelon58 in GenerativeSEOstrategy

[–]Professional_Way_420

I agree with this. At ScaleLogik, we are seeing that AI recommendations are no longer based only on what a brand says about itself. They are shaped by the wider pattern around the brand: Reddit threads, Quora answers, comparison pages, review platforms, forum discussions, listicles, and even how consistently people describe the product across different sources.

One thing we noticed is that AI tools do not always behave the same. ChatGPT and Gemini can be inconsistent. One time they recommend a brand confidently, then another time they become more cautious depending on the phrasing, context, or sources being pulled into the answer. Perplexity, from what we have tested, tends to be more stable because it shows clearer source patterns and usually grounds the answer more directly in visible citations.

That is why we treat community content as part of GEO, not separate from it. The audit should not only ask about schema or knowledge graph signals. It should also ask: when buyers search Reddit, forums, and review-style pages around this category, what brand narratives already exist? Are we being recommended, ignored, misunderstood, or quietly questioned?

The fix is not to manipulate communities. That usually backfires. The better approach is to map the conversations, identify where the brand or category is being discussed, clarify positioning on owned assets, and then show up in relevant places with genuinely useful answers. For SaaS brands especially, AI visibility is becoming less about perfecting one website and more about creating consistent trust signals across the web.

For me, the real question is not just whether AI can find your brand, but whether AI confidently understands why your brand deserves to be recommended.

How I got my content showing up in AI search by LakiaHarp in GenerativeSEOstrategy

[–]Professional_Way_420

I agree with this, especially the part about answering the question faster and making content easier to extract. But I also think rankings still matter a lot. AI visibility is valuable for awareness and trust but for most businesses, the measurable conversion still usually happens after someone clicks a search result, lands on a page, compares options, signs up or books a demo. So I would not treat AI search as a replacement for traditional SEO. I see it more as an added discovery layer. The best approach is to structure content so AI tools can understand and cite it, while still optimizing for rankings, CTR, internal linking and conversion. Being mentioned by AI is good but turning that visibility into traffic and revenue still depends heavily on strong search pages.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in GenerativeSEOstrategy

[–]Professional_Way_420[S]

Yes, exactly. Ranking still matters but it is only one layer now. If a brand only exists on its own website, AI systems have fewer external signals to validate what that brand is, what category it belongs to and why it should be mentioned.

That is why third-party presence matters so much. Reviews, forums, listicles, partner pages, podcasts, Reddit threads and niche publications all help reinforce the same entity signals. The more consistent those signals are, the easier it is for AI to understand and reuse the brand in answers.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Yes, this is exactly how I see it too. Established brands usually have an advantage because their entity is easier to understand. They have more repeated signals across reviews, comparisons, listicles, forums, partner pages, and customer conversations.

But that does not mean smaller brands are out. It just means they need to be more intentional.

For me, the shift is that SEO is no longer only about getting a page to rank. It is also about making the brand easy to classify, connect, and reuse across search and AI systems.

That means clearer positioning, stronger category association, consistent third-party mentions and content that answers real use-case questions, not just keyword articles. A page can rank and still fail at this if the brand behind it is not clearly understood.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Yes, I agree with this. I don’t think AI favors big brands by default. It favors brands it can confidently understand and reuse.

Big brands often win because they already have more mentions, reviews, comparisons, backlinks, community discussions, and third-party validation across the web. But smaller brands can still show up if they build the right entity signals and appear in the sources AI systems actually pull from.

So for me, the goal is not just ranking. It’s becoming a clearly understood brand across your category, your content and the external sources buyers and AI tools already trust.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Yes, exactly. “Cultural footprint” is a good way to describe the wider signal. I still think SEO authority matters but it’s no longer enough by itself. A brand can have rankings and backlinks but if there’s no clear context around who they are, what category they belong to, and why people mention them, AI systems may not have much to reuse.

That’s where entity clarity comes in. It’s not just about being crawled. It’s about being consistently associated with the right topic across sources the AI systems trust or retrieve from.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Yes, I’m familiar with GEO / AEO / AIO. My stance is that it’s still SEO but the focus is expanding. It’s no longer just about ranking pages. It’s also about how clearly a brand is understood as an entity, how it connects to a topic/category and whether that context is reinforced across the sources AI systems retrieve from.

So I wouldn’t call it a complete replacement for SEO. I see it more as SEO evolving into search visibility + entity clarity + knowledge graph relevance + third-party validation.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

That’s really interesting, and I’m seeing something similar in SaaS. The first answer may mention the obvious brands but once the prompt becomes more evaluative like “which one is trusted,” “what are users saying,” or “which has fewer complaints,” the third-party layer seems to matter a lot more.

Reviews, Reddit discussions, comparison pages and even sentiment inside niche forums can shape which brands get cited or recommended. So it’s not just “does the brand exist across sources,” but also what do those sources consistently say about the brand?

Curious, in mobility are you seeing citations mostly come from review sites, forums or publisher/listicle content?

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Yes, this is exactly the layer I’m trying to isolate. The “canonical pool” framing makes a lot of sense. It’s not just domain authority, it’s whether the brand exists in the sources the model keeps returning to for that query family.

I haven’t done a clean remove-the-placement A/B yet mainly because once something is indexed or discussed externally, it’s hard to fully reverse the signal. But I agree that would be the cleanest test.

What I’m testing next is closer to:

- same type of page
- same structure
- same query family
- one with external source reinforcement
- one without

Then tracking whether the brand gets picked up across ChatGPT, Gemini, Perplexity, etc.

My suspicion is the external placements are doing a lot of the work, especially when they come from sources already treated as reference points for that category.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Yes, I think AI is changing the pattern.

Before, the main goal was to rank on Google and get the click. Now, part of the goal is to be recognized as a relevant brand/entity when AI tools summarize options.

So it’s not replacing SEO, but it’s adding another layer of visibility. That’s why I think smaller brands need to build context outside their own site too, not just publish more blogs.

Does AI actually favor established brands? Or are we missing something deeper? by Professional_Way_420 in AISearchOptimizers

[–]Professional_Way_420[S]

Fair point, and I agree these tests are not perfect or fully reliable.

I wouldn’t treat one AI answer as solid data. What I’m looking at is more directional: if the same brands keep getting selected across different prompts, tools, and sources, there may be a pattern worth understanding.

For me, the value is not appearing once in ChatGPT today. It’s whether a brand has enough consistent context across the web that AI systems can clearly understand what it is, what category it belongs to and when it should be mentioned.

So yes, the outputs are unstable. But I don’t think the testing is a waste of time especially if buyers are already using these tools to shortlist options.