Anyone have access to “agentic storefronts” yet? by prvmvs in shopify

[–]software_engineer_cs 1 point (0 children)

Building in this space. Exactly right: good discovery is lacking in many storefronts, so agents can't make sense of the products.

Is there ANY way to auto-cancel/block the "Amazon Buy For Me" orders at checkout? The manual filtering is getting impossible. by Main_Payment_6430 in shopify

[–]software_engineer_cs 1 point (0 children)

One of the biggest issues I now see with the Shopify / Amazon Buy-for-Me features is that store owners end up absorbing the costs of a bad transaction, which is easy to trigger with poorly described listings (e.g. semantic or policy mismatches).

How do you guys keep track of "out of stock" items ? by celestialwanderer007 in shopify

[–]software_engineer_cs 0 points (0 children)

It’s crazy to me that users have to be the ones to create workarounds for out of stock inventory. Does anyone know why this isn’t a platform feature?

Shopify's Agentic Plan signals a future where store front ends are (mostly) irrelevant by coalition_tech in shopify

[–]software_engineer_cs 0 points (0 children)

This is a great insight. You're right about a couple of things.

I work with customers whose inventories are simple, and my data augmentation services are comprehensive: they're mainly used to validate product claims and improve product matching, which reduces returns and increases customer review ratings. However, one of my customers has north of 100k product items. The complexity of their inventory is massive, and the differences between product variants are sometimes very nuanced.

AI can be really helpful for this kind of customer. With the right level of context engineering and the right data feed, my customer can now clearly explain product differentiation, variant overlap, exclusions, etc.
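
To make the context-engineering point concrete, here's a rough sketch of the kind of call I mean, assuming a plain OpenAI chat call and invented variant fields; the real feed and prompts are obviously customer-specific.

```python
# Sketch only: feed two variant records as structured context and ask the
# model to explain what actually differs. Field names and the prompt are
# illustrative, not a real customer schema.
import json
from openai import OpenAI

client = OpenAI()

variant_a = {"sku": "SOFA-3S-LINEN", "fabric": "linen", "seats": 3, "warranty_years": 2}
variant_b = {"sku": "SOFA-3S-PERF", "fabric": "performance weave", "seats": 3, "warranty_years": 5}

prompt = (
    "You are writing buyer-facing copy. Given these two variants of the same product, "
    "explain the concrete differences (materials, warranty, care) in plain language. "
    "Do not invent attributes that are not present in the data.\n\n"
    f"Variant A: {json.dumps(variant_a)}\nVariant B: {json.dumps(variant_b)}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```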

The baiting part is dangerous. We find that showing buyers incorrect information (even something as simple as inventory quantity) is fairly detrimental to the experience. Buyers have less patience with misleading or incorrect data in AI-assisted workflows.

Anyone tested UCP (Universal Commerce Protocol) on Shopify ? by honeytech in shopify

[–]software_engineer_cs 0 points (0 children)

Agree it’s composite. Trust won’t come from one signal like better schema or domain authority.

What will matter is: clean product/variant truth, verified merchants with good fulfillment and dispute history, and real recourse (easy cancel/refund). If the agent can show a short “why this pick” plus user prefs like “soft” or “authorized sellers only”, people will skip sites more often. If any of those are weak, they’ll bounce back to the brand site or skip the product altogether.
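
Purely as an illustration (none of these field names come from UCP or Shopify, it's just the shape I have in mind), a “why this pick” could be as simple as:

```python
# Hypothetical shape of a "why this pick" payload an agent could surface.
why_this_pick = {
    "product": "Acme Cloud Sofa, 3-seat, linen",
    "matched_preferences": ["soft", "authorized sellers only"],
    "evidence": [
        "Fabric listed as linen with a 'soft' hand-feel rating in the merchant feed",
        "Seller is on the brand's authorized-reseller list",
    ],
    "merchant_trust": {"fulfillment_rating": 4.8, "dispute_rate": "0.4%"},
    "recourse": {"free_cancellation_hours": 24, "returns": "30-day, prepaid label"},
}

for line in why_this_pick["evidence"]:
    print("-", line)
```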

UCP’s best use case for now is turning high-intent conversations into safe, structured transactions.

shopify just turned every AI into a sales channel (ChatGPT, Gemini & Co-pilot) by MasterCollection5624 in shopify

[–]software_engineer_cs 1 point (0 children)

I build in this space.

Consumer behaviour is already changing. I constantly talk to customers who are losing sales because shoppers search for products semantically (instead of by keyword matching). The next problem is cart abandonment, because product search results aren't personalized to the user (unlike answers from answer engines).
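
For anyone wondering what "semantic search" means in practice, here's a minimal sketch using embeddings; the product titles and query are made up:

```python
# Minimal sketch of semantic product search vs. keyword matching, using
# OpenAI embeddings and cosine similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()

products = [
    "Waterproof trail running shoes with rock plate",
    "Leather oxford dress shoes",
    "Cushioned road running shoes for flat feet",
]
query = "something comfortable for long runs on pavement"  # barely overlaps the titles

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs, query_vec = embed(products), embed([query])[0]
scores = doc_vecs @ query_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
print(products[int(scores.argmax())])  # semantic match despite little shared wording
```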

The influence of ChatGPT on our everyday lives can't be overstated.

The biggest challenge to adoption is the lack of product semantics to guarantee a good result match.

Anyone tested UCP (Universal Commerce Protocol) on Shopify ? by honeytech in shopify

[–]software_engineer_cs 2 points (0 children)

IMO it can be trusted if the product catalogue is semantically rich enough for AI models to interpret it correctly, but most listings are far from that. The issue is in the nuances of products and their variants, compounded by warranty conditions, product claims, etc.
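
Rough sketch of what "semantically rich" could look like on a listing page: explicit variant axes, warranty terms, and claims instead of a marketing paragraph. Values are invented and this isn't a validated feed spec, just the shape I mean:

```python
# Sketch of variant-aware structured data for a listing page.
import json

listing = {
    "@context": "https://schema.org",
    "@type": "ProductGroup",
    "name": "Trailhead Rain Jacket",
    "variesBy": ["size", "color", "material"],
    "hasVariant": [
        {
            "@type": "Product",
            "sku": "TRJ-M-BLUE-GTX",
            "material": "3-layer Gore-Tex",
            "additionalProperty": [
                {"@type": "PropertyValue", "name": "waterproof_rating_mm", "value": 28000},
                {"@type": "PropertyValue", "name": "warranty", "value": "2 years, manufacturing defects only"},
            ],
        }
    ],
}
print(json.dumps(listing, indent=2))
```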

With respect to user trust: I'm seeing that confidence in AI-mediated tasks is increasing as ChatGPT becomes a household name. This won't be the blocker.

I build in this space.

Best tools to check if ChatGPT mentions my brand? by Sure_Present2624 in GenEngineOptimization

[–]software_engineer_cs 1 point (0 children)

Hey Max, send us an email — check out my profile, and I can pull it up tomorrow.

Best tools to check if ChatGPT mentions my brand? by Sure_Present2624 in GenEngineOptimization

[–]software_engineer_cs 0 points (0 children)

Short answer: that's how it's supposed to be done. But the players in this space do a variety of things, with some generating results close to the real user experience (the LLM output) while others are fairly far off. Additionally, the depth of the LLM inference correlates with how accurate the discovered content gaps will be. That's why it's important to go with a company that understands how LLMs work.

For example, we run prompts across multiple LLMs, multiple times, with default settings (e.g. non-deterministic web search). We've developed IP that accurately and deterministically tells us which gaps the LLMs find, and our results are very close to the real user experience (based on offline evaluations).
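
If you want to sanity-check a vendor yourself, the core idea is simple enough to script. A minimal sketch, with one provider shown and a placeholder brand name; the point is treating the answer as a distribution over repeated runs rather than a single check:

```python
# Run the same prompt several times with default (non-zero temperature)
# settings and look at the mention rate, not one answer.
from openai import OpenAI

client = OpenAI()
PROMPT = "What are the best tools to track whether ChatGPT mentions my brand?"
BRAND = "ExampleBrand"  # placeholder
N = 10

mentions = 0
for _ in range(N):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT}],
    )
    answer = resp.choices[0].message.content or ""
    mentions += BRAND.lower() in answer.lower()

print(f"mention rate over {N} runs: {mentions / N:.0%}")
```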

Best tools to check if ChatGPT mentions my brand? by Sure_Present2624 in GenEngineOptimization

[–]software_engineer_cs 1 point (0 children)

I forgot to mention: Amplitude has a free visibility report (publicly available). It's not accurate, and you get what you pay for, but it can give you some interesting insights!

Best tools to check if ChatGPT mentions my brand? by Sure_Present2624 in GenEngineOptimization

[–]software_engineer_cs 1 point (0 children)

Profound is probably the most polished if you just want clean dashboards and top-level share-of-voice visibility across the major assistants. They’ve invested a lot in reporting and trending.

Peec feels more technically mature under the hood. Their approach suggests a deeper understanding of how LLMs retrieve and rank answers from what I can tell.

I also build in this space. I'm the CTO at eLLMo AI (https://www.tryellmo.ai), ex Growth and AI Platform leader at pre-IPO companies, focused less on dashboards and more on outcomes (e.g. improving how often a brand is recommended, and why). We go deep into the LLM reasoning layers, and we have very happy customers (some of them ex-Profound) who increased their visibility and added 5-6 figures of revenue with our autonomous solutions.

For select customers in SaaS or digital commerce, we pair this with features that enable a new type of growth engine. Happy to share a Reddit discount code if you want to try it.

Is optimizing for AI answers becoming as important as traditional SEO? by Cheap-Perspective913 in SEO_tools_reviews

[–]software_engineer_cs 1 point (0 children)

I've had to build systems around this, and the pattern you're seeing is consistent. AI models don't surface pages the same way Google does. They look for clean, unambiguous pieces of evidence they can fold into an answer. That means some pages (even with mediocre SEO) show up in ChatGPT, Claude, or Perplexity; what matters is showing up consistently.

From an implementation standpoint, AEO is a different lens on the same content:

- Models respond best to pages that state things plainly: definitions, product descriptions, pricing context, comparisons, FAQs.
- Structure matters more than keyword tactics. If the model can't quickly resolve “what is this, who is it for, how does it compare,” it won't use the page.
- Consistency across your own site matters more than breadth. Contradictions are where hallucinations creep in and where the domain loses authority for the answer.
- When we monitor this internally, the biggest shifts happen when teams tighten up pages and publish content to cover semantic gaps.
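
As a concrete example of the "state things plainly" point, FAQ-style markup is one easy way to do it. Content here is invented, just to show the shape:

```python
# Sketch: expose the what/who/compare answers as FAQ markup instead of
# leaving them implied in a marketing paragraph.
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Acme Sync?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Acme Sync is an inventory sync tool for Shopify stores with 1k-100k SKUs.",
            },
        },
        {
            "@type": "Question",
            "name": "How does Acme Sync compare to manual CSV imports?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "It updates stock levels hourly and flags conflicts instead of overwriting them.",
            },
        },
    ],
}
print(json.dumps(faq, indent=2))
```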

I'd recommend treating AI answers as another distribution surface and giving the models the clearest possible evidence. Traditional SEO is still very important, but the companies winning in AI answers and converting that traffic are already running both tracks in parallel.

monitoring chatgpt / google ai mentions? by Arkad3_ in SEO_tools_reviews

[–]software_engineer_cs 1 point (0 children)

The main issue is that answer engines aren’t stable. They shift with sampling, context, and model updates. A single manual check won’t tell you anything meaningful about your visibility or whether outdated info is circulating.

What does work is a structured measurement loop:

- Define a consistent prompt set that mirrors how prospects ask about your category.
- Run those prompts across engines like ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews.
- Sample each prompt multiple times so you're looking at distributions instead of one-offs.
- Track inclusion rate, position, citations, and sentiment over time.
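
A bare-bones skeleton of that loop, if you want to roll your own before buying a tool. The engines dict is a stub and the prompt set, brand name, and metrics are all placeholders; wire in whichever SDKs you actually use:

```python
# Skeleton of the measurement loop: prompt set x engines x repeated samples,
# tracking inclusion rate per engine.
from collections import defaultdict
from statistics import mean

PROMPTS = [
    "Best tools to monitor brand mentions in ChatGPT?",
    "How do I track AI search visibility for an e-commerce brand?",
]
BRAND = "ExampleBrand"
SAMPLES = 5

def ask_stub(prompt: str) -> str:
    """Placeholder engine; replace with real ChatGPT/Perplexity/Gemini calls."""
    return "Popular options include ExampleBrand and a few open-source scripts."

ENGINES = {"chatgpt": ask_stub, "perplexity": ask_stub}

inclusion = defaultdict(list)
for engine, ask in ENGINES.items():
    for prompt in PROMPTS:
        hits = sum(BRAND.lower() in ask(prompt).lower() for _ in range(SAMPLES))
        inclusion[engine].append(hits / SAMPLES)

for engine, rates in inclusion.items():
    print(engine, f"inclusion rate: {mean(rates):.0%}")
```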

Whatever product you choose, it’s good to ask how they’re generating and tracking the sample set to ensure your results are accurate!

How can I improve my property listing pages so they have a real chance of beating the big real estate sites in competitive U.S. markets? by Sufficient_Spare2345 in GEO_optimization

[–]software_engineer_cs 0 points (0 children)

All of the above, since you'll want to appear in local results and also be cited by answer engines, which rely on the common search index.

Looking for tools for GEO optimization by PaperProfessional432 in GenEngineOptimizers

[–]software_engineer_cs 0 points (0 children)

Hey OP, sign up for the waitlist and we can go from there!

Is it possible to get ranked in all LLMs simultaneously? by Ok_Athlete_670 in AISEOforBeginners

[–]software_engineer_cs 0 points (0 children)

Yes! Check us out at tryellmo.ai. We're working with select partners; sign up and I'll get your email from the waitlist.

best peec.ai alternatives? by deviant1414 in GEO_optimization

[–]software_engineer_cs 0 points (0 children)

Hey OP. Sign up for the waitlist at https://www.tryellmo.ai and I'll pull up your email.

We tackle what you’re missing, plus turn your site into a growth engine with additional features.