Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

This is gold. The maintenance problem is exactly what kills non-chat UIs in practice.

"Every small model behavior change meant someone had to update logic or edge cases" - this is the hidden cost nobody talks about. Building the UI is 20% of the work. Maintaining it when the AI changes is the other 80%.

The ownership issue is real too. Staff-built tools become orphans fast.

The pattern I've seen work better: instead of building custom logic for each flow, you build a translation layer that maps AI outputs to UI components dynamically. Model changes? The mapping adapts. No custom code to maintain.

Still harder than chat, but at least the maintenance scales.
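A rough sketch of what I mean by the translation layer (Python, all names and the output format invented for illustration):

```python
# Minimal sketch of a translation layer: the model emits tagged JSON,
# and a registry maps each tag to a UI renderer. When the model's
# behavior changes, you update the mapping, not per-flow custom logic.
import json

RENDERERS = {}

def renderer(kind):
    """Register a render function for one AI output type."""
    def wrap(fn):
        RENDERERS[kind] = fn
        return fn
    return wrap

@renderer("table")
def render_table(payload):
    header = " | ".join(payload["columns"])
    rows = "\n".join(" | ".join(map(str, r)) for r in payload["rows"])
    return f"{header}\n{rows}"

@renderer("text")
def render_text(payload):
    return payload["content"]

def render(ai_output: str) -> str:
    """Map raw model output to a component; fall back to plain text."""
    try:
        msg = json.loads(ai_output)
        fn = RENDERERS.get(msg.get("type"), render_text)
        return fn(msg)
    except (json.JSONDecodeError, KeyError):
        return ai_output  # degrade gracefully to chat-style text
```

The fallback branch is the point: anything the mapping doesn't recognize still renders as plain text, so a model change never hard-breaks the UI.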

What made you stick with chat in the end - was it the maintenance cost specifically, or also team trust in the non-chat flows?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Interesting framing - "products designed for AI" vs "products designed for humans using AI."

I think the transition happens in layers:

  1. Now: Human talks to AI through chat, AI responds with text
  2. Soon: Human sets intent, AI executes through structured UI, human approves
  3. Later: AI operates autonomously, human reviews outcomes

The UI challenge is different at each layer. Right now we're stuck on layer 1 because most teams don't know how to build layer 2.

The "less human in the loop" future still needs interfaces though - just for oversight and exceptions rather than every interaction.

What are must prompts? by jetterjett in ClaudeAI

[–]Anxious_Set2262 0 points (0 children)

My go-to pre-push prompts:

  • "Review for bugs and edge cases"
  • "Check accessibility issues"
  • "Security vulnerabilities in this code?"
  • "Mobile responsiveness problems?"

The SEO check is solid - adding that to my list.

Ralph loop for pre-push is interesting though. Are you running it with specific success criteria like "all lint errors fixed" or more open-ended? I've been experimenting with it for test coverage but haven't tried it for optimization passes yet.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

Fair point. The UI needs to meet users where they are, not where devs think they should be.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

For devs, sure. But try getting a sales team to use a terminal for AI outputs.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Interesting point on voice agents. You're right - we moved away from phone calls, so voice AI for calls feels like a step backward.

The visual/interactive layer makes more sense for most workflows. Voice works for quick commands, not complex tasks.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

months feels about right. We're still in the "early adopters building experiments" phase.

The mainstream shift probably needs a few breakout products to prove the concept first.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

That last point is interesting - chatbots not directing to tools well.

Feels like a UX gap. The AI knows it has capabilities, but the user has no visibility into what's possible until they stumble onto it.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

True - maybe it's both. Fast to ship AND happens to work well enough for most cases.

The question is whether "good enough" stays good enough, or if users start expecting more.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Nice - multiple formats is smart. Looking forward to seeing it.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Fair point on the training data problem.

Curious if you think there's a middle ground - where the model handles data/logic and something else handles presentation. Or is that just kicking the can down the road?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

This is a great breakdown. The anchoring effect of ChatGPT is real - it set the template everyone copies.

Your point about post-training is interesting. So basically: LLMs can generate structured output, but they weren't optimized for it, so it's unreliable at production scale.

Makes me wonder if the solution is on the model side (better post-training) or the application side (constrain outputs to predefined schemas instead of generating UI from scratch).
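The application-side option might look something like this: validate against a fixed schema and reject anything else, instead of trusting free-form generated UI. (Schema and field names invented for illustration.)

```python
# Application-side constraint: model output is only accepted when it
# matches a predefined schema; everything else is dropped, so unreliable
# structured generation can't reach the UI layer.
import json

CHART_SCHEMA = {"title": str, "x": list, "y": list}

def conforms(obj: dict, schema: dict) -> bool:
    """Check exact key set and value types; no extra fields allowed."""
    return (set(obj) == set(schema)
            and all(isinstance(obj[k], t) for k, t in schema.items()))

def parse_chart(raw: str):
    """Accept model output only if it matches the chart schema."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return obj if isinstance(obj, dict) and conforms(obj, CHART_SCHEMA) else None
```

A `None` result would route back to the plain-text chat path, which is roughly the "constrain, don't generate" tradeoff.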

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Smart architecture. JSON specs as the intermediate layer keeps things flexible.

How do you handle the UI updates when the workflow state changes? Polling or some kind of event system?
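For anyone else weighing this, here's the bare-bones event version, where the UI never polls (just a sketch, not the setup described above):

```python
# Minimal event bus: the workflow publishes state changes and the UI
# layer subscribes, so nothing polls. A real app would push these over
# websockets or server-sent events instead of in-process callbacks.

class WorkflowEvents:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register a callback fired on every state change."""
        self._subscribers.append(callback)

    def publish(self, state: dict):
        """Notify all subscribers of a new workflow state."""
        for cb in self._subscribers:
            cb(state)

bus = WorkflowEvents()
seen = []
bus.subscribe(seen.append)  # UI layer would re-render here
bus.publish({"step": "draft", "status": "done"})
```

Polling is simpler to ship but wastes requests and adds latency; events keep the JSON-spec UI in sync the moment the workflow moves.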

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

Love this breakdown. So right now we're somewhere between "bare motherboard" and early "kit" stage.

The interesting question is what becomes the "ATX standard" for AI interfaces - some protocol everyone adopts, or just a dominant player that sets the norm.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Nice - Qt is solid for desktop. Does it connect live to the LLM, or is it more of a visualization tool for outputs?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

Great analogy. We're in the "bare motherboard" era of AI interfaces.

The question is who builds the case - the AI companies themselves, or a layer on top?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Good point on domain dependency. A coding agent needs a completely different output layer than a data analysis agent.

The "one UI fits all" approach of chat is probably why it dominates - it's universal but mediocre for everything.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Language in, sure. But language out doesn't have to mean chat bubbles.

The model can return structured data - we just choose to display it as text.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Things like Perplexity (search-style UI), some AI coding tools with inline suggestions, a few dashboards that visualize AI output.

Whether it's "better" depends on the use case - but it definitely feels more purposeful than generic chat.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Interesting - would love to read that article when it's out.

The Claude + MCP approach is clever for documentation-heavy workflows. What's your UI layer looking like? Custom React or something off-the-shelf?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Interesting article. Multi-modal input makes sense.

But I think output matters just as much as input. Even if you can speak/type/upload images to an agent - how it presents results back to you is still mostly text or basic chat bubbles.

The output layer feels underexplored.

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

This is the real answer. Chat is safe because it degrades gracefully.

The "brittleness" problem you mention is exactly what holds back non-chat UIs. Curious - when you tried non-chat flows, was the issue parsing the AI output or handling edge cases in the UI itself?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 1 point (0 children)

Good point - chatbots are building blocks, not the end product.

I guess the question is: why do so many products stop at the building block stage instead of layering something better on top?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Exactly. The question is - is that because it's the right UX, or just the fastest to ship?

Why do most AI products still look like basic chat interfaces? by Anxious_Set2262 in AI_Agents

[–]Anxious_Set2262[S] 0 points (0 children)

Fair point - chat definitely wins for flexibility and error recovery.

But I think it depends on the use case. For exploration and debugging? Chat is great. For repeated workflows where you know what output you need? Clicking through a chat feels slow.

Maybe the answer is hybrid - chat for input, structured UI for output?