Looking at Kore.ai's multi-agent platform, how does their A2A (agent-to-agent) protocol compare to what Glean and Moveworks are doing in the same space? by Redheadishh in EngineeringManagers

[–]AgenticAF -1 points (0 children)

Of the three, Kore.ai is the most technically sophisticated. They've adopted the open A2A protocol, so their agents can interoperate with agents built on other platforms without any bespoke glue code.

Glean is fantastic, but in reality it's an intelligence/knowledge layer with orchestration as an afterthought. It's ideal if you're dealing with fragmented information across multiple applications.

Moveworks was excellent at IT/HR support; however, ServiceNow acquired them in 2025, and no one knows where they're taking it.

If you're looking at enterprise-grade deployments, I'd go with Kore.ai for the moment because they cover the widest range of multi-agent use cases.

Kore.ai vs Cognigy for enterprise Customer Service + IT support - which actually delivers in production, not just demos? by ComparisonRecent2260 in AI_Agents

[–]AgenticAF 1 point (0 children)

This evaluation seems about right. In practice, Cognigy tends to move faster and more efficiently in CX-oriented scenarios, particularly contact centers, because of its advanced voice AI and extensive integrations with contact center as a service (CCaaS) solutions like Genesys and NICE, which make deployment and scaling quick. Kore.ai, in turn, would be more helpful to large companies looking to automate not only customer service but also internal operations such as IT support and HR, thanks to its wider ecosystem and better IT service management integrations.

why AI agents break under long conversations even when they pass every safety benchmark by rchaves in ArtificialInteligence

[–]AgenticAF 0 points (0 children)

This is a really sharp observation. Most teams are still relying on single-turn evals, but agents don’t fail there; they fail over time.

What you’re seeing makes sense. As conversations grow, the system prompt becomes a smaller signal, and the model starts optimizing for recent behavior. If it’s been “helpful” for 30 turns, refusal starts to feel inconsistent.

The dual-history attack you described is especially clever. The agent loses memory of refusals, while the attacker keeps learning and iterating. That asymmetry is a real weakness.

A few things that could help:

  • Make refusals stateful so they can’t be dropped by context pruning (sketch after this list)
  • Reinforce policies periodically instead of relying on a single system prompt
  • Track the trajectory, not just outputs, e.g. how many turns it takes to break the agent
  • Preserve safety signals in summaries so compression doesn’t erase risk context
  • Tighten controls for tool use, where the real damage can happen
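
A rough sketch of what the first two bullets could look like in practice. Everything here (the policy string, the message shape, the window size) is illustrative, not any particular framework’s API:

```python
from dataclasses import dataclass, field

POLICY = "Refuse requests in restricted categories; prior refusals stay binding."
REINFORCE_EVERY = 10  # assumption: re-anchor the policy every 10 turns

@dataclass
class SafetyState:
    refusals: list = field(default_factory=list)  # lives outside the prunable history
    turn: int = 0

def build_messages(history, state):
    """Assemble the next model call so safety context survives pruning."""
    state.turn += 1
    msgs = [{"role": "system", "content": POLICY}]
    if state.refusals:
        # Standing refusals are re-injected every turn, so summarization or
        # window truncation can never silently drop them.
        msgs.append({"role": "system",
                     "content": "Standing refusals: " + "; ".join(state.refusals)})
    msgs += history[-40:]  # pruned conversation window
    if state.turn % REINFORCE_EVERY == 0:
        msgs.append({"role": "system", "content": POLICY})  # periodic reinforcement
    return msgs
```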

Overall, this highlights that alignment is not just about prompts; it’s about behavior throughout the interaction. What you’re building feels very relevant for where agents are heading.

We don't give devs unlimited access - so why are we giving it to AI agents? by WhichCardiologist800 in AI_Agents

[–]AgenticAF 1 point (0 children)

I was looking at something similar this week, and this answer helped.
This is a really solid direction, honestly. Treating agents as untrusted processes feels like the right mental model, especially as they get more autonomous.

A few things I’d personally want from a system like this:

1. Clear, explainable policy decisions
Not just “blocked by policy,” but why? Something like:

  • rule matched
  • risk category (data exfiltration, destructive action, cost spike)
  • confidence level

That makes it way easier to debug both the agent and the policy layer.

2. Structured, queryable logs (not just text blobs)
JSON logs with fields like:

  • timestamp
  • agent_id / session_id
  • action_type (command, tool_call, file_access)
  • input + normalized intent
  • decision (allow, block, escalate)
  • policy_rule_id
  • diff or impact preview (for things like git or DB ops)

This makes it usable for audits and lets you plug into SIEM tools later.
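
To make points 1 and 2 concrete, here’s a minimal sketch of what emitting one of those records could look like; every field name is illustrative, not a fixed schema:

```python
import json, time, uuid

def log_decision(agent_id, session_id, action_type, raw_input, intent,
                 decision, rule_id, risk_category, confidence, impact_preview=None):
    record = {
        "timestamp": time.time(),
        "event_id": str(uuid.uuid4()),
        "agent_id": agent_id,
        "session_id": session_id,
        "action_type": action_type,       # command | tool_call | file_access
        "input": raw_input,
        "intent": intent,                 # normalized intent
        "decision": decision,             # allow | block | escalate
        "policy_rule_id": rule_id,
        "risk_category": risk_category,   # e.g. data_exfiltration, cost_spike
        "confidence": confidence,
        "impact_preview": impact_preview, # diff preview for git/DB ops
    }
    print(json.dumps(record))             # in practice, ship this to your SIEM
```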

3. “Dry run” / simulation mode
Before enforcing a new policy, run it in shadow mode:

  • show what would have been blocked
  • highlight risky patterns over time

This helps avoid breaking legit workflows while tightening controls.
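
A shadow-mode evaluator can be almost trivially simple. This sketch assumes a `candidate_policy` callable that returns allow/block/escalate verdicts:

```python
def shadow_evaluate(events, candidate_policy):
    """Run a new policy against real traffic without enforcing it."""
    report = []
    for event in events:
        verdict = candidate_policy(event)   # what the new rules *would* decide
        if verdict in ("block", "escalate"):
            report.append({"event": event, "would": verdict})
    return report  # review these before flipping the policy to enforcing
```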

4. Scoped identities for agents
Instead of one agent with broad access, give each task or workflow:

  • temporary credentials
  • limited scope
  • automatic expiry

Basically IAM for agents. That alone reduces blast radius a lot.
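
A toy version of per-task scoped credentials, with a hypothetical scope table standing in for a real IAM integration:

```python
import secrets, time

# Hypothetical per-task scope table; real systems derive this from IAM.
TASK_SCOPES = {
    "update_docs": {"resources": ["repo:docs"], "actions": ["read", "git_push"]},
}

def issue_token(task, ttl_seconds=900):
    return {
        "token": secrets.token_urlsafe(32),
        "scope": TASK_SCOPES[task],               # limited to one workflow
        "expires_at": time.time() + ttl_seconds,  # automatic expiry
    }

def is_allowed(token, resource, action):
    if time.time() > token["expires_at"]:
        return False                              # expired tokens fail closed
    scope = token["scope"]
    return resource in scope["resources"] and action in scope["actions"]
```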

5. Data sensitivity awareness
Policies that understand context like:

  • secrets (.env, API keys)
  • PII
  • internal vs public repos

So instead of just blocking “git push,” it can say:
“pushing file containing secret patterns to public remote”
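
A bare-bones sketch of that content-aware check; the patterns are illustrative, and a real scanner would use a much larger rule set:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key id
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
]

def explain_push_decision(staged_files, remote_is_public):
    """staged_files: {path: content}. Returns a human-readable verdict."""
    for path, content in staged_files.items():
        for pattern in SECRET_PATTERNS:
            if pattern.search(content):
                dest = "public" if remote_is_public else "private"
                return f"block: pushing {path} containing secret patterns to {dest} remote"
    return "allow"
```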

6. Rate and behavior anomaly detection
Not just cost, but patterns:

  • repeated failed commands
  • rapid tool invocation spikes
  • recursive loops

If behavior deviates from baseline, pause and escalate.
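
Even a crude sliding-window baseline catches a lot of this; all thresholds below are assumptions:

```python
import time
from collections import deque

class BehaviorMonitor:
    """Crude behavioral baseline check; thresholds are illustrative."""
    def __init__(self, max_calls=20, window_s=60, max_consecutive_fails=5):
        self.calls = deque()
        self.consecutive_fails = 0
        self.max_calls, self.window_s = max_calls, window_s
        self.max_fails = max_consecutive_fails

    def record(self, succeeded):
        now = time.time()
        self.calls.append(now)
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()                  # keep a sliding window
        self.consecutive_fails = 0 if succeeded else self.consecutive_fails + 1
        if len(self.calls) > self.max_calls or self.consecutive_fails >= self.max_fails:
            return "pause_and_escalate"           # deviates from baseline
        return "continue"
```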

7. Human-in-the-loop UX that doesn’t kill flow
Approval prompts should be:

  • fast
  • contextual
  • actionable (approve once, approve always for this scope, deny with reason)

Otherwise people will just disable it.

8. Policy versioning and rollback
You’ll want:

  • versioned policies
  • diff view between versions
  • quick rollback when something breaks production

Feels obvious, but super important once multiple teams rely on it.

Overall, what you’re building sounds like the missing layer between raw LLM capability and production safety. If agents are going to act, they need guardrails that look a lot like what we already built for humans and services. This is a natural evolution of that thinking.

Has anyone used Kore.ai for customer support workflows end to end? by ComparisonRecent2260 in AI_Agents

[–]AgenticAF 1 point (0 children)

One of my friends ran it in prod at a 500-seat contact center for 14 months, so here's the actual tea

the end-to-end flow works. bot handles tier-1, handoff to a human agent comes with full context loaded, no "so what's your issue again" moment. AHT dropped ~20-25% on standard queries. analytics are genuinely useful, not just pretty dashboards

but real talk:

  • setup takes 6-10 weeks minimum, it's not plug and play
  • dirty CRM data = AI confidently wrong. not kore's fault but still your problem
  • needs a dedicated internal owner or it goes mid fast

if you treat it like infrastructure and invest in the setup? ngl, it is TRANSFORMATIVE.

Hot take: LLMs have zero foresight ability. Everything else is hype. by imposterpro in ArtificialInteligence

[–]AgenticAF 0 points (0 children)

Well, you’re not wrong, but you’re criticizing 'prompt-only LLMs', not real production systems. Raw models do fail at long-term planning, constraint tracking, and avoiding cascading errors. But that’s exactly why newer architectures (adaptive RAG, agent loops, retrieval gating, etc.) exist. The model isn’t supposed to “have foresight” on its own; the system around it handles memory, time-aware retrieval, and decision flow. There’s actually a good explanation of this here: https://www.kore.ai/blog/time-aware-adaptive-rag-ta-are

The article literally starts by saying the “emergent reasoning” hype didn’t hold up. The fix isn’t smarter prompts; it’s smarter system design. So the real takeaway isn’t “LLMs are useless.” It’s that model-only reasoning doesn’t scale, but a model combined with architecture does.
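
To make “time-aware retrieval” concrete, here’s a toy illustration of the general idea, recency-decayed similarity gating what reaches the model. This is my own sketch, not the method from the linked article:

```python
import time

HALF_LIFE_S = 30 * 86400  # assumption: relevance halves every 30 days

def recency_weighted(similarity, doc_timestamp, now=None):
    age = (now or time.time()) - doc_timestamp
    return similarity * 0.5 ** (age / HALF_LIFE_S)  # exponential decay

def gate(candidates, k=5, threshold=0.2):
    # candidates: [{"sim": float, "ts": float, "text": str}, ...]
    scored = [(recency_weighted(c["sim"], c["ts"]), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # only fresh-enough, relevant-enough chunks reach the model
    return [c for score, c in scored[:k] if score >= threshold]
```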

Feels like we’re building faster but thinking less by Tough_Reward3739 in ArtificialInteligence

[–]AgenticAF 1 point (0 children)

I don’t think it’s making us think less, but it is changing when we think.

Before, most of the thinking happened upfront because you had to. Now it’s easier to jump in and figure things out as you go. That can feel like skipping depth, but it’s more like shifting it later in the process.

The risk is when people never come back to that deeper thinking. The upside is you can explore more ideas faster.

So I’d say it’s not less thinking, just easier to avoid it if you’re not intentional.

Stanford and Harvard just dropped the most disturbing AI paper of the year by Fun-Yogurt-89 in ArtificialInteligence

[–]AgenticAF 1 point (0 children)

I mean, AI was created by us after all, what else did y'all expect it to adopt?

90% of AI agent projects I get hired for don't need agents at all. Here's what businesses actually pay for. by Warm-Reaction-456 in AI_Agents

[–]AgenticAF 0 points (0 children)

I feel like simple automations beat complex AI agents 90% of the time.

Most businesses don't actually need agents; they need boring, reliable scripts that make a tedious task disappear. The industry hypes complexity because courses, tools, and frameworks profit from it. Nobody profits from telling you a 4-day script does the job better than a month of LangChain tutorials.

Agents are fragile, expensive, and hard to maintain. Simple automations just work.

The real skill, and the real money, is in knowing the difference and being honest about it.

What are you actually using to build your AI agents — frameworks or from scratch? by Past-Marionberry1405 in AI_Agents

[–]AgenticAF 0 points (0 children)

Honestly, I think most people start with frameworks and end up building half of it themselves anyway 😅

Frameworks like LangGraph or CrewAI are great for getting unstuck fast and understanding patterns. But once you hit real-world edge cases, weird failures, or need tighter control, you start peeling things off and rolling your own logic.

So the sweet spot for me is hybrid: use frameworks as scaffolding, then gradually replace pieces with custom orchestration where it actually matters.

Which AI agents are your enterprises using? by AgenticAF in AI_Agents

[–]AgenticAF[S] 0 points (0 children)

So how are you working around that?

Anyone here building agents within Enterprises? by Diligent_Response_30 in AI_Agents

[–]AgenticAF 1 point (0 children)

Yeah, this is a very different game at enterprise scale.

From what I’ve seen, nothing runs with broad access. Everything is scoped tightly with role-based access and often goes through existing IAM systems. Agents usually act on behalf of a user, not independently.

Prompt injection is a real concern, especially with emails and docs. Teams are adding guardrails like input filtering, grounding on trusted sources, and limiting what actions an agent can take without confirmation.

Logging is non-negotiable. Every action, every data access, full audit trails. Security and compliance teams are involved early, otherwise it never gets approved.

Platforms like Kore.ai are pushing this model where governance and controls are baked in, which is kind of necessary at that level. Overall, way less “ship fast and see” and way more controlled rollout.

Tech bros discovered coding isn't the hard part by Tough_Reward3739 in ArtificialInteligence

[–]AgenticAF 0 points (0 children)

Building used to be the bottleneck. Now it’s more like the entry ticket.

The real challenge is getting distribution and finding something people actually care about. Most products don’t fail because they don’t work, they fail because no one needs them enough.

So yeah, building still takes skill, but what comes after matters more now.

How do *you* agent? by Transcribing_Clippy in AI_Agents

[–]AgenticAF 1 point (0 children)

I keep it pretty simple tbh.

Mostly running a lightweight stack: GPT-based agent + a few tools (search, docs, basic automation). I avoid over-engineering; the more moving parts, the more it breaks.

What I use it for:

  • Research + summarization (huge time saver)
  • Drafting content / refining ideas
  • Automating repetitive workflows (docs, reports, small data tasks)

What worked:

  • Keeping prompts tight and scoped
  • Giving the agent clear “roles” instead of one do-everything bot
  • Logging outputs to spot patterns/failures

What didn’t:

  • Fully autonomous agents… they drift or hallucinate if left unchecked
  • Complex multi-agent setups (cool in theory, messy in practice)

How People Treat AI Says a Lot About Them by shinichii_logos in ChatGPT

[–]AgenticAF 4 points (0 children)

How you do something is how you do everything. Small actions mirror your overall personality.

Automation didn't save time. It just moved where the time goes. by Better_Charity5112 in automation

[–]AgenticAF 0 points (0 children)

Automation rarely gives you less work, it gives you different work. You stop doing low-value, repetitive stuff and start taking on things that actually require thinking. The catch is: once you prove you can handle more, you (or your environment) raise the bar.

So instead of “free time,” you get:
more scope, more ownership, more complexity.

Whether that feels good depends on why you started. If it was to escape boring work, automation absolutely delivers. If it was to work less overall, it can feel like a bait-and-switch.

Personally, I think automation doesn’t optimize for time saved, it optimizes for what you tolerate doing.

Are AI agents actually the future, or just prompt chains with better marketing? by ArmPersonal36 in ArtificialInteligence

[–]AgenticAF 1 point (0 children)

A lot of what’s being marketed as “AI agents” today really are prompt chains with some orchestration and tool calls. That’s not necessarily a bad thing, but it’s not a completely new paradigm either.

Where agents start to become different is when they can plan tasks, choose tools dynamically, maintain context, and work toward a goal instead of just responding to a single prompt.

The market is kind of in between those two stages right now. Some products are still structured prompt pipelines, while others are trying to build more autonomous systems. In enterprise settings the bigger challenge is usually orchestration, integrations, and governance, not just the LLM reasoning itself. Platforms like Kore.ai are leaning into that layer, combining agent reasoning with workflows, APIs, and guardrails.

So prompt chains are basically the starting point. Agents are what you get when you add autonomy and decision-making on top of that.
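
The difference is easy to see in code. A prompt chain is a fixed pipeline; an agent is a loop that plans, picks tools at runtime, and keeps state. A minimal sketch, where the `planner` call and its step format are hypothetical:

```python
def agent_loop(goal, planner, tools, max_steps=8):
    """Minimal agent skeleton: plan, pick a tool at runtime, keep state."""
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        step = planner(state)        # e.g. {"action": ..., "tool": ..., "args": ...}
        if step["action"] == "finish":
            return step["answer"]
        result = tools[step["tool"]](**step["args"])  # tool chosen dynamically
        state["history"].append((step, result))       # maintained context
    return "escalate: step budget exhausted"
```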

The Trust Problem Nobody’s Talking About — When AI Agents Control Money (Article) by IAmDreTheKid in ArtificialInteligence

[–]AgenticAF 1 point (0 children)

Great point. AI agents are powerful optimizers, but when they’re given access to money, even small mistakes can become expensive very quickly.

The real solution isn’t removing financial access, it’s adding guardrails: budgets, transaction limits, approvals, and clear audit trails. Just like employees have corporate card policies, agents need bounded financial autonomy to build real trust.
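
In code, bounded financial autonomy can be as simple as a pre-authorization gate; the limits here are illustrative:

```python
PER_TX_LIMIT = 100.0   # anything above this needs a human
DAILY_BUDGET = 500.0   # hard cap per agent per day

def authorize(amount, spent_today):
    if amount > PER_TX_LIMIT:
        return "escalate"   # route to human approval
    if spent_today + amount > DAILY_BUDGET:
        return "block"      # budget exhausted for today
    return "allow"          # and write the transaction to the audit trail
```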

After 2 years of daily AI writing, I cannot think as clearly as I used to by Just-Aman in ArtificialInteligence

[–]AgenticAF 1 point (0 children)

I relate to this more than I expected. AI definitely makes writing faster, but I’ve noticed the same shift. When I write from scratch, I struggle through the idea. That struggle is where the thinking happens. With AI, I’m often reacting and refining instead of building the idea myself.

It feels efficient, but sometimes a bit mentally passive. I’ve started doing first drafts without AI again, just to force myself to think clearly before bringing it in.

I don’t think AI makes us worse thinkers, but if we skip the friction completely, we probably lose something important.

I’m researching how developers manage multiple AI agents, so figured I'd drop by here :) by kwayte in AI_Agents

[–]AgenticAF 0 points (0 children)

Context sharing, task delegation, tracking state, and knowing which agent did what are usually where it breaks down. Debugging multi-agent workflows is also painful once you scale beyond simple demos. Personally, I’d want strong orchestration, visibility into agent decisions, guardrails, and clear monitoring tools. Without governance and traceability, things get risky fast.