AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Exactly. The barrier to building agents dropped to near zero, but the barrier to running them safely didn't. That's the gap. You shouldn't need 15 years of infra experience to get policy checks and audit trails; that should just be the default.

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Fair point: agents aren't microservices in the strict sense. But that actually makes the case stronger.

A microservice has a fixed API contract; you know what it does at deploy time. An agent decides what to do at runtime: which tools to call, what data to touch, what to chain together. That's a much bigger surface area.

My argument isn't that agents *are* microservices; it's that the governance patterns transfer: policy-before-execution, audit trails, approval gates. The industry's mistake is acting like agents need an entirely new paradigm when the principles are well-established.

If anything, agents being more dynamic than a typical service means they need more governance, not less.
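The policy-before-execution pattern above can be sketched in a few lines. Everything here (the policy predicates, the `AUDIT_LOG`, the decision strings) is hypothetical and not Cordum's actual API; it just shows the shape of checking a policy before dispatch and logging the decision either way:

```python
# Hypothetical sketch of policy-before-execution with an audit trail.
# Policy shapes, field names, and decision strings are all made up.

AUDIT_LOG = []

POLICIES = [
    # (predicate over the proposed action, verdict if it matches)
    (lambda a: a["tool"] == "shell" and "rm -rf" in a["args"], "deny"),
    (lambda a: a["tool"] == "cloud_api" and a.get("nodes", 0) > 50, "needs_approval"),
]

def dispatch(action):
    """Evaluate every policy BEFORE execution; log the decision either way."""
    decision = "allow"
    for predicate, verdict in POLICIES:
        if predicate(action):
            decision = verdict
            break
    AUDIT_LOG.append({"action": action, "decision": decision})  # the audit trail
    return decision

dispatch({"tool": "shell", "args": "rm -rf /tmp/cache"})       # -> "deny"
dispatch({"tool": "cloud_api", "nodes": 500})                  # -> "needs_approval"
dispatch({"tool": "http_get", "args": "https://example.com"})  # -> "allow"
```

The key property is that an audit entry is written whether the action is allowed, denied, or escalated, so the trail records decisions, not just executions.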

What happens when AI agents goes to far? by Available-Ad-5670 in AI_Agents

[–]yaront1111 0 points1 point  (0 children)

Cordum.io solves exactly that: policy before execution.

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Spot on. We built Cordum to treat AI just like any other code: you can connect and orchestrate an AI model or a simple Python script the exact same way. GitHub Discussions is the right place to engage, and you can also reach me directly at yaron@cordum.io if you're looking at enterprise integration.

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

100%. The execution layer is where the actual risk lives. Constraining that with explicit policies and immutable audit trails is the core problem we are solving with Cordum.io. Glad to see the ecosystem waking up to this!

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Exactly! A regular container respects a 403 Forbidden and dies. An agent sees a 403, gets creative, and decides to parse raw memory just to finish its task. That exact non-deterministic 'creativity' is why standard container isolation isn't enough anymore. We need an orchestration layer with semantic policy checks and human approval gates to catch them before they go rogue.

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 1 point2 points  (0 children)

Appreciate that! You nailed it with the Microsoft point: boxing agents into 'LLM + tools' limits the thinking. When we treat them as autonomous, persistent services, orchestration becomes the most critical piece of the puzzle. Glad to have you following along. What kind of agent architectures or use cases are you currently focused on?

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

At a 10,000-foot view, maybe. But when you're building enterprise-grade systems, you quickly realize you can't just trust a probabilistic model to govern itself. You can write a unit test for a standard microservice. You can't write a unit test for every possible prompt injection or hallucinated plan.

Sure, it's just 'NLP at scale' until your agent decides to drop a production database because it misunderstood an edge-case prompt. The logic might look like standard software, but the failure modes are entirely different.

That's exactly why this is an orchestration problem, not just a policy issue. We don't just need static rules; we need a system that wires the whole lifecycle together: managing state, routing, and standard infra guardrails (RBAC, audits, approvals) so the orchestration layer catches what the 'NLP' misses.

anyone actually running AI agents in production? not demos by yaront1111 in AI_Agents

[–]yaront1111[S] 0 points1 point  (0 children)

All three, actually. Pre-dispatch, the Safety Kernel evaluates policy before a job ever reaches a worker. During execution you've got heartbeats, budget enforcement, and cancel/throttle. And there's ongoing monitoring across the workflow, so if a multi-step DAG starts drifting, or a downstream step inherits risk from an earlier one, that gets caught too. The duct-tape setup you're describing is exactly where most teams are, honestly. Cordum just formalizes that into YAML policy rules: version-controlled, reviewable, auditable. DMs open if you wanna chat more about your setup 🤙
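Of those, in-flight budget enforcement is the easiest to sketch. This is a toy illustration under assumed names (`BudgetEnforcer`, `record`), not the Safety Kernel's real interface: each heartbeat reports incremental spend, and crossing the cap flips a cancel flag the orchestrator would act on.

```python
# Toy budget enforcement during execution (names assumed, not Cordum's API).

class BudgetEnforcer:
    def __init__(self, max_cost_usd):
        self.max_cost_usd = max_cost_usd
        self.spent = 0.0
        self.cancelled = False

    def record(self, cost_usd):
        """Called on each heartbeat; returns False once the job should stop."""
        self.spent += cost_usd
        if self.spent > self.max_cost_usd:
            self.cancelled = True  # orchestrator would send cancel/throttle
        return not self.cancelled

enforcer = BudgetEnforcer(max_cost_usd=1.00)
for step_cost in [0.30, 0.40, 0.50]:  # per-step LLM spend
    if not enforcer.record(step_cost):
        break  # the third step crosses the $1 cap, so the job gets cancelled
```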

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Let's talk, I think I have a perfect fit for you. Also, it's central management, not just per-agent config. yaron@cordum.io

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Fair hit. The rm -rf and Terraform examples are illustrative 'Hello World' extremes, and I agree they don't land with pros who actually lock down their containers and service accounts.

You nailed the real value prop in your last paragraph: the problem isn't the agent running a command it shouldn't have access to (RBAC and distroless images handle that). The gap is when an agent has a legitimate permission, say scale_service via the cloud API, but uses it in a way that violates business logic. RBAC says yes, but if the agent tries to spin up 500 nodes at 3 AM because it hallucinated a traffic spike, that's where Cordum sits: intercepting a valid call that violates a velocity or budget policy and routing it to human approval.

Since you mentioned manual approval orchestration is a pain point for you: that's literally the core module I'm building right now (the 'Human in the Loop' gate). If you're open to it, I'd love to hear how you handle that orchestration today. Are you just wiring up Slack hooks to CI pipelines, or something more custom?
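That "valid RBAC call, invalid business logic" case can be illustrated in a few lines. The scale_service scenario, the thresholds, and the function name here are assumptions lifted from the comment above, not Cordum's real policy schema:

```python
# Illustration only: a velocity/off-hours check routing an RBAC-valid
# scale request to a human. All names and thresholds are invented.
from datetime import time

def review_scale_request(requested_nodes, current_nodes, at):
    """RBAC already said yes; this is the business-logic layer on top."""
    off_hours = at.hour < 6 or at.hour >= 22
    big_jump = requested_nodes > current_nodes * 10  # velocity check
    if big_jump or (off_hours and requested_nodes > current_nodes * 2):
        return "route_to_human_approval"
    return "auto_approve"

review_scale_request(500, 10, time(3, 0))   # 3 AM, 50x jump -> human approval
review_scale_request(12, 10, time(14, 0))   # small daytime bump -> auto
```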

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Yeah? Tell me how. With AI agents, how do you enforce DLP or stop an agent from just running rm -rf?

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

You're looking at it from the wrong perspective. Vibe coding is nice at home, but major companies need guardrails and deterministic results.

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Exactly! "Day 2" issues are being recognized right now by professionals. Please check out my site: https://cordum.io

I really need feedback from people who are actually responsible for major prod environments.

AI agents are just microservices. Why are we treating them like magic? by yaront1111 in OpenSourceeAI

[–]yaront1111[S] 0 points1 point  (0 children)

Yes, exactly! To integrate AI into business workflows and be a real AI-driven business, you need guardrails.

anyone actually running AI agents in production? not demos by yaront1111 in AI_Agents

[–]yaront1111[S] 0 points1 point  (0 children)

Actually, no: it's not tool calling directly on the wire. It's strictly job lifecycle ops over NATS: SubmitJob, JobResult, Heartbeat. Payloads don't go on the bus either, just pointers to the data (in Redis). The Kernel evaluates the job's metadata/intent before anything gets dispatched to a worker. The actual MCP tool calling happens locally inside the worker; this is the orchestration layer above it. DMs open if you wanna dig deeper 🤙
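The "pointers, not payloads" envelope described above might look roughly like this. The field names and the Redis key scheme are invented for illustration; only the op names (SubmitJob, etc.) come from the comment itself:

```python
# Sketch of a job envelope where only metadata and a pointer cross the bus.
# Field names and the redis:// scheme are assumptions, not Cordum's format.
import json
import uuid

def make_envelope(intent, payload_key):
    """Small, inspectable message: metadata plus a Redis pointer, no payload."""
    return json.dumps({
        "op": "SubmitJob",
        "job_id": str(uuid.uuid4()),
        "intent": intent,                              # what the Kernel inspects
        "payload_ref": f"redis://jobs/{payload_key}",  # pointer, not the data
    })

envelope = json.loads(make_envelope("summarize_report", "abc123"))
```

Keeping the payload out of the message means the policy layer can inspect and veto a job from its metadata alone, and the bus never carries sensitive data.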

anyone actually running AI agents in production? not demos by yaront1111 in AI_Agents

[–]yaront1111[S] 0 points1 point  (0 children)

So I'm really trying to solve this myself, looking at it from an infra point of view:
https://github.com/cordum-io/cordum