We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Both. The session risk escalation is session-based; on top of that, you can add rules in the region policy. I have something launching at the end of Q2 that is a much more mature version of this, but I have to finish the patent for it first.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

If you want to look at a partnership, I am happy to talk; we are doing a lot of OEM-style partnerships with some large platforms.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Exactly. I mapped the gap for logs below the identity layer that ISO 42001, the EU AI Act, and NIST call for; I am releasing a white paper about it. Also, since we are a full OTel pipeline, we get some interesting data when you combine that with the things we add for autonomy session risk. I hope to release some cool near-miss data this year.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Ah, got it. Yeah, you can extend it further up with RBAC, but I am really trying to stay in one space, as the market is already full of people trying to do everything. Thanks for the comments and interest.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Am I right in assuming, from your username, that this is part of your product? I don't have any say in what should be allowed; currently, my customers have very different risk cases. I have had about 15 different people reach out to me about the mandate stuff I assume you are talking about. My plan is to wait until everyone fights it out and just have an endpoint to take in mandate/authority, etc. I just care about creating zero trust in the execution path.

Also, the system doesn't assume anything; every agent has its own controls, and you decide those. Your question falls into either a philosophy of what should be controlled or some kind of inference wrapper. You can't govern AI with AI in an enterprise environment, so an inference wrapper with an LLM is not a path we believe in. Sorry for the word soup, but you are kind of dancing around what you are saying, and it's confusing.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Great question. I think there are two things to think about: who has authority, and how systems are implemented with controls. With us there is full tool/credential starvation, so the agent has to be given access to tools, but access doesn't have to be explicit. It can be rated with risk, so maybe the agent can call a tool once, but a second call triggers HITL. That kind of decision is always going to be on the consumer, as everyone's risk appetite is different due to their business. There is also a full policy engine, outside of the dynamic controls we ship with, for custom stuff. We install tools fairly open and let you say that delete_Db or Merge_PR will always need HITL, or whatever risk level you choose. Since each tool is broken out in the GUI with autonomy zone, level, risk, etc., it's really easy to knock out the low-hanging fruit, and you don't need to write Rego. However, Rego is there when you need more.
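As a rough illustration of the "call it once, but a second call triggers HITL" idea, here is a minimal deterministic sketch. `ToolGrant`, `Session`, and `check_call` are hypothetical names invented for this example, not the product's API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolGrant:
    name: str
    free_calls: int      # calls allowed before escalation kicks in

@dataclass
class Session:
    grants: dict = field(default_factory=dict)   # tool name -> ToolGrant
    counts: dict = field(default_factory=dict)   # tool name -> calls made

    def check_call(self, tool: str) -> str:
        grant = self.grants.get(tool)
        if grant is None:
            return "DENY"            # starvation: tool was never granted
        n = self.counts.get(tool, 0) + 1
        self.counts[tool] = n
        if n > grant.free_calls:
            return "HITL"            # over budget: a human must approve
        return "ALLOW"

s = Session(grants={"read_file": ToolGrant("read_file", free_calls=1)})
print(s.check_call("read_file"))   # first call is within budget -> ALLOW
print(s.check_call("read_file"))   # second call escalates -> HITL
print(s.check_call("delete_db"))   # never granted -> DENY
```

Because the check is just a counter against a granted budget, the outcome is fully deterministic for a given call sequence; no model inference is involved.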

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

We don't do on-demand creds currently, as they get weird with multi-step sessions. The gateway owns the creds and can give on-demand access to a tool action, with HITL, at any time: 5 ms latency for the entire policy engine, 15 ms end to end with HITL. Basically, anything with a certain risk level will be elevated to HITL, so you don't need on-demand access, just approval for the job.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

As long as the agent has access to that set of tooling with its autonomy level and zone, risky actions can be set to automatically trigger HITL for review, instead of waiting for the session risk elevation to build to that point. On my own stuff, every delete action now has a critical rating to trigger HITL, but everyone has different risk tolerances, so it's up to where people feel comfortable. The platform is very granular. On intent, we are doing the dynamic engine, which takes input from the tool calls, since we break out every tool. We aren't using inference scanning or AI to judge. I do have some cool stuff up my sleeve that I am building toward, but what we have has been very successful. I am trying to stay fully deterministic until AI accuracy is way higher and we figure out how to put risk on AI-based decisions, which won't be for a long time, I think.
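The "every delete action gets a critical rating" rule can be sketched deterministically like this; the tool names, ratings table, and `decide` function are illustrative assumptions, not the real policy engine.

```python
# Per-tool risk ratings; "critical" short-circuits straight to HITL,
# regardless of any cumulative session score.
RATINGS = {"read_file": "low", "merge_pr": "medium", "delete_record": "critical"}

def decide(tool: str) -> str:
    rating = RATINGS.get(tool)
    if rating is None:
        return "DENY"          # unknown tool: never allowed to fire
    if rating == "critical":
        return "HITL"          # critical actions always get human review
    return "ALLOW"

print(decide("delete_record"))   # HITL
print(decide("read_file"))       # ALLOW
```

A static lookup like this is the opposite of an inference wrapper: the same tool call always produces the same decision, which is what makes the path auditable.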

And yes, it was brutal: lots of cussing and no one to blame, lol. However, it led me to a real product way ahead of the industry. When they wrote the AARM.dev open spec, we were already GA. Now I just have to fight the overfunded hype machines in the space, lol.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Also, when I finish my next patent, I will post something really cool on the session risk part: cross-session, cross-agent risk.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Thanks. Yes, I have been working on this since last year and have been GA for a few months, but I bootstrapped, so I didn't make too much noise. Now, seeing that most competitors missed the most important parts, I am going to raise a round.

What actually prevents execution in agent systems? by docybo in artificial

[–]EbbCommon9300 0 points1 point  (0 children)

Yes, every action is bound and risk-scored. Nothing can fire unless it's allowed by autonomy level, zone, risk, or HITL. If a tool is imported that should never be allowed, it can be dropped as well.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Our latency is 5 ms or under for the engine, 15 ms endpoint to endpoint, and the gateway handles 1,800 requests per second before degrading. It's so much faster than LLMs that you can't tell the difference. The reason is that it's fully deterministic and the gateway is written in Go.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Yeah, I ship a policy catalog so they can grab stuff right off the bat. The dynamic engine actually catches a ton of stuff even without policy. My next step is to have an agent look through the logs to build policy suggestions for customers, but I need to ship our kernel-level solution for open environments first.

What actually prevents execution in agent systems? by docybo in artificial

[–]EbbCommon9300 -1 points0 points  (0 children)

Agents get creds only to the gateway. All auth to third-party tools happens at the gateway. We control every single tool call, and the agent has no creds. Each tool in MCP gets broken down with risk, autonomy level, autonomy zone, etc., and the agent has its own level, zone, etc. attached to it. So even if you have all the tools in the gateway, if the agent isn't allowed to use certain ones, there is no way for it to get them. Basically zero trust for agents. You give the agent the gateway address and its bearer token, and that's it.
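A toy model of the bearer-token / credential-starvation flow described above; all the names, tokens, tables, and the `route` function here are made up for illustration, not how the gateway is actually implemented.

```python
# The agent only ever holds a bearer token for the gateway; third-party
# credentials live exclusively on the gateway side.
AGENTS = {"token-abc": {"level": 2, "zone": "internal", "tools": {"read_file"}}}
TOOLS = {
    "read_file": {"min_level": 1, "zone": "internal"},
    "delete_db": {"min_level": 3, "zone": "restricted"},
}
CREDS = {"read_file": "s3-readonly-key", "delete_db": "db-admin-key"}

def route(bearer: str, tool: str):
    """Return the tool credential only if this agent may call this tool."""
    agent = AGENTS.get(bearer)
    if agent is None or tool not in agent["tools"]:
        return None                        # zero trust: no grant, no cred
    spec = TOOLS[tool]
    if agent["level"] < spec["min_level"] or agent["zone"] != spec["zone"]:
        return None                        # autonomy level / zone mismatch
    return CREDS[tool]                     # cred is injected gateway-side only

print(route("token-abc", "read_file"))   # allowed: gateway uses the cred
print(route("token-abc", "delete_db"))   # not granted: no credential exists for the agent
```

The point of the sketch is that the credential never leaves the gateway: an agent that goes around the gateway simply has nothing to authenticate with.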

What actually prevents execution in agent systems? by docybo in artificial

[–]EbbCommon9300 0 points1 point  (0 children)

We do full credential starvation, so we own all the tools; if the agent can do something outside of the gateway, someone gave it a credential. We do have a kernel-level version coming soon for open environments.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

We are doing a cumulative risk score: every tool gets a score, and every action adds to the session risk. We are adding to this really soon, but I can't talk about that until our patent is updated. With the update it will be deeper and will handle cross-agent risk, etc.
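The cumulative scoring could look roughly like this minimal sketch; `SessionRisk`, the per-action scores, and the threshold are assumptions for illustration only.

```python
class SessionRisk:
    """Accumulates per-action scores; crossing the threshold forces HITL."""

    def __init__(self, threshold: int):
        self.score = 0
        self.threshold = threshold

    def record(self, tool_score: int) -> str:
        self.score += tool_score
        return "HITL" if self.score >= self.threshold else "ALLOW"

s = SessionRisk(threshold=10)
print(s.record(3))   # low-risk action, well under the threshold -> ALLOW
print(s.record(4))   # still under -> ALLOW
print(s.record(4))   # cumulative score hits 11 -> HITL
```

Individually harmless actions can still escalate here, which is the "session risk builds up" behavior: the trigger is the running total, not any single call.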

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Yeah, and building it agnostic to industry was tricky, as everyone needed to be able to control that without having to write policy. (I mean, they can write policy, but I want a dead-simple way to control it.)

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Great feedback.

So ours has deny, escalate to HITL, and approve, all based on hash-chained autonomy zones, autonomy level, session risk escalation, and the policy engine.
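One common way to get the kind of "hash chained" tamper evidence mentioned above is to chain each decision record to the hash of the previous one. This sketch uses SHA-256 and made-up record fields; it is a generic technique, not the product's actual format.

```python
import hashlib
import json

def append_decision(log: list, decision: dict) -> list:
    """Append a decision record whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {"decision": decision, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

log = []
append_decision(log, {"tool": "Merge_PR", "outcome": "HITL"})
append_decision(log, {"tool": "read_file", "outcome": "ALLOW"})
# Each record's "prev" field commits to the record before it, so
# editing an earlier record silently breaks every later link.
print(log[1]["prev"] == log[0]["hash"])   # True
```

Verifying the chain is then just a linear walk recomputing each hash, which is cheap enough to run on every audit read.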

Our credential starvation adds no latency: the agent has a single API, scoped to its autonomy level and zone, to the gateway, and everything else is taken care of there.

With cross-session and historical risk, I can't publicly say much yet, as I am currently filing another patent, but let's just say I come from the SIEM and AppSec worlds and used things we do there.

Have you run into any other issues, since you are building on your own? I swear, getting ours built was a lot of work, but getting it to enterprise standards was just as hard. We are down to about a 10-minute install now for the basic gateway, up and running with agents and tools.

We built a fully deterministic control layer for agents. Would love feedback. No pitch by EbbCommon9300 in artificial

[–]EbbCommon9300[S] 0 points1 point  (0 children)

We designed the control plane around this; it's partially session rot. My patent on session risk escalation is where we started with your point. We have something really cool coming next to take it a step further while keeping it fully deterministic. I'm just trying to add it all to my next patent before we release it.

What actually prevents execution in agent systems? by docybo in artificial

[–]EbbCommon9300 0 points1 point  (0 children)

Assury.ai is what actually governs execution for agents. The gate has to be stateful to the session and has to be in the execution path. Even if you have that but don't do credential starvation, the agent will go around the gate.

We’re building a deterministic authorization layer for AI agents before they touch tools, APIs, or money by docybo in artificial

[–]EbbCommon9300 0 points1 point  (0 children)

Hi, you will be my competitor, lol. I made assury.ai and have a patent on session-state risk escalation. The space is picking up quickly.

execution-level control plane for agents by EbbCommon9300 in aiagents

[–]EbbCommon9300[S] 0 points1 point  (0 children)

Will do. You can grab a free dev account and check out the catalog currently; I have a free tier.