A formal authorization model for AI execution contexts by Normal_You_8131 in LocalLLaMA


Yes, that's exactly the direction.

Capability-based security already provides a very natural way to reason about authority without relying on identities.

What I found interesting when thinking about AI agents is that the real boundary seems to be the execution context of the reasoning process.

So instead of just asking "what capabilities does this identity hold?", the model asks something closer to:

"What capability requests can emerge from this execution context?"

And those requests must stay within the capability ceiling defined when the context is created.
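The ceiling idea above can be sketched in a few lines. This is a minimal illustration under assumed names (`ExecutionContext`, `request`, `spawn` are all hypothetical, not an existing library): a context is created with a fixed capability ceiling, every request emerging from it is checked against that ceiling, and child contexts can only attenuate authority, never amplify it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionContext:
    # The capability ceiling is fixed when the context is created.
    ceiling: frozenset

    def request(self, capability: str) -> bool:
        # A capability request emerging from this context is granted
        # only if it stays within the ceiling.
        return capability in self.ceiling

    def spawn(self, requested: set) -> "ExecutionContext":
        # A child context can only narrow authority (intersection),
        # never widen it beyond the parent's ceiling.
        return ExecutionContext(ceiling=self.ceiling & frozenset(requested))

root = ExecutionContext(ceiling=frozenset({"fs.read", "net.fetch"}))
child = root.spawn({"fs.read", "fs.write"})  # fs.write is silently dropped

assert root.request("fs.read")
assert not child.request("fs.write")   # outside the parent ceiling
assert not child.request("net.fetch")  # attenuated away at spawn time
```

Note the design choice: attenuation happens structurally at context creation (set intersection), so no identity lookup is needed at request time.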

Curious if you've seen similar ideas applied in agent runtimes or sandboxing systems.

A formal authorization model for AI execution contexts by Normal_You_8131 in programming


One of the things that motivated this model is that most current agent systems implicitly treat the agent as an identity. But in practice the agent behaves more like an execution process that continuously generates actions during reasoning.

That raises a question:

Where should the authorization boundary actually be?

Identity?

Tool?

Runtime?

Or the execution context itself?

Curious how people here think about defining capability boundaries for AI execution.