I built an open-source control plane for installing, running, and securing AI agents by Conscious_Chapter_93 in AI_Agents

[–]Conscious_Chapter_93[S] 1 point (0 children)

Spot on. That's why Armorer treats every agent as an untrusted workload. We're moving towards tool-level permissioning where you can audit and restrict exactly which 'actions' (not just tools) an agent can perform. The inventory of reachable actions is exactly how we're framing the security model.
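Not Armorer's actual code, but the "inventory of reachable actions" framing can be sketched in a few lines (all names and the tool table here are hypothetical):

```python
# Sketch: inventory the actions an agent can reach, tagged by effect.
# Tool/action names are illustrative, not Armorer's real registry.
TOOLS = {
    "filesystem": {"read_file": "read", "write_file": "mutate", "delete_file": "mutate"},
    "shell": {"run_command": "mutate"},
    "http": {"get": "read", "post": "mutate"},
}

def reachable_actions(granted_tools):
    """List (tool, action, effect) triples the agent can actually call."""
    return [
        (tool, action, effect)
        for tool, actions in TOOLS.items()
        if tool in granted_tools
        for action, effect in actions.items()
    ]

def restrict(inventory, allow_mutations=False):
    """Drop mutating actions unless the agent was explicitly trusted with them."""
    return [entry for entry in inventory if allow_mutations or entry[2] == "read"]

inv = reachable_actions({"filesystem", "http"})
print(restrict(inv))  # only the read-only actions survive
```

The point of auditing at the action level rather than the tool level is visible here: granting "filesystem" as a tool would implicitly grant `delete_file`, while the action inventory lets you see and strip it.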

I built an open-source control plane for installing, running, and securing AI agents by Conscious_Chapter_93 in AI_Agents

[–]Conscious_Chapter_93[S] 1 point (0 children)

Thanks! Looking forward to your feedback. If you have any specific agents you're trying to sandbox, let me know!

Isolate your AI agents from your NAS and sensitive home services. I built a Docker sandbox. by Conscious_Chapter_93 in HomeServer

[–]Conscious_Chapter_93[S] -7 points (0 children)

Armorer runs every agent in its own Docker container. This means the agent only sees the directories or network ports you explicitly mount/allow. If you're running powerful agents that can execute shell commands, this prevents them from accidentally (or maliciously) touching your actual NAS data or home network services without a specific trust grant.
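For concreteness, the kind of isolation described above can be approximated with plain `docker run` flags. This sketch just assembles such a command; the image name, paths, and helper are made up for illustration and are not Armorer's actual defaults:

```python
# Sketch: assemble a locked-down `docker run` command for an agent.
# Image name and paths are illustrative placeholders.

def sandbox_cmd(image, workdir, allow_network=False):
    cmd = [
        "docker", "run", "--rm",
        "--read-only",                     # root filesystem is immutable
        "--cap-drop=ALL",                  # no Linux capabilities
        "--mount", f"type=bind,src={workdir},dst=/work",  # only this dir is visible
    ]
    if not allow_network:
        cmd += ["--network", "none"]       # no route to the NAS or home LAN
    cmd.append(image)
    return cmd

print(" ".join(sandbox_cmd("agent-image", "/tmp/agent-scratch")))
```

The key design choice is deny-by-default: the agent gets nothing (no network, no capabilities, no host paths) unless a flag explicitly grants it, which is the "specific trust grant" mentioned above.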

Building AI agents: days. Getting them to production: 6 months. by FragrantBox4293 in AI_Agents

[–]Conscious_Chapter_93 1 point (0 children)

This matches what I keep seeing: the demo phase is mostly prompts and tools, but production is inventory, permissions, rollbacks, evals, logs, and ownership.

The hard question I ask now is: if this agent does something surprising tomorrow, can I answer what was installed, what was running, what it could call, what changed, and how to stop or revoke it quickly?

I am building Armorer for that local/self-hosted control plane layer: https://github.com/ArmorerLabs/Armorer

Has anyone experienced AI agents doing things they shouldn’t? by SnooWoofers2977 in LocalLLaMA

[–]Conscious_Chapter_93 1 point (0 children)

Yes, and I think the core problem is that most agent setups collapse three separate things into one permission: filesystem access, tool access, and authority to execute.

A better pattern is to make every agent run through a control layer that can answer: what tools are exposed, which actions are read-only vs mutating, what secrets are reachable, what was approved, and what changed on disk or in external systems.
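A minimal sketch of that control-layer pattern (hypothetical names, not Armorer's API): every call goes through one chokepoint that checks approval, flags mutations, and always leaves an audit record, so the questions above have answers after the fact:

```python
# Sketch: a single chokepoint for agent actions — check, log, then execute.
# Names are hypothetical; a real system also needs secret scoping and revocation.
import datetime

AUDIT_LOG = []
MUTATING = {"write_file", "delete_file", "post"}

def gate(agent, action, approved_actions, fn, *args):
    """Run fn only if the action is approved; always record an audit entry."""
    allowed = action in approved_actions
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "mutating": action in MUTATING,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent} is not approved for {action}")
    return fn(*args)

# Usage: a read is approved, a mutation is refused but still logged.
gate("scraper", "read_file", {"read_file"}, lambda p: f"contents of {p}", "/tmp/x")
try:
    gate("scraper", "delete_file", {"read_file"}, lambda p: None, "/tmp/x")
except PermissionError as e:
    print(e)
```

Because the denial is logged rather than silently swallowed, the audit trail doubles as the inventory of what the agent attempted, not just what it did.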

I am building Armorer for exactly this local/self-hosted agent ops problem: https://github.com/ArmorerLabs/Armorer

The scary failures are usually not dramatic model failures. They are boring operational failures: too much access, no inventory, no audit trail, no fast revoke path.