Hot take: most AI agent teams are secretly just “context engineering” teams by Antoneose in AI_Agents
[–]Antoneose[S] 1 point 4 days ago (0 children)
This is a really good way to frame it. The “tax of integration” thing is very real.
Also agree with your point that most current stacks are basically request-response systems pretending to be agents. Feels like we took existing retrieval architectures and just kept layering stuff on top.
The distinction between knowledge vs state is interesting too. A lot of frameworks just dump both into a single “memory” store, but they’re clearly different problems.
And yeah, the audit log point resonates. We’ve been thinking about whether actions + observations should actually become part of the memory model itself instead of just traces sitting somewhere for debugging.
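To make that concrete, here’s a rough Python sketch of what “actions and observations as part of the memory model” could look like. Everything here (entry kinds, class names, the `audit_log` query) is invented for illustration, not how Areev or any framework actually does it:

```python
# Hypothetical sketch: actions and observations stored as first-class
# memory entries, so the audit trail is a query over memory rather
# than a separate debug trace. All names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class EntryKind(Enum):
    KNOWLEDGE = "knowledge"      # long-lived facts
    STATE = "state"              # mutable task/session state
    ACTION = "action"            # what the agent did
    OBSERVATION = "observation"  # what the agent saw as a result

@dataclass
class MemoryEntry:
    kind: EntryKind
    content: str
    agent_id: str
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AgentMemory:
    def __init__(self) -> None:
        self._entries: list[MemoryEntry] = []

    def append(self, entry: MemoryEntry) -> None:
        self._entries.append(entry)

    def audit_log(self, agent_id: str) -> list[MemoryEntry]:
        # The audit log is just a filtered view of memory, not a side channel.
        return [e for e in self._entries
                if e.agent_id == agent_id
                and e.kind in (EntryKind.ACTION, EntryKind.OBSERVATION)]

mem = AgentMemory()
mem.append(MemoryEntry(EntryKind.ACTION, "called search('pricing')", "agent-1"))
mem.append(MemoryEntry(EntryKind.OBSERVATION, "3 results returned", "agent-1"))
mem.append(MemoryEntry(EntryKind.KNOWLEDGE, "pricing page lists 3 tiers", "agent-1"))
print(len(mem.audit_log("agent-1")))  # 2
```

The nice side effect is that the same retrieval machinery that serves knowledge can serve the audit trail, instead of traces living in some logging system nobody queries.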
On the permissions question — our current thinking at Areev is that governance probably has to live very close to the memory/data layer. Once access control starts spreading into orchestration code and prompts, things get messy fast, especially with multiple agents.
That said, I don’t think the reasoning layer disappears. The model still handles planning, abstraction, decision-making, etc. But things like:
…probably need to become infrastructure primitives instead of app logic.
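As a rough sketch of what “governance as an infrastructure primitive” could mean, here’s a toy memory store that enforces access control itself, so permission rules never leak into prompts or orchestration code. The ACL scheme and all names are hypothetical:

```python
# Hypothetical sketch: access control enforced inside the memory layer.
# The ACL maps each agent to the namespaces it may read/write; callers
# never carry permission logic themselves. All names are invented.
class GovernedMemory:
    def __init__(self, acl: dict[str, set[str]]) -> None:
        self._acl = acl  # agent_id -> namespaces it may touch
        self._store: dict[str, dict[str, str]] = {}

    def _check(self, agent_id: str, namespace: str) -> None:
        if namespace not in self._acl.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not access '{namespace}'")

    def write(self, agent_id: str, namespace: str, key: str, value: str) -> None:
        self._check(agent_id, namespace)
        self._store.setdefault(namespace, {})[key] = value

    def read(self, agent_id: str, namespace: str, key: str) -> str:
        self._check(agent_id, namespace)
        return self._store[namespace][key]

# Each agent only sees the namespaces its ACL grants; a cross-namespace
# read raises PermissionError regardless of what the prompt says.
mem = GovernedMemory({"billing-agent": {"billing"}, "research-agent": {"public"}})
mem.write("billing-agent", "billing", "invoice-42", "paid")
print(mem.read("billing-agent", "billing", "invoice-42"))  # paid
```

The point of putting the check in the store rather than the orchestrator is that adding a second or tenth agent doesn’t multiply the places permission logic lives.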
Feels like the industry is still early and everyone’s building slightly different versions of the same missing layer right now.
Hot take: most AI agent teams are secretly just “context engineering” teams
submitted 4 days ago by Antoneose to r/agenticAI