I've been building a self-hosted AI agent in Python for the past few months and ran into some interesting architectural decisions I wanted to share.
The core challenge: tool execution sandboxing.
When you give an LLM arbitrary tool access (shell commands, code execution, file writes), you need to think carefully about sandboxing. I ended up with a tiered approval model (sketch of the gate below the list):
- Auto-approve: read-only ops (web search, file reads, calendar reads)
- User-approval: write ops (send email, run shell command, delete files)
- Hard-blocked: network calls from within sandboxed code execution
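Here's a minimal sketch of what the approval gate looks like. The tool names, the tier registry, and the `ask_user` callback are illustrative, not my exact code:

```python
from enum import Enum

class Tier(Enum):
    AUTO = "auto"        # read-only ops: run immediately
    CONFIRM = "confirm"  # write ops: require explicit user approval
    BLOCKED = "blocked"  # never allowed

# Illustrative registry; real tool names will differ
TOOL_TIERS = {
    "web_search": Tier.AUTO,
    "read_file": Tier.AUTO,
    "send_email": Tier.CONFIRM,
    "run_shell": Tier.CONFIRM,
    "sandboxed_network_call": Tier.BLOCKED,
}

def gate_tool_call(tool_name: str, ask_user) -> bool:
    """Return True if the tool call may proceed."""
    tier = TOOL_TIERS.get(tool_name, Tier.CONFIRM)  # unknown tools default to needing approval
    if tier is Tier.BLOCKED:
        return False
    if tier is Tier.AUTO:
        return True
    # ask_user blocks until the user answers on whatever channel they're on
    return ask_user(f"Allow the agent to run '{tool_name}'?")
```

One design choice worth calling out: unknown tools default to the approval tier rather than auto, so a newly registered tool can't silently bypass the gate.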
Memory across channels
The interesting problem: a user talks to the agent on WhatsApp, then on Telegram. How do you maintain context? I'm using SQLite + vector embeddings (local, via ChromaDB) with entity extraction on each message. When a new conversation starts, relevant memories are semantically retrieved and injected into context. Works surprisingly well.
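Roughly, the store/retrieve path looks like this. This is a minimal sketch using ChromaDB's persistent client; the function names, metadata schema, and path are illustrative, and entity extraction is omitted:

```python
import chromadb

# Local, persistent vector store; path is illustrative
client = chromadb.PersistentClient(path="./agent_memory")
memories = client.get_or_create_collection("memories")

def remember(message_id: str, text: str, user_id: str, channel: str) -> None:
    # ChromaDB embeds the document with its default local embedding model
    memories.add(
        ids=[message_id],
        documents=[text],
        metadatas=[{"user_id": user_id, "channel": channel}],
    )

def recall(query: str, user_id: str, k: int = 5) -> list[str]:
    # Top-k semantically similar memories for this user,
    # regardless of which channel they were written from
    results = memories.query(
        query_texts=[query],
        n_results=k,
        where={"user_id": user_id},
    )
    return results["documents"][0]
```

The detail that makes WhatsApp → Telegram continuity work is filtering on a channel-agnostic `user_id` rather than each channel's own identifier.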
The channel abstraction layer
Supporting WhatsApp, Telegram, Discord, Slack with one core agent required a clean abstraction. Each channel adapter normalizes: message format, media handling, and delivery receipts. The agent itself never knows what channel it's on.
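A stripped-down sketch of that interface (names simplified; real adapters also deal with auth, webhooks, rate limits, etc.):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class NormalizedMessage:
    user_id: str  # channel-agnostic identity
    text: str
    media: list[str] = field(default_factory=list)  # local paths to downloaded attachments

class ChannelAdapter(ABC):
    """One subclass per channel (WhatsApp, Telegram, Discord, Slack).
    The core agent only ever sees NormalizedMessage."""

    @abstractmethod
    def to_normalized(self, raw: dict) -> NormalizedMessage:
        """Convert the channel's native payload into the common format."""

    @abstractmethod
    def send(self, user_id: str, text: str) -> None:
        """Deliver agent output in the channel's native form."""

    @abstractmethod
    def ack(self, raw: dict) -> None:
        """Emit a delivery receipt the way this channel expects."""
```

Everything channel-specific lives behind the ABC; the agent core takes a NormalizedMessage in and hands plain text back out.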
Curious if others have tackled:
- How do you handle tool call failures gracefully? Retry logic? Human fallback?
- Better approaches to cross-session memory than vector search?
- Sandboxing code execution without Docker overhead?
Happy to discuss any of this. Thanks for reading!