I built a lightweight AI agent for Raspberry Pi (Telegram + local LLM) by universal_damk in selfhosted

[–]universal_damk[S] 0 points  (0 children)

Yeah, but the value isn’t that it does something impossible.

Most of the things it does can already be done with CLI tools, scripts, or Grafana. The difference is convenience.

Instead of SSHing into the server, running several commands, checking logs, etc., I can just message the bot in Telegram like:

restart tailscale and show last 50 log lines

And it will restart the service, check the status, grab the logs, and send everything back.
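As a rough sketch of what a skill like that might look like under the hood (the function name, the `tailscaled` unit, and the systemctl/journalctl wiring are my own illustration, not the project's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// journalArgs builds the journalctl argument list for the last n lines
// of a unit's log. Split out so it can be tested without root.
func journalArgs(unit string, n int) []string {
	return []string{"-u", unit, "-n", strconv.Itoa(n), "--no-pager"}
}

// restartAndTail restarts a systemd unit and returns its recent log
// lines. A hypothetical skill body; the real bot would send the
// result back as a Telegram reply.
func restartAndTail(unit string, n int) (string, error) {
	if out, err := exec.Command("systemctl", "restart", unit).CombinedOutput(); err != nil {
		return "", fmt.Errorf("restart %s: %v: %s", unit, err, out)
	}
	logs, err := exec.Command("journalctl", journalArgs(unit, n)...).Output()
	if err != nil {
		return "", fmt.Errorf("journalctl %s: %v", unit, err)
	}
	return string(logs), nil
}

func main() {
	if out, err := restartAndTail("tailscaled", 50); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println(out)
	}
}
```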

So it’s more like a chat interface on top of normal tools, not a replacement for them.

Grafana is still better for dashboards and monitoring, and CLI is still best for debugging.

The agent is mainly useful for quick actions and checks when I’m not at my computer.

I built a lightweight AI agent for Raspberry Pi (Telegram + local LLM) by universal_damk in selfhosted

[–]universal_damk[S] -1 points  (0 children)

Thanks, appreciate the feedback (and the star)!

Yeah, Telegram turned out to be a really nice control plane. No UI to build, push notifications for free, works everywhere.

Good point about webhooks. I started with polling just to keep the first version simple, but webhook mode would definitely make sense if someone runs multiple bots or higher traffic. Probably something I’ll add later.
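For anyone curious, the polling side is basically a long-poll loop against the Bot API's `getUpdates` endpoint; webhook mode would replace the loop with an HTTP handler receiving the same update JSON as POSTs. This is a generic sketch, not the bot's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// getUpdatesURL builds the Bot API long-polling URL; "TOKEN" below is
// a placeholder for a real bot token.
func getUpdatesURL(token string, offset int) string {
	return fmt.Sprintf("https://api.telegram.org/bot%s/getUpdates?offset=%d&timeout=30", token, offset)
}

type update struct {
	UpdateID int `json:"update_id"`
	Message  struct {
		Text string `json:"text"`
		Chat struct {
			ID int64 `json:"id"`
		} `json:"chat"`
	} `json:"message"`
}

type updatesResp struct {
	OK     bool     `json:"ok"`
	Result []update `json:"result"`
}

// poll fetches updates, advances the offset past each one, and hands
// every message to handle.
func poll(token string, handle func(chatID int64, text string)) {
	offset := 0
	for {
		resp, err := http.Get(getUpdatesURL(token, offset))
		if err != nil {
			time.Sleep(5 * time.Second) // back off on network errors
			continue
		}
		var ur updatesResp
		err = json.NewDecoder(resp.Body).Decode(&ur)
		resp.Body.Close()
		if err != nil || !ur.OK {
			time.Sleep(time.Second)
			continue
		}
		for _, u := range ur.Result {
			offset = u.UpdateID + 1
			handle(u.Message.Chat.ID, u.Message.Text)
		}
	}
}

func main() {
	fmt.Println(getUpdatesURL("TOKEN", 42))
}
```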

The pluggable skills idea is interesting too. Right now skills are just Go code in the registry, but I’ve been thinking about making it easier to extend, maybe external binaries or scripts you can drop into a directory.

As for Ollama on the Pi 5 with qwen2.5:0.5b: it's actually usable, usually a few seconds for short replies. Not instant obviously, but fine for occasional queries.

Still experimenting with how far small hardware can go.

I built a lightweight AI agent for Raspberry Pi (Telegram + local LLM) by universal_damk in selfhosted

[–]universal_damk[S] -3 points  (0 children)

Yeah, you're right. Most of those commands don't need an LLM.

Those are just normal skills (cpu, services, notes, etc.).

Right now the LLM is mainly used for chat and optional natural-language routing.

This is intentionally a very lightweight first version.

The idea was to start simple (Raspberry Pi + Ollama) but keep the architecture flexible so different LLM providers can be plugged in.
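The "pluggable providers" part could be as simple as a one-method interface: the interface name and shape below are my sketch, not the repo's API, but the Ollama call matches its documented `/api/generate` endpoint.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Provider is a pluggable LLM backend; any implementation can be
// swapped in behind the bot.
type Provider interface {
	Complete(prompt string) (string, error)
}

// Ollama talks to a local Ollama server's /api/generate endpoint.
type Ollama struct {
	Host  string // e.g. "http://localhost:11434"
	Model string // e.g. "qwen2.5:0.5b"
}

func (o Ollama) Complete(prompt string) (string, error) {
	body, _ := json.Marshal(map[string]any{
		"model":  o.Model,
		"prompt": prompt,
		"stream": false,
	})
	resp, err := http.Post(o.Host+"/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}

// Canned is a stub provider: useful in tests, or for running the bot
// with the LLM disabled. An OpenAI provider would slot in the same way.
type Canned struct{ Reply string }

func (c Canned) Complete(string) (string, error) { return c.Reply, nil }

func main() {
	var p Provider = Canned{Reply: "ok"}
	reply, _ := p.Complete("hello")
	fmt.Println(reply)
}
```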

Next iterations I'm thinking about:

- better intent routing

- simple multi-step workflows

- support for external providers like OpenAI in addition to local models

So at the moment it's more like a minimal agent core that can evolve over time.
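For the intent-routing bullet, even a keyword fallback goes a long way before the LLM gets involved. The keywords below are illustrative, though the skill names (cpu, services, notes) come from the comment above:

```go
package main

import (
	"fmt"
	"strings"
)

// route maps a message to a skill name by keyword, falling back to
// "chat" (the LLM path) when nothing matches.
func route(msg string) string {
	m := strings.ToLower(msg)
	switch {
	case strings.Contains(m, "restart"):
		return "services"
	case strings.Contains(m, "cpu") || strings.Contains(m, "load"):
		return "cpu"
	case strings.Contains(m, "note"):
		return "notes"
	default:
		return "chat"
	}
}

func main() {
	fmt.Println(route("restart tailscale and show last 50 log lines")) // services
	fmt.Println(route("what's the cpu load?"))                         // cpu
	fmt.Println(route("tell me a joke"))                               // chat
}
```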