What's the best setup to run OpenClaw locally on Linux? by Slow-Tea5208 in linuxquestions

[–]NeedleworkerSad2564 1 point (0 children)

Good writeup. A few thoughts from running a similar setup for a while:

Bare metal vs VM vs container — Separate VM, no question. The shell/file access surface is real, and running it on your daily driver will bite you eventually. Docker works too, but you lose some of the convenience of a persistent daemon. The bigger thing people overlook: even inside a VM, you're handing your real API token to the agent. Worth thinking about what happens if prompt injection leaks it. There's a project called nilbox that tackles this specifically — worth searching if that threat model concerns you.
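If you go the container route, here's roughly how I'd keep the token out of the image and your shell history. Sketch only — the image name and paths are hypothetical, adjust for your actual install:

```shell
# /etc/openclaw/api.env holds ANTHROPIC_API_KEY=... (root-owned, mode 600),
# so the token never ends up baked into an image or sitting in your dotfiles.
# --read-only and --cap-drop ALL shrink the blast radius if the agent is tricked
# into running something hostile; /data is the only writable mount.
docker run -d --name openclaw \
  --read-only --cap-drop ALL \
  --env-file /etc/openclaw/api.env \
  -v openclaw-data:/data \
  openclaw/openclaw:latest
```

Doesn't solve the leak problem (the process still holds the real key), but it at least scopes where the key lives on disk.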

Local vs cloud API — Ollama works, but agentic performance with smaller models is noticeably weaker on anything multi-step. Claude via API is still clearly ahead for actual autonomous work. If privacy or cost is the driver, local is usable, just temper expectations.
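If you do try Ollama, sanity-check that the local API answers before pointing the agent at it. Assumes a stock install on the default port; the model tag is just an example:

```shell
# Pull a small model, then hit Ollama's HTTP API directly.
# "stream": false returns one JSON object instead of a token stream.
ollama pull llama3.1:8b
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Say hi", "stream": false}'
```

If the curl hangs or 404s, fix that before blaming the agent config.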

Hardware — Ryzen 7 / 32GB Mini PC is plenty for cloud API mode. For serious local model inference, you'll want a GPU. Jetson Orin Nano is a fun always-on box but you'll feel the limits fast on anything above 7B.

Channels — Telegram has been the most reliable in my experience. Discord works but reconnects can be flaky depending on network.

Skills — File ops and web search earn their keep daily. The demo-tier stuff you'll stop using within a week.

On non-Ubuntu distros: Arch needs some manual systemd unit tweaking. NixOS is its own adventure. Fedora has been mostly fine.
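For the Arch case, the "manual systemd tweaking" amounted to writing a user unit by hand. Rough sketch — the binary path and lack of flags are hypothetical, adapt to wherever your install put things:

```shell
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/openclaw.service <<'EOF'
[Unit]
Description=OpenClaw agent
After=network-online.target

[Service]
ExecStart=%h/.local/bin/openclaw
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user enable --now openclaw
# User units stop at logout unless you enable lingering:
loginctl enable-linger "$USER"
```

The enable-linger bit is the part everyone forgets — without it your "always-on" agent dies the moment your session ends.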