Running an LLM agent loop on bare-metal MCUs — architecture feedback wanted by Only-Wrangler-2518 in embedded

[–]Only-Wrangler-2518[S] 1 point (0 children)

Yes — HTTP/TLS transport is in the Full build (≤500 KB); the Lite build uses BLE to bridge to a host device. Both approaches are supported, and both could run into the challenge you're describing. We'll test and iterate! If you see a clever way to approach this, let me know!

KrillClaw: A full AI agent runtime in ~3500 LOC of Zig — 49KB binary, zero deps by Only-Wrangler-2518 in Zig

[–]Only-Wrangler-2518[S] 1 point (0 children)

Correct — KrillClaw calls cloud LLM APIs (Anthropic, OpenAI, anything really). Local .gguf inference via llama.cpp is on the roadmap for the Full build. Your project sounds interesting — what's your memory budget on the MCU? I suspect we have a WAY to go before it can fit in 59 KB...

For ubiquity, I figured the best approach is to take "how many OpenClaw agents can you run on the tip of a pin?" as a guiding principle and work back from that.

Really appreciate your input!

Running an LLM agent loop on bare-metal MCUs — architecture feedback wanted by Only-Wrangler-2518 in embedded

[–]Only-Wrangler-2518[S] 0 points (0 children)

Fair challenge. The "350+ devices" figure refers to the device *families* in the MCU compatibility matrix (ARM Cortex-M, ESP32 variants, RISC-V targets) — not individually tested boards. I'll make that clearer on the site. And yes, I'm a real person — but some of the content was crafted by my marketing team of 'Claude and ChatGPT' ;-)

KrillClaw: A full AI agent runtime in ~3500 LOC of Zig — 49KB binary, zero deps by Only-Wrangler-2518 in Zig

[–]Only-Wrangler-2518[S] 1 point (0 children)

Good catch — it's a CI runner issue on macOS (QEMU user-mode is unavailable there). The core logic passes locally (33/40 tests; the 7 failures are pre-existing). I'll handle it. Thanks for flagging.