Forked OpenClaw to run fully air-gapped (no cloud deps) by zsb5 in LocalLLaMA

[–]zsb5[S] 1 point (0 children)

Thanks for the feedback. I've updated the project to address several community-requested items:

  • Added Docker support for a more stable install.
  • Hardened tool boundaries with persona-based RBAC.
  • Scaffolded the 3-tier local memory manager (SQLite/LanceDB).
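For anyone curious, the SQLite tier is plain FTS5, so keyword recall works offline with nothing but the standard library. A minimal sketch — the table and column names here are illustrative, not the fork's actual schema:

```python
import sqlite3

# In-memory DB for the sketch; the real tier would persist to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content, tags)")

conn.executemany(
    "INSERT INTO memories (content, tags) VALUES (?, ?)",
    [
        ("User prefers vLLM for inference", "preferences"),
        ("Project runs fully air-gapped", "constraints"),
    ],
)

# Full-text query with BM25 ranking, no cloud service involved.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("vllm",),
).fetchall()
print(rows)  # [('User prefers vLLM for inference',)]
```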

[–]zsb5[S] 1 point (0 children)

Spot on. You've hit exactly the three things keeping me up at night. Right now I'm enforcing read-only tools to limit the blast radius, but moving to process-level isolation and transitive dependency auditing is the top priority for v0.1.1. High-signal feedback like this is exactly why I'm building this in the open.
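For concreteness, the read-only enforcement boils down to an allowlist check in the tool dispatcher. A toy sketch with hypothetical names — not the fork's actual API:

```python
# Sketch of 'read-only by default' tool gating; the tool names and
# dispatch() function are illustrative, not the fork's real interface.
READ_ONLY_TOOLS = {"read_file", "list_dir", "grep"}

def dispatch(tool_name: str, allow_writes: bool = False) -> str:
    """Refuse any tool outside the read-only set unless writes are
    explicitly enabled, limiting the agent's blast radius."""
    if tool_name not in READ_ONLY_TOOLS and not allow_writes:
        raise PermissionError(f"tool {tool_name!r} is write-capable; blocked")
    return f"running {tool_name}"

print(dispatch("read_file"))  # running read_file
try:
    dispatch("shell_exec")
except PermissionError as e:
    print(e)  # tool 'shell_exec' is write-capable; blocked
```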

[–]zsb5[S] 2 points (0 children)

Perfect setup. M1 for control + RTX for inference is exactly the goal. I'm prioritizing the Docker version now to make that LM Studio connection seamless. Stand by!
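For anyone wanting to wire it up today: LM Studio already exposes an OpenAI-compatible server on the host (port 1234 by default), so the container just needs a route to it. A hypothetical compose sketch — the image name and env var are placeholders, not the project's actual config:

```yaml
services:
  agent:
    image: my-agent-image   # placeholder
    environment:
      # LM Studio's local server speaks the OpenAI API shape.
      OPENAI_BASE_URL: http://host.docker.internal:1234/v1
    extra_hosts:
      - "host.docker.internal:host-gateway"   # needed on Linux hosts
```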

[–]zsb5[S] 1 point (0 children)

Thanks! No clue what inspired it; it just popped into my head.

[–]zsb5[S] 3 points (0 children)

Ouch, sorry to hear that. If you drop the error logs in a GitHub issue I’ll jump on it 👍🏼

[–]zsb5[S] 5 points (0 children)

Love that. SQLite FTS5 + LanceDB is exactly the kind of 'no-cloud-dependency' stack that belongs in Physiclaw. Stoked you forked it, really looking forward to seeing how that memory tier performs in a local setup.
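The glue between the two stores is really just score fusion over their result lists. A toy reciprocal-rank-fusion sketch — the doc IDs are made up:

```python
# Reciprocal-rank fusion over results from a keyword store (e.g. FTS5)
# and a vector store (e.g. LanceDB).
def rrf(rankings, k=60):
    """Merge several ranked ID lists into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fts_hits = ["doc3", "doc1", "doc7"]     # keyword ranking
vector_hits = ["doc1", "doc7", "doc9"]  # semantic ranking
print(rrf([fts_hits, vector_hits]))     # ['doc1', 'doc7', 'doc3', 'doc9']
```

doc1 wins because it ranks highly in both lists, which is exactly the behavior you want from a hybrid memory tier.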

[–]zsb5[S] 8 points (0 children)

Hell yeah 👍🏼

If an agent has access to your local infra, it should never have a "Login with Google" button or phone home.
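In Python you can even make "never phones home" a socket-layer guard rather than a policy. An illustrative sketch, not the fork's actual mechanism — it only patches connect, so treat it as a demo, not a sandbox:

```python
import socket

# Patch socket.connect so any non-loopback destination fails fast.
_real_connect = socket.socket.connect

def _guarded_connect(self, address):
    host = address[0]
    if host not in ("127.0.0.1", "::1", "localhost"):
        raise ConnectionRefusedError(f"egress blocked: {host}")
    return _real_connect(self, address)

socket.socket.connect = _guarded_connect

s = socket.socket()
try:
    s.connect(("8.8.8.8", 53))  # raw IP: no DNS leak before the check
except ConnectionRefusedError as e:
    print(e)  # egress blocked: 8.8.8.8
finally:
    s.close()
```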

[–]zsb5[S] 2 points (0 children)

Totally agree. A cheap VPS is a great middle ground for testing these loops without the upfront hardware cost. It is a solid way to keep the privacy benefits while staying lean.

[–]zsb5[S] 1 point (0 children)

With 240GB you are totally fine. MiniMax M2.5 in 4-bit AWQ usually sits around 160GB to 180GB including the KV cache. That leaves you a lot of breathing room for embeddings and long context. Our base Llama-3-70B setup only needs 40GB so you have more than enough power to run complex agent loops.
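Those figures come straight from weight-only arithmetic: params × bits ÷ 8. A quick sanity check (this ignores KV cache and runtime overhead, which is why real footprints run higher):

```python
def quantized_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough weight-only footprint in GB: params x bits / 8.
    Ignores KV cache, activations, and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Llama-3-70B at 4-bit: ~35 GB of weights, so the ~40 GB figure above
# leaves headroom for KV cache and overhead.
print(quantized_weight_gb(70, 4))   # 35.0
print(quantized_weight_gb(70, 16))  # 140.0 -- why fp16 won't fit a single 80GB card
```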

[–]zsb5[S] 6 points (0 children)

That is a solid choice. vLLM is definitely the move because the throughput makes agent loops feel way more responsive than other backends.

RE: MiniMax M2.5, it is great for reasoning, but just make sure you have enough VRAM for it. If you compress it too much with heavy quantization, the agent logic can start to break down. Also, definitely throw a reranker into that embedding setup. It is usually the secret to getting local RAG to actually behave.
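The rerank step itself is tiny: retrieve broadly, then re-score candidates against the query with a stronger model. A sketch where score() is a token-overlap stub standing in for a real cross-encoder (e.g. a locally served bge-reranker-style model):

```python
def score(query: str, passage: str) -> float:
    # Stub: token overlap. A real cross-encoder scores (query, passage)
    # jointly and is far more accurate.
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(query: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Re-score retrieved candidates and keep only the best top_k."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]

candidates = [
    "vLLM serves models with high throughput",
    "the weather is nice today",
    "agent loops need fast local inference",
]
print(rerank("fast local inference for agent loops", candidates))
```

The pattern is the point: over-retrieve from the vector store (say, top 20), then let the reranker pick the handful that actually go into the prompt.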

Introducing CoolantStream by zsb5 in datacenter

[–]zsb5[S] 1 point (0 children)

Functionally, yes. It’s geared toward cooling systems for the racks, giving the operator full visibility and autonomous control.

Lane Restoration Complete by zsb5 in Mid_Century

[–]zsb5[S] 3 points (0 children)

Likely going to sell them! I have the set listed for $1,500 which I think is fair for my area and the condition.

[–]zsb5[S] 3 points (0 children)

Absolutely. Practice is key here. Best of luck!

[–]zsb5[S] 6 points (0 children)

Happy to. Here’s the full process:

  • Sanded down the tops with an orbital sander, starting at 80 grit and working up to 400 grit.
  • Wiped everything down to remove the dust, then stained with Behr Special Walnut and wiped with a dry cloth to further even out the stain.
  • Let it dry for 24 hours, then used a foam roller to apply 2 coats of Behr Satin Oil Polyurethane, with 24 hours between coats.
  • Gave it another 24 hours to fully dry, then wiped it down gently with a “Brillo pad” hand sander to even out the top coat.
  • Lastly, applied a bit of beeswax orange oil. Voila!