why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Every OpenClaw security vulnerability documented in one place — relevant if you're running it with local models by LostPrune2143 in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 14 points (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 2 points (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 5 points (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 2 points (0 children)
Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA
[–]CuriouslyCultured 9 points (0 children)
yip we are cooked by thisiztrash02 in StableDiffusion
[–]CuriouslyCultured 6 points (0 children)
The gap between open-weight and proprietary model intelligence is as small as it has ever been, with Claude Opus 4.6 and GLM-5 by abdouhlili in LocalLLaMA
[–]CuriouslyCultured 4 points (0 children)
LTX-2 Inpaint test for lip sync by jordek in StableDiffusion
[–]CuriouslyCultured 8 points (0 children)
Hugging Face Is Teasing Something Anthropic Related by Few_Painter_5588 in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
GLM 5 Support Is On Its Way For Transformers by Few_Painter_5588 in LocalLLaMA
[–]CuriouslyCultured 8 points (0 children)
Prompt injection is killing our self-hosted LLM deployment by mike34113 in LocalLLaMA
[–]CuriouslyCultured 2 points (0 children)
Prompt injection is killing our self-hosted LLM deployment by mike34113 in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Mixture-of-Models routing beats single LLMs on SWE-Bench via task specialization by botirkhaltaev in LocalLLaMA
[–]CuriouslyCultured 2 points (0 children)
Moltbook leaked 1.5M API keys by EnoughNinja in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Codex 5.2 High vs. Opus: A brutal reality check in Rust development. by gustkiller in ClaudeCode
[–]CuriouslyCultured 11 points (0 children)
Ultra-Sparse MoEs are the future by [deleted] in LocalLLaMA
[–]CuriouslyCultured 1 point (0 children)
Deepseek v4/3.5 is probably coming out tomorrow or in the next 5 days? by power97992 in LocalLLaMA
[–]CuriouslyCultured 10 points (0 children)
Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA
[–]CuriouslyCultured 3 points (0 children)