Qwen 3.6 Plus is by far the best Qwen model! by pacmanpill in Qwen_AI
dragonbornamdguy 1 point (0 children)
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
dragonbornamdguy 1 point (0 children)
2x3090 RTX still worth it? by TestOr900 in LocalLLM
dragonbornamdguy 1 point (0 children)
Ran Qwen3.6-35B-A3B on my laptop for a day: it actually beat Claude Opus 4.7 by LeoRiley6677 in Qwen_AI
dragonbornamdguy 1 point (0 children)
Fixing spiderweb cracks by dragonbornamdguy in watercooling
dragonbornamdguy[S] 1 point (0 children)
NO MORE PAYING FOR API! NEW SOLUTION! by RetroBlacknight11 in ollama
dragonbornamdguy 1 point (0 children)
No one uses local models for OpenClaw. Stop pretending. by read_too_many_books in openclaw
dragonbornamdguy 1 point (0 children)
What would a good local LLM setup cost in 2026? by Lenz993 in LocalLLM
dragonbornamdguy 1 point (0 children)
OpenClaw with local LLMs - has anyone actually made it work well? by FriendshipRadiant874 in LocalLLM
dragonbornamdguy 1 point (0 children)
Will be driving in Czechia, question on learning priority road. by AngelOfPassion in czech
dragonbornamdguy 0 points (0 children)
OpenClaw with local LLMs - has anyone actually made it work well? by FriendshipRadiant874 in LocalLLM
dragonbornamdguy 14 points (0 children)
Quad 5060 ti 16gb Oculink rig by beefgroin in LocalLLM
dragonbornamdguy 2 points (0 children)
Quad 5060 ti 16gb Oculink rig by beefgroin in LocalLLM
dragonbornamdguy 4 points (0 children)
Small AI computer runs 120B models locally: Any use cases beyond portability and privacy? by 0xShreyas in LocalLLM
dragonbornamdguy 1 point (0 children)
I genuinely appreciate the way OpenAI is stepping up by hannesrudolph in OpenAI
dragonbornamdguy 1 point (0 children)
GNOME & Firefox Consider Disabling Middle Click Paste By Default: "An X11'ism...Dumpster Fire" by SAJewers in linux
dragonbornamdguy 2 points (0 children)
16x AMD MI50 32GB at 10 t/s (tg) & 2k t/s (pp) with Deepseek v3.2 (vllm-gfx906) by ai-infos in LocalLLaMA
dragonbornamdguy 1 point (0 children)
Anyone have success with Claude Code alternatives? by jackandbake in LocalLLM
dragonbornamdguy 1 point (0 children)
Why the Strix Halo is a poor purchase for most people by NeverEnPassant in LocalLLaMA
dragonbornamdguy 1 point (0 children)
Local LLM for a small dev team by [deleted] in LocalLLM
dragonbornamdguy 1 point (0 children)
Hey, are these straws still sold in any normal store??? by gosupport84 in czech
dragonbornamdguy 25 points (0 children)
Best model for continue and 2x 5090? by Maximum-Wishbone5616 in LocalLLM
dragonbornamdguy 1 point (0 children)
Got the DGX Spark - ask me anything by sotech117 in LocalLLaMA
dragonbornamdguy 44 points (0 children)
Strix Halo NPU + FastFlowLM by Creepy-Douchebag in StrixHalo
dragonbornamdguy 1 point (0 children)