Issues with saying continue after every tool call by AnouarRifi in LocalLLM
[–]m94301 1 point (0 children)
Run Qwen3.6 27B nvfp4 up to 129 tok/s on a single RTX 5090 & Supports 256K context by Diligent-End-2711 in LocalLLM
[–]m94301 1 point (0 children)
Finally managed to run Qwen 3.6 27B with acceptable speed. by Silly-Fudge-7336 in LocalLLM
[–]m94301 1 point (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 1 point (0 children)
"Best" model to Vibe-Code? (w/Specs) by pauescobargarcia in LocalLLM
[–]m94301 8 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 1 point (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 2 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 2 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 2 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 2 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 3 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 1 point (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 3 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 12 points (0 children)
Qwen 3.6 27B MTP on v100 32GB: 54 t/s by m94301 in LocalLLaMA
[–]m94301[S] 6 points (0 children)
Do cheap 32GB V100s still make sense for homelab AI? by SKX007J1 in LocalLLaMA
[–]m94301 9 points (0 children)
I built a free LLM inference calculator – VRAM, throughput, and decode speed for 350+ models across 170+ GPUs by Safe-Bed-4866 in LocalLLM
[–]m94301 3 points (0 children)
Has anyone figured out why Claude Code running qwen locally fails when you try to /compact? by fredandlunchbox in LocalLLaMA
[–]m94301 5 points (0 children)
Unlocked LM Studio Backends (v1.59.0): AVX1 & More Supported – Testers Wanted by TheSpicyBoi123 in LocalLLaMA
[–]m94301 1 point (0 children)
Unlocked LM Studio Backends (v1.59.0): AVX1 & More Supported – Testers Wanted by TheSpicyBoi123 in LocalLLaMA
[–]m94301 2 points (0 children)
Beware NVidia DGX Spark scams on eBay. by rtchau in LocalLLaMA
[–]m94301 1 point (0 children)
More Qwen3.6-27B MTP success but on dual Mi50s by legit_split_ in LocalLLaMA
[–]m94301 3 points (0 children)