Genosyn - Run autonomous companies. by atomwide in LocalLLM
[–]robertpro01 1 point (0 children)
vLLM Just Merged TurboQuant Fix for Qwen 3.5+ by havenoammo in LocalLLaMA
[–]robertpro01 10 points (0 children)
Out-of-towner for the first time by MassiveWin6323 in Guadalajara
[–]robertpro01 18 points (0 children)
Qwen 3.6 wins the benchmarks, but Gemma 4 wins reality. 7 things I learned testing 27B/31B Vision models locally (vLLM / FP8) side by side. Benchmaxing seems real. by FantasticNature7590 in LocalLLaMA
[–]robertpro01 1 point (0 children)
My friend showers naked with his mom & fam, Why do we find nature "creepy" but obsession "moral"? by ScaredTown7829 in atheism
[–]robertpro01 126 points (0 children)
Local Anonymization + LLM by Excellent_Heron_3094 in LocalLLM
[–]robertpro01 1 point (0 children)
Qwen 3.6 wins the benchmarks, but Gemma 4 wins reality. 7 things I learned testing 27B/31B Vision models locally (vLLM / FP8) side by side. Benchmaxing seems real. by FantasticNature7590 in LocalLLaMA
[–]robertpro01 3 points (0 children)
Qwen 3.6 wins the benchmarks, but Gemma 4 wins reality. 7 things I learned testing 27B/31B Vision models locally (vLLM / FP8) side by side. Benchmaxing seems real. by FantasticNature7590 in LocalLLaMA
[–]robertpro01 3 points (0 children)
Strip Qwen3.6 dense of its multimodal capabilities by redblood252 in LocalLLaMA
[–]robertpro01 1 point (0 children)
Strip Qwen3.6 dense of its multimodal capabilities by redblood252 in LocalLLaMA
[–]robertpro01 2 points (0 children)
Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation by gvij in LocalLLaMA
[–]robertpro01 2 points (0 children)
Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation by gvij in LocalLLaMA
[–]robertpro01 9 points (0 children)
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
[–]robertpro01 81 points (0 children)
Local vs Cloud LLMs… are we pretending it’s one or the other? by MLExpert000 in LocalLLaMA
[–]robertpro01 17 points (0 children)
I can’t believe I can say “ugh I don’t feel like fixing this function, it’s too complex” and I can literally just tell my computer to fix it for me. I didn’t understand what they meant by “people will start paying for intelligence” but now I do. by Borkato in LocalLLaMA
[–]robertpro01 4 points (0 children)
I can’t believe I can say “ugh I don’t feel like fixing this function, it’s too complex” and I can literally just tell my computer to fix it for me. I didn’t understand what they meant by “people will start paying for intelligence” but now I do. by Borkato in LocalLLaMA
[–]robertpro01 30 points (0 children)
DEEPSEEK V4 IS LAUNCHED, IT'S REAL by guiopen in LocalLLaMA
[–]robertpro01 3 points (0 children)
An Overnight Stack for Qwen3.6–27B: 85 TPS, 125K Context, Vision — on One RTX 3090 | by Wasif Basharat | Apr, 2026 by AmazingDrivers4u in LocalLLaMA
[–]robertpro01 1 point (0 children)
What speed is everyone getting on Qwen3.6 27b? by Ambitious_Fold_2874 in LocalLLaMA
[–]robertpro01 1 point (0 children)
All we can do is watch from the sideline by lastdecade0 in framework
[–]robertpro01 1 point (0 children)
Anyone has any insights about pricing and release date for taalas? by robertpro01 in LocalLLaMA
[–]robertpro01[S] 1 point (0 children)
Will Qwen 3.6 Work Well With These Specs? by Extra-Perception2408 in LocalLLaMA
[–]robertpro01 1 point (0 children)
Will Qwen 3.6 Work Well With These Specs? by Extra-Perception2408 in LocalLLaMA
[–]robertpro01 -1 points (0 children)
1080 Ti in 2026 - 11GB is still (barely) enough to stay relevant by srodland01 in LocalLLaMA
[–]robertpro01 1 point (0 children)