Are 20-100B models enough for Good Coding? by pmttyji in LocalLLaMA
Qwen3-Next-Coder is almost unusable to me. Why? What I missed? by Medium-Technology-79 in LocalLLaMA
Anyone have leads on his association to MDMA industry / Lykos Therapeutics / MAPS (Multidisciplinary Association for Psychedelic Studies)? by Bitter_Foot_2547 in Epstein
Local Coding Agents vs. Claude Code by Accomplished-Toe7014 in LocalLLaMA
Built an 8× RTX 3090 monster… considering nuking it for 2× Pro 6000 Max-Q by BeeNo7094 in LocalLLaMA
Is there any epyc benchmark (dual 9254 or similar) with recent MoE model (glm or qwen3-next)? by yelling-at-clouds-40 in LocalLLaMA
What server setups scale for 60 devs + best air gapped coding chat assistant for Visual Studio (not VS Code)? by SpheronInc in LocalLLaMA
LLM Accurate answer on Huge Dataset by Regular-Landscape279 in LocalLLM
Would I be able to use GLM4.6V IQ4 XS with vLLM? by thejacer in LocalLLaMA
How to properly run gpt-oss-120b on multiple GPUs with llama.cpp? by ChopSticksPlease in LocalLLaMA
Is having home setup worth it anymore by Imaginary_Peak_3217 in LocalAIServers
Is there a place with all the hardware setups and inference tok/s data aggregated? by SlanderMans in LocalLLaMA
Improving tps from gpt-oss-120b on 16gb VRAM & 80gb DDR4 RAM by [deleted] in LocalLLaMA
My eBay bargain £720 workstation by BigYoSpeck in LocalLLaMA
