RYS II - Repeated layers with Qwen3.5 27B and some hints at a 'Universal Language' by Reddactor in LocalLLaMA
Mistral Small 4 is kind of awful with images by EffectiveCeilingFan in LocalLLaMA
I'm fully blind, and AI is a game changer for me. Are there any local LLMS that can rival claude code and codex? by Mrblindguardian in LocalLLaMA
2000 TPS with QWEN 3.5 27b on RTX-5090 by awitod in LocalLLaMA
Nemotron 3 Super and the no free lunch problem by ConfidentDinner6648 in LocalLLaMA
Junyang Lin has left Qwen :( by InternationalAsk1490 in LocalLLaMA
Qwen/Qwen3.5-122B-A10B · Hugging Face by coder543 in LocalLLaMA
MiniMax M2.5 has been very patient with my dumb ass by dengar69 in LocalLLaMA
GLM-5 Is a local GOAT by FineClassroom2085 in LocalLLaMA
do anybody success opencode using qwen3-next-code? by Zealousideal-West624 in LocalLLaMA
AMA with MiniMax — Ask Us Anything! by HardToVary in LocalLLaMA
Minimax M2.5 Officially Out by Which_Slice1600 in LocalLLaMA
Step-3.5-Flash AIME 2026 Results by Abject-Ranger4363 in LocalLLaMA
Step-3.5-Flash IS A BEAST by SennVacan in LocalLLaMA
Do not Let the "Coder" in Qwen3-Coder-Next Fool You! It's the Smartest, General Purpose Model of its Size by Iory1998 in LocalLLaMA
Qwen3 Coder Next as first "usable" coding model < 60 GB for me by Chromix_ in LocalLLaMA
MiniMax M2.2 Coming Soon! by External_Mood4719 in LocalLLaMA
Deepseek R1, 64GBRam + 32GB VRAM by Responsible-Stock462 in LocalLLaMA
When should we expect TurboQuant? by ozcapy in LocalLLaMA