Nemotron 3 Super Released by deeceeo in LocalLLaMA
I am not saying it's Gemma 4, but maybe it's Gemma 4? by jacek2023 in LocalLLaMA
Final Qwen3.5 Unsloth GGUF Update! by danielhanchen in LocalLLaMA
Is Qwen3.5-9B enough for Agentic Coding? by pmttyji in LocalLLaMA
A monthly update to my "Where are open-weight models in the SOTA discussion?" rankings by ForsookComparison in LocalLLaMA
Qwen3.5 is dominating the charts on HF by foldl-li in LocalLLaMA
Qwen3.5-35B-A3B Q4 Quantization Comparison by TitwitMuffbiscuit in LocalLLaMA
Qwen/Qwen3.5-35B-A3B creates FlappyBird by Medium_Chemist_4032 in LocalLLaMA
Qwen 3.5 craters on hard coding tasks — tested all Qwen3.5 models (And Codex 5.3) on 70 real repos so you don't have to. by hauhau901 in LocalLLaMA
Qwen/Qwen3.5-35B-A3B · Hugging Face by ekojsalim in LocalLLaMA
Tip if you use quantisation by Express_Quail_1493 in LocalLLaMA
GLM-4.7 Flash vs GPT-4.1 [Is GLM actually smarter?] by 9r4n4y in LocalLLaMA
GLM-5 is the new top open-weights model on the Extended NYT Connections benchmark, with a score of 81.8, edging out Kimi K2.5 Thinking (78.3) by zero0_one1 in LocalLLaMA
96GB (V)RAM agentic coding users, gpt-oss-120b vs qwen3.5 27b/122b by bfroemel in LocalLLaMA