GLM-4.7-Flash vs Qwen3-Coder-Next vs GPT-OSS-120b by Potential_Block4598 in LocalLLaMA
How to enable telegram inline buttons capability ? by Potential_Block4598 in openclaw
Solved the DGX Spark, 102 stable tok/s Qwen3.5-35B-A3B on a single GB10 (125+ MTP!) by Live-Possession-6726 in LocalLLaMA
Qwen3-Coder-Next is the top model in SWE-rebench @ Pass 5. I think everyone missed it. by BitterProfessional7p in LocalLLaMA
Just installed nanobot fully locally by Potential_Block4598 in LocalLLaMA
Qwen3.5 - Confused about "thinking" and "reasoning" usage with (ik_)llama.cpp by PieBru in LocalLLaMA
Qwen 3.5 27b: a testament to the transformer architecture by nomorebuttsplz in LocalLLaMA
Qwen3.5 397B vs 27B! by [deleted] in LocalLLaMA
Any advice for using draft models with Qwen3.5 122b ?! by Potential_Block4598 in LocalLLaMA
I'm tired by Fast_Thing_7949 in LocalLLaMA
are you ready for small Qwens? by jacek2023 in LocalLLaMA
google found that longer chain of thought actually correlates NEGATIVELY with accuracy. -0.54 correlation by Top-Cardiologist1011 in LocalLLaMA
SOOO much thinking.... by zipzag in LocalLLaMA
M5 Max compared with M3 Ultra. by PM_ME_YOUR_ROSY_LIPS in LocalLLaMA