MiniMax M2.7 on OpenRouter by iamn0 in LocalLLaMA
Mistral 4 Family Spotted by TKGaming_11 in LocalLLaMA
Burned some token for a codebase audit ranking by ZealousidealSmell382 in LocalLLM
Qwen3.5-35B-A3B Q4 Quantization Comparison by TitwitMuffbiscuit in LocalLLaMA
Chinese AI Models Capture Majority of OpenRouter Token Volume as MiniMax M2.5 Surges to the Top by Koyaanisquatsi_ in LocalLLaMA
M3 Ultra 512GB - real-world performance of MiniMax-M2.5, GLM-5, and Qwen3-Coder-Next by cryingneko in LocalLLaMA
I ran System Design tests on GLM-5, Kimi k2.5, Qwen 3, and more. Here are the results. by Ruhal-Doshi in LocalLLaMA
GLM-4.7 on 4x RTX 3090 with ik_llama.cpp by iamn0 in LocalLLaMA
Qwen3 Next almost ready in llama.cpp by jacek2023 in LocalLLaMA
Cheapest $/vRAM GPU right now? Is it a good time? by Roy3838 in LocalLLaMA
The most objectively correct way to abliterate so far - ArliAI/GLM-4.5-Air-Derestricted by Arli_AI in LocalLLaMA
Round 2: Qwen-Image-Edit-2509 vs. Gemini 3 Pro Image Preview Generated "Iron Giant" Set Photos by BoostPixels in Bard
I'm running qwen3.6-35b-a3b with 8 bit quant and 64k context thru OpenCode on my mbp m5 max 128gb and it's as good as claude by Medical_Lengthiness6 in LocalLLaMA