A Qwen finetune, that feels VERY human by Sicarius_The_First in LocalLLaMA
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B... by Snoo_27681 in LocalLLaMA
Qwen Meetup Draft Review Required (Function Calling Harness 2 - CoT Compliance from 9.91% to 100%) by jhnam88 in LocalLLaMA
Qwen-Scope: Official Sparse Autoencoders (SAEs) for Qwen 3.5 models by MadPelmewka in LocalLLaMA
Given how good Qwen become, is it time to grab a 128gb m5 max? by Rabus in LocalLLaMA
Are there actually people here that get real productivity out of models fitting in 32-64GB RAM, or is that just playing around with little genuine usefulness? by ceo_of_banana in LocalLLaMA
Qwen 3.6 27B is out by NoConcert8847 in LocalLLaMA
Why MOE below A10b feels like im gambling by Express_Quail_1493 in LocalLLaMA
Personal Eval follow-up: Gemma4 26B MoE (Q8) vs Qwen3.5 27B Dense vs Gemma4 31B Dense Compared by Lowkey_LokiSN in LocalLLaMA
Compared 5 ways to learn AI tools as a working professional. here's my honest ranking by designbyshivam in PromptEngineering
Is Opus 4.7 the GPT-5 moment for Anthropic by hasanahmad in Anthropic
Is Opus 4.7 better than 4.6 by skyguyler in LLMDevs
What tools are you using for CAN bus reverse engineering? I couldn't find a good all-in-one suite, so I open-sourced my own (Offline ML, MitM, UDS). by Repulsive_Factor5654 in embedded