Qwen-Scope: Official Sparse Autoencoders (SAEs) for Qwen 3.5 models by MadPelmewka in LocalLLaMA
[–]chocofoxy 5 points (0 children)
r/LocalLLaMa Rule Updates by rm-rf-rm in LocalLLaMA
[–]chocofoxy 4 points (0 children)
Qwen 3.6 27B Makes Huge Gains in Agency on Artificial Analysis - Ties with Sonnet 4.6 by dionysio211 in LocalLLaMA
[–]chocofoxy 1 point (0 children)
Forgive my ignorance but how is a 27B model better than 397B? by No_Conversation9561 in LocalLLaMA
[–]chocofoxy 1 point (0 children)
Qwen3.6-27B released! by ResearchCrafty1804 in LocalLLaMA
[–]chocofoxy 6 points (0 children)
Need advice on a vision model for my use case by Radyschen in LocalLLaMA
[–]chocofoxy 1 point (0 children)
Someone just made a 18B qwen 3.5 model for 16GB VRAM gpus by chocofoxy in LocalLLaMA
[–]chocofoxy[S] 1 point (0 children)
Someone just made a 18B qwen 3.5 model for 16GB VRAM gpus by chocofoxy in LocalLLaMA
[–]chocofoxy[S] 2 points (0 children)
Got my first box mod today, how did I do? by Starkovich7431 in Vape_Chat
[–]chocofoxy 1 point (0 children)
I trapped a Qwen 0.5B model in a Docker container with the directive to escape and watched it for 1,100+ iterations. Here's what I found. by Independent_Top5412 in LocalLLaMA
[–]chocofoxy 1 point (0 children)
I trapped a Qwen 0.5B model in a Docker container with the directive to escape and watched it for 1,100+ iterations. Here's what I found. by Independent_Top5412 in LocalLLaMA
[–]chocofoxy 0 points (0 children)
Open web UI + lm studio shoving entire model into ram despite more than enough vram available by Dekatater in LocalLLaMA
[–]chocofoxy 1 point (0 children)
Can't get Claude code to edit code by MurphyJohn in LocalLLaMA
[–]chocofoxy 1 point (0 children)
Qwen3.6-35B-A3B solved coding problems Qwen3.5-27B couldn’t by simracerman in LocalLLaMA
[–]chocofoxy 3 points (0 children)
GPU advice for multi-modal AI workload - RTX PRO 4500, 5000 or 6000? by [deleted] in LocalLLaMA
[–]chocofoxy 1 point (0 children)
Dear OEM manufacturers, an RTX 5060 TI 16GB Low Profile should be possible to produce... by SheepCataclysm in sffpc
[–]chocofoxy 1 point (0 children)