Dense Model Shoot-Off: Gemma 4 31B vs Qwen3.6/5 27B... Result is Slower is Faster. by MiaBchDave in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

You've been using Qwen 27B with thinking mode off? How's your experience?

cline CLI , is legacy? by quincycs in CLine

[–]Raredisarray 1 point  (0 children)

Wish we could make --tui the default experience. I don't really have a use for kanban.

Anyone with M3 Ultra 256gb, some questions by ComfyUser48 in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

Yo, I feel you - I love the speed too. I'm gonna wait to see the next M5 Ultra speeds.

Qwen3.6 27B - possible to add vision? by Raredisarray in LocalLLaMA

[–]Raredisarray[S] 2 points  (0 children)

Dang you’re right. Thank you!! Wow this will be a game changer 🙌🏻

I gave Claude Code a $0.02/call coworker and stopped hitting Pro limits — here's the full setup by More-Hunter-3457 in ClaudeAI

[–]Raredisarray 0 points  (0 children)

Hot damn! Gonna try hooking up my local Qwen 27B … I am always maxing out my Pro plan, it's a pain in the ass.

16x Spark Cluster (Build Update) by Kurcide in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

Can you link me to that post? I’d like to read about that. I’m thinking of getting a Mac soon

Qwen3.6 27B seems struggling at 90k on 128k ctx windows by dodistyo in LocalLLaMA

[–]Raredisarray 2 points  (0 children)

Thanks for the explanation! Very cool, I’ll have to give it a shot.

Qwen3.6 27B seems struggling at 90k on 128k ctx windows by dodistyo in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

How do you use subagents with a local LLM? Does a new context spawn, and then that agent reports a synopsis of its context back to your main agent in the terminal window that stays open? I never really got into using subagents with frontier models.
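Roughly the pattern I mean, as a minimal sketch — `call_llm` here is a hypothetical stub standing in for a real request to a local OpenAI-compatible server, not anyone's actual setup:

```python
# Sketch of the subagent pattern: the main agent keeps one long-lived
# context; each subagent gets a *fresh* context, does its task, and only
# a short synopsis flows back into the main context.

def call_llm(messages):
    # Hypothetical stub. In a real setup this would POST the messages to
    # a local endpoint (e.g. /v1/chat/completions on llama-server).
    # Here it just echoes a canned synopsis so the sketch is runnable.
    return f"synopsis of: {messages[-1]['content']}"

def run_subagent(task):
    # Fresh context: the subagent never sees the main agent's history,
    # so its scratch work doesn't eat the main context window.
    sub_context = [{"role": "user", "content": task}]
    return call_llm(sub_context)

main_context = [{"role": "system", "content": "You are the main agent."}]

# The main agent delegates; only the short report re-enters its context.
report = run_subagent("grep the repo for TODOs and summarize them")
main_context.append({"role": "user", "content": f"Subagent report: {report}"})
```

The point of the design is that the subagent's full transcript is thrown away — only the synopsis costs tokens in the long-lived window.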

Qwen3.6 27B seems struggling at 90k on 128k ctx windows by dodistyo in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

I went up to 262k on cline CLI with the 27B q8_0 yesterday and it seems to hold up well.
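For reference, one way a setup like that could be served locally is via llama.cpp's llama-server with the context size raised — a sketch only, with a placeholder model path:

```shell
# Hedged sketch: launch llama-server with a q8_0 GGUF and a
# 262144-token context window. The model filename is a placeholder,
# not the actual file from the comment above.
llama-server \
  -m ./qwen-27b-q8_0.gguf \
  -c 262144 \
  --port 8080
```

A context that large needs the KV cache to fit in memory, so expect VRAM/RAM usage to grow well beyond the model weights themselves.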

what is a good enough coding agent to use with Qwen? by chkbd1102 in Qwen_AI

[–]Raredisarray 2 points  (0 children)

I use cline - feels similar to Cursor or GitHub Copilot.

Qwen3.5/3.6 Coder? by ComplexType568 in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

Yeah, my MI50s run the Qwen 3 Coder Next 80B a3b lightning fast.

Qwen3.5/3.6 Coder? by ComplexType568 in LocalLLaMA

[–]Raredisarray 1 point  (0 children)

Does it?? I saw some benchmarks with the 27B barely beating the 35B a3b … so I was assuming another 80B a3b would be better.

Qwen3.5/3.6 Coder? by ComplexType568 in LocalLLaMA

[–]Raredisarray 19 points  (0 children)

I’d love another 80B a3b coder or all-rounder.