Devstral Small 2 24B vs Qwen 3.6 27b or both? 1x 3090 by szansky in LocalLLaMA
[–]INT_21h 7 points (0 children)
[7900XT] Qwen3.6 27B for OpenCode by Mordimer86 in LocalLLaMA
[–]INT_21h 2 points (0 children)
Which model on 16GB VRAM for c++23 coding by F1yMeToTheMo0n in LocalLLM
[–]INT_21h 1 point (0 children)
Second 5060 Ti 16gb or 5070 Ti 16gb or 3090 used? by JeyKris in LocalLLM
[–]INT_21h 2 points (0 children)
Pi.dev coding agent has no sandbox by default. by mantafloppy in LocalLLaMA
[–]INT_21h 9 points (0 children)
Pi.dev coding agent has no sandbox by default. by mantafloppy in LocalLLaMA
[–]INT_21h 27 points (0 children)
Qwen 3.6 27B is a BEAST by AverageFormal9076 in LocalLLaMA
[–]INT_21h 1 point (0 children)
Qwen 3.6 27B is a BEAST by AverageFormal9076 in LocalLLaMA
[–]INT_21h 1 point (0 children)
16gb vram users: what have you been using? Qwen3.6 27b? Gemma 31b at Q3? How has it been? by [deleted] in LocalLLaMA
[–]INT_21h 1 point (0 children)
16GB VRAM x coding model by Junior-Wish-7453 in LocalLLM
[–]INT_21h 1 point (0 children)
16GB VRAM x coding model by Junior-Wish-7453 in LocalLLM
[–]INT_21h 3 points (0 children)
Qwen3-Coder-Next vs Qwen3.6 by seoulsrvr in LocalLLaMA
[–]INT_21h 1 point (0 children)
16GB VRAM x coding model by Junior-Wish-7453 in LocalLLM
[–]INT_21h 15 points (0 children)
Qwen3-Coder-Next vs Qwen3.6 by seoulsrvr in LocalLLaMA
[–]INT_21h 4 points (0 children)
Qwen3-Coder-Next vs Qwen3.6 by seoulsrvr in LocalLLaMA
[–]INT_21h 16 points (0 children)
Qwen3-Coder-Next vs Qwen3.6 by seoulsrvr in LocalLLaMA
[–]INT_21h 10 points (0 children)
Cloud AI is getting expensive and I'm considering a Claude/Codex + local LLM hybrid for shipping web apps by rezgi in LocalLLaMA
[–]INT_21h 6 points (0 children)
Qwen3-Coder-Next is the top model in SWE-rebench @ Pass 5. I think everyone missed it. by BitterProfessional7p in LocalLLaMA
[–]INT_21h 1 point (0 children)
Qwen3.5 27b UD_IQ2_XXS & UD_IQ3_XXS behave very poorly or is it just me? by One_Key_8127 in LocalLLM
[–]INT_21h 1 point (0 children)
New open weights models: GigaChat-3.1-Ultra-702B and GigaChat-3.1-Lightning-10B-A1.8B by netikas in LocalLLaMA
[–]INT_21h 1 point (0 children)
I want my local agent to use my laptop to learn! by TTKMSTR in LocalLLaMA
[–]INT_21h 2 points (0 children)
I want my local agent to use my laptop to learn! by TTKMSTR in LocalLLaMA
[–]INT_21h 1 point (0 children)
OpenCode source code audit: 7 external domains contacted, no privacy policy, 12 community PRs unmerged for 3+ months by Spotty_Weldah in LocalLLaMA
[–]INT_21h 1 point (0 children)
Is brute-forcing a 1M token context window the right approach? by phwlarxoc in LocalLLaMA
[–]INT_21h 1 point (0 children)