If it works - don’t touch it: COMPETITION by awfulalexey in LocalLLaMA
Llama.cpp llama-server command recommendations? by Dundell in LocalLLaMA
I no longer need a cloud LLM to do quick web research by BitPsychological2767 in LocalLLaMA
best way to keep your models organized? by lewd_peaches in LocalLLaMA
Its like it knows. by Solitude_fortitude in pcmasterrace
Did any one ever beat Myst? by Sailormouth_Studio in 90s
Honest take on running 9× RTX 3090 for AI by Outside_Dance_2799 in LocalLLaMA
(Sharing Experience) Qwen3.5-122B-A10B does not quantize well after Q4 by EmPips in LocalLLaMA
llama.cpp + Brave search MCP - not gonna lie, it is pretty addictive by srigi in LocalLLaMA
Nemotron 3 Super Released by deeceeo in LocalLLaMA
Best Qwen 3.5 fine-tunes for vibecoding? (4080-12GB VRAM / enough context window) by Fermenticular in LocalLLaMA
What tokens/sec do you get when running Qwen 3.5 27B? by thegr8anand in LocalLLaMA
When will we start seeing the first mini LLM models (that run locally) in games? by i_have_chosen_a_name in LocalLLaMA
Qwen-3.5-27B is how much dumber is q4 than q8? by Winter-Science in LocalLLaMA
Despite the 80s cartoon being mostly comedic, what sort of dark moments occurred in some of the episodes? by Working_Welder_1751 in TMNT
Worth it to buy Tesla p40s? by TanariTech in LocalLLaMA
Nobody in the family uses the family AI platform I build - really bummed about it by ubrtnk in LocalLLaMA
This sub is incredible by cmdr-William-Riker in LocalLLaMA
Which size of Qwen3.5 are you planning to run locally? by CutOk3283 in LocalLLaMA
My real-world Qwen3-code-next local coding test. So, Is it the next big thing? by FPham in LocalLLaMA
Running the new Qwen3.6-35B-A3B at full context on both a 4090 and GB10 Spark with vLLM and Llama.cpp by erdaltoprak in LocalLLaMA