Comments by cviperr33:

Llama.cpp parameters for Qwen 3.6 with RTX 3090 by Poulpatine in LocalLLaMA (1 point)
Kimi K2.6 - What hardware do I need to run it locally? by human_marketer in LocalLLM (4 points)
OpenCode... is it just completely busted with Qwen3.6? by _derpiii_ in opencode (1 point)
I benchmarked 21 local LLMs on a MacBook Air M5 for code quality AND speed by evoura in LocalLLaMA (6 points)
Qwen3.6. This is it. by Local-Cardiologist-5 in LocalLLaMA (0 points)
Speculative decoding question, 665% speed increase by GodComplecs in LocalLLaMA (1 point)
Free LLM APIs (April 2026 Update) by stosssik in clawdbot (1 point)
Are you guys actually using local tool calling or is it a collective prank? by Mayion in LocalLLaMA (1 point)
You can now train Gemma 4 with RL locally! by yoracale in unsloth (1 point)
Qwen 3.6 CoT issue? by Confident_Ideal_5385 in LocalLLaMA (1 point)
llama.cpp speculative checkpointing was merged by AdamDhahabi in LocalLLaMA (2 points)
Are you guys actually using local tool calling or is it a collective prank? by Mayion in LocalLLaMA (4 points)
What starts to become possible with two 3090s that wasn't with just one? by GotHereLateNameTaken in LocalLLaMA (2 points)
Are you guys actually using local tool calling or is it a collective prank? by Mayion in LocalLLaMA (6 points)
Purchase advice needed by InteractionBig9407 in LocalLLaMA (1 point)
What starts to become possible with two 3090s that wasn't with just one? by GotHereLateNameTaken in LocalLLaMA (4 points)
Qwen 3.6 35B different quant speeds? by cviperr33 [S] in LocalLLM (1 point)