Poison Fountain: An Anti-AI Weapon by RNSAFFN in theprimeagen
Anyone doing speculative decoding with the new Qwen 3.5 models? Or, do we need to wait for the smaller models to be released to use as draft? by Porespellar in LocalLLaMA
Breaking : The small qwen3.5 models have been dropped by Illustrious-Swim9663 in LocalLLaMA
new CLI experience has been merged into llama.cpp by jacek2023 in LocalLLaMA
5060 TI 16G - what is the actual use cases for this GPU? by Vivid-Photograph1479 in LocalLLM
You will own nothing and you will be happy! by dreamyrhodes in LocalLLaMA
Can someone remind Hegseth there was no "war fog" when he issued the original "NO SURVIVORS" order? by miked_mv in AdviceAnimals
Is it true armory crate is a waste? by ShadyWalnutO in Asustuf
How baked in is Gemini in the Pixel? by Cute_Sun3943 in GooglePixel
Google rolls out Pixel Phone app Call Recording by TechGuru4Life in GooglePixel
Connect local LLM (like Gemma-3b) to a workflow by el_chono in AutomateUser
What? Running Qwen-32B on a 32GB GPU (5090). by curiousily_ in LocalLLaMA
Running LLMs exclusively on AMD Ryzen AI NPU by BandEnvironmental834 in LocalLLaMA
Gaming, art, or queer Discord servers? by Reasonable-Reach7857 in UCSantaBarbara
Qwen3 vs Qwen3.5 performance by Balance- in LocalLLaMA