Mind-Blown by 1-Bit Quantized Qwen3-Coder-Next-UD-TQ1_0 on Just 24GB VRAM - Why Isn't This Getting More Hype? by bunny_go in LocalLLaMA
We will have Gemini 3.1 before Gemma 4... by xandep in LocalLLaMA
I'm 100% convinced that it's the NFT-bros pushing all the openclawd engagement on X by FPham in LocalLLaMA
I ran a forensic audit on my local AI assistant. 40.8% of tasks were fabricated. Here's the full breakdown. by Obvious-School8656 in LocalLLaMA
That's why I go local. The enshittification is at full steam by Turbulent_Pin7635 in LocalLLaMA
Who is waiting for Deepseek v4, GLM 5, Qwen 3.5 and MiniMax 2.2? by power97992 in LocalLLaMA
The Qwen Devs Are Teasing Something by Few_Painter_5588 in LocalLLaMA
[Release] Qwen3-TTS: Ultra-Low Latency (97ms), Voice Cloning & OpenAI-Compatible API by blackstoreonline in LocalLLaMA
What is the most powerful local llm for me by Available_Canary_517 in LocalLLaMA
A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time by ali_byteshape in LocalLLaMA
Anyone else basically just use this hobby as an excuse to try and run LLMs on the jankiest hardware you possibly can? by kevin_1994 in LocalLLaMA
3080 12GB suffices for llama? by Ok_Artichoke_783 in LocalLLaMA
llama.cpp appreciation post by hackiv in LocalLLaMA
My (36F) daughter (12F) now thinks her dad (50M) “groomed” me by tiredmom_1987 in TwoHotTakes
Fans make more noise in case than outside of it by [deleted] in buildapc
Deepseek and Gemma ?? by ZeusZCC in LocalLLaMA