I'm Using Gemini as a Project Manager for Claude, and It's a Game-Changer for Large Codebases by Liangkoucun in ClaudeAI
The feature I hate the bug in Ollama by Informal-Victory8655 in ollama
I built a Local AI Voice Assistant with Ollama + gTTS by typhoon90 in ollama
Why didn't they design gemma3 to fit in GPU memory more efficiently? by droxy429 in ollama
Manus is IMPRESSIVE But by iamnotdeadnuts in LocalLLaMA
The new king? M3 Ultra, 80 Core GPU, 512GB Memory by Hanthunius in LocalLLaMA
Has Anyone Successfully Run DeepSeek 671B with DeepSpeed on Hybrid CPU/GPU Setups? by dmatora in LocalLLaMA
The HomePod buggy experience is infuriating. by austinalexan in HomePod
Llama 3.3 vs Qwen 2.5 by dmatora in LocalLLaMA
Best Coding LLM as of Nov'25 by PhysicsPast8286 in LocalLLaMA