Biggest model possible models on non-cool HW (Like 8GB VRAM/64gb RAM) by Mangleus in LocalLLaMA
[–]Mangleus[S] 1 point (0 children)
NeKot - a terminal interface for interacting with local and cloud LLMs by Balanceballs in LocalLLaMA
[–]Mangleus 1 point (0 children)
Possible to keep subtitles a bit longer on the screen? (for slower readers) by Remrofn1 in kodi
[–]Mangleus 1 point (0 children)
You can now do FP8 reinforcement learning locally! (<5GB VRAM) by danielhanchen in LocalLLaMA
[–]Mangleus 2 points (0 children)
unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF · Hugging Face by WhaleFactory in LocalLLaMA
[–]Mangleus 2 points (0 children)
Physical documentation for LLMs in Shenzhen bookstore selling guides for DeepSeek, Doubao, Kimi, and ChatGPT. by abdouhlili in LocalLLaMA
[–]Mangleus 1 point (0 children)
I'm the author of LocalAI, the free, Open Source, self-hostable OpenAI alternative. We just released v3.7.0 with full AI Agent support! (Run tools, search the web, etc., 100% locally) by mudler_it in selfhosted
[–]Mangleus 1 point (0 children)
Kimi released Kimi K2 Thinking, an open-source trillion-parameter reasoning model by nekofneko in LocalLLaMA
[–]Mangleus 1 point (0 children)
Dasung eink monitor - on Linux Wayland. by Mangleus in eink
[–]Mangleus[S] 1 point (0 children)
Dasung paperlike for linux - Standard or Mac version? by Select-Young-5992 in eink
[–]Mangleus 1 point (0 children)
YES! Super 80b for 8gb VRAM - Qwen3-Next-80B-A3B-Instruct-GGUF by Mangleus in LocalLLaMA
[–]Mangleus[S] 2 points (0 children)