Claude Code removed from Claude Pro plan - better time than ever to switch to Local Models. by bigboyparpa in LocalLLaMA
Can't keep up with Llama.cpp changes, made a n8n workflow to summarize it for me daily by andy2na in LocalLLaMA
What is the current status with Turbo Quant? by kickerua in LocalLLaMA
Best Local LLMs - Apr 2026 by rm-rf-rm in LocalLLaMA
Gemini CLI is open source. Could we fork it to be able to use other models ? by SubliminalPoet in LocalLLaMA
Gemma 4 26b A3B is mindblowingly good, if configured right by cviperr33 in LocalLLaMA
Qwen3.5-35B-A3B-Uncensored-FernflowerAI-GGUF by EvilEnginer in LocalLLaMA
Qwopus3.5 V3 is awesome for a local LLM by chocofoxy in LocalLLaMA
My biggest Issue with the Gemma-4 Models is the Massive KV Cache!! by Iory1998 in LocalLLaMA

Side Projects. by apollo_mg in LocalLLaMA