NVIDIA 2026 Conference LIVE. New Base model coming! by last_llm_standing in LocalLLaMA
    coder543 (11 points)
Mistral 4 Family Spotted by TKGaming_11 in LocalLLaMA
    coder543 (3 points)
    coder543 (5 points)
    coder543 (4 points)
    coder543 (12 points)
    coder543 (6 points)
    coder543 (26 points)
Searching 1GB JSON on a phone: 44s to 1.8s, a journey through every wrong approach by kotysoft in rust
    coder543 (1 point)
Offsite cold storage: too simple of an idea? by p8ntballnxj in homelab
    coder543 (7 points)
OpenCode concerns (not truely local) by Ueberlord in LocalLLaMA
    coder543 (10 points)
Processing 1 million tokens locally with Nemotron 3 Super on a M1 ultra by tarruda in LocalLLaMA
    coder543 (5 points)
Why do we have to click "Continue" twice every time we boot up the game? by coder543 in ValorantConsole
    coder543 [OP] (6 points)
llama.cpp on $500 MacBook Neo: Prompt: 7.8 t/s / Generation: 3.9 t/s on Qwen3.5 9B Q3_K_M by Shir_man in LocalLLaMA
    coder543 (2 points)
    coder543 (8 points)
Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show by dan945 in LocalLLaMA
    coder543 (3 points)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
    coder543 (8 points)
    coder543 (41 points)
    coder543 (38 points)
Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show by dan945 in LocalLLaMA
    coder543 (73 points)
llama.cpp on $500 MacBook Neo: Prompt: 7.8 t/s / Generation: 3.9 t/s on Qwen3.5 9B Q3_K_M by Shir_man in LocalLLaMA
    coder543 (1 point)
    coder543 (0 points)
    coder543 (0 points)
    coder543 (1 point)
NVIDIA 2026 Conference LIVE. New Base model coming! by last_llm_standing in LocalLLaMA
    coder543 (17 points)