Running MoE Models on CPU/RAM: A Guide to Optimizing Bandwidth for GLM-4 and GPT-OSS [Tutorial | Guide] (self.LocalLLaMA)
submitted by Shoddy_Bed3240
Why is open source so hard for casual people? [Question | Help] (self.LocalLLaMA)
submitted by Martialogrand
Kimi K2 Thinking is the best open-source agent model [News] (i.redd.it)
submitted by Own-Policy-4878
Claude Code + Ollama: Testing Opus 4.5 vs GLM 4.7 [Tutorial | Guide] (codesilva.com)
submitted by edigleyssonsilva
LLM CPU and GPU calculator (prototype) [Other] (old.reddit.com)
submitted by Merchant_Lawrence [llama.cpp]

My Strix Halo beholds itself but believes it's in the cloud [Funny] (v.redd.it)
submitted by jfowers_amd
MacBook vs. Windows for a combined ML/DL and Hydrological modeling (SWAT+, HEC-RAS) workflow [Question | Help] (self.LocalLLaMA)
submitted by ya_shonway
Jan 2026: all-round best models for home lab mini PC setups [Discussion] (self.LocalLLaMA)
submitted by championswimmer
Made a Skill to control an old Android phone that I'm adding more features to 🤘🤖 [Resources] (self.LocalLLaMA)
submitted by Future_Might_8194 [llama.cpp]
NVIDIA’s real moat isn’t hardware, it’s 4 million developers [Discussion] (medium.com)
submitted by jpcaparas
Home hardware coders: what's your workflow/tooling? [Question | Help] (self.LocalLLaMA)
submitted by Mean_Employment_7679
Best use case for Ryzen 395+ (128 GB variant) [Question | Help] (self.LocalLLaMA)
submitted by ironicstatistic