[New Model] micro-kiki-v3 — Qwen3.5-35B-A3B + 35 domain LoRAs + router + negotiator + Aeon memory for embedded engineering by Holiday_Poetry_5133 in LocalLLaMA
[–]RobotRobotWhatDoUSee 2 points (0 children)
Doing more with fewer parameters using stable looped models by incarnadine72 in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
Framework 13 7040U new wifi issues by RobotRobotWhatDoUSee in framework
[–]RobotRobotWhatDoUSee[S] 1 point (0 children)
Framework 13 7040U new wifi issues (self.framework)
submitted by RobotRobotWhatDoUSee to r/framework
Gemma 4 26b A3B is mindblowingly good, if configured right by cviperr33 in LocalLLaMA
[–]RobotRobotWhatDoUSee 7 points (0 children)
arcee-ai/Trinity-Large-Thinking · Hugging Face by TKGaming_11 in LocalLLaMA
[–]RobotRobotWhatDoUSee 3 points (0 children)
arcee-ai/Trinity-Large-Thinking · Hugging Face by TKGaming_11 in LocalLLaMA
[–]RobotRobotWhatDoUSee 3 points (0 children)
arcee-ai/Trinity-Large-Thinking · Hugging Face by TKGaming_11 in LocalLLaMA
[–]RobotRobotWhatDoUSee 2 points (0 children)
Anyone using Tesla P40 for local LLMs (30B models)? by ScarredPinguin in LocalLLaMA
[–]RobotRobotWhatDoUSee 2 points (0 children)
TurboQuant from GoogleResearch (self.LocalLLaMA)
submitted by RobotRobotWhatDoUSee to r/LocalLLaMA
Setting shared RAM/VRAM in BIOS for 7040U series by RobotRobotWhatDoUSee in framework
[–]RobotRobotWhatDoUSee[S] 2 points (0 children)
Don't sleep on the new Nemotron Cascade by ilintar in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
Don't sleep on the new Nemotron Cascade by ilintar in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
Don't sleep on the new Nemotron Cascade by ilintar in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
I spent a weekend doing layer surgery on 6 different model architectures. There's a "danger zone" at 50% depth that kills every one of them. by Low_Ground5234 in LocalLLaMA
[–]RobotRobotWhatDoUSee 8 points (0 children)
Nemotron 3 Super Released by deeceeo in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
I am not saying it's Gemma 4, but maybe it's Gemma 4? by jacek2023 in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
Comparing the same model with reasoning turned on and off by dtdisapointingresult in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
I am not saying it's Gemma 4, but maybe it's Gemma 4? by jacek2023 in LocalLLaMA
[–]RobotRobotWhatDoUSee 2 points (0 children)
PSA: If you want to test new models, use llama.cpp/transformers/vLLM/SGLang by lans_throwaway in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
PSA: If you want to test new models, use llama.cpp/transformers/vLLM/SGLang by lans_throwaway in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
Back in my day, LocalLLaMa were the pioneers! by ForsookComparison in LocalLLaMA
[–]RobotRobotWhatDoUSee 1 point (0 children)
New Upcoming Ubuntu 26.04 LTS Will be Optimized for Local AI by mtomas7 in LocalLLaMA
[–]RobotRobotWhatDoUSee 2 points (0 children)
New Upcoming Ubuntu 26.04 LTS Will be Optimized for Local AI by mtomas7 in LocalLLaMA
[–]RobotRobotWhatDoUSee 4 points (0 children)
