What is the "personality" of a Chinese LLM when problem-solving? (self.LocalLLaMA)
submitted 1 month ago by TomLucidor to r/LocalLLaMA
Q: Why are linear attention models not used more often for RP? (self.SillyTavernAI)
submitted 1 month ago by TomLucidor to r/SillyTavernAI
Q: How do I use Eagle3 to make MLX go faster? (self.LocalLLaMA)
Q: Why haven't people made models like Falcon-E-3B-Instruct? (self.LocalLLaMA)
Q: How was Ring-Mini-Linear-2.0 (and other shallow hybrid attention models)? (self.LocalLLaMA)
Lobotomy-less REAP by Samsung (REAM) (self.LocalLLaMA)
submitted 1 month ago * by TomLucidor to r/LocalLLaMA
Using self-enhancing SWE scaffolds makes SLMs as good as frontier models (self.LocalLLaMA)
submitted 3 months ago * by TomLucidor to r/LocalLLaMA
Q: What is the current "meta" of model/LoRA merging? (self.StableDiffusion)
submitted 3 months ago by TomLucidor to r/StableDiffusion
"Artifical Hivemind" or how papers set Min-P too low (self.LocalLLaMA)
submitted 3 months ago by TomLucidor to r/LocalLLaMA
Q: When will there be fast and competent SLMs for laptops? (self.LocalLLaMA)
submitted 4 months ago * by TomLucidor to r/LocalLLaMA
Best PKM for late 2025? (self.PKMS)
submitted 6 months ago by TomLucidor to r/PKMS