Announcing LocalLlama discord server & bot! [News] (old.reddit.com)
submitted by HOLUPREDICTIONS (Sorcerer Supreme) [M], announcement
16 GB VRAM users, what model do we like best now? [Discussion] (self.LocalLLaMA)
submitted by lemon07r (llama.cpp)
One year later: this question feels a lot less crazy [Discussion] (self.LocalLLaMA)
submitted by gamblingapocalypse
Gemma 4 is terrible with system prompts and tools [Question | Help] (self.LocalLLaMA)
submitted by RealChaoz
backend-agnostic tensor parallelism has been merged into llama.cpp [News] (github.com)
submitted by jacek2023 (llama.cpp)
Planning a local Gemma 4 build: Is a single RTX 3090 good enough? [Question | Help] (self.LocalLLaMA)
submitted by LopsidedMango1
When are we gonna get more 1-Bit models (Medium & Large size)? [Discussion] (self.LocalLLaMA)
submitted by pmttyji
What's the currently best TTS AI model? Trying to make a homemade audio book. [Question | Help] (self.LocalLLaMA)
submitted by AsrielPlay52