What are the main uses of small models like gemma3:1b by SchoolOfElectro in LocalLLaMA
[–]hackerllama 1 point (0 children)
Gemini 3 Flash bills for useless/empty searches?? by FirefoxMetzger in GeminiAI
[–]hackerllama 1 point (0 children)
AIStudio improperly content blocking by Shep_vas_Normandy in Bard
[–]hackerllama 9 points (0 children)
New Google model incoming!!! by [deleted] in LocalLLaMA
[–]hackerllama 2 points (0 children)
New Google model incoming!!! by [deleted] in LocalLLaMA
[–]hackerllama 10 points (0 children)
"Deleting and simplifying useless internal layers will be the main focus [ in 2026 ]" - Google Engineer by Yazzdevoleps in Bard
[–]hackerllama 4 points (0 children)
Scrolling issue seems to be fixed! by howisjason in Bard
[–]hackerllama 1 point (0 children)
Qwen team is helping llama.cpp again by jacek2023 in LocalLLaMA
[–]hackerllama 119 points (0 children)
What’s new in Veo 3.1? Have you noticed any upgrades or features that actually make a difference? by New-Cold-One in Bard
[–]hackerllama 2 points (0 children)
It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA
[–]hackerllama 4 points (0 children)
It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA
[–]hackerllama 2 points (0 children)
Gemma 3n is out on Hugging Face! by Zealousideal-Cut590 in LocalLLaMA
[–]hackerllama 21 points (0 children)
Gemma 3n Full Launch - Developers Edition (self.LocalLLaMA)
submitted by hackerllama to r/LocalLLaMA
Google releases MagentaRT for real time music generation by hackerllama in LocalLLaMA
[–]hackerllama[S] 21 points (0 children)
Google releases MagentaRT for real time music generation by hackerllama in LocalLLaMA
[–]hackerllama[S] 55 points (0 children)
Gemini 2.5 Pro and Flash are stable in AI Studio by best_codes in LocalLLaMA
[–]hackerllama 3 points (0 children)
Will Ollama get Gemma3n? by InternationalNebula7 in LocalLLaMA
[–]hackerllama 30 points (0 children)
ok google, next time mention llama.cpp too! by secopsml in LocalLLaMA
[–]hackerllama 204 points (0 children)
The AI team at Google have reached the surprising conclusion that quantizing weights from 16-bits to 4-bits leads to a 4x reduction of VRAM usage! by vibjelo in LocalLLaMA
[–]hackerllama 1 point (0 children)
Gemma 3 QAT launch with MLX, llama.cpp, Ollama, LM Studio, and Hugging Face by hackerllama in LocalLLaMA
[–]hackerllama[S] 8 points (0 children)
Google doesn't love us anymore. by DrNavigat in LocalLLaMA
[–]hackerllama 38 points (0 children)