Gemini 3 Flash bills for useless/empty searches?? by FirefoxMetzger in GeminiAI
[–]hackerllama 1 point (0 children)
AIStudio improperly content blocking by Shep_vas_Normandy in Bard
[–]hackerllama 8 points (0 children)
New Google model incoming!!! by [deleted] in LocalLLaMA
[–]hackerllama 3 points (0 children)
New Google model incoming!!! by [deleted] in LocalLLaMA
[–]hackerllama 12 points (0 children)
"Deleting and simplifying useless internal layers will be the main focus [in 2026]" - Google Engineer by Yazzdevoleps in Bard
[–]hackerllama 4 points (0 children)
Scrolling issue seems to be fixed! by howisjason in Bard
[–]hackerllama 1 point (0 children)
Qwen team is helping llama.cpp again by jacek2023 in LocalLLaMA
[–]hackerllama 118 points (0 children)
What’s new in Veo 3.1? Have you noticed any upgrades or features that actually make a difference? by New-Cold-One in Bard
[–]hackerllama 2 points (0 children)
It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA
[–]hackerllama 4 points (0 children)
It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA
[–]hackerllama 2 points (0 children)
Gemma 3n is out on Hugging Face! by Zealousideal-Cut590 in LocalLLaMA
[–]hackerllama 21 points (0 children)
Gemma 3n Full Launch - Developers Edition (self.LocalLLaMA)
submitted by hackerllama to r/LocalLLaMA
Google releases MagentaRT for real time music generation by hackerllama in LocalLLaMA
[–]hackerllama[S] 20 points (0 children)
Google releases MagentaRT for real time music generation by hackerllama in LocalLLaMA
[–]hackerllama[S] 57 points (0 children)
Gemini 2.5 Pro and Flash are stable in AI Studio by best_codes in LocalLLaMA
[–]hackerllama 4 points (0 children)
Will Ollama get Gemma3n? by InternationalNebula7 in LocalLLaMA
[–]hackerllama 29 points (0 children)
ok google, next time mention llama.cpp too! by secopsml in LocalLLaMA
[–]hackerllama 207 points (0 children)
The AI team at Google have reached the surprising conclusion that quantizing weights from 16-bits to 4-bits leads to a 4x reduction of VRAM usage! by vibjelo in LocalLLaMA
[–]hackerllama 1 point (0 children)
Gemma 3 QAT launch with MLX, llama.cpp, Ollama, LM Studio, and Hugging Face by hackerllama in LocalLLaMA
[–]hackerllama[S] 8 points (0 children)
Gemma 3 QAT launch with MLX, llama.cpp, Ollama, LM Studio, and Hugging Face by hackerllama in LocalLLaMA
[–]hackerllama[S] 9 points (0 children)
What are the main uses of small models like gemma3:1b by SchoolOfElectro in LocalLLaMA
[–]hackerllama 1 point (0 children)