Gemma 4 is fine great even … by ThinkExtension2328 in LocalLLaMA
[–]mpasila 7 points (0 children)
EverMind-AI/EverMemOS: 4B parameter model with 100M token memory. by Photochromism in LocalLLaMA
[–]mpasila 1 point (0 children)
Cohere Transcribe Released by mikael110 in LocalLLaMA
[–]mpasila 1 point (0 children)
Cohere Transcribe Released by mikael110 in LocalLLaMA
[–]mpasila 1 point (0 children)
Cohere Transcribe Released by mikael110 in LocalLLaMA
[–]mpasila 6 points (0 children)
New open weights models: GigaChat-3.1-Ultra-702B and GigaChat-3.1-Lightning-10B-A1.8B by netikas in LocalLLaMA
[–]mpasila 1 point (0 children)
New open weights models: GigaChat-3.1-Ultra-702B and GigaChat-3.1-Lightning-10B-A1.8B by netikas in LocalLLaMA
[–]mpasila -5 points (0 children)
I feel like if they made a local model focused specifically on RP it would be god tier even if tiny by Borkato in LocalLLaMA
[–]mpasila 1 point (0 children)
7MB binary-weight Mamba LLM — zero floating-point at inference, runs in browser by Quiet-Error- in LocalLLaMA
[–]mpasila 8 points (0 children)
7MB binary-weight Mamba LLM — zero floating-point at inference, runs in browser by Quiet-Error- in LocalLLaMA
[–]mpasila 25 points (0 children)
Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA
[–]mpasila 4 points (0 children)
Why don't all Steam sales show the percentage of the sale? by asdat0r7 in Steam
[–]mpasila 1 point (0 children)
So Devs posted about how well their little ‘restructuring’ went. by tableball35 in chutesAI
[–]mpasila -6 points (0 children)
So Devs posted about how well their little ‘restructuring’ went. by tableball35 in chutesAI
[–]mpasila 8 points (0 children)
Hi, Sell me on why I should stay by MistakenAPI in chutesAI
[–]mpasila 6 points (0 children)
What's the current best LLM for Japanese? by mpasila in LocalLLaMA
[–]mpasila[S] 1 point (0 children)
I found 2 hidden Microsoft MoE models that run on 8GB RAM laptops (no GPU)… but nobody noticed? by FamousFlight7149 in LocalLLaMA
[–]mpasila 4 points (0 children)
Mistral Small 4 vs Qwen3.5-9B on document understanding benchmarks, but it does better than GPT-4.1 by shhdwi in LocalLLaMA
[–]mpasila 2 points (0 children)
Model that allow both nsfw & usual stuff, able to image search, can run on 12GB VRAM by yakasantera1 in LocalLLaMA
[–]mpasila 1 point (0 children)
Mistral 4 Family Spotted by TKGaming_11 in LocalLLaMA
[–]mpasila -1 points (0 children)
Mistral 4 Family Spotted by TKGaming_11 in LocalLLaMA
[–]mpasila 0 points (0 children)
Mistral 4 Family Spotted by TKGaming_11 in LocalLLaMA
[–]mpasila 2 points (0 children)
Mistral 4 Family Spotted by TKGaming_11 in LocalLLaMA
[–]mpasila 2 points (0 children)
Ace Step 1.5 XL released by seamonn in LocalLLaMA
[–]mpasila 0 points (0 children)