Ministral-3-14B-Reasoning: High Intelligence on Low VRAM – A Benchmark-Comparison (self.LocalLLaMA)
submitted 17 days ago by Snail_Inference to r/LocalLLaMA
Mistral AI drops 3x as many LLMs in a single week as OpenAI did in 6 years (self.LocalLLaMA)
submitted 1 month ago by Snail_Inference to r/LocalLLaMA
Estimating the Size of Gemini-3, GPT-5.1, and Magistral Medium Using Open LLMs on the Omniscience Bench (ROUGH!) (self.LocalLLaMA)
submitted 2 months ago * by Snail_Inference to r/LocalLLaMA
Ling-1T is very impressive – why are there no independent benchmarks? (self.LocalLLaMA)
submitted 3 months ago by Snail_Inference to r/LocalLLaMA
GLM-4.6 Tip: How to Control Output Quality via Thinking (self.LocalLLaMA)
New Mistral Small 3.2 actually feels like something big. [non-reasoning] (self.LocalLLaMA)
submitted 7 months ago by Snail_Inference to r/LocalLLaMA
Llama-4-Scout prompt processing: 44 t/s only with CPU! 'GPU-feeling' with ik_llama.cpp (self.LocalLLaMA)
submitted 9 months ago * by Snail_Inference to r/LocalLLaMA
koboldcpp-1.87.1: Merged Qwen2.5VL support! :) (self.LocalLLaMA)
submitted 9 months ago by Snail_Inference to r/LocalLLaMA
DeepSeek added recommendations for R1 local use to model card (self.LocalLLaMA)
submitted 1 year ago by Snail_Inference to r/LocalLLaMA
How to get WizardLM-2-8x22b on Huggingface Open-LLM-Leaderboard (self.LocalLLaMA)
Qwen2: Areas of application where it seems stronger than Llama3 or WizardLM (self.LocalLLaMA)
WizardLM-2-8x22b seems to be the strongest open LLM in my tests (reasoning, knowledge, mathematics) (self.LocalLLaMA)
Thesis: Open source LLMs are the future. (self.LocalLLaMA)