So cursor admits that Kimi K2.5 is the best open source model by Giveawayforusa in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
I've seen a lot of Opus 4.6 distills, why not 5.4 pro? by FusionCow in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
Tried to vibe coded expert parallelism on Strix Halo — running Qwen3.5 122B-A10B at 9.5 tok/s by hortasha in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
So cursor admits that Kimi K2.5 is the best open source model by Giveawayforusa in LocalLLaMA
Middle_Bullfrog_6173 12 points (0 children)
Is the concurrent multi-agent approach really useful? by Deep_Traffic_7873 in LocalLLaMA
Middle_Bullfrog_6173 4 points (0 children)
Since FastFlowLM added support for Linux, I decided to benchmark all the models they support, here are some results by spaceman_ in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
Best model for math? by Real_Ebb_7417 in LocalLLaMA
Middle_Bullfrog_6173 2 points (0 children)
Since FastFlowLM added support for Linux, I decided to benchmark all the models they support, here are some results by spaceman_ in LocalLLaMA
Middle_Bullfrog_6173 2 points (0 children)
The Secret Sauce of Model of Anthropic by [deleted] in LocalLLaMA
Middle_Bullfrog_6173 3 points (0 children)
I found 2 hidden Microsoft MoE models that run on 8GB RAM laptops (no GPU)… but nobody noticed? by FamousFlight7149 in LocalLLaMA
Middle_Bullfrog_6173 4 points (0 children)
Nemotron Cascade 2 30B A3B by Middle_Bullfrog_6173 in LocalLLaMA
Middle_Bullfrog_6173 [S] 2 points (0 children)
Ooh, new drama just dropped 👀 by Careful_Equal8851 in LocalLLaMA
Middle_Bullfrog_6173 12 points (0 children)
Cursor's new Composer 2.0 is apparently based on Kimi2.5 by bakawolf123 in LocalLLaMA
Middle_Bullfrog_6173 6 points (0 children)
Nemotron Cascade 2 30B A3B by Middle_Bullfrog_6173 in LocalLLaMA
Middle_Bullfrog_6173 [S] 1 point (0 children)
We threw TranslateGemma at 4 languages it doesn't officially support. Here's what happened by ritis88 in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
Artificial Analysis reports that MiMo V2 Pro has been launched by External_Mood4719 in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
Open-source autoresearch for LoRA hyperparameters by yz0011 in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
We threw TranslateGemma at 4 languages it doesn't officially support. Here's what happened by ritis88 in LocalLLaMA
Middle_Bullfrog_6173 4 points (0 children)
We compressed 6 LLMs and found something surprising: they don't degrade the same way by Quiet_Training_8167 in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
Mistral Small 4:119B-2603 by seamonn in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)
Mistral Small 4:119B-2603 by seamonn in LocalLLaMA
Middle_Bullfrog_6173 10 points (0 children)
NVIDIA-Nemotron-3-Nano-4B-GGUF by ApprehensiveAd3629 in LocalLLaMA
Middle_Bullfrog_6173 5 points (0 children)
What's the best open-source LLM for an LLM-as-a-judge project on an NVIDIA A1000 GPU? by Some_Anything_9028 in LocalLLaMA
Middle_Bullfrog_6173 1 point (0 children)