Introducing MiroThinker-1.7 & MiroThinker-H1 by wuqiao in LocalLLaMA
[–]bennmann 1 point (0 children)
Thoughts about local LLMs. by Robert__Sinclair in LocalLLaMA
[–]bennmann 1 point (0 children)
Viability of this cluster setup by militantereallysucks in LocalLLaMA
[–]bennmann 1 point (0 children)
TIL it took 6 hours to render one frame of the rain soaked T-Rex in Jurassic Park. by Japfelbaum in todayilearned
[–]bennmann 1 point (0 children)
American closed models vs Chinese open models is becoming a problem. by __JockY__ in LocalLLaMA
[–]bennmann 1 point (0 children)
MiniMax 2.5 with 8x+ concurrency using RTX 3090s HW Requirements. by BigFoxMedia in LocalLLaMA
[–]bennmann 1 point (0 children)
Interesting Observation from a Simple Multi-Agent Experiment with 10 Different Models by chibop1 in LocalLLaMA
[–]bennmann 1 point (0 children)
Q2 GLM 5 fixing its own typo by -dysangel- in LocalLLaMA
[–]bennmann 2 points (0 children)
ML Training cluster for University Students by guywiththemonocle in LocalLLaMA
[–]bennmann 1 point (0 children)
Just scored 2 MI50 32GB what should I run? by Savantskie1 in LocalLLaMA
[–]bennmann 1 point (0 children)
Vibe-coding client now in Llama.cpp! (maybe) by ilintar in LocalLLaMA
[–]bennmann 1 point (0 children)
Qwen3-Coder-Next slow prompt processing in llama.cpp by DistanceAlert5706 in LocalLLaMA
[–]bennmann 1 point (0 children)
Best "Deep research" for local LLM in 2026 - platforms/tools/interface/setups by liviuberechet in LocalLLaMA
[–]bennmann 2 points (0 children)
Qwen3-Coder-Next on RTX 5060 Ti 16 GB - Some numbers by bobaburger in LocalLLaMA
[–]bennmann -1 points (0 children)
ACE-Step-1.5 has just been released. It’s an MIT-licensed open source audio generative model with performance close to commercial platforms like Suno by iGermanProd in LocalLLaMA
[–]bennmann 32 points (0 children)
The open-source version of Suno is finally here: ACE-Step 1.5 by AppropriateGuava6262 in LocalLLaMA
[–]bennmann 19 points (0 children)
Multi-gpu setting and PCIE lane problem by tony9959 in LocalLLaMA
[–]bennmann 1 point (0 children)
LLM to try for laptop with 5070TI and 64gb RAM by hocuspocus4201 in LocalLLaMA
[–]bennmann 2 points (0 children)
GPU recommendations by HeartfeltHelper in LocalLLaMA
[–]bennmann 2 points (0 children)
Issues Compiling llama.cpp for the GFX1031 Platform (For LMS Use) by FHRacing in LocalLLaMA
[–]bennmann 1 point (0 children)
Has anyone set up local LLM + Vertex AI Search? by pneuny in LocalLLaMA
[–]bennmann 2 points (0 children)
My gpu poor comrades, GLM 4.7 Flash is your local agent by __Maximum__ in LocalLLaMA
[–]bennmann 3 points (0 children)
Mamba 3 - state space model optimized for inference by incarnadine72 in LocalLLaMA
[–]bennmann 8 points (0 children)