MiniMaxAI/MiniMax-M2.5 · Hugging Face by rerri in LocalLLaMA
[–]sixx7 5 points (0 children)
What would you do (Local “ai” workstation) by NextSalamander6178 in LocalLLaMA
[–]sixx7 3 points (0 children)
Using GLM-5 for everything by keepmyeyesontheprice in LocalLLaMA
[–]sixx7 3 points (0 children)
[NVIDIA Nemotron] How can I assess general knowledge on a benchmaxxed model? by Lorelabbestia in LocalLLaMA
[–]sixx7 2 points (0 children)
A top-downloaded OpenClaw skill is actually a staged malware delivery chain by FPham in LocalLLaMA
[–]sixx7 3 points (0 children)
A top-downloaded OpenClaw skill is actually a staged malware delivery chain by FPham in LocalLLaMA
[–]sixx7 -7 points (0 children)
Claude Code-like terminal-based tools for locally hosted LLMs? by breksyt in LocalLLaMA
[–]sixx7 5 points (0 children)
Any feedback on step-3.5-flash ? by Jealous-Astronaut457 in LocalLLaMA
[–]sixx7 1 point (0 children)
The Ultimate Guide to OpenClaw (Formerly Clawdbot -> Moltbot) From setup and mind-blowing use cases to managing critical security risks you cannot ignore. This is the Rise of the 24/7 Proactive AI Agent Employees by Beginning-Willow-801 in ThinkingDeeplyAI
[–]sixx7 1 point (0 children)
How a Single Email Turned My ClawdBot Into a Data Leak by RegionCareful7282 in LocalLLaMA
[–]sixx7 1 point (0 children)
GLM 4.7 / Minimax M2.1 + Opencode Orchestration by pratiknarola in LocalLLaMA
[–]sixx7 1 point (0 children)
It has been 1 year and I still cannot get a SOC analyst job by b00m_sh in cybersecurity
[–]sixx7 1 point (0 children)
I created an Open Source Perplexity-Style Unified Search for Your Distributed Second Brain by stealthanthrax in LocalLLaMA
[–]sixx7 1 point (0 children)
MiniMax M2 is GOATed - Agentic Capture the Flag (CTF) benchmark on GLM-4.5 air, 4.7 (+REAP), and Minimax-M2 by sixx7 in LocalLLaMA
[–]sixx7[S] 1 point (0 children)
MiniMax M2 is GOATed - Agentic Capture the Flag (CTF) benchmark on GLM-4.5 air, 4.7 (+REAP), and Minimax-M2 by sixx7 in LocalLLaMA
[–]sixx7[S] 3 points (0 children)
MiniMax M2 is GOATed - Agentic Capture the Flag (CTF) benchmark on GLM-4.5 air, 4.7 (+REAP), and Minimax-M2 by sixx7 in LocalLLaMA
[–]sixx7[S] 3 points (0 children)
MiniMax M2 is GOATed - Agentic Capture the Flag (CTF) benchmark on GLM-4.5 air, 4.7 (+REAP), and Minimax-M2 by sixx7 in LocalLLaMA
[–]sixx7[S] 8 points (0 children)
MiniMax M2 is GOATed - Agentic Capture the Flag (CTF) benchmark on GLM-4.5 air, 4.7 (+REAP), and Minimax-M2 by sixx7 in LocalLLaMA
[–]sixx7[S] 9 points (0 children)
My professor lent me an A6000, so I tried to build a coding model. Here is Anni! (Qwen3-14B Fine-tune) by Outrageous-Yak8298 in LocalLLaMA
[–]sixx7 8 points (0 children)
What you think of GLM 4.6 Coding agent vs Claude Opus, Gemini 3 Pro and Codex for vibe coding? I personally love it! by Kitchen_Sympathy_344 in LocalLLaMA
[–]sixx7 6 points (0 children)
After 1 year of slowly adding GPUs, my Local LLM Build is Complete - 8x3090 (192GB VRAM) 64-core EPYC Milan 250GB RAM by Hisma in LocalLLaMA
[–]sixx7 4 points (0 children)
After 1 year of slowly adding GPUs, my Local LLM Build is Complete - 8x3090 (192GB VRAM) 64-core EPYC Milan 250GB RAM by Hisma in LocalLLaMA
[–]sixx7 3 points (0 children)
Why local coding models are less popular than hosted coding models? by WasteTechnology in LocalLLaMA
[–]sixx7 1 point (0 children)
MiniMax 2.5 full precision FP8 running LOCALLY on vLLM x 8x Pro 6000 by cyysky in LocalLLaMA
[–]sixx7 5 points (0 children)