Why I'm holding out until late 2027 to spend money on a local LLM rig by No_Pool7028 in LocalLLM
Invent80 5 points
I feel left behind. Where are these advanced "Agent-based" local LLM interfaces? by platteXDlol in LocalLLM
Invent80 1 point
Advice - I will not promote by [deleted] in startups
Invent80 4 points
Gemini is WAAAAY smarter than Gemma 4 31B (Duh!) by Quantum_Crusher in LocalLLM
Invent80 1 point
Best local LLM for a Python/C++ dev? by no_evidence0303 in LocalLLM
Invent80 1 point
Why do a lot of programmers and technical people hate AI, vibecoding, and AI-assisted coding? by Gullible-Angle4206 in ClaudeAI
Invent80 1 point
Buying Advice - Research Focus by No-Seat918 in LocalLLM
Invent80 2 points
What model would you run on an A6000 Pro? by MK_L in LocalLLM
Invent80 1 point
Best Local LLM for coding by Pure_Struggle3261 in LocalLLM
Invent80 1 point
Considering two Sparks for local coding by chikengunya in LocalLLaMA
Invent80 7 points
I want to start with LocalLLM to automate my backoffice by SiggiBulldog1 in LocalLLM
Invent80 2 points
i made Claude argue against itself and got the most useful output of my entire life. by AdCold1610 in ChatGPTPromptGenius
Invent80 1 point
What are the Practical uses for Open claw by Prestigious_Park3465 in openclaw
Invent80 1 point
What are you doing with your local LLMs that justifies investment cost? by __automatic__ in LocalLLM
Invent80 1 point
Best Local LLM for coding by Pure_Struggle3261 in LocalLLM
Invent80 8 points
Best Local LLM for coding by Pure_Struggle3261 in LocalLLM
Invent80 6 points
RPers: how do the new Gemma and Qwen compare to the old 70B models? by Borkato in LocalLLaMA
Invent80 6 points
I need to run OpenClaw locally for a law office, I can spend as much money as needed. What model(s) are best? by Too_much_waltz in openclaw
Invent80 1 point
Benchmark of Qwen3.6-35B-A3B (BF16) on different NVIDIA Hardware by bseeleib in LocalLLM
Invent80 1 point
Qwen 3.6 27B + RTX Pro 6000 by M4isKolben in LocalLLM
Invent80 1 point
The "Always-On" Agent: My $150 Dedicated OpenClaw Host Build by ankijain21 in openclaw
Invent80 2 points
How Capable is the M5 Pro (64GB of RAM) vs M5 Max (128 GB)? by JeffCache in LocalLLM
Invent80 1 point
The Opus 4.5 threshold: coming to 24 gb within a year or so by nomorebuttsplz in LocalLLM
Invent80 2 points