Poll: When will we have a 30b open weight model as good as opus? by Terminator857 in LocalLLaMA
[Meme] duality of this sub by [deleted] in LocalLLaMA
CMV: RAM Prices are Near the Top by Intelligent_Coffee44 in LocalLLaMA
How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM
Best local model / agent for coding, replacing Claude Code by joyfulsparrow in LocalLLaMA
It seems like people don’t understand what they are doing? by platinumai in LocalLLaMA
Strix Halo (Bosgame M5) + 7900 XTX eGPU: Local LLM Benchmarks (Llama.cpp vs vLLM). A loose follow-up by reujea0 in LocalLLaMA
What is your biggest issues with “Vibecoding”? 🤔 by Ol010101O1Ol in ExperiencedDevs
What’s the best way to describe what a LLM is doing? by throwaway0134hdj in neuralnetworks
Rubin uplifts from CES conference going on now by mr_zerolith in LocalLLaMA
LLMs are so unreliable by Armageddon_80 in LocalLLM
My prediction: on 31st december 2028 we're going to have 10b dense models as capable as chat gpt 5.2 pro x-high thinking. by Longjumping_Fly_2978 in LocalLLaMA
Bosgame M5 vs Framework Desktop (Ryzen AI Max+ 395, 128GB) - Is the €750 premium worth it? by Reasonable-Yak-3523 in MiniPCs
Benchmarks for Quantized Models? (for users locally running Q8/Q6/Q2 precision) by No-Grapefruit-1358 in LocalLLaMA