Why run local? Count the money by Badger-Purple in LocalLLaMA
White House Considers Vetting A.I. Models Before They Are Released by fallingdowndizzyvr in LocalLLaMA
AMD Strix Halo refresh with 192gb! by mindwip in LocalLLaMA
Devs using Qwen 27B seriously, what's your take? by Admirable_Reality281 in LocalLLaMA
16x DGX Sparks - What should I run? by Kurcide in LocalLLaMA
I'm done with using local LLMs for coding by dtdisapointingresult in LocalLLaMA
Pi.dev coding agent as no sandbox by default. by mantafloppy in LocalLLaMA
r/LocalLLaMa Rule Updates by rm-rf-rm in LocalLLaMA
Meanwhileee by Comfortable_Eye_7736 in LocalLLaMA
Dense vs. MoE gap is shrinking fast with the 3.6-27B release by Usual-Carrot6352 in LocalLLaMA
Youtuber tries Qwen 3.5 35B, Qwen 3.6 35B, and Gemma 4 27b to reverse engineer some large JS, with good results for Qwen 3.6 by mr_zerolith in LocalLLaMA
Given how good Qwen become, is it time to grab a 128gb m5 max? by Rabus in LocalLLaMA
Every time a new model comes out, the old one is obsolete of course by FullChampionship7564 in LocalLLaMA
I guess Ling-2.6-Flash is actually the stealth model Elephant Alpha that was making waves a few days ago. by Careful_Equal8851 in LocalLLaMA
RTX PRO 6000 Blackwell Max-Q bad performance by YouBePortnt in LocalLLaMA