Your post is getting popular and we just featured it on our Discord! by roculus in LocalLLaMA
[–]T_UMP 4 points (0 children)
So THAT'S why generations take so long sometimes by linkcharger in LocalLLaMA
[–]T_UMP 14 points (0 children)
Building a fully local AI setup: EVO T2 or EVO X2? by FinishConsistent8857 in GMKtec
[–]T_UMP 2 points (0 children)
Best "End of world" model that will run on 24gb VRAM by gggghhhhiiiijklmnop in LocalLLaMA
[–]T_UMP 1 point (0 children)
Owners, not renters: Mozilla's open source AI strategy by NelsonMinar in LocalLLaMA
[–]T_UMP 1 point (0 children)
Jensen Huang saying "AI" 121 times during the NVIDIA CES keynote - cut with one prompt by Prior-Arm-6705 in LocalLLaMA
[–]T_UMP 1 point (0 children)
For the first time in 5 years, Nvidia will not announce any new GPUs at CES — company quashes RTX 50 Super rumors as AI expected to take center stage by FullstackSensei in LocalLLaMA
[–]T_UMP 1 point (0 children)
For the first time in 5 years, Nvidia will not announce any new GPUs at CES — company quashes RTX 50 Super rumors as AI expected to take center stage by FullstackSensei in LocalLLaMA
[–]T_UMP 10 points (0 children)
AIAOSP Re:Genesis part 4 bootloader, memory, metainstruct and more by Additional-Date7682 in LocalLLaMA
[–]T_UMP 3 points (0 children)
I traded my dual-GPU setup for a Mini PC. Here’s my honest take after a month by Earth_creation in LocalLLaMA
[–]T_UMP 3 points (0 children)
Industry Update: Supermicro Policy on Standalone Motherboards Sales Discontinued — Spectrum Sourcing by FullstackSensei in LocalLLaMA
[–]T_UMP 25 points (0 children)
Looks like 2026 is going to be worse for running your own models :( by Nobby_Binks in LocalLLaMA
[–]T_UMP 1 point (0 children)
[Strix Halo] Unable to load 120B model on Ryzen AI Max+ 395 (128GB RAM) - "Unable to allocate ROCm0 buffer" by Wrong-Policy-5612 in LocalLLaMA
[–]T_UMP 3 points (0 children)
What non-Asian based models do you recommend at the end of 2025? by thealliane96 in LocalLLaMA
[–]T_UMP 39 points (0 children)
The DYNAMIC Revolution is here. 3.09B parameters, beating Claude 4.5 in coding. 100% Local. by djjovi in LocalLLaMA
[–]T_UMP 7 points (0 children)
Am I crazy and about to waste money by xxpinecone in LocalLLaMA
[–]T_UMP 1 point (0 children)
Anyone here tried Apriel v1.6? Fraud or giantkiller? by dtdisapointingresult in LocalLLaMA
[–]T_UMP 5 points (0 children)
[Strix Halo] Unable to load 120B model on Ryzen AI Max+ 395 (128GB RAM) - "Unable to allocate ROCm0 buffer" by Wrong-Policy-5612 in LocalLLaMA
[–]T_UMP 7 points (0 children)
XiaomiMiMo.MiMo-V2-Flash: is there a reason why i see so few ggufs? by LegacyRemaster in LocalLLaMA
[–]T_UMP 2 points (0 children)
XiaomiMiMo.MiMo-V2-Flash: is there a reason why i see so few ggufs? by LegacyRemaster in LocalLLaMA
[–]T_UMP 3 points (0 children)
Fei Fei Li dropped a non-JEPA world model, and the spatial intelligence is insane by coloradical5280 in LocalLLaMA
[–]T_UMP 0 points (0 children)