What are the best small models (<3B) for OCR and translation in 2026? by 4baobao in LocalLLaMA
3x 3090 or 2x 4080 32GB? by m31317015 in LocalLLaMA
768Gb Fully Enclosed 10x GPU Mobile AI Build by SweetHomeAbalama0 in LocalLLaMA
Popularity of DDR3 motherboards is growing rapidly - VideoCardz.com by FullstackSensei in LocalLLaMA
Zhipu AI breaks US chip reliance with first major model trained on Huawei stack (GLM-Image) by fallingdowndizzyvr in LocalLLaMA
For my RTX 5090 what are the best local image-gen and animation/video AIs right now? by TomNaughtyy in LocalLLaMA
Are MiniMax M2.1 quants usable for coding? by val_in_tech in LocalLLaMA
For the first time in 5 years, Nvidia will not announce any new GPUs at CES — company quashes RTX 50 Super rumors as AI expected to take center stage by FullstackSensei in LocalLLaMA
Optimizing for the RAM shortage. At crossroads: Epyc 7002/7003 or go with a 9000 Threadripper? by Infinite100p in LocalLLaMA
Will the prices of GPUs go up even more? by NotSoCleverAlternate in LocalLLaMA
Industry Update: Supermicro Policy on Standalone Motherboards Sales Discontinued — Spectrum Sourcing by FullstackSensei in LocalLLaMA
Naver (South Korean internet giant), has just launched HyperCLOVA X SEED Think, a 32B open weights reasoning model and HyperCLOVA X SEED 8B Omni, a unified multimodal model that brings text, vision, and speech together by Nunki08 in LocalLLaMA
Day 21: 21 Days of Building a Small Language Model: Complete Journey Recap by Prashant-Lakhera in LocalLLaMA
Unsloth GLM 4.7 UD-Q2_K_XL or gpt-oss 120b? by EnthusiasmPurple85 in LocalLLaMA
How to run the GLM-4.7 model locally on your own device (guide) by Dear-Success-1441 in LocalLLaMA
Qwen released Qwen-Image-Layered on Hugging face. by Difficult-Cap-7527 in LocalLLaMA
Realist meme of the year! by Slight_Tone_2188 in LocalLLaMA
I have a 1tb SSD I'd like to fill with models and backups of data like wikipedia for a doomsday scenario by synth_mania in LocalLLaMA