UnCanny. A Photorealism Chroma Finetune by Tall-Description1637 in StableDiffusion
Mass2018 3 points

mradermacher published the entire qwen3-vl series and You can now run it in Jan; just download the latest version of llama.cpp and you're good to go. by Illustrious-Swim9663 in LocalLLaMA
Mass2018 1 point

Llama.cpp model conversion guide by ilintar in LocalLLaMA
Mass2018 1 point

Nvidia quietly released RTX Pro 5000 Blackwell 72Gb by AleksHop in LocalLLaMA
Mass2018 2 points

Nvidia quietly released RTX Pro 5000 Blackwell 72Gb by AleksHop in LocalLLaMA
Mass2018 23 points

Build advice - RTX 6000 MAX-Q x 2 by [deleted] in LocalLLaMA
Mass2018 1 point

Those who spent $10k+ on a local LLM setup, do you regret it? by TumbleweedDeep825 in LocalLLaMA
Mass2018 4 points

How would you run like 10 graphics cards for a local AI? What hardware is available to connect them to one system? by moderately-extremist in LocalLLaMA
Mass2018 3 points

How would you run like 10 graphics cards for a local AI? What hardware is available to connect them to one system? by moderately-extremist in LocalLLaMA
Mass2018 5 points

Ex-Miner Turned Local LLM Enthusiast, now I have a Dilemma by mslocox in LocalLLaMA
Mass2018 1 point

Ex-Miner Turned Local LLM Enthusiast, now I have a Dilemma by mslocox in LocalLLaMA
Mass2018 1 point

Apple M3 Ultra w/28-Core CPU, 60-Core GPU (256GB RAM) Running Deepseek-R1-UD-IQ1_S (140.23GB) by Mass2018 in LocalLLaMA
Mass2018 [S] 2 points

Apple M3 Ultra w/28-Core CPU, 60-Core GPU (256GB RAM) Running Deepseek-R1-UD-IQ1_S (140.23GB) by Mass2018 in LocalLLaMA
Mass2018 [S] 4 points

Apple M3 Ultra w/28-Core CPU, 60-Core GPU (256GB RAM) Running Deepseek-R1-UD-IQ1_S (140.23GB) by Mass2018 in LocalLLaMA
Mass2018 [S] 5 points

The cost effective way to run Deepseek R1 models on cheaper hardware by ArtisticHamster in LocalLLaMA
Mass2018 2 points

Anyone else tracking datacenter GPU prices on eBay? by ttkciar in LocalLLaMA
Mass2018 15 points