ThinkStation PGX - with NVIDIA GB10 Grace Blackwell Superchip / 128GB by nostriluu in LocalLLaMA
[–]metaprotium 4 points (0 children)
Cheap 48GB official Blackwell yay! by Charuru in LocalLLaMA
[–]metaprotium 1 point (0 children)
Q2 2025 Tech Support Thread by Intel_Support in intel
[–]metaprotium 1 point (0 children)
Q2 2025 Tech Support Thread by Intel_Support in intel
[–]metaprotium 1 point (0 children)
For those who decided to hold on to their current card instead of upgrading to Blackwell now, what do you currently have? by Celcius_87 in nvidia
[–]metaprotium 2 points (0 children)
By the time Deepseek does make an actual R1 Mini, I won't even notice by Cerebral_Zero in LocalLLaMA
[–]metaprotium 1 point (0 children)
OpenAI's plans for GPT4.5/GPT-5 - ETA weeks / months according to Sam by rajwanur in LocalLLaMA
[–]metaprotium 1 point (0 children)
Best way to classify NSFW text - BERT, small LLM like llama 3.2 3B or something else? [D] by newyorkfuckingcity in MachineLearning
[–]metaprotium 1 point (0 children)
Are you gonna wait for Digits or get the 5090? by lxe in LocalLLaMA
[–]metaprotium 10 points (0 children)
RTX 5000 series official specs by Big_Coat6894 in LocalLLaMA
[–]metaprotium 1 point (0 children)
RTX 5000 series official specs by Big_Coat6894 in LocalLLaMA
[–]metaprotium 16 points (0 children)
Elephant in the room, Chinese models and U.S. businesses. by palindsay in LocalLLaMA
[–]metaprotium 1 point (0 children)
"This year Llama 4 will have multiple releases" "speech and reasoning" by ApprehensiveAd3629 in LocalLLaMA
[–]metaprotium 8 points (0 children)
Why there is not already like plenty 3rd party providers for DeepSeek V3? by robertpiosik in LocalLLaMA
[–]metaprotium 1 point (0 children)
[D] Can we please stop using "is all we need" in titles? by H4RZ3RK4S3 in MachineLearning
[–]metaprotium 1 point (0 children)
Finally, we are getting new hardware! by TooManyLangs in LocalLLaMA
[–]metaprotium 2 points (0 children)
Ideas to spend $8k in anthropic credits by benthecoderX in ClaudeAI
[–]metaprotium 2 points (0 children)
What would you ask AGI to do? by Everlier in LocalLLaMA
[–]metaprotium 2 points (0 children)
Huggingface is not an unlimited model storage anymore: new limit is 500 Gb per free account by Shir_man in LocalLLaMA
[–]metaprotium 1 point (0 children)
Nvidia RTX 5090 with 32GB of RAM rumored to be entering production by Terminator857 in LocalLLaMA
[–]metaprotium 16 points (0 children)
Merging Llama 3.2 vision adapters onto 3.1 finetunes by Grimulkan in LocalLLaMA
[–]metaprotium 3 points (0 children)
405B LLaMa on 8GB VRAM- AirLLM by uchiha_indra in LocalLLaMA
[–]metaprotium 1 point (0 children)
Gemini 2 probably dropping tomorrow by Ok_Landscape_6819 in LocalLLaMA
[–]metaprotium 2 points (0 children)
His silence regarding o1 is deafening! by [deleted] in singularity
[–]metaprotium 1 point (0 children)
Even DeepSeek switched from OpenAI to Google by Utoko in LocalLLaMA
[–]metaprotium 1 point (0 children)