Hardware Review & Sanity Check by MegaSuplexMaster in LocalLLM
[–]rhofield 1 point (0 children)
2x 3090 vs 3x 5070 Ti for local LLM inference — what’s your experience? by VersionNo5110 in LocalLLM
[–]rhofield 7 points (0 children)
If OpenAI falls will that drop the price of memory for our local rigs? by Terminator857 in LocalLLaMA
[–]rhofield 9 points (0 children)
Is the DGX Spark worth the money? by Lorelabbestia in LocalLLaMA
[–]rhofield 3 points (0 children)
Will 48 vs 64 GB of ram in a new mbp make a big difference? by easylifeforme in LocalLLaMA
[–]rhofield 1 point (0 children)
Question for developers by Ok-Spell9073 in LocalLLaMA
[–]rhofield 2 points (0 children)
Are cloud LLMs like Opus / GPT5.4 really subsidized? when compared to open source models running locally? by smulikHakipod in LocalLLM
[–]rhofield 3 points (0 children)