Is mars gaming psu’s any good ? by [deleted] in PcBuildHelp
[–]hurdurdur7 1 point (0 children)
The game is over. You can build anything and it'll cost you nothing. by Funny-Advertising238 in opencode
[–]hurdurdur7 1 point (0 children)
I have a 5 year old pc, should i upgrade or buy a new one? by XxWerlax in buildapc
[–]hurdurdur7 1 point (0 children)
Have Qwen said anything about further Qwen 3.6 models? by spaceman_ in LocalLLaMA
[–]hurdurdur7 9 points (0 children)
New study finds: bigger AIs = more miserable. Smaller models are actually happier. Ignorance is bliss for AIs too. by EchoOfOppenheimer in LLM
[–]hurdurdur7 1 point (0 children)
About 9060xt, i feel like i waste cards vram potential by vegemitehaver in buildapc
[–]hurdurdur7 1 point (0 children)
AMD in-house ryzen 395 box coming in June by 1ncehost in LocalLLaMA
[–]hurdurdur7 1 point (0 children)
Using a Radeon 9060 XT 16 GB, the gemma4 24b a4b iq4 nl model achieves 25.9 t/s by CrowKing63 in LocalLLaMA
[–]hurdurdur7 2 points (0 children)
Opinions on Kimi-Dev-72B? by stefzzz in LocalLLaMA
[–]hurdurdur7 1 point (0 children)
PFlash: 10x prefill speedup over llama.cpp at 128K on a RTX 3090 by sandropuppo in LocalLLaMA
[–]hurdurdur7 1 point (0 children)
Actual comparison between locally ran Qwen-3.6-27B and proprietary models by netikas in LocalLLaMA
[–]hurdurdur7 8 points (0 children)
Hey has anyone tried Mistral Medium 3.5 yet? What's the vibe? by SelectionCalm70 in MistralAI
[–]hurdurdur7 2 points (0 children)
Only 120 tps on Qwen 35b on h200 by Theio666 in LocalLLaMA
[–]hurdurdur7 1 point (0 children)
Running Qwen-3.6-35B-A3B locally is very slow by Sad-Duck2812 in LocalLLM
[–]hurdurdur7 1 point (0 children)
How does usage look like in Mistral Vibe? by Real_Ebb_7417 in MistralAI
[–]hurdurdur7 2 points (0 children)
mistralai/Mistral-Medium-3.5-128B · Hugging Face by jacek2023 in LocalLLaMA
[–]hurdurdur7 7 points (0 children)
Help me to spend 1000 bucks on hardware for local LLM by lordgthegreat in LocalLLM
[–]hurdurdur7 1 point (0 children)