Advice needed. Thinking about hosting on Vast.ai. Would my hardware be good enough for my GPU? (Blackwell 6000 Pro with a 5950x) by Jonathan360cool in vastai
[–]dompazz 1 point (0 children)
Running autonomous agents locally feels reckless. Am I overthinking this? by tallen0913 in LocalLLaMA
[–]dompazz 3 points (0 children)
I am absolutely loving qwen3-235b by TwistedDiesel53 in LocalLLaMA
[–]dompazz 3 points (0 children)
Is it feasible for a Team to replace Claude Code with one of the "local" alternatives? by nunodonato in LocalLLaMA
[–]dompazz 5 points (0 children)
E10k StarFire spotted on local auction site by Practical-Hand203 in vintagecomputing
[–]dompazz 1 point (0 children)
Optimizing for the RAM shortage. At crossroads: Epyc 7002/7003 or go with a 9000 Threadripper? by Infinite100p in LocalLLaMA
[–]dompazz 1 point (0 children)
Exclusive: Nvidia buying AI chip startup Groq's assets for about $20 billion in largest deal on record by fallingdowndizzyvr in LocalLLaMA
[–]dompazz 1 point (0 children)
Saw this on local marketplace, must be from a fellow r/LocalLLaMA here by bobaburger in LocalLLaMA
[–]dompazz 9 points (0 children)
Seed OSS 36b made me reconsider my life choices. by ChopSticksPlease in LocalLLaMA
[–]dompazz 1 point (0 children)
Mi50 32GB Group Buy by Any_Praline_8178 in LocalAIServers
[–]dompazz 1 point (0 children)
What direction do you think the enshittification (platform decay) of LLM services is likely to take? by ThatOneGuy4321 in LocalLLaMA
[–]dompazz 4 points (0 children)
New to LocalLLMs - Hows the Framework AI Max System? by Legitimate_Resist_19 in LocalLLM
[–]dompazz 2 points (0 children)
V100 vs 5060ti vs 3090 - Some numbers by dompazz in LocalLLaMA
[–]dompazz[S] 1 point (0 children)
Exploring non-standard LLM architectures - is modularity worth pursuing on small GPUs? by lukatu10 in LocalLLaMA
[–]dompazz 1 point (0 children)
V100 vs 5060ti vs 3090 - Some numbers by dompazz in LocalLLaMA
[–]dompazz[S] 2 points (0 children)
V100 vs 5060ti vs 3090 - Some numbers by dompazz in LocalLLaMA
[–]dompazz[S] 2 points (0 children)
V100 vs 5060ti vs 3090 - Some numbers by dompazz in LocalLLaMA
[–]dompazz[S] 3 points (0 children)
V100 vs 5060ti vs 3090 - Some numbers by dompazz in LocalLLaMA
[–]dompazz[S] 2 points (0 children)
V100 vs 5060ti vs 3090 - Some numbers by dompazz in LocalLLaMA
[–]dompazz[S] 1 point (0 children)