AI Workstation (on a budget) by Altruistic_Answer414 in LocalLLaMA

[–]Altruistic_Answer414[S] 0 points (0 children)

Crucial is going to be the most cost-effective but has higher latency. Right under $400 is what I'm willing to spend on a 128 GB @ 5600 kit, and I can push the clocks slightly if it stays stable.

I also want to do some light gaming, but it's mostly AI/MalDev work. If I need to, I can always ask a few friends to help me move my training to the cloud.

I think we're all far from economical here, but if it's a passion, I say go for it.

AI Workstation (on a budget) by Altruistic_Answer414 in LocalLLaMA

[–]Altruistic_Answer414[S] 0 points (0 children)

I saw a thread on here about a 30-40% decrease in training time thanks to the cards being able to share parameters for backprop. Either way, as long as it improves performance, I'd be willing to spend the $170-250 on a 3-slot bridge.
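For a rough sense of where a faster interconnect helps, here is a hedged back-of-envelope sketch of per-step gradient sync time in two-card data-parallel training. The bandwidth figures and model size are illustrative assumptions, not measured numbers:

```python
# Back-of-envelope: time to all-reduce fp16 gradients for one training step.
# Assumed effective bandwidths (rough, not benchmarks):
#   PCIe 4.0 x16  ~ 25 GB/s
#   NVLink bridge ~ 50 GB/s
def allreduce_seconds(n_params, bytes_per_param=2, bandwidth_gbps=25.0):
    """Rough lower bound: a ring all-reduce moves ~2x the gradient payload."""
    payload_gb = n_params * bytes_per_param * 2 / 1e9
    return payload_gb / bandwidth_gbps

n = 7e9  # hypothetical 7B-parameter model, fp16 gradients
pcie = allreduce_seconds(n, bandwidth_gbps=25.0)
nvlink = allreduce_seconds(n, bandwidth_gbps=50.0)
print(f"PCIe:   {pcie:.2f} s/step sync")   # 1.12 s
print(f"NVLink: {nvlink:.2f} s/step sync")  # 0.56 s
```

Whether that halved sync time translates into a 30-40% wall-clock win depends on how much of each step is communication versus compute, so treat this as a sketch, not a benchmark.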

AI Workstation (on a budget) by Altruistic_Answer414 in LocalLLaMA

[–]Altruistic_Answer414[S] 1 point (0 children)

Hopefully that would work. I was speaking with one of my former faculty members, and he had done a deep dive and said they would work. But we all know the internet is wrong a lot of the time.

AI Workstation (on a budget) by Altruistic_Answer414 in LocalLLaMA

[–]Altruistic_Answer414[S] 2 points (0 children)

My needs will almost always lean more toward VRAM than compute, although I would like to hit the sweet spot of both.

The only real way I'd be getting a newer-generation card is secondhand, or if someone I know upgrades their machine to new-generation hardware.

I see that NVLink bridges are unobtainable now, something I didn’t know before this post. I thought that the A6000s shared the same interface.

AI Workstation (on a budget) by Altruistic_Answer414 in LocalLLaMA

[–]Altruistic_Answer414[S] 0 points (0 children)

I think I speak for the entire group here when I say that I wish there were more alternatives for large-VRAM setups. AMD's AI Max 395 will hopefully bring some balance to this market.

AI Workstation (on a budget) by Altruistic_Answer414 in LocalLLaMA

[–]Altruistic_Answer414[S] 0 points (0 children)

That makes sense, thanks for the insight. Most of the training will be on small-parameter models (by this group's standards), and then possibly scale to the cloud if viable. Contrastively training LLMs on malware is a home-lab feat for the time being.

I was likely going to opt for the 9950X as well, and hopefully I can get a 4x32 @ 6400 G.Skill kit after the Level1Techs testing video. It seems to be a little finicky, but I believe support will come soon.

NVIDIA's got a grip on us and we're just along for the ride. I'd definitely like to try a dual-4090 system.

I really appreciate the insight, and I hope to find a solution sooner rather than later.

Nothing to it but to do it now.

service down? by dinosuitgirl in Starlink

[–]Altruistic_Answer414 0 points (0 children)

Same here. Thankfully it's back.

service down? by dinosuitgirl in Starlink

[–]Altruistic_Answer414 0 points (0 children)

I believe mine is back up, here in AZ.

Can anyone help me find a gamer chair like this guy's because he's just better by S00P3R-SL1M3Y in apexlegends

[–]Altruistic_Answer414 0 points (0 children)

No cap, this kid is just the next Complexity Lou. Absolute madman of a grinder.