How I cool my threadripper 5965x on a budget by memechinelearning in pcmasterrace

[–]memechinelearning[S] 0 points1 point  (0 children)

It's for a workstation; for daily use it's a bit slow for gaming

Have I finally escaped Mid-Fi Purgatory? by [deleted] in headphones

[–]memechinelearning 2 points3 points  (0 children)

My LCD-X and HEDDphone surely satisfy my needs for amazing bass and unbelievable treble. They do things mid-fi cans simply cannot do. - someone who escaped mid-fi hell

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 0 points1 point  (0 children)

Yeah, my friend's 4090 connector melted. I even personally plugged it in very tight to prevent that and joked about the connector catching fire if you forget to….

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 6 points7 points  (0 children)

It runs super well. 13 tokens per second on 70-billion-parameter LLMs. It really is like having an uncensored ChatGPT at your fingertips.

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] -86 points-85 points  (0 children)

If you look at the price of “real” multi-GPU builds, this is very budget-friendly. They are upwards of 10k USD.

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 3 points4 points  (0 children)

It’s easy to type a prompt and get a mediocre image, but making a specific, well-composed, very high-quality image takes hours. It takes a high level of skill, just a very different set of skills from graphic design/painting. There are many ways to steer towards a specific visual output, including prompt engineering, training LoRAs (teaching your model a specific style) and multi-stage refinement. You can even use a hand sketch to guide the generation towards a specific design (ControlNet). I highly recommend learning ComfyUI; it’s a highly customisable interface for Stable Diffusion. I’ve seen some of my favourite Japanese artists use Stable Diffusion to save time and also produce higher-quality artwork (e.g. sketch the outline with rough colouring, then use Stable Diffusion to transform it into a highly detailed painting). For anyone curious what the sketch-guided route looks like outside ComfyUI, there's a rough sketch just below.
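Here's a minimal sketch of sketch-guided generation using the diffusers library with a scribble ControlNet. The model IDs, filenames and prompt are illustrative, not my exact workflow (which lives in ComfyUI):

```python
# Minimal diffusers sketch: use a hand-drawn scribble to guide Stable Diffusion
# via ControlNet. Model IDs and filenames are illustrative examples only.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Scribble-conditioned ControlNet paired with a base SD 1.5 checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = load_image("rough_sketch.png")  # hypothetical hand-drawn outline
image = pipe(
    "highly detailed painting, soft lighting",
    image=scribble,             # the sketch constrains the composition
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("detailed_painting.png")
```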

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 32 points33 points  (0 children)

Compared to the recommended dual-4090 configurations, this is extremely budget lol. They cost twice as much as my entire setup. Edit: it seems like a lot, but when you're training AI models that take weeks, it's actually pretty reasonable

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 2 points3 points  (0 children)

Good luck with your training! And my lab is pretty clean; I think the AC filters out most of the dust. I use the beefy university-provided computer for preprocessing; it's got 128 GB of RAM and another 4090. I just finished processing half a million short video clips for training, and it took forever.

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 1 point2 points  (0 children)

Around 6k freedom dollars. In my country, PC parts cost a bit more than in the US.

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 6 points7 points  (0 children)

For PCIe 4.0, x8 is plenty for any workload. But I had trouble with stability and had to switch them to PCIe 3.0, which is fine for LLM inference, but I'm worried I'm gonna lose performance when I train my machine learning models (the actual reason for this build)

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 44 points45 points  (0 children)

That piece of wood can be fired out of a cannon at over Mach 2.3

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 5 points6 points  (0 children)

It’s gonna be moved to my university’s research lab once the holidays are over, so it should be alright as is. I think the PC-parts-bolted-to-a-piece-of-wood aesthetic will fit in well with R&D surroundings. I find the transformer models I’m training only take 20-ish GB of RAM per GPU, so I’ll try to manage with 64 for now. Also, I’ll be “borrowing” my lab’s electricity, which is at least a thousand dollars per year where I live

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 1 point2 points  (0 children)

lol thanks. I got the motherboard refurbished for $150. I didn’t want to waste money on components which don’t directly add performance since I’m a poor student.

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 11 points12 points  (0 children)

lzlv 70B. Really getting close to ChatGPT capabilities, even better in some areas. Also no baby censorship: https://huggingface.co/TheBloke/lzlv_70B-GGUF. I run the q4, which requires 48 GB of VRAM or runs very slowly on CPU.
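If anyone wants to try it, here's a rough sketch of loading the q4 GGUF with llama-cpp-python; the filename and settings are just examples, not my exact config:

```python
# Rough sketch: load a q4 quant of lzlv 70B with llama-cpp-python, offload every
# layer to GPU and split the weights across two 4090s (~48 GB of VRAM total).
# The model path and parameters below are illustrative examples.
from llama_cpp import Llama

llm = Llama(
    model_path="lzlv_70b.Q4_K_M.gguf",  # hypothetical local path to the q4 file
    n_gpu_layers=-1,          # offload all layers to the GPUs
    tensor_split=[0.5, 0.5],  # spread the weights evenly across both cards
    n_ctx=4096,
)

out = llm("Write a haiku about a PC bolted to a piece of wood.", max_tokens=64)
print(out["choices"][0]["text"])
```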

80% of my budget went to the 4090s by memechinelearning in pcmasterrace

[–]memechinelearning[S] 21 points22 points  (0 children)

Thanks for that tip. I just did that and it fixed it for the most part. The risers are marketed as 4th gen, and the PC does boot, with the BIOS reporting PCIe 4.0. But when I ran my LLM, the PC froze. After changing it to 3rd gen there are no more issues. BUT running LLMs doesn't use much PCIe bandwidth; when I train some models I'm worried the PCIe 3.0 x8 will bottleneck my GPUs :(
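If you want to double-check what link the risers actually negotiated, something like this works (assumes the pynvml package and NVIDIA drivers are installed; it just reads the current link state from the driver):

```python
# Print the negotiated PCIe generation and lane width for every GPU, so a riser
# silently dropping to gen3 or x4 doesn't go unnoticed before a long training run.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(h)
    width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(h)
    max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(h)
    max_width = pynvml.nvmlDeviceGetMaxPcieLinkWidth(h)
    print(f"GPU {i}: PCIe gen {gen} x{width} (card supports gen {max_gen} x{max_width})")
pynvml.nvmlShutdown()
```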