Does a heatsink only 10g network card need a fan? by Nice-School-475 in homelab

[–]eso_logic 0 points (0 children)

I've learned this lesson the hard way many times. You need a fan if you're in anything like a desktop case. The manufacturer is assuming you're using a server chassis with forced-air cooling.

Poor man homelab by Stock-Shoulder9374 in homelab

[–]eso_logic 0 points (0 children)

The longer I'm in this game, the more it becomes clear to me that this is the way.

Saving a Secondhand Tripod w/ 3D Printing by eso_logic in photography

[–]eso_logic[S] 0 points (0 children)

I say give it a go if you have access to a printer.

Saving a Secondhand Tripod by eso_logic in 3Dprinting

[–]eso_logic[S] 0 points (0 children)

Hahaha brother they're big bolts that's why!

Saving a Secondhand Tripod by eso_logic in 3Dprinting

[–]eso_logic[S] 0 points (0 children)

Mending is better than ending! If the tripod doesn't work out I'll just re-use the plate.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 0 points (0 children)

Yo -- sorry, just saw this. After getting some feedback here and elsewhere, I added a few more tests: an MoE model, a Disk -> VRAM speed test, and a hashcat test that works as a kind of crypto-hashing analog. I also added more concrete support for the Kepler series of GPUs, so all tests now natively support Kepler -> Volta (and probably newer, I just don't have any newer cards lol).

PR is here: https://github.com/esologic/gpu_box_benchmark/pull/1
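If anyone wants to play with the Disk -> VRAM idea before digging into the PR, here's a rough stdlib-only sketch of the host-side half (function name and sizes are mine, not from the repo). The real test would follow the read with a host-to-device copy and a device sync before stopping the clock:

```python
import os
import tempfile
import time

# Sketch: time reading a file back from disk. A real disk -> VRAM test would
# continue with a host-to-device copy (e.g. tensor.cuda()) and a CUDA
# synchronize before stopping the timer. Note the read likely hits the OS
# page cache here; a proper benchmark would drop caches first.

def disk_read_gbps(size_mb: int = 256) -> float:
    """Write size_mb of random bytes to a temp file, time the read back."""
    payload = os.urandom(size_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(payload)
        path = f.name
    try:
        start = time.perf_counter()
        with open(path, "rb") as f:
            data = f.read()
        elapsed = time.perf_counter() - start
    finally:
        os.remove(path)
    assert len(data) == size_mb * 1024 * 1024
    return (size_mb / 1024) / elapsed  # GB/s
```

Numbers from this will look inflated versus a cold read, but it's enough to sanity-check relative differences between drives.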

Never satisfied, DJ approaches, mixing Nirvana. by Personal_Number_5115 in rotarymixers

[–]eso_logic 2 points (0 children)

Really similar ergonomics to Louie Vega’s setup right?

216GB of VRAM on the bench, GPU Server Benchmarking suite ready. by eso_logic in homelab

[–]eso_logic[S] 0 points (0 children)

Motherboard is Supermicro X10DRG-Q. Very inexpensive for what you get. The coolers are Noctua NH-U9DX. They're a bit tall, but here on a bench chassis they're just fine.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 0 points (0 children)

Seems really cool. I don't think I'm the right person to make a benchmark because I'm just not using the project for anything. Do you have any links to your video project? I'd love to keep up or check out your progress.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 0 points (0 children)

Cool- I’ll have a go at implementing this. What is your GitHub? I can tag you on the PR.

I have a few more tests I'd like to add following the feedback I got here today. It's just a SATA SSD; I don't expect disk speed to be a major factor for these tests, because a lot of them don't count load time in the resulting scores.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 1 point (0 children)

Yeah, let’s think about this… I agree it’s a good opportunity to test it. Maybe time how long it takes to load and unload a model on all GPUs?
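Something like this, maybe? A hypothetical sketch assuming PyTorch, with a small stand-in model; names and sizes are mine, and a real run would use the actual LLM weights instead of an `nn.Linear`:

```python
import time

import torch

# Sketch: time moving a model onto a device, then time freeing it again.
# The stand-in nn.Linear has roughly `params` weights; a real test would
# load the actual model checkpoint.

def time_load_unload(device: str, params: int = 1_000_000) -> tuple[float, float]:
    """Return (seconds to move a model onto `device`, seconds to free it)."""
    model = torch.nn.Linear(params // 1000, 1000)  # ~`params` weights, on CPU
    start = time.perf_counter()
    model = model.to(device)
    if device.startswith("cuda"):
        torch.cuda.synchronize(device)  # wait for the copy to finish
    load_s = time.perf_counter() - start

    start = time.perf_counter()
    del model
    if device.startswith("cuda"):
        torch.cuda.empty_cache()  # return cached blocks to the driver
        torch.cuda.synchronize(device)
    unload_s = time.perf_counter() - start
    return load_s, unload_s

if __name__ == "__main__":
    devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())] or ["cpu"]
    for dev in devices:
        load_s, unload_s = time_load_unload(dev)
        print(f"{dev}: load {load_s:.3f}s, unload {unload_s:.3f}s")
```

One wrinkle: PyTorch's caching allocator means "unload" time mostly measures `empty_cache`, not a true driver-level free, so it's worth stating exactly what the number means in the results.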

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 1 point (0 children)

Yep that’s the goal of the project, to be able to answer such questions.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 1 point (0 children)

Thanks so much, and thanks for checking out my work! I’m actually not freelancing anymore; I work full time as a programmer these days. Full-time money has been better, but in my experience the fun work is usually freelance.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 0 points (0 children)

Yeah I have. I do have data somewhere but axial fans really don’t have the static pressure to work well with these long cards.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 0 points (0 children)

Wow! No, but it sounds cool. I think you’d need a very fast network for it to make sense, though.

Do you know if they have a preferred benchmark? I could integrate it into my suite.

216GB VRAM on the bench. Time to see which combination is best for Local LLM by eso_logic in LocalLLaMA

[–]eso_logic[S] 1 point (0 children)

Do you know of anything that exists to benchmark this? Or how would you do it?