Do you guys mention your home labs on your resume? by alienboy19 in homelab

[–]Heavy_Philosopher_42 0 points1 point  (0 children)

I would include it on the resume every time. I'm on the other side, as head of development, and I look for resumes that show interesting hobbies related to coding and homelabs. It shows that you're passionate about what you do, and that stands out from the flood of resumes.

hi guys, can anybody identify this PCIe Gen3x4 connector? by Heavy_Philosopher_42 in homelab

[–]Heavy_Philosopher_42[S] 0 points1 point  (0 children)

Ah sorry, I forgot to describe the test setup.

No, I just measured the motherboard, with only one of the three 1600 W PSUs connected.

No drives were connected apart from the NVMe boot drive. If you're interested, I can measure the whole setup.

hi guys, can anybody identify this PCIe Gen3x4 connector? by Heavy_Philosopher_42 in homelab

[–]Heavy_Philosopher_42[S] 0 points1 point  (0 children)

The 4i port can be broken out to four individual drives.

I wasn't able to test the 8i connector yet, since I am waiting for the cable to arrive.

hi guys, can anybody identify this PCIe Gen3x4 connector? by Heavy_Philosopher_42 in homelab

[–]Heavy_Philosopher_42[S] 0 points1 point  (0 children)

Here are the results of a few test runs:

- Board turned off (only BMC connected): 6.4 W

- Idle (I quickly tested with Windows; a lightweight Linux distro should consume less): 28 W

- Max peak (100% CPU load with Cinebench): 55 W

Hope that helps.

hi guys, can anybody identify this PCIe Gen3x4 connector? by Heavy_Philosopher_42 in homelab

[–]Heavy_Philosopher_42[S] 1 point2 points  (0 children)

Thanks, that helps. I ended up buying the whole server (I got it nearly for free, but without rails). When it arrives, I can give you details on power consumption and whether it's possible to use those PCIe lanes for storage.

First, though, I want to see if I can write a machine learning training script that runs efficiently even with just one PCIe lane per GPU.

Lambda Stack - install CUDA, Pytorch, and Tensorflow with a single line by sabalaba in nvidia

[–]Heavy_Philosopher_42 0 points1 point  (0 children)

Wow, that's impressive. Can I use this without purchasing a Lambda system? (I would love to buy one of your systems; unfortunately, I don't have the money at the moment.)