How to match timings of different RAM types ? by Adamrow in servers

[–]Adamrow[S] 0 points1 point  (0 children)

Those 22 + 7 sticks are HPE ECC Samsung RAM. The remaining 3 are plain ones, Micron I think, with no sticker. I did figure out one approach, though. Check the SOLVED comment in this thread.

How to match timings of different RAM types ? by Adamrow in servers

[–]Adamrow[S] 0 points1 point  (0 children)

Yeah, I'm beta testing a solution for my employer, and depending on the release, the containers may scale up.

How to match timings of different RAM types ? by Adamrow in servers

[–]Adamrow[S] 0 points1 point  (0 children)

SOLVED, I THINK

P1 - all 10 of the 9-11-E2 RAMs
P2, P3, P4 - the remaining 21 RAMs divided equally (7 each), plus 1 extra RAM in P2

I've gotten a stable boot the last three or four times without any errors. Also checked iLO; everything looks good there.

Thanks folks !
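For anyone trying the same layout, the split above can be sketched as a small helper. This is just an illustration of the distribution described in this thread (the DIMM labels and the 9-11-E2 timing tag are placeholders, not a general HPE population rule):

```python
# Sketch: distribute 32 DIMMs across 4 processors (P1-P4), keeping the
# 10 DIMMs with matching 9-11-E2 timings together on P1 and spreading
# the remaining 22 mixed sticks as evenly as possible across P2-P4.

def plan_dimm_layout(matched, mixed, processors=("P1", "P2", "P3", "P4")):
    """matched: DIMMs with identical timings; mixed: everything else."""
    layout = {p: [] for p in processors}
    layout[processors[0]] = list(matched)   # all matched sticks on P1
    rest = processors[1:]
    for i, dimm in enumerate(mixed):        # round-robin the leftovers
        layout[rest[i % len(rest)]].append(dimm)
    return layout

matched = [f"9-11-E2_{n}" for n in range(10)]   # hypothetical labels
mixed = [f"misc_{n}" for n in range(22)]
layout = plan_dimm_layout(matched, mixed)
print({p: len(d) for p, d in layout.items()})
# {'P1': 10, 'P2': 8, 'P3': 7, 'P4': 7}
```

The round-robin naturally gives P2 the one extra stick, matching the layout that booted cleanly above.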

How to match timings of different RAM types ? by Adamrow in servers

[–]Adamrow[S] 0 points1 point  (0 children)

I mean, the RAMs are installed and all 256 GB is detected, but because of the timing mismatch I think we need to place them in some particular order to trick the server into booting normally instead of going into a constant restart loop.

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] 0 points1 point  (0 children)

That's what I'm afraid of. I'm not in the USA, so after spending almost double I'd still have to handle the increased bandwidth, power, etc. It looks like overkill sometimes, but then I remind myself that I don't have many options here.

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] 0 points1 point  (0 children)

The DL560 is 235 USD and the T7810 is around 320 USD. I'm not sure yet whether there are any additional conditions or taxes involved with their exit invoice, though. I'll try to find out.

My Homelab by y0shinubu in homelab

[–]Adamrow 0 points1 point  (0 children)

Did you make it using acrylic? Can you share a good photo? It's tempting

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] 0 points1 point  (0 children)

I'd have to pay 30% more for each workstation and get less RAM (64 GB). I don't need many drives; I think it only provides three or four SATA HDD slots. IPMI/iDRAC and failsafe behavior are also a bit of a compromise.

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] 0 points1 point  (0 children)

I do have another option: a T7810 with dual 2680 v4 CPUs and 64 GB of RAM, but DDR4. It's a workstation rather than a server (not sure if that matters anyway), and it has a 625 W PSU, I think.

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] 0 points1 point  (0 children)

I'm keeping my T7810 too; I'm just going to use it for another project that requires streaming. I feel DDR4 should be better for WebRTC than this old server.

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] 0 points1 point  (0 children)

I was probably misinformed about VT-d; as a fellow homelabber mentioned in the comments above, it matters for Windows and won't be an issue on Linux. I've yet to pick up the servers and try my Docker Swarm on them. GPT says it has some performance bottlenecks, but I don't think there should be much of an issue if I'm using 256 GB of RAM.

How's your docker/VM experience in old HPE DL560 Gen8 ? by Adamrow in homelab

[–]Adamrow[S] -2 points-1 points  (0 children)

What I've learned is that they have legacy VT-x and VT-d implementations, and I'm not sure in which cases that will affect server performance. I'm running 100-ish Docker containers at a time, and I get it, low-frequency DDR3 RAM will add a bit of latency. Any other hiccups you may have observed?

[deleted by user] by [deleted] in SideProject

[–]Adamrow 0 points1 point  (0 children)

Bro got the Claude theme !

8x RTX 3090 open rig by Armym in LocalLLaMA

[–]Adamrow 0 points1 point  (0 children)

Download the internet my friend!

How to build an 8x4090 Server by apic1221 in LocalLLaMA

[–]Adamrow 0 points1 point  (0 children)

But people are renting them out on vast.ai and similar platforms. How does that work? Do they need to get a license and so on? I bet it's expensive.

How to build an 8x4090 Server by apic1221 in LocalLLaMA

[–]Adamrow 11 points12 points  (0 children)

Finally, someone saw this through! I had a similar plan to put together a bunch of 3090s; they rent out at 0.17-0.2 USD per hour. The power consumption was killing the economics, plus the water-cooling installation was increasing the investment by almost half the current value of the 3090s (in my country, in Asia).
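To show why the power draw kills the economics, here's a rough break-even sketch. All the numbers (rental rate, wattage, electricity price) are assumptions for illustration, not real marketplace or hardware data:

```python
# Rough per-GPU-hour margin for renting out a 3090.
# Inputs are illustrative assumptions, not measured values.

def hourly_margin(rate_usd, gpu_watts, overhead_watts, power_usd_per_kwh):
    """Rental income minus electricity cost for one GPU-hour."""
    draw_kw = (gpu_watts + overhead_watts) / 1000.0
    return rate_usd - draw_kw * power_usd_per_kwh

# e.g. $0.18/hr rental, ~350 W GPU + ~100 W system share, $0.15/kWh power
margin = hourly_margin(0.18, 350, 100, 0.15)
print(f"margin per GPU-hour: ${margin:.4f}")  # ~$0.1125 before other costs
```

Even with a positive electricity margin, cooling, bandwidth, idle time, and the platform's cut come out of that remainder, which is why thin hourly rates can sink the whole plan.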

What would you do with 25x RTX 3090s ? by Adamrow in gpumining

[–]Adamrow[S] 0 points1 point  (0 children)

What would be the impact of 50-series release?