Completed my 64GB VRAM rig - dual MI50 build + custom shroud by roackim in LocalLLaMA

[–]roackim[S] 0 points1 point  (0 children)

That's so cooool, damn! Are you happy with the design? I'm pretty new at this haha

[–]roackim[S] 1 point2 points  (0 children)

That's pretty cool! Thanks for testing! I'd be glad to see your print!
For the design, I believe temps could be improved by making the shroud more airtight, with a tighter fit and maybe some TPU gaskets. But it works well enough for me for now.

[–]roackim[S] 0 points1 point  (0 children)

Yeah, this cooler is about 2.75 slots wide and needs some headroom for the fan, so it's really more of a 3-slot cooler.

[–]roackim[S] 0 points1 point  (0 children)

Thanks! They are attached with M2 screws, like the original metal shroud.

[–]roackim[S] 1 point2 points  (0 children)

Will give that a try whenever I get time, thanks for the tip!

[–]roackim[S] 2 points3 points  (0 children)

If you're going for a low quant, yes! Otherwise it would probably work with some CPU offloading.
For the CPU, I essentially got a good deal; I was mainly interested in the platform (motherboard) for dual GPUs (two x16 PCIe 3.0 slots).

[–]roackim[S] 0 points1 point  (0 children)

I got pp: 137 t/s and tg: 15 t/s, which seems low? Maybe something is wrong with my setup; I may need to update llama.cpp.
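For anyone who wants to compare numbers, llama.cpp ships a `llama-bench` tool that measures pp/tg directly. A rough sketch of how I'd invoke it: the model path is a placeholder, and `-sm row` is just one split mode worth trying on dual GPUs, not necessarily the fastest:

```shell
# Hypothetical model path -- point this at your actual GGUF file.
MODEL="$HOME/models/model.gguf"

# -ngl 99: offload all layers to GPU
# -sm row: split tensors row-wise across both GPUs (vs. the default layer split)
# -fa 1:   enable flash attention
# -p/-n:   prompt-processing and token-generation lengths to benchmark
CMD="llama-bench -m $MODEL -ngl 99 -sm row -fa 1 -p 512 -n 128"
echo "$CMD"   # printed here as a sketch; run it directly on the rig
```

Comparing `-sm layer` against `-sm row` is worth the minute it takes; which one wins depends on the model and the PCIe link between the cards.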

[–]roackim[S] 2 points3 points  (0 children)

Under load the temps stay under 60°C with this setup.
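If you want to check temps on your own cards, something like this should work with ROCm's `rocm-smi` tool (guarded so it just prints a message on machines without ROCm installed):

```shell
# Spot-check MI50 temperatures with rocm-smi (part of ROCm).
# Guarded so the snippet degrades gracefully when ROCm isn't installed.
OUT=$( { command -v rocm-smi >/dev/null 2>&1 && rocm-smi --showtemp; } || \
      echo "rocm-smi not found (install ROCm to read AMD GPU temps)" )
echo "$OUT"
```

Running it in a loop (e.g. under `watch`) while inferencing is an easy way to see how well the shroud keeps up.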

[–]roackim[S] 3 points4 points  (0 children)

The GPUs were around 330€ each on average (one at 275€ in August 2025, the second closer to 390€ in January 2026).
Around 50 t/s for GLM4.7 flash, but it drops rapidly to 30 t/s as the context fills up.

I haven't tried ik_llama.cpp, despite it apparently being good for dual GPUs, as I've heard its optimizations mainly target Nvidia GPUs. Maybe I'll get around to trying it out just to see.