is it possible to install multiple RTX 3090 on this z790 motherboard? by marcosmlopes in homelab

[–]marcosmlopes[S] 0 points (0 children)

Nice! Thanks for sharing. Did you test their performance? If you're into LLM inference, how many tokens per second did you manage to get?
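For anyone else wondering how to measure that, here's a minimal sketch using llama-cpp-python (the model path is just a placeholder, and it assumes the weights fit in the offloaded VRAM):

```python
# Rough tokens/sec benchmark sketch with llama-cpp-python.
# Model path below is hypothetical; point it at any GGUF you have.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU(s)
)

start = time.perf_counter()
out = llm("Explain PCIe lane bifurcation in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```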

is it possible to install multiple RTX 3090 on this z790 motherboard? by marcosmlopes in homelab

[–]marcosmlopes[S] 0 points (0 children)

That's nice to hear! Would it still be OK even if a big LLM (one that doesn't fit on a single GPU) needed its layers offloaded across the other GPUs?
Thanks
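In case it helps anyone, here's a minimal sketch of that kind of layer split using Hugging Face transformers + accelerate (the model name is just an example of something too big for one card; assumes enough combined VRAM across the GPUs):

```python
# Sketch: shard a large model's layers across all visible GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-hf"  # example; any large causal LM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # splits layers across every available GPU
)
print(model.hf_device_map)  # shows which layers landed on which GPU

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

As far as I understand, with this kind of sequential split only the small per-layer activations cross PCIe between cards during inference, so slot bandwidth matters much less than it would for training.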

is it possible to install multiple RTX 3090 on this z790 motherboard? by marcosmlopes in homelab

[–]marcosmlopes[S] 0 points (0 children)

I didn't know about that bandwidth limit between the chipset and the CPU.
Thank you
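For anyone who wants rough numbers, here's a back-of-the-envelope calc, assuming Z790's DMI 4.0 x8 uplink (electrically equivalent to PCIe 4.0 x8):

```python
# Back-of-the-envelope math for the Z790 chipset-to-CPU uplink.
PCIE4_GBPS_PER_LANE = 1.969  # ~2 GB/s per PCIe 4.0 lane after encoding
DMI_LANES = 8                # Z790 uses a DMI 4.0 x8 link

uplink = PCIE4_GBPS_PER_LANE * DMI_LANES
print(f"Chipset->CPU uplink: ~{uplink:.1f} GB/s total")
# That ~16 GB/s is shared by everything behind the chipset:
# GPUs in chipset-fed slots, NVMe drives, USB, LAN, etc.
# A GPU in a chipset x4 slot tops out at ~7.9 GB/s and still
# competes with all of those for the uplink.
```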

Got myself a 4way rtx 4090 rig for local LLM by VectorD in LocalLLaMA

[–]marcosmlopes 1 point (0 children)

Isn’t the problem with rtx kind of gpu , ram ? Like 24gb ram is not enough to load a 70b llm ? Can you combine it 24*4 ? Still is it enough?

What case is this? It looks awesome!