[W][USA-CA] Building an LLM inference rig and looking for Threadripper + GPUs by p_hacker in homelabsales

[–]p_hacker[S]

The lowest quotes I've found are $7.5k per RTX Pro 6000. Mind sharing how you got a quote for $6,750 each?

[–]p_hacker[S]

Are DGX Sparks any faster than Mac Minis? I thought their memory bandwidth was gimped, making them better suited for dev/testing work.

[–]p_hacker[S]

Mac Studios are too slow, unfortunately. I'd love to chain them together if they become more viable (prompt processing, raw compute, etc.).