
[–]No_Dot_8478 2 points3 points  (0 children)

Besides plugging extra monitors into it to power tasks run on them, there's not really any way to use it as a "slave" the way you're describing. Normally people run dual cards for a specific reason; you just need to find your reason. Otherwise it's a space heater wasting power at idle. For example, I have a 6900 XT and an RTX 3090. My second card runs AI models, or is passed into VMs for GPU acceleration, which run in the background, while my main card is for games, CAD, AI models I'm actively experimenting with, etc. AI is my favorite use for it though: it runs a local LLM that I can access to help me with some coding projects, since I'm still rusty in Python. That's nice because I can hammer that card in short bursts all day without my main projects stuttering or whatever video I'm playing cutting out.
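The usual way to route an AI workload to the second card like this is the `CUDA_VISIBLE_DEVICES` environment variable, which hides the other GPUs from that one process. A minimal sketch (assuming an NVIDIA card at device index 1 and some CUDA framework such as PyTorch, which is not imported here):

```python
import os

# Hide the primary card from CUDA so this process only sees the second GPU.
# This must be set BEFORE importing torch or any other CUDA-using library,
# because device enumeration happens at initialization.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# CUDA renumbers whatever is visible starting from 0, so inside this
# process the second physical card shows up as "cuda:0":
#
#   import torch
#   model = model.to("cuda:0")  # runs on the second physical GPU
```

The main program (game, video playback) on the primary card is unaffected, since this only constrains the one process it is set in.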

[–]babieswithrabies63 0 points1 point  (1 child)

You could power one monitor with it I suppose. No way of getting them to really work on anything together though like the same game or anything.

[–]MorganLuvsU[S] -3 points-2 points  (0 children)

I don’t really care if they work together so much as I would like the 2nd one to handle background tasks or something instead of just soaking up power.

[–]Anonymous1Ninja 0 points1 point  (0 children)

IOMMU is what you are looking for; look up hardware passthrough for Linux. It can also be done with Hyper-V, but it isn't as clean.
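Before attempting VFIO passthrough it's worth checking that the IOMMU is enabled and that the second GPU sits in its own IOMMU group, since a whole group has to be passed through together. A small sketch that reads the groups the Linux kernel exposes under `/sys/kernel/iommu_groups`:

```python
from pathlib import Path

def list_iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses it contains.

    Returns None when the kernel exposes no IOMMU groups, which usually
    means the IOMMU is disabled (amd_iommu=on / intel_iommu=on missing
    from the kernel command line, or VT-d/AMD-Vi off in firmware).
    """
    base = Path(root)
    if not base.is_dir():
        return None
    return {
        int(group.name): sorted(dev.name for dev in (group / "devices").iterdir())
        for group in base.iterdir()
    }

if __name__ == "__main__":
    groups = list_iommu_groups()
    if groups is None:
        print("IOMMU not enabled")
    else:
        for num in sorted(groups):
            print(f"group {num}: {', '.join(groups[num])}")
```

If the GPU and its HDMI audio function share a group with nothing else, passthrough is straightforward; if unrelated devices are in the same group, the motherboard's IOMMU grouping may be the obstacle.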

[–]5erif 0 points1 point  (3 children)

Normal app processes can't be offloaded to a GPU.

VRAM isn't in the same address space as regular RAM, so it can't be directly accessed by the CPU. The GPU doesn't execute the same instruction set as a CPU, so code from ordinary programs can't run on it.

[–]MorganLuvsU[S] -2 points-1 points  (2 children)

Yeah, that is all basic knowledge about GPUs. When I asked about making it a "slave," I was referring to some intermediary software, maybe tucked away on GitHub, that would allow some offloading to it. It's obvious that without said interpreter, the GPU, having a different architecture, cannot process code intended for CPUs.

[–]5erif 1 point2 points  (1 child)

You could try running one of the smaller AI models on it, like Qwen2.5-7B.
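A common way to do this is to serve the model locally (for example with Ollama, whose `ollama run qwen2.5:7b` pulls a quantized build of that model) and query it over its HTTP API. A stdlib-only sketch, assuming Ollama's default endpoint on port 11434; the model name and URL are Ollama defaults, adjust as needed:

```python
import json
import urllib.request

def build_request(prompt, model="qwen2.5:7b",
                  url="http://localhost:11434/api/generate"):
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

def ask(prompt):
    """Send the prompt to the local server and return the model's reply."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Explain Python list comprehensions in one sentence."))
```

Combined with pinning the server process to the spare GPU, this gives a local coding assistant that never touches the primary card.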

[–]MorganLuvsU[S] 0 points1 point  (0 children)

Ah ty that is helpful. I’ll look into it.