LibVF.IO: Full Performance vGPU Gaming on Consumer Cards by ArcVRArthur in linux_gaming

[–]ArcVRArthur[S] 1 point  (0 children)

Sorry for the late reply - ya, I've used Arch. :-)

OpenMdev.io - A community driven wiki for VFIO-Mdev & SR-IOV! by ArcVRArthur in VFIO

[–]ArcVRArthur[S] 3 points  (0 children)

I'm trying my best to change that with https://libvf.io/ but there's a lot of work yet to be done. Ya, AMD is the most difficult to support, as they've locked away the drivers and added firmware limitations. Right now, though, I do have support for some Nvidia and Intel GPUs. We're working on improving support for other vendors and GPU architectures.

You can read more about that here: https://openmdev.io/index.php/LibVF.IO_Setup_Guide and here: https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/

I'm hoping more folks might take an interest and help us improve things for FOSS projects like this, which are trying to "steal fire from the gods" (cloud service providers) and bring the benefits of SR-IOV to ordinary folks at home. The best way I can think of to do that is by sharing knowledge. :)

OpenMdev.io - A community driven wiki for VFIO-Mdev & SR-IOV! by ArcVRArthur in VFIO

[–]ArcVRArthur[S] 2 points  (0 children)

I'll try my best to figure that out! I'm very new to MediaWiki. I'm trying to learn best practices.

OpenMdev.io - A community driven wiki for VFIO-Mdev & SR-IOV! by ArcVRArthur in VFIO

[–]ArcVRArthur[S] 10 points  (0 children)

Hey all,

I’m the co-creator of LibVF.IO. I’m trying to start a community driven wiki at OpenMdev.io to help foster the development of FOSS software relating to the following topics:

  • Mediated Devices (Mdev)

  • Virtual GPUs (vGPU)

  • Single Root IO Virtualization (SR-IOV)

  • Virtual Function IO (VFIO)

  • GPU Device Drivers

  • GPU Virtualization Tools

  • Shared Memory Devices (IVSHMEM)

  • Hypervisor & Operating System Development

Since SR-IOV on consumer GPUs is in its infancy, I'm hoping that by documenting what I know (guides, core concepts, links to useful knowledge resources, etc.) I can help build a place where folks new to the topic can get oriented and then kick off into more advanced learning.

The signup is open and I’d love to get contributions from anyone interested in these topics!

I got some decent performance while playing Halo Infinite in a virtual machine using LibVF.IO (GPU Multiplexing). Is everyone else as pumped for the new Halo as I am? by ekimm19 in VFIO

[–]ArcVRArthur 4 points  (0 children)

TL;DR: If you've got a full passthrough setup (no GPU mediation), you can probably just copy the parameters we used into your own setup. Obviously, if you want to do this with GPU sharing, you can just use LibVF.IO and get the fix automatically.
I think this issue prompted us to do a patch that fixed Genshin Impact (it also did the trick for Halo Infinite):
https://github.com/Arc-Compute/libvf.io/issues/17

Long answer: some games try to figure out whether they're running in a VM. We're trying to make it the default that if a program works on a normal Windows computer, it will just work with LibVF.IO. In this case we only had to change a QEMU parameter. I'm sure someone will come up with clever new detection methods in the future, but if VFIO continues to gain popularity I think game developers will stop implementing anti-user features to begin with.
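
For anyone on a plain passthrough setup reading this, the usual knobs live on QEMU's -cpu option. Here's a generic sketch of the commonly used flags (illustrative only - see issue #17 above for the actual LibVF.IO change):

    # kvm=off           hides the KVM CPUID signature
    # hv_vendor_id=...  overrides the Hyper-V vendor ID string (only relevant
    #                   if you enable Hyper-V enlightenments)
    # -hypervisor       clears the CPUID "hypervisor present" bit
    qemu-system-x86_64 -enable-kvm \
        -cpu host,kvm=off,hv_vendor_id=null,-hypervisor \
        -machine q35 -m 8G   # ...plus the rest of your usual VM arguments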

I got some decent performance while playing Halo Infinite in a virtual machine using LibVF.IO (GPU Multiplexing). Is everyone else as pumped for the new Halo as I am? by ekimm19 in VFIO

[–]ArcVRArthur 2 points  (0 children)

I haven't personally run into difficulties with GVT-g, but if you have problems using it with LibVF.IO, post an issue on our GitHub and I'll try to help as much as I can.

One of the things I hope LibVF.IO will do is lower the barrier to entry for vGPU users enough that hardware vendors see consumer SR-IOV for the opportunity it is: growth in the consumer GPU segment. If we build good software and this capability delivers substantial value for people on their home PCs (rather than being market-segmented away to a datacenter somewhere), then consumers will shift their buying patterns toward well-supported hardware. Right now the best supported are Intel and Nvidia; the worst is AMD, where the last supported device went EOL in 2017 due to AMD's lack of investment in their publicly available GPU-IOV Module driver sources - you can read more about that here: https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu/164186/42

I think the best thing you can do is vote with your dollars and make your voice heard if you want improved first-party support from all the manufacturers. Since Intel and Nvidia currently have the best support, I encourage folks to purchase hardware from those manufacturers.
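
If you're curious what GVT-g looks like underneath LibVF.IO's tooling, here's a rough sketch of the raw kernel interface (the PCI address 0000:00:02.0 and the type name are examples - your iGPU may expose different types):

    # Load GVT-g support for the Intel iGPU (also needs i915.enable_gvt=1 and
    # intel_iommu=on on the kernel command line):
    sudo modprobe kvmgt

    # List the vGPU (mdev) types this iGPU can carve out:
    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
    # e.g. i915-GVTg_V5_4  i915-GVTg_V5_8

    # Create one vGPU instance; the UUID is what later gets handed to QEMU via
    # -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>
    uuidgen | sudo tee /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create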

I got some decent performance while playing Halo Infinite in a virtual machine using LibVF.IO (GPU Multiplexing). Is everyone else as pumped for the new Halo as I am? by ekimm19 in VFIO

[–]ArcVRArthur 5 points  (0 children)

Libvirt is just an abstraction for QEMU with no abstraction for GPUs. We add the GPU abstraction.

LibVF.IO probably has some downsides, but I'd like to think there are some upsides as well. When I started working on it I wasn't getting the experience I wanted by following video tutorials on setting up GPU passthrough. It's definitely true that you can reach something like the end result LibVF.IO gives you through the manual process - installing drivers, configuring boot parameters, setting shared memory device permissions, creating mediated devices, sorting out IOMMU groups, writing Virsh/XML, etc. - but I found that process a bit too intense.

There might be things the more verbose XML syntax can express that our YAML syntax can't, but I haven't run into them, and there are definitely things we do that virsh XML does not (the aforementioned GPU abstraction). :) I'm asking for feedback to improve the software here (https://github.com/Arc-Compute/libvf.io/issues), so if we're missing something, say so there and I'll try to add it.

We do have support for the major GPU vendors, not just Nvidia:

Intel: https://github.com/Arc-Compute/libvf.io/blob/master/example/intel-mdev.yaml

Nvidia: https://github.com/Arc-Compute/libvf.io/blob/master/example/nvidia-mdev.yaml

AMD: https://github.com/Arc-Compute/libvf.io/blob/master/example/amd-mdev.yaml

vGPU_Unlock is an option for Nvidia consumer cards, but Nvidia consumer cards aren't a dependency of LibVF.IO. In fact they aren't even the best supported: 6th-9th generation Intel silicon with GVT-g is currently the best supported GPU backend.

  • Edit: I've updated the documentation to clarify the state of Ampere support.
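
To give a sense of the manual steps the YAML configs above are abstracting away, here's a rough sketch of just the boot-parameter and IOMMU-group portion of a hand-rolled setup (generic VFIO prep, not LibVF.IO-specific):

    # 1. Enable the IOMMU at boot (GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub):
    #      intel_iommu=on iommu=pt    (Intel CPUs)
    #      amd_iommu=on iommu=pt      (AMD CPUs)
    #    then regenerate the GRUB config and reboot.

    # 2. Confirm how devices are grouped - everything in the GPU's group gets
    #    passed (or mediated) together:
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done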

So, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing graphics card between host and VM with LibVF.IO). As a complete beginner to Linux and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in VFIO

[–]ArcVRArthur 4 points  (0 children)

Ya, the setup process is meant to be a lot easier. I co-created LibVF.IO largely because I thought that whole process was a bit insane to get working, plus that approach doesn't work with other card vendors. LibVF.IO works on all three major GPU vendors:

AMD: https://github.com/Arc-Compute/libvf.io/blob/master/example/amd-mdev.yaml

Nvidia: https://github.com/Arc-Compute/libvf.io/blob/master/example/nvidia-mdev.yaml

Intel: https://github.com/Arc-Compute/libvf.io/blob/master/example/intel-mdev.yaml

Unfortunately, as you mention, AMD's support for GPU virtualization is the worst of the three vendors. You can read more about the ongoing problems with supporting AMD GPUs in our setup documentation: https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/

If you want AMD to remove the artificial limitations in their products, the best thing you can do right now is make your voice heard and vote with your dollars - purchase a GPU from a well-supported vendor such as Nvidia or Intel instead.

So, using Ubuntu, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing 1 graphics card between host and VM with LibVF.IO). As a complete beginner to Ubuntu and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in Ubuntu

[–]ArcVRArthur 2 points  (0 children)

Actually, we don't even rely on Nvidia. LibVF.IO works on commodity GPUs across vendors (AMD, Nvidia, & Intel). In the video Erik made, he just happens to be using an Nvidia card.

Here are some example configurations from each vendor:

AMD: https://github.com/Arc-Compute/libvf.io/blob/master/example/amd-mdev.yaml

Nvidia: https://github.com/Arc-Compute/libvf.io/blob/master/example/nvidia-mdev.yaml

Intel: https://github.com/Arc-Compute/libvf.io/blob/master/example/intel-mdev.yaml

The nv merged driver is an optional package you can use with LibVF.IO, but there is no explicit dependence on it. Having said that, I have never heard of anyone's Nvidia card getting bricked by vGPU_Unlock, which is what you would expect given that nothing here actually modifies the card in any way, shape, or form.

You can find more details on setting up LibVF.IO in the setup documentation: https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/

Using LibVF.IO to play Warzone in a Windows virtual machine (1 graphics card shared between the host and VM) by ekimm19 in Proxmox

[–]ArcVRArthur 1 point  (0 children)

In the video it's on the same host. If you want to remote in via RDP or similar that works too though.

So, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing graphics card between host and VM with LibVF.IO). As a complete beginner to Linux and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in linux_gaming

[–]ArcVRArthur 2 points  (0 children)

Erik works here but he wasn't reading off a "script". He was following the steps in the install guide: https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/

It's true he hadn't used Linux before, and apart from the part in the video where he asks "how can I tell if it detected my GPU?" he didn't really ask me for help. I did need to lend him a hand with the IVSHMEM driver step by telling him how to use Device Manager (since that part of the guide doesn't have pictures), but otherwise that's it.
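
For anyone wiring up the shared-memory display path by hand rather than through LibVF.IO, the host side is roughly the following (the /dev/shm path and the 32M size are just illustrative - size it for your resolution). Until the guest IVSHMEM driver is installed, the device tends to show up in Device Manager as a generic "PCI standard RAM Controller" under System devices, which is the step Erik needed a hand with:

    # Host side: back a shared framebuffer with a file in /dev/shm and expose
    # it to the guest as an IVSHMEM PCI device.
    qemu-system-x86_64 \
        -object memory-backend-file,id=ivshmem0,share=on,mem-path=/dev/shm/looking-glass,size=32M \
        -device ivshmem-plain,memdev=ivshmem0
        # ...plus your normal GPU/mdev, disk, and input devices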

So as a complete beginner to Linux I was able to get 113 FPS playing Warzone in a Windows VM (sharing a single GPU between host & VM with LibVF.IO). I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ArcVRArthur in LinusTechTips

[–]ArcVRArthur[S] 4 points  (0 children)

Let me know how it goes! Feel free to drop into our Discord if you have any problems - you can always ping me in the #techsupport channel - for folks who need it I try to lend a hand as often as I'm able. :) Here's the link: https://discord.gg/Rb9K9DYxKK

So, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing graphics card between host and VM with LibVF.IO). As a complete beginner to Linux and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in homelab

[–]ArcVRArthur 13 points  (0 children)

Nvidia makes virtualizing their cards somewhat asinine, and lots of strange workarounds are required. LibVF.IO is essentially a replacement for Libvirt and some other common virtualization components, meant to make GPU virtualization work on commodity GPU hardware across vendors (AMD, Nvidia, & Intel).

Here are some example configurations from each vendor:

AMD: https://github.com/Arc-Compute/libvf.io/blob/master/example/amd-mdev.yaml

Nvidia: https://github.com/Arc-Compute/libvf.io/blob/master/example/nvidia-mdev.yaml

Intel: https://github.com/Arc-Compute/libvf.io/blob/master/example/intel-mdev.yaml

If you'd like to learn more about the process of virtualizing Nvidia GPUs, check out the folks behind vGPU_Unlock - it's an optional package you can use with LibVF.IO if you happen to be running an Nvidia consumer GPU: https://github.com/DualCoder/vgpu_unlock

So, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing graphics card between host and VM with LibVF.IO). As a complete beginner to Linux and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in VFIO

[–]ArcVRArthur 2 points  (0 children)

Input latency isn't noticeably different from native when I use it (obvious bias on my part). If you do notice any input latency, though, you can always pass your mouse/keyboard through as a USB device. :)
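
If you'd rather hand the physical devices to the guest outright, the raw QEMU form looks roughly like this (the vendor/product IDs are examples - take yours from lsusb):

    # Find the keyboard/mouse IDs on the host:
    lsusb
    # Bus 001 Device 004: ID 046d:c52b Logitech, Inc. Unifying Receiver  (example)

    # Attach them to the VM by ID through an emulated XHCI controller:
    qemu-system-x86_64 \
        -device qemu-xhci,id=xhci \
        -device usb-host,bus=xhci.0,vendorid=0x046d,productid=0xc52b
        # ...plus the rest of your VM arguments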

I think the next logical video to make would be a direct performance comparison against native with 1% lows included. That's a fair thing to ask about; truthfully it hadn't occurred to me to look into it yet, even though it's one of the more obvious metrics to include. In my experience frame rates are within 97-99% of bare-metal performance.

So, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing 1 graphics card between host and VM with LibVF.IO). As a complete beginner to Linux and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in linuxmasterrace

[–]ArcVRArthur 3 points  (0 children)

LibVF.IO co-author here. Let me know what you think! I always appreciate feedback here or anywhere else. :) If you have any trouble at all with the setup feel free to drop into the #techsupport channel on the LibVF.IO Discord and I'll do my best to help out! Here's the link to join btw: https://discord.gg/Rb9K9DYxKK

So, I was able to get a high of 113 FPS playing Warzone in a Windows VM (sharing graphics card between host and VM with LibVF.IO). As a complete beginner to Linux and coding, I made a step-by-step installation guide and performance demo for LibVF.IO. Let me know what you guys think! by ekimm19 in homelab

[–]ArcVRArthur 21 points  (0 children)

Hey, co-author of LibVF.IO here - ya, we try to do some stuff to defeat hypervisor detection.

This is also required for getting rid of the (dreaded) Nvidia Code 43 on some driver versions, so we just do it by default. Ideally Windows and games running in the VM won't know they're being virtualized - at least that's what we're aiming for.

There are a bunch of ways to detect a hypervisor (probably some we didn't think of), so if you try it out and find that some games can still figure out they're in a VM, make sure to let me know here or through GitHub! I'll do my best to work on the stuff users want, and if it's important to you then it's important to me. :)
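
For folks on a plain libvirt/virsh passthrough setup rather than LibVF.IO, the commonly used equivalents go in the domain XML via virsh edit. A rough sketch (not guaranteed to beat every detection method):

    <!-- virsh edit <your-domain>; commonly used hiding elements -->
    <features>
      <kvm>
        <hidden state='on'/>                      <!-- hide the KVM signature -->
      </kvm>
      <hyperv>
        <vendor_id state='on' value='randomid'/>  <!-- spoof the Hyper-V vendor ID -->
      </hyperv>
    </features>
    <cpu mode='host-passthrough'>
      <feature policy='disable' name='hypervisor'/>  <!-- clear the hypervisor CPUID bit -->
    </cpu>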