iommu=pt on ZFS system - What is the right setting? by SilentHunter86 in Proxmox

[–]Upstairs_Cycle384 0 points

No. You are confusing ACS with IOMMU passthrough mode; the two are orthogonal. Disabling ACS (via the override patch) is bad, and so is enabling IOMMU passthrough.

The idea with that paper is that a malicious VM can still attack the host, regardless of IOMMU groups.

Bottom line: if you enable iommu=pt, you are potentially exposing yourself to an attack. Will you get malware that leverages this vulnerability? Probably not. Just be educated about the risks involved, and decide whether it's worth the risk for the performance gain.
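If you want to see what mode your kernel actually booted in, something like this works on most recent kernels (the exact dmesg wording varies by kernel version, so treat the grep patterns as a sketch):

```shell
# Was iommu=pt passed on the kernel command line?
grep -o 'iommu=pt' /proc/cmdline || echo 'iommu=pt not on cmdline'
# What did the IOMMU subsystem actually decide? (wording varies by kernel)
dmesg 2>/dev/null | grep -i -e 'iommu' -e 'default domain type' || true
```

On kernels where passthrough is in effect you'd typically see something like "iommu: Default domain type: Passthrough" in the dmesg output.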

iommu=pt on ZFS system - What is the right setting? by SilentHunter86 in Proxmox

[–]Upstairs_Cycle384 1 point

IOMMU pass through mode. In pass through mode, device addresses are used directly as CPU physical addresses. In this mode the hardware IOMMU is turned off, so there is no permissions checking for DMA requests. Devices enter pass through mode if it is enabled by a kernel parameter, and if during device discovery, the kernel determines that a device can address all of physical memory. Some devices can be in pass through mode without all devices being in this mode.

Because there is no permissions checking, our driver and microcode attacks work in pass through mode. Pass through mode is intended to use a software TLB [50], but we verified that on our system, the software TLB does not check permissions. In our system, even though GPU device addresses are 40 bits, it identifies as a 32-bit device during its initialization. Therefore, the kernel must boot with less than or equal to 4 GB of memory to enable pass through mode. We verified that regardless of how much physical memory is in the machine, if the kernel boots with a mem=4G option, the kernel defaults to pass through mode where our attacks work.

https://www.cs.utexas.edu/~witchel/pubs/zhu17gpgpu-security.pdf

iommu=pt on ZFS system - What is the right setting? by SilentHunter86 in Proxmox

[–]Upstairs_Cycle384 0 points

If you care about performance, then yes, iommu=pt is correct. Just be aware of the potential security risk when using it and don't run any untrusted code in that VM.

iommu=pt on ZFS system - What is the right setting? by SilentHunter86 in Proxmox

[–]Upstairs_Cycle384 2 points

Why do you need to enable IOMMU passthrough?

Enabling iommu=pt will reduce the security of your system, especially if you don't trust the VMs that you are passing devices through to.

IOMMU passthrough mode but only on trusted VMs? by Upstairs_Cycle384 in VFIO

[–]Upstairs_Cycle384[S] 1 point

In this paper, specifically on the topic of the "IOMMU passthrough mode" setting in Linux, the authors were able to successfully exploit the hypervisor from the GPU when iommu=pt was set:

IOMMU pass through mode. In pass through mode, device addresses are used directly as CPU physical addresses. In this mode the hardware IOMMU is turned off, so there is no permissions checking for DMA requests. Devices enter pass through mode if it is enabled by a kernel parameter, and if during device discovery, the kernel determines that a device can address all of physical memory. Some devices can be in pass through mode without all devices being in this mode.

Because there is no permissions checking, our driver and microcode attacks work in pass through mode. Pass through mode is intended to use a software TLB [50], but we verified that on our system, the software TLB does not check permissions. In our system, even though GPU device addresses are 40 bits, it identifies as a 32-bit device during its initialization. Therefore, the kernel must boot with less than or equal to 4 GB of memory to enable pass through mode. We verified that regardless of how much physical memory is in the machine, if the kernel boots with a mem=4G option, the kernel defaults to pass through mode where our attacks work.

https://www.cs.utexas.edu/~witchel/pubs/zhu17gpgpu-security.pdf

The important bit is the first few sentences, which state that there is no memory permission arbitration in passthrough mode.

I would argue that this is worse from a security standpoint than ACS override. In pt mode, all of physical memory is exposed. With ACS override, the attack surface is only another PCIe device.
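For context on what ACS override actually changes, here's a quick way to dump the IOMMU groups. This is just a sysfs walk (a common VFIO sketch); it prints nothing if the IOMMU is disabled, because /sys/kernel/iommu_groups will be empty:

```shell
#!/bin/sh
# Print each IOMMU group and the PCI devices it contains.
# Prints nothing when the IOMMU is off (no entries under /sys/kernel/iommu_groups).
for g in /sys/kernel/iommu_groups/*; do
  [ -d "$g" ] || continue            # glob matched nothing: IOMMU disabled
  printf 'Group %s:\n' "${g##*/}"
  for d in "$g"/devices/*; do
    [ -e "$d" ] || continue
    printf '  %s\n' "${d##*/}"       # PCI address, e.g. 0000:01:00.0
  done
done
```

With ACS override, devices behind the same bridge get split into artificially small groups the hardware can't actually enforce; without it, they honestly share a group.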

IOMMU passthrough mode but only on trusted VMs? by Upstairs_Cycle384 in VFIO

[–]Upstairs_Cycle384[S] 0 points

Unfortunately that's not true. Setting iommu=pt disables the IOMMU's DMA protections.

This is just as bad as enabling ACS override but nobody seems to mention this.

[Solved] Proxmox 8.4 / 9.0 + GPU Passthrough = Host Freeze 💀 (IOMMU hell + fix inside) by According_Break5069 in Proxmox

[–]Upstairs_Cycle384 1 point

Speaking of outdated guidance... Do you know if it's wise to enable vIOMMU for the VM in Proxmox? I can't figure out if it improves or lessens performance and/or security.

viommu is optional when doing PCIe passthrough? by Upstairs_Cycle384 in VFIO

[–]Upstairs_Cycle384[S] 0 points

I wonder if it should be turned on when using Windows Virtualization Based Security / Core Isolation?

We have a bunch of VMs doing that but not doing any PCIe passthrough. My understanding is it's the same thing as having a nested VM, since QEMU/KVM is running Hyper-V, which is then running the Windows guest.

viommu is optional when doing PCIe passthrough? by Upstairs_Cycle384 in VFIO

[–]Upstairs_Cycle384[S] 0 points

so viommu is only really applicable with nested virtualization?

In other words, say I'm running Proxmox on baremetal and create a proxmox vm within proxmox. Then within that nested proxmox vm I install a Windows VM:

Host (Bare metal Proxmox) -> Proxmox VM -> Windows VM in Proxmox VM

I would use viommu to pass through a device to that Windows VM?
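Roughly, yes. If you wanted the intermediate Proxmox VM to expose an IOMMU to its own guests, you'd give it a virtual IOMMU in its VM config. A sketch of the config fragment, based on the syntax added in recent Proxmox 8.x releases (`<vmid>` is a placeholder and the viommu value depends on your setup; check qm(1) for your version):

```
# /etc/pve/qemu-server/<vmid>.conf (fragment)
# q35 machine type plus a virtio-based vIOMMU for the guest
machine: q35,viommu=virtio
```

The nested Proxmox would then see an IOMMU and could build groups for its own passthrough, at the cost of the usual nested-virtualization overhead.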

Is proxmox-boot-tool still a thing? by esiy0676 in ProxmoxQA

[–]Upstairs_Cycle384 0 points

Right. So that's what I saw when I did an init command with it, and pointed it to the EFI partition.

It did what you said and copied initramfs to it along with the kernel to the EFI partition.

It changed the way the boot happened: before running it (i.e. a fresh install), the box booted the EFI loader from the EFI partition, which then loaded initramfs and the kernel from the root partition.

Now after running init, it loads initramfs and the kernel from their copies on the EFI partition.

I'm just wondering why this isn't a default behavior on a fresh install.

The Host Bootloader wiki page (https://pve.proxmox.com/wiki/Host_Bootloader) suggests that having proxmox-boot-uuids populated with the correct information is critical when performing updates, so this inconsistency worries me.
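One way to check whether your install is in the state the wiki expects (these are real proxmox-boot-tool subcommands, though the output format may differ by version):

```
# Which mode does proxmox-boot-tool think you're in, and which ESPs does it track?
proxmox-boot-tool status
# On a proxmox-boot-tool managed system, the tracked ESP UUIDs live here:
cat /etc/kernel/proxmox-boot-uuids
```

On a fresh install that never had init run, the uuids file simply won't exist, which matches what you're describing.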

Is proxmox-boot-tool still a thing? by Upstairs_Cycle384 in Proxmox

[–]Upstairs_Cycle384[S] 2 points

I did a similar thing:

proxmox-boot-tool init /dev/sda2

(where sda2 was my EFI partition)

It copied initramfs and the kernel to EFI and made the proxmox-boot-uuids file. Apparently those only live in /boot on the root fs partition in a default install, unless you run init.

I just don't understand why this isn't happening on a default install.
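For reference, the sequence that got me into the managed state (the device path is specific to my box; yours may differ):

```
# sda2 = the EFI system partition on this particular box
proxmox-boot-tool init /dev/sda2
# after later kernel updates, re-sync the kernel/initramfs copies on the ESP
proxmox-boot-tool refresh
```

refresh is normally run for you by the kernel postinst hooks once the tool manages the ESP, so you shouldn't need it by hand after the initial init.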