Hardware improvements have made PCI/PCIE (e.g. GPU) passthrough support much better. However, issues can still arise when passing through GPUs and other PCI/PCIE devices to Proxmox VE virtual machines. This article covers common PCI/PCIE passthrough problems on Proxmox VE and their solutions.
Table of Contents
- What to do if IOMMU Interrupt Remapping is not Supported?
- What to do if My GPU (or PCI/PCIE Device) is not in its own IOMMU Group?
- How do I Blacklist AMD GPU Drivers on Proxmox VE?
- How do I Blacklist NVIDIA GPU Drivers on Proxmox VE?
- How do I Blacklist Intel GPU Drivers on Proxmox VE?
- How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE?
- I Have Blacklisted the AMD GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
- I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
- I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
- A Single GPU Used the VFIO Driver, But When I Configured a Second GPU, It Didn’t Work, Why?
- Why Disable VGA Arbitration for the GPUs and How to Do It?
- What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO?
- GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why?
- What is the AMD Vendor Reset Bug and How to Solve it?
- How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine?
- What to do If Some Apps Crash the Proxmox VE Windows Virtual Machine?
- How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?
- How to Update Proxmox VE initramfs?
- How to Update Proxmox VE GRUB Bootloader?
What to do If IOMMU Interrupt Remapping is not Supported?
IOMMU interrupt remapping is needed for PCI/PCIE passthrough.
To see if your processor supports IOMMU interrupt remapping, use this command:
$ dmesg | grep -i remap
If supported, you’ll see output confirming interrupt remapping is enabled. If there’s no output, it’s not supported.
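The exact message varies by platform. For example (illustrative output, not guaranteed to match your hardware), an AMD system may print AMD-Vi: Interrupt remapping enabled, while an Intel system may print DMAR-IR: Enabled IRQ remapping in x2apic mode.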
If your processor doesn’t support IOMMU interrupt remapping, you can still pass through PCI/PCIE devices by allowing unsafe interrupts.
Step 1: Create a new file iommu_unsafe_interrupts.conf in the /etc/modprobe.d directory and open it with the nano text editor:
$ nano /etc/modprobe.d/iommu_unsafe_interrupts.conf
Step 2: Add the following line to the iommu_unsafe_interrupts.conf file:
options vfio_iommu_type1 allow_unsafe_interrupts=1
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
What to do if my GPU (or PCI/PCIE Device) is not in its own IOMMU Group?
Enabling ACS Override Kernel Patch
Enabling the ACS override kernel patch can help to place the GPU in its own IOMMU group.
Step 1: Open the /etc/default/grub file using the nano text editor:
$ nano /etc/default/grub
Step 2: Add the kernel boot option pcie_acs_override=downstream at the end of the GRUB_CMDLINE_LINUX_DEFAULT line.
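For example, on an Intel system that already has IOMMU enabled, the finished line might look something like this (the other options shown are assumptions for illustration; keep whatever options your file already contains):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream"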
Step 3: Save the file by pressing Ctrl + X, then Y, and finally Enter.
Step 4: Update the Proxmox VE GRUB bootloader for the changes to take effect.
If the pcie_acs_override=downstream option alone doesn’t improve IOMMU grouping, use pcie_acs_override=downstream,multifunction instead.
Using pcie_acs_override can cause security and stability issues by fooling the kernel into thinking PCIE devices are isolated when they aren’t. Start with the less aggressive pcie_acs_override=downstream option and only use pcie_acs_override=downstream,multifunction if needed.
Moving the GPU to a Different PCI/PCIE Slot
If your server has multiple PCI/PCIE slots, moving the GPU to a different slot may place it in its own IOMMU group.
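Whichever method you use, you can verify the result after rebooting by listing the IOMMU groups. A minimal shell one-liner (an addition for convenience; it simply walks /sys/kernel/iommu_groups and prints each device with its group number):
$ for d in /sys/kernel/iommu_groups/*/devices/*; do g=${d#*/iommu_groups/}; echo "IOMMU group ${g%%/*}: $(lspci -nns ${d##*/})"; done
The GPU and its audio function should appear in a group of their own (functions of the same device sharing one group is fine).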
How do I Blacklist AMD GPU Drivers on Proxmox VE?
If you want to pass through an AMD GPU to Proxmox VE virtual machines, you must blacklist the AMD GPU drivers on the Proxmox VE host.
Step 1: Open the /etc/modprobe.d/blacklist.conf file using the nano text editor:
$ nano /etc/modprobe.d/blacklist.conf
Step 2: Add the following lines to the /etc/modprobe.d/blacklist.conf file:
blacklist radeon
blacklist amdgpu
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
How do I Blacklist NVIDIA GPU Drivers on Proxmox VE?
If you want to pass through an NVIDIA GPU to Proxmox VE virtual machines, you must blacklist the NVIDIA GPU drivers on the Proxmox VE host.
Step 1: Open the /etc/modprobe.d/blacklist.conf file using the nano text editor:
$ nano /etc/modprobe.d/blacklist.conf
Step 2: Add the following lines to the /etc/modprobe.d/blacklist.conf file:
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
How do I Blacklist Intel GPU Drivers on Proxmox VE?
If you want to pass through an Intel GPU to Proxmox VE virtual machines, you must blacklist the Intel GPU drivers on the Proxmox VE host.
Step 1: Open the /etc/modprobe.d/blacklist.conf file using the nano text editor:
$ nano /etc/modprobe.d/blacklist.conf
Step 2: Add the following lines to the /etc/modprobe.d/blacklist.conf file:
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
How to Check if my GPU (or PCI/PCIE Device) is Using the VFIO Driver on Proxmox VE?
To verify that your GPU or PCI/PCIE devices are using the VFIO driver, run this command:
$ lspci -v
If the device is using the VFIO driver, you’ll see the line Kernel driver in use: vfio-pci in its lspci entry.
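To limit the output to a single device, you can pass its PCI address to lspci with the -s option (assuming the GPU sits at 01:00.0; adjust the address to your system):
$ lspci -v -s 01:00.0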
I Have Blacklisted the AMD GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
If blacklisting the AMD GPU drivers isn’t enough, configure the AMD GPU drivers to load after the VFIO driver.
Step 1: Open the /etc/modprobe.d/vfio.conf file using the nano text editor:
$ nano /etc/modprobe.d/vfio.conf
Step 2: Add the following lines to the /etc/modprobe.d/vfio.conf file:
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
I Have Blacklisted the NVIDIA GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
If blacklisting the NVIDIA GPU drivers isn’t enough, configure the NVIDIA GPU drivers to load after the VFIO driver.
Step 1: Open the /etc/modprobe.d/vfio.conf file using the nano text editor:
$ nano /etc/modprobe.d/vfio.conf
Step 2: Add the following lines to the /etc/modprobe.d/vfio.conf file:
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidiafb pre: vfio-pci
softdep nvidia_drm pre: vfio-pci
softdep drm pre: vfio-pci
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
I Have Blacklisted the Intel GPU Drivers, Still, the GPU is not Using the VFIO Driver, What to Do?
If blacklisting the Intel GPU drivers isn’t enough, configure the Intel GPU drivers to load after the VFIO driver.
Step 1: Open the /etc/modprobe.d/vfio.conf file using the nano text editor:
$ nano /etc/modprobe.d/vfio.conf
Step 2: Add the following lines to the /etc/modprobe.d/vfio.conf file:
softdep snd_hda_intel pre: vfio-pci
softdep snd_hda_codec_hdmi pre: vfio-pci
softdep i915 pre: vfio-pci
Then, press Ctrl + X, then Y, and finally Enter to save.
Step 3: Update the initramfs of your Proxmox VE server.
A Single GPU Used the VFIO Driver, But When I Configured a Second GPU, It Didn’t Work, Why?
In the /etc/modprobe.d/vfio.conf file, the IDs of all PCI/PCIE devices that should use the VFIO driver must be on a single line. One device per line won’t work.
For example, if you have 2 GPUs that you want to configure to use the VFIO driver, you must add their IDs on a single line in the /etc/modprobe.d/vfio.conf file as follows:
options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>
To add another GPU, append it to the existing vfio-pci line:
options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio>,<GPU-3>,<GPU-3-Audio>
Do not specify PCI/PCIE IDs this way:
options vfio-pci ids=<GPU-1>,<GPU-1-Audio>
options vfio-pci ids=<GPU-2>,<GPU-2-Audio>
options vfio-pci ids=<GPU-3>,<GPU-3-Audio>
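The IDs are the vendor:device pairs that lspci -nn prints in square brackets for each device. For example, with hypothetical IDs for an NVIDIA GPU and its HDMI audio function (use the IDs lspci reports on your own system):
$ lspci -nn | grep -i nvidia
options vfio-pci ids=10de:1c82,10de:0fb9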
Why Disable VGA Arbitration for the GPUs and How to Do It?
If using UEFI/OVMF BIOS on the Proxmox VE virtual machine, disable VGA arbitration to reduce legacy code during boot.
To disable VGA arbitration for the GPUs, add disable_vga=1 to the end of the vfio-pci options line in the /etc/modprobe.d/vfio.conf file:
options vfio-pci ids=<GPU-1>,<GPU-1-Audio>,<GPU-2>,<GPU-2-Audio> disable_vga=1
What if my GPU is Still not Using the VFIO Driver Even After Configuring VFIO?
Disabling the Video Framebuffer
If your GPU still doesn’t use the VFIO driver, boot Proxmox VE with kernel options that disable the video framebuffer.
Step 1: Open the GRUB bootloader configuration file /etc/default/grub using the nano text editor:
$ nano /etc/default/grub
Step 2: Add the kernel option initcall_blacklist=sysfb_init at the end of the GRUB_CMDLINE_LINUX_DEFAULT line.
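Assuming the same illustrative Intel system as before, the line might then look like this (keep whatever options your file already has):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream initcall_blacklist=sysfb_init"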
Step 3: Save the file by pressing Ctrl + X, then Y, and finally Enter.
Step 4: Update the Proxmox VE GRUB bootloader for the changes to take effect.
On older Proxmox VE versions (7.1 and older), use these kernel options instead: nofb nomodeset video=vesafb:off video=efifb:off video=simplefb:off
GPU Passthrough Showed No Errors, But I’m Getting a Black Screen on the Monitor Connected to the GPU Passed to the Proxmox VE VM, Why?
Step 1: After passing a GPU to a Proxmox VE virtual machine, use the Default Graphics card before starting the VM.
This lets you access the VM’s display from the Proxmox VE web UI, download the GPU driver installer, and install it.
Step 2: After installing the GPU driver, the VM’s screen will show on the monitor connected to the passed GPU.
Step 3: Power off the VM and set the Display Graphics card of the VM to none.
The VM’s screen will then display only on the monitor connected to the passed GPU, providing a direct experience.
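If you prefer the shell over the web UI, the same Display change can be made with the qm tool (assuming the VM ID is 100):
$ qm set 100 --vga none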
Avoid using the SPICE, VirtIO GPU, and VirGL GPU Display Graphics card options on VMs configured for GPU passthrough, as this can cause issues.
What is the AMD Vendor Reset Bug and How to Solve it?
AMD GPUs have a “vendor reset bug”: after passing an AMD GPU to a Proxmox VE VM, you might not be able to reuse it in another VM. The Proxmox VE server may also become unresponsive.
This is because some AMD GPUs can’t properly reset after being passed to a VM. To fix this, the GPU must be reset correctly using the vendor-reset tool; see the vendor-reset project’s GitHub page and the related threads on the Proxmox VE forum for details.
How to Provide a vBIOS for the Passed GPU on a Proxmox VE Virtual Machine?
If the GPU is in the first slot of the motherboard, you might not be able to passthrough the GPU by default. This happens because some motherboards shadow the vBIOS.
The solution is to install the GPU in the second slot, extract its vBIOS, install it back in the first slot, and then pass through the GPU with the extracted vBIOS.
- To learn how to extract the vBIOS of your GPU, read this article.
Step 1: Store the vBIOS file in the /usr/share/kvm/ directory of your Proxmox VE server.
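For example, if the extracted vBIOS file is named gigabyte-nvidia-1050ti.bin (the filename used in the example below), copy it there:
$ cp gigabyte-nvidia-1050ti.bin /usr/share/kvm/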
Step 2: Configure your virtual machine to use the stored vBIOS file.
Since there is no way to specify the vBIOS file from the Proxmox VE web management UI, you will have to do it from the Proxmox VE shell.
Step 3: Find the Proxmox VE virtual machine configuration files in the /etc/pve/qemu-server/ directory. Each VM has a configuration file named <VM-ID>.conf.
For instance, use the following command to open the configuration file for VM ID 100:
$ nano /etc/pve/qemu-server/100.conf
Step 4: In the virtual machine configuration file, append romfile=<vBIOS-filename> to the hostpciX line that passes the GPU.
For example, if the vBIOS filename is gigabyte-nvidia-1050ti.bin and you passed the GPU in the first slot (slot 0), the line in 100.conf should look like this:
hostpci0: <PCI-ID-of-GPU>,x-vga=on,romfile=gigabyte-nvidia-1050ti.bin
Step 5: Save the virtual machine configuration file and start the virtual machine.
What to do if Some Apps Crash the Proxmox VE Windows Virtual Machine?
Apps like GeForce Experience or Passmark might crash Proxmox VE Windows virtual machines, or you might get a blue screen of death (BSOD). This is caused by the Windows VM trying to access model-specific registers (MSRs) that aren’t available.
The solution is to configure KVM on your Proxmox VE server to ignore MSRs.
Step 1: Open the /etc/modprobe.d/kvm.conf file using the nano text editor:
$ nano /etc/modprobe.d/kvm.conf
Step 2: Add the following line to the /etc/modprobe.d/kvm.conf file to ignore MSRs:
options kvm ignore_msrs=1
To also disable logging of MSR warning messages, add this line instead:
options kvm ignore_msrs=1 report_ignored_msrs=0
Step 3: Save the /etc/modprobe.d/kvm.conf file and update the initramfs of your Proxmox VE server.
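After rebooting, you can verify the setting took effect by reading the module parameter (an extra check, not part of the original steps); it should print Y:
$ cat /sys/module/kvm/parameters/ignore_msrs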
How to Solve HDMI Audio Crackling/Broken Problems on Proxmox VE Linux Virtual Machines?
If you’re getting bad audio quality on a Linux Proxmox VE virtual machine with a passed-through GPU, enable MSI (Message Signaled Interrupts) for the audio device on the VM.
Step 1: On the Linux Proxmox VE virtual machine, open the /etc/modprobe.d/snd-hda-intel.conf file using the nano text editor:
$ sudo nano /etc/modprobe.d/snd-hda-intel.conf
Step 2: Add the following line and save the file:
options snd-hda-intel enable_msi=1
Step 3: Reboot the Linux virtual machine:
$ sudo reboot
Step 4: After rebooting, check if MSI is enabled for the audio device:
$ sudo lspci -vv
If MSI is enabled, the audio device’s capabilities in the lspci output will include a line like MSI: Enable+.
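To check only the GPU’s audio function (assuming it shows up at 01:00.1 inside the VM; adjust the address as needed):
$ sudo lspci -vv -s 01:00.1 | grep -i msi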
How to Update Proxmox VE initramfs?
After changes to files in /etc/modules-load.d/ and /etc/modprobe.d/, update the Proxmox VE 8 initramfs with:
$ update-initramfs -u -k all
Reboot your Proxmox VE server for the changes to take effect:
$ reboot
How to Update Proxmox VE GRUB Bootloader?
After updating the Proxmox VE GRUB boot configuration file /etc/default/grub, update the GRUB bootloader for the changes to take effect.
To update the Proxmox VE GRUB bootloader with the new configurations, run the following command:
$ update-grub2
Reboot your Proxmox VE server for the changes to take effect:
$ reboot
This article discussed common PCI/PCIE and GPU passthrough issues on Proxmox VE and provided steps to resolve them.