GPU Passthrough on Proxmox: Complete Guide
Step-by-step GPU passthrough setup on Proxmox VE with VFIO, IOMMU groups, and driver blacklisting. Covers Nvidia code 43 fix and performance tuning.
GPU passthrough is one of those things that sounds straightforward until you actually try it. The idea is simple: take a physical GPU, yank it away from the hypervisor, and hand it directly to a virtual machine. The VM gets bare-metal GPU performance. No emulation, no overhead worth measuring. In practice, you'll wrestle with IOMMU groups, driver conflicts, and cryptic error messages before you get that first successful boot.
I've set this up dozens of times across different hardware — gaming VMs, Plex transcoding boxes, and a couple of ML training rigs. Here's everything I've learned, including the pitfalls that the wiki pages don't warn you about.
Why Bother with GPU Passthrough?
The use cases are more varied than most people think:
- Gaming VMs — Run a Windows VM with near-native GPU performance. Play games that have no Linux support without dual-booting.
- AI/ML workloads — Pass through an Nvidia GPU for CUDA-based training. Your Proxmox host keeps running other VMs while one gets exclusive GPU access.
- Plex/Jellyfin transcoding — Hardware transcoding with Quick Sync (Intel iGPU) or NVENC (Nvidia) inside an LXC or VM.
- Professional applications — CAD, video editing, anything that needs GPU acceleration.
The key insight: the VM gets direct hardware access. There's no virtual GPU in the middle. The guest OS installs native drivers and talks to the card just like it would on bare metal.
Hardware Requirements
Not every system supports passthrough. You need:
CPU with IOMMU support:
- Intel: VT-d (most Core and Xeon processors from the last decade)
- AMD: AMD-Vi (Ryzen, EPYC, most FX processors)
Motherboard with IOMMU enabled in BIOS. Some consumer boards have this disabled or hidden. Check your BIOS under CPU features, chipset configuration, or advanced settings. The setting might be labeled "VT-d," "IOMMU," or "SVM Mode" depending on the vendor.
A GPU in its own IOMMU group. This is the one that bites people. IOMMU groups determine which devices can be independently passed through. If your GPU shares a group with your SATA controller, you'd have to pass through both — which usually isn't what you want.
Two GPUs (or a CPU with integrated graphics). Your Proxmox host needs something to drive its console. If you're passing through your only GPU, you'll need to manage the host headlessly. This is doable but adds complexity.
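Before digging through BIOS menus, a quick check from any shell confirms the CPU advertises virtualization at all. This looks for vmx (Intel VT-x) or svm (AMD-V), which are separate from VT-d/AMD-Vi but a prerequisite in practice:

```shell
# vmx = Intel VT-x, svm = AMD-V. The IOMMU extensions (VT-d / AMD-Vi)
# don't appear in cpuinfo, but a CPU missing these flags is a non-starter.
grep -m1 -o -E 'vmx|svm' /proc/cpuinfo || echo "no virtualization flags found"
```

If this prints nothing but the fallback message, stop here — no amount of BIOS spelunking will help.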
Enabling IOMMU
GRUB Configuration
Edit the GRUB command line on your Proxmox host:
nano /etc/default/grub
Find the GRUB_CMDLINE_LINUX_DEFAULT line and add the IOMMU parameter:
For Intel CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
The iommu=pt flag enables pass-through mode for DMA mapping: devices that stay on the host get an identity mapping instead of full per-device translation, which avoids IOMMU overhead for hardware you aren't passing through.
Update GRUB and reboot:
update-grub
reboot
systemd-boot (ZFS root installs)
If you installed Proxmox with ZFS on root, you're probably using systemd-boot instead of GRUB. Edit the kernel command line here:
nano /etc/kernel/cmdline
Add intel_iommu=on iommu=pt (or amd_iommu=on) to the existing line, then refresh:
proxmox-boot-tool refresh
reboot
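Whichever bootloader you use, the quickest way to confirm the parameter actually made it onto the running kernel after the reboot is to read back the live command line:

```shell
# The running kernel's command line should now contain the IOMMU flags.
cat /proc/cmdline
grep -q -E 'intel_iommu=on|amd_iommu=on' /proc/cmdline \
  && echo "IOMMU parameter present" \
  || echo "IOMMU parameter missing - check bootloader config"
```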
Verifying IOMMU Is Active
After reboot, check that IOMMU is actually enabled:
dmesg | grep -e DMAR -e IOMMU
You should see output like:
[ 0.012544] DMAR: IOMMU enabled
[ 0.048234] DMAR: Host address width 39
[ 0.048235] DMAR: DMAR table found at 0x000000007a57e000
[ 0.061234] DMAR-IR: Queued invalidation will be enabled to support x2apic and target-mode interrupt remapping.
[ 0.094781] DMAR-IR: Enabled IRQ remapping in x2apic mode
If you see nothing or errors, double-check your BIOS settings. I've had boards where the setting was buried three menus deep under "North Bridge Configuration."
Checking IOMMU Groups
This is the make-or-break step. Run this script to see how your devices are grouped:
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
Save it as iommu-groups.sh, chmod +x, and run it. You're looking for output like:
IOMMU Group 1:
00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-H GT2 [UHD Graphics 630] [8086:3e9b] (rev 00)
IOMMU Group 13:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2060 Rev. A] [10de:1f08] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
01:00.2 USB controller [0c03]: NVIDIA Corporation TU106 USB 3.1 Host Controller [10de:1ada] (rev a1)
01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C UCSI Controller [10de:1adb] (rev a1)
Here, the RTX 2060 is in IOMMU Group 13 along with its audio controller, USB controller, and USB-C controller. That's fine — you'll pass through the entire group. The Intel iGPU is in its own group and will drive the host console.
The ACS Override Patch
If your GPU shares an IOMMU group with unrelated devices (common on consumer motherboards with a single PCIe root complex), you have two options:
- Move the GPU to a different PCIe slot that has its own IOMMU group
- Use the ACS override patch — this artificially splits IOMMU groups
For option 2, add to your kernel command line:
pcie_acs_override=downstream,multifunction
Fair warning: this weakens IOMMU isolation. In a homelab, it's an acceptable trade-off. In production with untrusted VMs, think twice. The patch essentially tells the kernel to treat each device as if it has ACS capability even when the hardware doesn't actually enforce it.
Blacklisting Host GPU Drivers
The host must not touch the GPU you're passing through. Blacklist the relevant drivers:
nano /etc/modprobe.d/blacklist.conf
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
blacklist snd_hda_intel
blacklist radeon
blacklist amdgpu
Only blacklist the drivers for the GPU you're passing through. If you're passing an Nvidia card and keeping an AMD card for the host, only blacklist the nvidia/nouveau modules, not radeon/amdgpu.
For snd_hda_intel, be careful — this might also be your onboard audio. If you need onboard audio on the host, skip this line and handle the GPU audio device with VFIO binding instead.
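After the next reboot, you can confirm that none of the blacklisted modules were loaded anyway — an empty grep result is what you want:

```shell
# No matches means no GPU drivers are loaded on the host.
lsmod | grep -E '^(nouveau|nvidia|radeon|amdgpu)' || echo "no GPU drivers loaded"
```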
VFIO Configuration
Now tell the VFIO driver to claim your GPU at boot, before any other driver can grab it.
First, get the device IDs from your IOMMU group output. For the RTX 2060 example above:
- GPU: 10de:1f08
- Audio: 10de:10f9
- USB: 10de:1ada
- USB-C: 10de:1adb
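Rather than copying IDs by hand, you can pull them straight out of lspci -n, which prints the bare vendor:device pairs for every function on the card (slot 01:00 is the example card above):

```shell
# lspci -n prints lines like "01:00.0 0300: 10de:1f08 (rev a1)";
# field 3 is the vendor:device ID. Join them comma-separated for vfio.conf.
lspci -n -s 01:00 | awk '{print $3}' | paste -sd, -
```

The output drops straight into the ids= option in the next step.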
Create the VFIO config:
nano /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1f08,10de:10f9,10de:1ada,10de:1adb disable_vga=1
softdep nvidia pre: vfio-pci
softdep nouveau pre: vfio-pci
The softdep lines ensure VFIO loads before any nvidia driver tries to claim the device. The disable_vga=1 option prevents the GPU from being used as a VGA device by the host.
Add the VFIO modules to load at boot:
nano /etc/modules
vfio
vfio_iommu_type1
vfio_pci
Update the initramfs and reboot:
update-initramfs -u -k all
reboot
Verify VFIO Claimed the GPU
After reboot:
lspci -nnk -s 01:00
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2060 Rev. A] [10de:1f08] (rev a1)
Subsystem: eVga.com. Corp. Device [3842:2167]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
Subsystem: eVga.com. Corp. Device [3842:2167]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
The critical line is Kernel driver in use: vfio-pci. If it says nvidia or nouveau instead, your blacklist or softdep configuration isn't working. Go back and double-check.
VM Configuration
Creating the VM
In the Proxmox web UI (or via qm create), set up your VM with these specific settings:
- Machine type: q35 — Required for proper PCIe passthrough. The old i440fx machine type only supports PCI, not PCIe.
- BIOS: OVMF (UEFI) — GPU passthrough with legacy BIOS is a nightmare. OVMF works reliably.
- EFI Disk — Add one when prompted. Store it on the same storage as your VM disk.
- CPU type: host — This passes through your actual CPU features. Don't use kvm64 or qemu64 — you'll lose performance and potentially break GPU driver installation.
Here's what the VM config looks like in /etc/pve/qemu-server/105.conf:
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host,hidden=1,flags=+pcid
efidisk0: local-zfs:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:01:00,pcie=1,rombar=1,x-vga=1
machine: q35
memory: 16384
meta: creation-qemu=9.0.2,ctime=1709742891
net0: virtio=A2:B4:C6:D8:E0:12,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: local-zfs:vm-105-disk-1,iothread=1,size=120G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=a1b2c3d4-e5f6-7890-abcd-ef1234567890
sockets: 1
tpmstate0: local-zfs:vm-105-disk-2,size=4M,version=v2.0
Adding the PCI Device
In the web UI: Hardware > Add > PCI Device. Select your GPU. Enable:
- All Functions — Passes through all functions (GPU, audio, USB) in one go
- PCI-Express — Use PCIe mode instead of legacy PCI
- ROM-Bar — Usually needed. Maps the GPU's option ROM.
- Primary GPU (x-vga) — If this is the VM's primary display output
If you're doing this via CLI:
qm set 105 -hostpci0 0000:01:00,pcie=1,rombar=1,x-vga=1
ROM Bar Issues
Some GPUs need a dumped ROM file to work correctly. Symptoms: VM hangs at boot, black screen, or the GPU shows up in Device Manager with an error.
Dump the ROM (do this before blacklisting the GPU, or from another machine):
cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/gpu-rtx2060.rom
echo 0 > rom
Then reference it in your VM config:
hostpci0: 0000:01:00,pcie=1,romfile=gpu-rtx2060.rom,x-vga=1
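A quick sanity check on the dump: every valid PCI option ROM starts with the signature bytes 55 aa, so inspecting the first two bytes catches a bad or empty dump (path matches the dump step above):

```shell
# A valid PCI option ROM begins with the signature bytes 0x55 0xAA.
od -A n -t x1 -N 2 /usr/share/kvm/gpu-rtx2060.rom
# expect: 55 aa
```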
The Nvidia Code 43 Fix
This is the single most common issue with Nvidia GPU passthrough. Nvidia drivers detect they're running inside a VM and throw Error 43 in Windows Device Manager. Nvidia did this intentionally to push people toward their GRID/vGPU licensing for virtual environments.
The fix is to hide the hypervisor from the guest. Add these to your VM config:
cpu: host,hidden=1,flags=+pcid
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmoxhv,kvm=off'
The hidden=1 flag tells QEMU to not advertise the hypervisor CPUID bit. The hv_vendor_id gives a custom Hyper-V vendor ID. The kvm=off hides KVM from the guest. Together, the Nvidia driver thinks it's running on bare metal.
As of recent Nvidia drivers (535+), this issue is less prevalent — Nvidia relaxed the restriction on consumer GPUs. But I still apply the fix because removing it later is easier than debugging why your GPU randomly throws code 43 after a driver update.
Performance Tuning
CPU Pinning
By default, the VM's vCPUs can float across any host CPU core. Pinning them to specific cores reduces latency and cache thrashing:
qm set 105 --affinity 0-7
This is equivalent to editing the config directly:
cpu: host,hidden=1,flags=+pcid
affinity: 0-7
This pins the VM to cores 0-7. On a system with hyperthreading, figure out which threads share physical cores:
lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0 0 0 0 0:0:0:0 yes
1 0 0 1 1:1:1:0 yes
2 0 0 2 2:2:2:0 yes
3 0 0 3 3:3:3:0 yes
4 0 0 4 4:4:4:0 yes
5 0 0 5 5:5:5:0 yes
8 0 0 0 0:0:0:0 yes
9 0 0 1 1:1:1:0 yes
10 0 0 2 2:2:2:0 yes
11 0 0 3 3:3:3:0 yes
12 0 0 4 4:4:4:0 yes
13 0 0 5 5:5:5:0 yes
Cores 0 and 8 share the same physical core. For best performance, pin to both threads of each physical core: affinity: 0-5,8-13 gives you 6 physical cores with HT.
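With the VM running, you can read the affinity back to confirm the pinning took effect. Proxmox writes the QEMU PID to /run/qemu-server/<vmid>.pid, and taskset can query the mask of that process (the vCPU threads inherit it):

```shell
# Shows the allowed-CPU list for the VM's main QEMU process.
taskset -cp "$(cat /run/qemu-server/105.pid)"
```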
Hugepages
Hugepages reduce TLB misses for memory-intensive workloads. For a 16 GB VM:
# Reserve 8192 2MB hugepages (= 16 GB)
echo 8192 > /proc/sys/vm/nr_hugepages
Make it persistent:
nano /etc/sysctl.conf
vm.nr_hugepages = 8192
Then enable hugepages in the VM config. Note that the value is the page size in megabytes, not a count: use hugepages: 2 for the 2 MB pages allocated above, or hugepages: 1024 if you reserved 1 GB pages instead.
hugepages: 2
In my experience, hugepages make a noticeable difference for gaming VMs (fewer micro-stutters) but the improvement for general desktop use is marginal.
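You can verify the pages were actually reserved — and later, that the running VM consumed them — from /proc/meminfo:

```shell
# HugePages_Total should show 8192 after the sysctl is applied;
# HugePages_Free drops once the VM starts and maps its memory.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```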
I/O Optimization
Use VirtIO for disk and network if your guest supports it:
scsi0: local-zfs:vm-105-disk-1,iothread=1,size=120G,ssd=1,discard=on
scsihw: virtio-scsi-single
net0: virtio=A2:B4:C6:D8:E0:12,bridge=vmbr0
The iothread=1 gives the disk its own thread, reducing contention. ssd=1 tells the guest it's on an SSD (enables TRIM). discard=on passes TRIM through to the host storage.
Common Pitfalls
GPU audio device not passed through. The GPU and its HDMI/DP audio controller are usually in the same IOMMU group. You must pass through both, or the GPU won't initialize correctly. This catches people who try to pass through only the VGA function.
VM won't start: "IOMMU group not viable." Another device in the IOMMU group is bound to a non-VFIO driver. Check lspci -nnk for all devices in the group and ensure they're all using vfio-pci.
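A quick loop over the group makes the offender obvious (13 is the group number from the example above — adjust to yours):

```shell
# List every device in IOMMU group 13 and the driver it's bound to.
# Every entry should say vfio-pci before the VM will start.
for d in /sys/kernel/iommu_groups/13/devices/*; do
    if [ -e "$d/driver" ]; then
        echo "${d##*/}: $(basename "$(readlink "$d/driver")")"
    else
        echo "${d##*/}: no driver bound"
    fi
done
```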
Black screen with cursor after Windows boot. Usually a ROM issue. Try providing a dumped ROM file, or toggle the ROM-Bar setting.
Host freezes when VM starts. You're likely trying to pass through the GPU that the host is using for its console. Switch to SSH access and ensure the host is using a different GPU or headless mode.
Performance is worse than expected. Check CPU pinning, make sure you're using cpu: host and not an emulated CPU type, verify hugepages are active, and ensure VirtIO drivers are installed in the guest.
GPU doesn't reset properly between VM reboots. Some AMD GPUs (looking at you, Navi) have a reset bug. The GPU can't be cleanly released and reclaimed without a host reboot. The vendor-reset kernel module (vendor-reset-dkms) helps for some cards. For others, it's a known hardware limitation.
Wrapping Up
GPU passthrough on Proxmox is absolutely worth the setup effort once you get it working. The initial configuration is the hard part — once your IOMMU groups are sorted and VFIO is claiming the right devices, the VM side is straightforward.
My recommendation: start with a clean Proxmox install, verify IOMMU groups before buying hardware, and keep your first passthrough VM simple. Get the GPU working before adding CPU pinning and hugepages. Layer complexity gradually, and you'll have a much easier time debugging when something doesn't work.
If you're planning to pass through to a Windows VM, check out the companion guide on setting up Windows 11 with VirtIO drivers — getting the guest configured properly is the other half of this equation.