Virtualize TrueNAS SCALE on Proxmox: Complete Setup

Step-by-step guide to running TrueNAS SCALE inside a Proxmox VM with HBA passthrough—get full ZFS NAS flexibility combined with VM snapshots and live migration.


If you've spent any time running a homelab, you've heard the debate: TrueNAS on bare metal or virtualized? For years, the conventional wisdom was "bare metal only" due to concerns about storage reliability and ZFS performance. But with modern hardware and Proxmox's PCIe passthrough capabilities, virtualizing TrueNAS SCALE is not just viable—for many setups, it's the better choice.

Why Virtualize TrueNAS SCALE?

Running TrueNAS SCALE inside a Proxmox VM gives you operational capabilities that bare metal simply cannot match:

  • VM snapshots: Capture the entire TrueNAS state before any update. Roll back in seconds if something breaks.
  • Live migration: Move TrueNAS to another Proxmox node in your cluster without downtime.
  • Isolated failure domain: A TrueNAS misbehavior or kernel panic can't take down your entire hypervisor host.
  • Simplified recovery: Restore from Proxmox Backup Server to any cluster node after hardware failure.

The trade-off is a more involved initial setup. You'll need to pass your HBA (Host Bus Adapter) through to the VM so TrueNAS gets direct, unmediated access to the physical drives—which ZFS requires for data integrity guarantees. This guide walks through the full process.

Hardware Requirements and Planning

Before starting, take stock of your hardware.

Minimum VM specs:

  • 4+ vCPUs (TrueNAS recommends at least 2 dedicated cores)
  • 8GB RAM for the VM (16GB+ strongly recommended for a useful ZFS ARC cache)
  • A 16GB+ virtual disk for the TrueNAS OS install

Storage controller requirements:

A dedicated HBA is strongly preferred over sharing the same controller that holds your Proxmox OS drive. The HBA must be in its own IOMMU group—or share one only with devices you're comfortable also passing through—for PCIe passthrough to work cleanly.

HBA cards known to work well with passthrough:

  • LSI 9207-8i or 9300-8i (flashed to IT mode)
  • Broadcom HBA 9400-8i
  • Dell H310 (flashed to IT mode)
  • Any card the TrueNAS hardware compatibility list endorses

IT mode (Initiator Target) is critical. IR (RAID) mode hides individual drives behind a RAID abstraction, which defeats ZFS's need to see raw physical disks.
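If LSI's flashing utility happens to be installed on the Proxmox host, you can confirm the firmware mode before setting up passthrough. This is a sketch, not a required step: sas2flash covers SAS2-generation cards (9207-8i, H310) while sas3flash covers SAS3 cards (9300-8i), and availability and output format vary by card and OS.

```shell
# List controller firmware details; IT-mode firmware includes "IT"
# in the reported firmware product ID. Use sas3flash for SAS3 cards.
sas2flash -list
```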

Enabling IOMMU on Your Proxmox Host

IOMMU is the hardware feature that enables PCIe passthrough. It must be enabled both in your BIOS/UEFI and in the OS kernel.

First, enable VT-d (Intel) or AMD-Vi (AMD) in your BIOS. The exact menu location varies by motherboard but is usually found under CPU configuration or advanced chipset settings.

For GRUB-based Proxmox installs:

nano /etc/default/grub

Modify the GRUB_CMDLINE_LINUX_DEFAULT line:

# Intel
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# AMD
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

Apply and reboot:

update-grub && reboot

For systemd-boot (Proxmox VE 9+ on UEFI systems):

nano /etc/kernel/cmdline

Append intel_iommu=on iommu=pt to the existing parameters, then:

proxmox-boot-tool refresh && reboot

After reboot, confirm IOMMU is active:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
# Intel hosts log "DMAR: IOMMU enabled"; AMD hosts log AMD-Vi initialization messages

Checking IOMMU Groups

This step is non-negotiable. PCIe passthrough requires your HBA to be in its own IOMMU group, or share a group only with devices you're passing through together. Run this script to map all groups:

for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"
done

Look for your HBA in the output. Ideal output looks like this—the HBA alone in its group:

IOMMU Group 14 02:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 [1000:0097]

If your HBA shares a group with other devices you need (like a NIC), you may need the pcie_acs_override=downstream,multifunction kernel parameter to split groups. This has security implications and should only be used on trusted hardware in non-multi-tenant environments.
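If you do decide the override is acceptable, it is appended to the same kernel command line as the IOMMU flags. A GRUB example, with the security caveats above:

```shell
# /etc/default/grub — only on trusted, single-tenant hardware
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
```

Run update-grub and reboot afterwards, then re-check the IOMMU group listing.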

Loading VFIO Kernel Modules

VFIO (Virtual Function I/O) is the Linux kernel framework that handles device passthrough. Load it at boot:

echo vfio >> /etc/modules
echo vfio_iommu_type1 >> /etc/modules
echo vfio_pci >> /etc/modules

update-initramfs -u -k all

Binding the HBA to VFIO

The Proxmox host must release the HBA to VFIO so the VM can claim it exclusively.

Get the PCI vendor and device IDs for your HBA:

lspci -n | grep 02:00.0
# Example output: 02:00.0 0107: 1000:0097

Bind those IDs to the VFIO driver at boot:

echo "options vfio-pci ids=1000:0097" > /etc/modprobe.d/vfio.conf

If the HBA currently uses a kernel driver like mpt3sas, blacklist it so the host releases the card at boot (note: this also disables any other controller on the host that relies on the same module):

echo "blacklist mpt3sas" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u -k all
reboot

After reboot, confirm the HBA is now owned by VFIO:

lspci -k -s 02:00.0
# Should show: Kernel driver in use: vfio-pci

Creating the TrueNAS SCALE VM

Download the TrueNAS SCALE ISO from the official TrueNAS site and upload it to your Proxmox ISO storage (local storage by default).

Create the VM with these settings:

qm create 300 \
  --name truenas-scale \
  --memory 16384 \
  --cores 4 \
  --sockets 1 \
  --cpu host \
  --bios ovmf \
  --machine q35 \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0 \
  --net0 virtio,bridge=vmbr0 \
  --ostype l26 \
  --scsihw virtio-scsi-pci

Key choices explained:

  • --cpu host — passes host CPU flags through to the VM, improves ZFS ARC and crypto performance
  • --bios ovmf — UEFI boot, required for modern TrueNAS SCALE
  • --machine q35 — required for proper PCIe passthrough support

Add the OS boot disk and attach the installation ISO:

# 16GB OS disk (TrueNAS minimum)
qm set 300 --scsi0 local-lvm:16,ssd=1

# Attach installer
qm set 300 --cdrom local:iso/TrueNAS-SCALE-24.10.2.iso
qm set 300 --boot order=ide2

Attaching the HBA via PCIe Passthrough

With the HBA bound to vfio-pci, attach it to the VM:

qm set 300 --hostpci0 02:00.0,pcie=1,rombar=0

  • pcie=1 — enables PCIe mode (required for most modern HBAs)
  • rombar=0 — disables the Option ROM BAR, which prevents boot hangs on many cards

If your HBA presents multiple PCI functions (e.g., 02:00.0 and 02:00.1), pass them together:

qm set 300 --hostpci0 "02:00.0;02:00.1,pcie=1,rombar=0"

Installing TrueNAS SCALE

Start the VM and open its console in the Proxmox web UI. The TrueNAS SCALE installer is straightforward:

  1. Select Install/Upgrade
  2. Choose the virtual SCSI disk as the boot target (not any HBA-connected drives)
  3. Set the admin password
  4. Complete the install and allow the reboot

After installation completes, detach the ISO and set the boot order to the SCSI disk:

qm set 300 --cdrom none
qm set 300 --boot order=scsi0

Verifying Drive Passthrough

Once TrueNAS boots, navigate to Storage > Disks in the TrueNAS web UI (accessible at the IP shown on the console). You should see every physical drive connected to your HBA listed individually with their model, serial number, and size.

If drives are missing, troubleshoot in this order:

  1. Confirm lspci -k -s 02:00.0 still shows vfio-pci on the Proxmox host
  2. Verify the hostpci0 entry in /etc/pve/qemu-server/300.conf matches the correct PCI address
  3. Check dmesg inside the TrueNAS VM console for HBA detection messages

Creating Your ZFS Storage Pool

With drives visible in TrueNAS, navigate to Storage > Create Pool:

  1. Name the pool (e.g., tank)
  2. Add your data drives and select a topology
  3. Enable encryption if required

Topology guidance:

  • 2 drives: Mirror (equivalent to RAID 1)
  • 3–5 drives: RAIDZ1 (1-drive fault tolerance)
  • 6–8 drives: RAIDZ2 (2-drive fault tolerance, recommended)
  • 8+ drives: Consider RAIDZ2 or multiple RAIDZ1 vdevs
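As a rough sanity check when planning, RAIDZ usable capacity is roughly (drives − parity) × drive size, before ZFS metadata and padding overhead. The drive count and size below are example values:

```shell
# Rough usable capacity of a RAIDZ vdev, ignoring ZFS overhead.
DRIVES=6      # number of drives in the vdev (example value)
SIZE_TB=8     # capacity per drive, in TB (example value)
PARITY=2      # 1 for RAIDZ1, 2 for RAIDZ2, 3 for RAIDZ3
echo "$(( (DRIVES - PARITY) * SIZE_TB )) TB usable (approximate)"
# 32 TB usable (approximate)
```

Expect real-world usable space to come in somewhat lower once ZFS overhead and the recommended free-space headroom are accounted for.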

For spinning drives, a 1M recordsize (a per-dataset property) significantly improves sequential throughput for media and backup workloads.
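Setting it from the TrueNAS shell looks like the following; the pool and dataset names here are placeholders for your own:

```shell
# 1M records suit large sequential files (media, backups);
# leave databases and VM-image datasets at the default.
zfs set recordsize=1M tank/media

# Confirm the property took effect
zfs get recordsize tank/media
```

The same property is also exposed in the TrueNAS UI under each dataset's advanced options.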

Network Configuration

Assign TrueNAS a static IP from Network > Interfaces. For a dedicated storage network or VLAN, add a second virtio NIC to the VM:

qm set 300 --net1 virtio,bridge=vmbr1,tag=20

Then configure the interface in TrueNAS under Network > Interfaces with your storage VLAN IP. Keeping NAS traffic on a dedicated interface or VLAN prevents it from saturating your management network during large transfers.

Performance Tuning

ZFS ARC Size

Older TrueNAS SCALE releases capped ZFS ARC at half of available RAM by default; since the 24.04 (Dragonfish) release, ARC can grow to most of the VM's memory. Either way, a 16GB VM allocation leaves a substantial read cache for frequently accessed datasets. Because SCALE runs on Linux, the ARC ceiling is the zfs_arc_max OpenZFS module parameter (in bytes), not the FreeBSD-style vfs.zfs.arc.max sysctl. One way to pin it is a post-init command (System Settings > Advanced > Init/Shutdown Scripts):

echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

This sets a hard 8GB ceiling and keeps ARC growth predictable within the VM's fixed memory allocation.
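The tunable's value is in bytes; a quick check that 8589934592 is exactly 8 GiB:

```shell
# 8 GiB expressed in bytes: 8 * 1024^3
echo $(( 8 * 1024 * 1024 * 1024 ))
# 8589934592
```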

vCPU Pinning

For consistent NAS throughput, pin the TrueNAS VM's vCPUs to dedicated physical cores so they don't compete with other VMs during sustained I/O:

# Pin VM 300's vCPUs to physical cores 4-7
qm set 300 --affinity 4-7

Check your CPU topology with lscpu to ensure you're pinning cores from the same NUMA node as your HBA's PCIe slot for lowest latency.
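One way to check this on the Proxmox host, assuming the HBA sits at 02:00.0 as in the earlier examples:

```shell
# NUMA node of the HBA's PCIe slot (-1 or 0 on single-socket systems)
cat /sys/bus/pci/devices/0000:02:00.0/numa_node

# Which cores belong to which NUMA node
lscpu | grep -i numa
```

On a single-socket host this check is moot; on dual-socket boards, pin the VM to cores on the node the first command reports.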

Backup Strategy

One of the biggest advantages of this setup is layered protection that bare-metal TrueNAS simply can't offer.

Layer 1 — Pre-update VM snapshots: Before any TrueNAS update, snapshot the VM state:

qm snapshot 300 pre-update-$(date +%Y%m%d) --vmstate 1

Rolling back takes under 30 seconds if the update causes issues. Delete the snapshot once you've confirmed the update is stable.
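If the update does misbehave, the rollback and cleanup look like this (the snapshot name is whatever the date-stamped command above produced; 20250101 is a placeholder):

```shell
# Restore the pre-update state, including RAM contents (--vmstate 1 above)
qm rollback 300 pre-update-20250101

# Once the update is confirmed stable, remove the snapshot
qm delsnapshot 300 pre-update-20250101
```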

Layer 2 — Proxmox Backup Server: Schedule daily backups of VM 300. Note that PBS will back up the TrueNAS OS disk (the virtio virtual disk), not the HBA-attached data drives—those are handled by TrueNAS itself.

Layer 3 — TrueNAS ZFS snapshots: Configure automatic ZFS snapshot schedules in TrueNAS under Data Protection > Periodic Snapshot Tasks. Hourly snapshots with a 24-hour retention window protect against accidental deletions.

Layer 4 — Offsite ZFS replication: Use TrueNAS's built-in Replication Tasks to push ZFS snapshots to a remote TrueNAS instance or any ZFS-capable server reachable over SSH.

Common Issues and Fixes

VM fails to start after Proxmox kernel update: A kernel update can change IOMMU group assignments. Re-run the IOMMU group listing script and verify the HBA is still in its own group and still bound to vfio-pci.

TrueNAS sees drives but pool import fails: This usually means the pool was previously imported on a different system or in a conflicting state. In the TrueNAS shell run zpool import to list importable pools and follow the force-import procedure if needed.
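From the TrueNAS shell, the check and force import look like this (pool name tank assumed from the pool-creation example earlier):

```shell
# List pools visible to this system but not currently imported
zpool import

# Force-import a pool that was last used on another host (use with care;
# make sure the other system no longer has it imported)
zpool import -f tank
```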

Poor NFS/SMB performance: First check that you're using a virtio NIC (not e1000). Then verify jumbo frames are configured consistently end-to-end if you're using MTU 9000 on your storage network.
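A quick end-to-end jumbo frame test from any client on the storage network, where 192.168.20.10 stands in for your NAS IP (8972 = 9000 minus 28 bytes of IP and ICMP headers):

```shell
# -M do forbids fragmentation, so this fails if any hop's MTU is below 9000
ping -c 3 -M do -s 8972 192.168.20.10
```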

Conclusion

Virtualizing TrueNAS SCALE on Proxmox is a genuinely better architecture for most homelab and small-business NAS deployments. HBA passthrough ensures TrueNAS maintains the direct hardware relationship ZFS depends on, while the VM layer adds snapshot-based rollbacks, cluster-level migration, and centralized backup capabilities that no bare-metal setup can match. The IOMMU and VFIO configuration requires attention to detail up front, but the operational payoff—being able to snapshot before any TrueNAS update and roll back in seconds—makes the setup effort worth it many times over.

Written by

Proxmox Pulse

Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.
