Migrate Bare-Metal TrueNAS to Proxmox Without Data Loss
Move your TrueNAS bare-metal installation to a Proxmox VM without touching your ZFS pools. HBA passthrough, disk passthrough, and pool import — step by step.
The safest way to move a running bare-metal TrueNAS machine to a Proxmox VM is to pass your storage controller directly to the guest — either an HBA via PCIe passthrough or individual drives via disk passthrough. Done right, TrueNAS imports its existing ZFS pools on first boot inside the VM, your data stays intact, and Proxmox never touches the pool metadata.
Key Takeaways
- HBA passthrough: The cleanest path — pass the entire controller to the VM so Proxmox never sees the pool drives.
- Disk passthrough: Works when you can't pass the full HBA; always use /dev/disk/by-id/ paths, never /dev/sdX.
- Export pools first: If Proxmox has auto-imported your ZFS pools on the host, export them before attaching drives to the VM.
- TrueNAS SCALE 24.10: Runs as a Proxmox VM with minor config tweaks; TrueNAS CORE works identically.
- Risk window: Data loss is most likely during the brief moment when drives are attached to both host and VM — don't let that happen.
Why Virtualize TrueNAS Instead of Running It Bare Metal
Running TrueNAS on bare metal is fine until you want to share the server. A dedicated NAS machine locks up hardware that could also run VMs, containers, and backup jobs. Virtualizing on Proxmox gives you VM-level snapshots before TrueNAS updates, flexible resource allocation without touching hardware, and one unified management UI for your NAS and your homelab VMs.
The tradeoff: the storage controller must be either passed through to the VM or replaced with virtual block devices. If your pools live on a dedicated PCIe HBA, passthrough is straightforward. If they're on motherboard SATA ports, you'll need individual disk passthrough — which introduces IOMMU group constraints covered below.
What You Need Before You Start
- CPU with VT-d (Intel) or AMD-Vi enabled in UEFI
- At least 16 GB RAM (TrueNAS SCALE 24.10 enforces this minimum)
- A separate boot drive for Proxmox — not one of your pool drives
- TrueNAS SCALE 24.10.2 installer ISO
Confirm IOMMU is active on the Proxmox host:
dmesg | grep -e DMAR -e IOMMU | head -20
If the output is empty, enable it in GRUB:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# AMD: replace intel_iommu=on with amd_iommu=on
update-grub && reboot
The iommu=pt flag reduces overhead for devices the host isn't passing through — relevant when Proxmox itself is using NVMe while pool drives go straight to the TrueNAS VM.
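After the reboot, a quick way to confirm remapping is actually active is to count the IOMMU groups; an empty directory means the kernel parameters didn't take effect:
ls /sys/kernel/iommu_groups/ | wc -l
# Non-zero means IOMMU is active; 0 means recheck the GRUB line and the UEFI setting.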
How to Stop Proxmox from Importing Your ZFS Pools
This is the most common gotcha. When Proxmox boots with the pool drives attached, ZFS auto-imports every pool it finds — including your TrueNAS pools. If the host holds a pool while the VM tries to import it, you get a failed import at best and silent metadata corruption at worst.
Check what's already imported:
zpool status
If your TrueNAS pool names appear here, clear them from the cache file and export them:
zpool set cachefile=none tank
zpool export tank
Run this for every pool being migrated. cachefile=none removes the pool from /etc/zfs/zpool.cache so it won't auto-import on the next reboot. Pass those drives to the TrueNAS VM immediately — if you reboot Proxmox first, ZFS will import them again.
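If more than one pool is involved, a small loop keeps the two steps together; the pool names here are placeholders for your own:
for pool in tank backup; do
  zpool set cachefile=none "$pool"
  zpool export "$pool"
done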
Option 1: Pass Through an HBA Controller (Recommended)
For drives connected to a dedicated PCIe HBA — any IT-mode card such as the LSI 9207-8i or 9300-8i — this is the cleanest approach. The host never sees the drives.
Find the HBA's PCI address:
lspci -nn | grep -Ei "lsi|megaraid|sas|storage controller"
# Example: 02:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS9207-8i [1000:0097]
Verify it's alone in its IOMMU group:
find /sys/kernel/iommu_groups/ -name "0000:02:00.0" 2>/dev/null
# Output: /sys/kernel/iommu_groups/14/devices/0000:02:00.0
ls /sys/kernel/iommu_groups/14/devices/
If the group contains only the HBA (a co-located PCIe root port is fine), add it to the VM after creation:
qm set 110 --hostpci0 02:00.0,pcie=1,rombar=0
pcie=1 is required for modern HBAs. rombar=0 prevents a boot hang seen with some LSI cards inside QEMU.
Option 2: Pass Through Individual Disks
When the SATA controller is part of the motherboard chipset and shared with the Proxmox boot drive, pass individual drives instead. Never use /dev/sdX — those letters reassign at boot. Use stable by-id paths:
ls -la /dev/disk/by-id/ | grep -v part | grep ata
# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567 -> ../../sdb
# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7654321 -> ../../sdc
After exporting the pool, attach each drive:
qm set 110 --scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567
qm set 110 --scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7654321
Repeat for every drive. A 6-drive RAIDZ2 means six qm set commands. This is worth it only if you can't add a dedicated HBA. For long-term use, a used IT-mode card at $20–$40 on eBay is the cleaner investment.
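If you'd rather not type each command by hand, a loop over the by-id entries works too. The WD40EFRX pattern matches the example drives above; adjust it so it catches only your pool drives and never the Proxmox boot disk:
i=1   # scsi0 stays reserved for the VM's OS disk
for disk in /dev/disk/by-id/ata-WDC_WD40EFRX-*; do
  case "$disk" in *-part*) continue ;; esac   # skip partition entries
  qm set 110 --scsi${i} "$disk"
  i=$((i+1))
done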
Create the TrueNAS VM
qm create 110 \
--name truenas-scale \
--memory 32768 \
--balloon 0 \
--cores 4 \
--sockets 1 \
--cpu host \
--machine q35 \
--bios ovmf \
--net0 virtio,bridge=vmbr0 \
--scsihw virtio-scsi-single \
--ostype l26
--balloon 0 prevents Proxmox from reclaiming RAM that TrueNAS is actively using as ZFS ARC cache.
Add the supporting disks and installer:
# EFI disk for OVMF boot
qm set 110 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0
# OS boot disk — TrueNAS uses a mirrored boot pool, 32 GB minimum
qm set 110 --scsi0 local-lvm:32,format=raw,iothread=1,ssd=1
# Installer ISO
qm set 110 --ide2 local:iso/TrueNAS-SCALE-24.10.2.iso,media=cdrom
# Boot order: ISO first, then OS disk
qm set 110 --boot order="ide2;scsi0"
The IOMMU mechanics are identical to GPU passthrough on Proxmox. If you've done GPU passthrough before, the hostpci setup will feel familiar — you're just passing storage instead of video. Now attach the HBA or individual disks from the previous sections.
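Before the first boot, confirm everything you set actually landed in the VM config:
qm config 110 | grep -E "hostpci|scsi|boot"
# Expect the hostpci0 entry (or the individual scsiN lines) plus the boot order.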
Install TrueNAS and Import Your Existing Pools
Boot the VM. The TrueNAS SCALE 24.10.2 installer completes in under five minutes on NVMe. Select the OS boot disk (scsi0) as the installation target — not the pool drives.
After TrueNAS reboots into the dashboard, go to Storage → Import Pool. TrueNAS scans the attached drives and surfaces your existing ZFS pool. Select it and click Import. For a 20 TB pool this finishes in under 30 seconds — no data moves, only pool metadata is recognized.
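If the pool doesn't appear in the import dialog, check from the TrueNAS shell whether the VM can see it at all. Running zpool import with no arguments only lists importable pools; still do the actual import through the UI so TrueNAS records the pool in its own configuration:
zpool import
# Lists exported pools visible on the attached drives, with vdev layout and state.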
Verify from the TrueNAS shell or SSH:
zpool status
zfs list -r
zpool history tank | tail -20
All vdevs should show ONLINE. If your most recent scrub event appears in the history output, the pool imported from the correct state.
Reconfigure Shares and Verify Before Decommissioning
The IP and interface name change because TrueNAS now has a virtio NIC. In the TrueNAS UI, go to Network → Interfaces, set a static IP, then update any SMB or NFS bindings that reference the old interface. Re-enable periodic scrub schedules under Data Protection — they don't carry over with a pool import.
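From any Linux client, two quick commands confirm the shares answer on the new address; the IP and username below are placeholders:
smbclient -L //192.168.1.50 -U youruser   # lists SMB shares
showmount -e 192.168.1.50                 # lists NFS exports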
If you have Proxmox Backup Server pointing at a TrueNAS share, update the storage definition with the new IP. The datastore contents are unchanged.
Before wiping the old bare-metal install, run a full scrub inside the VM:
zpool scrub tank
watch zpool status
Expect roughly 1 hour per 10 TB on spinning disks. Also check SMART data on all drives to confirm nothing new appeared during the migration. With HBA passthrough, run the check inside the TrueNAS VM; the drives sit on the real controller, so SMART works normally. With individual disk passthrough, run it on the Proxmox host instead, since the VM's emulated SCSI disks typically don't expose SMART attributes:
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
echo "=== $disk ==="
smartctl -a "$disk" | grep -E "Reallocated|Pending|Uncorrectable"
done
Keep the bare-metal server powered off but intact for at least a week. If a missing share or misconfigured permission surfaces, you'll want that fallback.
Conclusion
With an HBA or pool drives passed through directly, TrueNAS imports existing ZFS 2.2.x pools intact and your data never leaves the disks. The critical steps are exporting pools from the Proxmox host before attaching them to the VM, using /dev/disk/by-id/ for individual disk passthrough, and running a post-import scrub to confirm clean pool state. Next, consider hardening the Proxmox host with firewall rules and fail2ban — the NAS is only as secure as the hypervisor it runs on.