Proxmox as a NAS: Storage Pitfalls and Best Practices
Discover critical ZFS and storage pitfalls to avoid when using Proxmox VE as a NAS host, protecting your data from common hypervisor-NAS conflicts and misconfigurations.
The idea is tempting: you've already got Proxmox running, you have ZFS configured for your VMs, and you need network storage. Why not export those ZFS datasets directly as NFS or SMB shares from the Proxmox host? It sounds like a way to cut out unnecessary complexity, but this path is littered with subtle pitfalls that can corrupt your data, destabilize your hypervisor, or leave you with backups that don't actually work. Here's what you need to know before combining these roles—and how to do it safely if you decide to proceed.
The Core Problem: Conflicting Roles
Proxmox VE is designed as a hypervisor first. Its ZFS integration is built around providing storage for virtual machines and containers—not for serving files to the network. When you add a NAS role to the same system, you're asking the OS to do two jobs with different requirements, different failure modes, and different optimization needs.
The hypervisor needs ZFS to be stable, predictable, and available for VM disk I/O. A NAS needs ZFS optimized for large sequential reads, SMB/NFS export, and user-facing file operations. These aren't always compatible goals, and the conflicts show up in ways that are hard to diagnose.
ZFS ARC Memory Conflicts
The most immediate problem is ZFS ARC (Adaptive Replacement Cache). ZFS uses RAM as a read cache and aggressively claims as much RAM as the OS will allow. On a dedicated NAS, this is exactly what you want. On a Proxmox hypervisor, it's a disaster waiting to happen.
If ZFS ARC consumes too much RAM, your VMs start getting starved. Balloon drivers kick in, VMs begin swapping, and performance collapses. Worse, this can happen gradually as your NAS workload grows—your homelab works fine for months, then mysteriously slows down as your media library fills up.
Setting Proper ARC Limits
You must explicitly limit ZFS ARC on a Proxmox host that's also serving NAS workloads. A common rule of thumb: allocate at most 25% of total RAM to ARC, leaving headroom for VMs and the hypervisor itself.
# Check current ARC size
cat /proc/spl/kstat/zfs/arcstats | grep "^size"
# Or use arc_summary for a full breakdown
arc_summary | grep -E "ARC|Size"
To set a permanent ARC limit, create or edit /etc/modprobe.d/zfs.conf:
# Limit ARC to 4GB on a 32GB system
# Value is in bytes: 4 * 1024^3 = 4294967296
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# Apply without rebooting
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# Rebuild initramfs so it persists
update-initramfs -u
The sysfs write takes effect immediately but doesn't survive a reboot on its own—the modprobe.d entry handles persistence.
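A quick way to confirm the new ceiling is in place is to read the module parameter back and watch ARC usage settle under it (the 4 GB figure matches the example above):
# Confirm the runtime ARC limit matches what you set
cat /sys/module/zfs/parameters/zfs_arc_max
# Watch actual ARC usage settle under the new limit
arc_summary | grep -i "arc size"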
Pool Configuration for Mixed Workloads
If you're using the same ZFS pool for VM storage and NAS shares, you're sharing IOPS between two very different workload profiles. VM I/O is typically random small reads and writes (8–64KB). NAS media storage I/O is typically sequential large reads (media streaming, file transfers).
These workloads have conflicting optimal ZFS block sizes. VM disks perform best with small blocks (on a zfspool storage, Proxmox provisions them as zvols whose volblocksize is typically 8K–16K), while NAS media storage performs best with a large recordsize (recordsize=1M).
The solution is separate datasets with appropriate record sizes:
# Create a dedicated dataset for NAS shares with large record size
zfs create -o recordsize=1M rpool/nas-data
# Your VM storage dataset keeps a smaller record size
# (Proxmox typically manages rpool/data for VM disks)
# Create specific NAS shares as child datasets
zfs create rpool/nas-data/media
zfs create rpool/nas-data/backups
zfs create rpool/nas-data/documents
# Verify record sizes
zfs get recordsize rpool/nas-data/media
Network Share Configuration Pitfalls
Proxmox VE doesn't have a built-in SMB/NFS server GUI—that's not its job. You'll need to install and configure Samba or NFS kernel server directly on the Debian host, and this introduces its own risks.
NFS Export from the Proxmox Host
NFS is straightforward to configure but has a critical gotcha: never export the same ZFS dataset that Proxmox is already using as a storage backend. If a dir or zfspool storage backend in Proxmox overlaps path-wise with your NFS export, you can have both Proxmox and the NFS server writing to the same paths simultaneously.
# Install NFS server
apt install nfs-kernel-server
# Edit /etc/exports — only export dedicated NAS datasets
cat >> /etc/exports << 'EOF'
/rpool/nas-data/media 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/rpool/nas-data/documents 192.168.1.0/24(rw,sync,no_subtree_check)
EOF
# Apply exports
exportfs -ra
# Enable and start
systemctl enable --now nfs-kernel-server
Double-check your Proxmox storage configuration at Datacenter → Storage and verify that none of the directory paths overlap with what you're exporting.
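A quick way to cross-check from the shell is to compare the configured storage backends against what NFS is exporting (pvesm and /etc/pve/storage.cfg are standard on any Proxmox VE host):
# List Proxmox storage backends and their paths or pools
cat /etc/pve/storage.cfg
pvesm status
# Compare against what the NFS server is actually exporting
exportfs -v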
Samba Configuration
For Windows-compatible SMB shares, the Proxmox root user should not own the NAS data directories. Create a dedicated unprivileged system user:
# Install Samba
apt install samba
# Create a dedicated NAS user (no shell, no home directory)
useradd -M -s /usr/sbin/nologin nasuser
smbpasswd -a nasuser
# Set ownership of NAS datasets
chown -R nasuser:nasuser /rpool/nas-data/media
# Add to /etc/samba/smb.conf
cat >> /etc/samba/smb.conf << 'EOF'
[media]
path = /rpool/nas-data/media
valid users = nasuser
read only = no
browseable = yes
create mask = 0664
directory mask = 0775
EOF
systemctl restart smbd nmbd
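Before pointing clients at the share, it's worth validating the configuration. testparm ships with Samba, and smbclient (from the separate smbclient package, if installed) can confirm the share is visible:
# Check smb.conf for syntax errors and print the effective configuration
testparm -s
# List shares as the NAS user to confirm [media] is exported
smbclient -L localhost -U nasuser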
Snapshot and Backup Conflicts
This is where things get genuinely dangerous. Proxmox's backup tooling (vzdump, whether it writes to Proxmox Backup Server or plain archives) creates temporary ZFS snapshots when backing up containers stored on ZFS in snapshot mode. If you're also running automatic ZFS snapshot tools like zfs-auto-snapshot or sanoid for NAS data management, you can end up with snapshot namespace collisions and silent backup failures.
# Check whether any snapshot automation tools are already installed
dpkg -s zfs-auto-snapshot >/dev/null 2>&1 && echo "zfs-auto-snapshot installed" || echo "Not installed"
which sanoid 2>/dev/null || echo "Not installed"
# List all existing ZFS snapshots to understand the current state
zfs list -t snapshot | head -30
If you're running both PBS and a snapshot tool, configure them with distinct naming prefixes and—critically—ensure they manage completely separate datasets. The safest architecture uses separate ZFS pools: one managed entirely by PBS for VM storage, one managed by your snapshot tool for NAS data.
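As an illustration of that separation, a sanoid policy can be scoped to the NAS datasets only, leaving rpool/data alone for Proxmox's own tooling; the retention values below are placeholders to adapt to your needs:
# /etc/sanoid/sanoid.conf: snapshot only the NAS datasets, never rpool/data
[rpool/nas-data]
    use_template = nas
    recursive = yes
[template_nas]
    frequently = 0
    hourly = 24
    daily = 30
    monthly = 6
    autosnap = yes
    autoprune = yes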
The Recommended Architecture: TrueNAS in a VM
After hitting these pitfalls, most experienced homelab users arrive at the same conclusion: run TrueNAS SCALE (or TrueNAS Core) as a VM inside Proxmox with disk passthrough, rather than running NAS software directly on the Proxmox host.
This separates concerns cleanly. Proxmox manages hypervisor resources; TrueNAS manages storage. TrueNAS gets its own ZFS pool, its own ARC configuration, and its own backup strategy. They don't interfere with each other, and you get TrueNAS's polished NAS UI on top of a rock-solid ZFS implementation.
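As a rough sketch, the VM shell itself can be created from the CLI before any passthrough is configured; the VM ID, resource sizes, storage name local-zfs, and ISO filename below are placeholders to adjust for your environment:
# Create a VM for TrueNAS SCALE (adjust ID, RAM, cores, storage, and ISO to your setup)
qm create 102 --name truenas --memory 16384 --cores 4 \
  --machine q35 --scsihw virtio-scsi-single \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-zfs:32 \
  --cdrom local:iso/TrueNAS-SCALE.iso --ostype l26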
HBA Passthrough for TrueNAS
For serious NAS workloads, pass a full HBA (Host Bus Adapter) to the TrueNAS VM rather than individual disks. This gives TrueNAS direct control over the drives, enabling proper S.M.A.R.T. monitoring, spin-down, and native ZFS drive management.
# Find your HBA's IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
n=${d#*/iommu_groups/*}; n=${n%%/*}
printf 'IOMMU Group %s ' "$n"
lspci -nns "${d##*/}"
done | grep -i "sas\|sata\|hba\|megaraid"
# Bind the HBA to vfio-pci (replace with your device ID)
echo "options vfio-pci ids=1000:0072" > /etc/modprobe.d/vfio.conf
update-initramfs -u
reboot
# After reboot, add HBA passthrough to TrueNAS VM (ID 102)
qm set 102 -hostpci0 01:00,pcie=1,rombar=0
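Note that all of this assumes IOMMU support is already active; if it isn't, passthrough fails before the qm set step. On Intel systems that typically means adding intel_iommu=on to the kernel command line (recent AMD platforms usually enable it by default):
# Verify IOMMU is active (look for DMAR/IOMMU remapping messages)
dmesg | grep -e DMAR -e IOMMU
# If nothing shows up, add intel_iommu=on to the kernel command line:
# - GRUB boots: edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub
# - systemd-boot (common with ZFS root): edit /etc/kernel/cmdline, then run proxmox-boot-tool refresh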
Individual Disk Passthrough (Budget Option)
If you don't have an HBA or can't do PCIe passthrough, you can pass individual disks to a TrueNAS VM using their persistent identifiers. Never use /dev/sda style paths—device names change across reboots and you risk passing the wrong disk to your VM.
# Find stable disk identifiers
ls -la /dev/disk/by-id/ | grep -v part | grep -v wwn
# Pass disks to VM 102 using serial-based IDs
qm set 102 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_WD-WCC7K3HN1234
qm set 102 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_WD-WCC7K5HN5678
qm set 102 -scsi3 /dev/disk/by-id/ata-WDC_WD40EFRX_WD-WCC7K8HN9012
# Verify the VM config
qm config 102 | grep scsi
Set the SCSI controller to VirtIO SCSI single for best performance with passed disks.
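If the VM wasn't created with that controller, it can be switched from the CLI (VM ID 102 as in the examples above):
# Switch the SCSI controller type for VM 102
qm set 102 --scsihw virtio-scsi-single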
Bind Mounts for LXC Containers: The Clean Alternative
If you don't want to run a full TrueNAS VM but need to share storage with specific services, ZFS dataset bind mounts to LXC containers are a clean and efficient middle ground. This avoids the NFS/SMB layer entirely for self-contained services like Jellyfin or Nextcloud.
# Add a bind mount to an LXC container (edit /etc/pve/lxc/200.conf)
# This mounts the ZFS dataset directly into the container filesystem
mp0: /rpool/nas-data/media,mp=/media,ro=0
# In another container's config, grant read-only access (e.g., a transcoding container)
mp0: /rpool/nas-data/media,mp=/media,ro=1
# Apply by restarting the container
pct restart 200
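The same mount can also be added with pct instead of editing the config file by hand (container ID 200 and paths as in the example above):
# Equivalent to the mp0 line above, applied via the CLI
pct set 200 -mp0 /rpool/nas-data/media,mp=/media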
This approach keeps storage access local, avoids network protocol overhead, and still lets you manage the underlying data with ZFS snapshots from the host.
When Direct Host NAS Is Acceptable
Despite all these warnings, there are legitimate cases where running NAS services directly on the Proxmox host makes sense:
- Small homelabs with fewer than 4 VMs and ample RAM where ARC conflicts are manageable with proper limits
- Bind mounts for LXC containers where you only need local storage access, not network shares
- Temporary or low-stakes data where experimentation risk is acceptable
- Hardware-constrained setups where running a dedicated TrueNAS VM isn't feasible
If any of these describe your situation, proceed carefully with the ARC limits and dataset separation strategies outlined above.
Best Practices Summary
If you decide to proceed with Proxmox as a NAS host, follow these rules:
- Set ARC limits before adding any NAS workload. Aim for 20–25% of total RAM maximum for a hypervisor.
- Use separate ZFS pools for VM storage and NAS data where possible—pool-level separation eliminates most conflicts.
- Match dataset record sizes to workload type. VM datasets get small record sizes; media/archive datasets get large ones.
- Never overlap Proxmox storage backends with NAS export paths. Keep these completely separate in the filesystem hierarchy.
- Assign snapshot management to one tool per dataset. Don't let PBS and sanoid both manage the same datasets.
- Prefer the TrueNAS VM architecture for any serious NAS workload with more than a few terabytes or multiple concurrent users.
- Monitor ARC and VM memory regularly using Proxmox's built-in metrics and arc_summary on the host.
Conclusion
Using Proxmox VE as a NAS is possible, but it requires deliberate configuration and a clear understanding of the failure modes. The ZFS ARC conflict alone has caused countless homelab users to wonder why their VMs started running slowly after months of smooth operation. The snapshot and backup interaction issues are subtle but can mean your backups silently fail exactly when you need them.
The cleanest solution for most homelab setups remains the TrueNAS VM with disk passthrough—you get a dedicated, feature-complete NAS operating system managing your storage while Proxmox focuses on what it does best. If you decide to run NAS services directly on the Proxmox host, set your ARC limits first, keep VM storage and NAS storage on completely separate datasets or pools, and monitor the system closely as your storage usage grows.