Add NFS Storage to Proxmox for VM Disks and Backups
Add NFS storage to Proxmox VE for VM disk images, ISO libraries, and VZDump backups. Covers TrueNAS export config, mount options, and the root_squash permission fix.
Adding NFS storage to Proxmox VE takes about five minutes from the web UI, and once attached, every node in your cluster can use the same share for VM disk images, ISO libraries, container templates, and backup archives — no per-node configuration needed. This guide walks through the complete process: exporting a share from TrueNAS SCALE or a Debian NFS server, attaching it in Proxmox VE 9.1, choosing mount options that actually improve performance, and avoiding the permission pitfalls that trip up almost everyone on first attempt.
Key Takeaways
- Cluster-wide access: NFS storage added under Datacenter → Storage is mounted on all cluster nodes automatically — add it once, use it everywhere.
- Best fit: ISO libraries, container templates, and VZDump backup archives — not the primary disk for write-heavy database VMs.
- root_squash is the biggest gotcha: Most NAS devices enable root_squash by default, which blocks Proxmox from writing disk images as root. Disable it on the export.
- Performance tip: Adding `nconnect=4` to the mount options delivers a 30–50% throughput increase on 10 GbE without any other changes.
- Content types: A single NFS share can simultaneously serve disk images, ISO files, LXC templates, backups, and cloud-init snippets.
When NFS Makes Sense for Proxmox Storage
NFS is not the right answer for every storage use case. Being clear about this upfront saves you a painful storage migration later.
Use NFS when:
- You already have a TrueNAS, Synology, or QNAP NAS with spare capacity and a dedicated storage network or 10 GbE link
- You want a centralized ISO and template library shared across multiple Proxmox nodes without copying files manually to each
- You need a cost-effective backup target for VZDump archives
- Your workloads are sequential or low-IOPS — Home Assistant, lightweight web services, media libraries
Avoid NFS for:
- Databases (PostgreSQL, MySQL) or any workload with heavy random 4K I/O — network round-trip latency kills IOPS; use local NVMe or Ceph RBD instead
- VMs that need fast live migration — NFS migration works, but local NVMe-to-NVMe migration completes in under two minutes for a 50 GB disk; NFS over 1 GbE can take fifteen minutes for the same operation
- Workloads that depend on ZFS-native send/recv replication — once data is on NFS, those features belong to the NAS, not Proxmox
If you are designing a full homelab storage layout, Build a Private Cloud at Home with Proxmox VE covers how to layer NFS alongside local ZFS pools for a balanced setup.
Setting Up the NFS Export
TrueNAS SCALE (Dragonfish 24.10 or later)
In the TrueNAS web UI:
- Go to Shares → NFS → Add
- Set the Path to your dataset (e.g., `/mnt/tank/proxmox-nfs`)
- Under Advanced Options, add a network entry for your Proxmox subnet (e.g., `192.168.10.0/24`)
- Uncheck Enable Root Squash — Proxmox must write as root for disk image operations
- Save and confirm the NFS service is running under Services → NFS
Verify the export is visible from a Proxmox node:
showmount -e 192.168.10.50
Expected output:
Export list for 192.168.10.50:
/mnt/tank/proxmox-nfs 192.168.10.0/24
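If you want to rule out export or permission problems before touching the Proxmox config, a quick throwaway mount from the node settles it immediately (the IP and path below match the example export above):

```bash
# temporary mountpoint for a manual sanity check
mkdir -p /mnt/nfs-test
mount -t nfs -o vers=4.1 192.168.10.50:/mnt/tank/proxmox-nfs /mnt/nfs-test

# write test as root: fails with "Permission denied" if root squash is still enabled
touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test

# clean up
umount /mnt/nfs-test && rmdir /mnt/nfs-test
```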
Debian 12 or Ubuntu NFS Server
If you are running a DIY NFS server:
apt install nfs-kernel-server
Edit /etc/exports:
/srv/proxmox-nfs 192.168.10.0/24(rw,sync,no_root_squash,no_subtree_check)
Apply the changes:
exportfs -rav
systemctl restart nfs-kernel-server
Always include no_subtree_check — it eliminates a performance-killing consistency check that fires on every file access when subtree checking is enabled.
For production environments, put NFS traffic on a dedicated storage VLAN to isolate backup and ISO transfer load from your management network. Configuring VLANs on Proxmox with Linux Bridges covers that setup in full if you have not done it yet.
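One way the node side of that can look, assuming a VLAN-aware `vmbr0` and VLAN 10 carrying the 192.168.10.0/24 subnet used in this guide's examples (adjust IDs and addresses to your network):

```
# /etc/network/interfaces (excerpt): give the node an address on the storage VLAN
# assumes a VLAN-aware vmbr0; VLAN 10 / 192.168.10.0/24 are example values
auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.10.11/24
        # deliberately no gateway: keep NFS traffic off the default route
```

Then point the storage's Server field at the NAS address on that VLAN rather than its management IP.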
How to Add NFS Storage in the Proxmox Web UI
- Log into the Proxmox web UI at `https://<node-ip>:8006`
- Navigate to Datacenter → Storage → Add → NFS
- Fill in the form:
  - ID: A short identifier with no spaces (e.g., `nas-proxmox`)
  - Server: The NAS IP or hostname (e.g., `192.168.10.50`)
  - Export: Click the dropdown — Proxmox runs `showmount` against the server and lists available exports automatically
  - Content: Check all types this share will serve: Disk image, ISO image, Container template, VZDump backup file, Snippets
  - Max Backups: Set a per-VM retention limit if using this as a VZDump target
- Click Add
The share mounts on all cluster nodes within seconds. Check the Tasks pane at the bottom of the UI to confirm there are no mount errors before creating any VMs against the new storage.
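To double-check from a node shell rather than the UI, both of these should report the share as active and mounted:

```bash
# storage status as Proxmox sees it (active flag, total/used space)
pvesm status --storage nas-proxmox

# confirm the kernel-level NFS mount exists
findmnt /mnt/pve/nas-proxmox
```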
What the Content Types Actually Do
| Content Type | File Format | Typical Use |
|---|---|---|
| Disk image | `.raw`, `.qcow2` | VM disk images |
| ISO image | `.iso` | OS install media |
| Container template | `.tar.zst` | LXC base images |
| VZDump backup file | `.vma.zst` | VM and container backups |
| Snippets | YAML/JSON | Cloud-init user-data configs |
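On the share itself, those content types map to a fixed directory layout that Proxmox creates automatically under the mountpoint, roughly:

```
/mnt/pve/nas-proxmox/
    dump/               # VZDump backup archives
    images/<vmid>/      # VM disk images (.raw, .qcow2)
    snippets/           # cloud-init user-data and hook scripts
    template/cache/     # LXC container templates (.tar.zst)
    template/iso/       # ISO install media
```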
How to Add NFS Storage via the CLI
The pvesm tool is the right approach when scripting Proxmox node setup or managing storage from Ansible:
pvesm add nfs nas-proxmox \
--server 192.168.10.50 \
--export /mnt/tank/proxmox-nfs \
--content images,iso,vztmpl,backup,snippets \
--options vers=4.1
Verify the storage was added and check capacity:
pvesm status
To list the contents of the storage:
pvesm list nas-proxmox
Proxmox writes the storage definition to /etc/pve/storage.cfg, which pmxcfs replicates to all cluster nodes:
nfs: nas-proxmox
path /mnt/pve/nas-proxmox
server 192.168.10.50
export /mnt/tank/proxmox-nfs
content images,iso,vztmpl,backup,snippets
options vers=4.1
NFS Mount Options That Actually Improve Performance
Proxmox passes mount options directly through the options field. These are the ones worth setting:
pvesm set nas-proxmox --options vers=4.1,hard,timeo=600,retrans=2,nconnect=4
| Option | Effect | Recommendation |
|---|---|---|
| `vers=4.1` | Forces NFSv4.1 with session trunking | Always prefer v4.1 over v3 for cluster use |
| `hard` | Retries indefinitely if the server becomes unreachable | Required for VM disks — `soft` will corrupt data |
| `timeo=600` | 60-second timeout before retry (units of 0.1 second) | Increase on networks with occasional latency spikes |
| `retrans=2` | Retries before reporting an error | 2 is fine on stable LANs; default is 3 |
| `nconnect=4` | Opens 4 parallel TCP connections to the NFS server | 30–50% throughput increase on 10 GbE (kernel 5.15+) |
| `noatime` | Skips access-time writes on reads | Minor write reduction on ISO and backup shares |
nconnect=4 is the highest-impact single option if you are on 10 GbE. Benchmark before and after with:
dd if=/dev/zero of=/mnt/pve/nas-proxmox/test.img bs=1M count=1024 oflag=direct
rm /mnt/pve/nas-proxmox/test.img
On a TrueNAS SCALE system with an NVMe-backed pool and a direct 10 GbE connection, nconnect=4 typically moves sequential write throughput from around 350 MB/s to 700–900 MB/s. On 1 GbE, the difference is negligible — the link is already saturated long before the connection count matters.
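Note that changed options generally take effect only when the share is next mounted, so after a pvesm set it is worth checking what the live mount actually negotiated; `nfsstat -m` (from the nfs-common package) shows the active options per NFS mount:

```bash
# look for vers=4.1 and nconnect=4 in the Flags line for /mnt/pve/nas-proxmox
nfsstat -m
```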
Using NFS as a Proxmox Backup Target
Once NFS storage is added with the backup content type enabled, pointing a backup job at it is straightforward:
- Go to Datacenter → Backup → Add
- Set Storage to `nas-proxmox`
- Choose your Schedule (daily, weekly, or a cron expression)
- Configure Retention (keep last N backups per VM)
- Save — Proxmox handles the rest, writing `.vma.zst` archives directly to the NFS share
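The same backup can also be run ad hoc from a node shell, which is handy for testing the target before trusting the schedule (VM ID 101 below is just an example):

```bash
# one-off backup of VM 101 to the NFS storage
vzdump 101 --storage nas-proxmox --mode snapshot --compress zstd
```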
For deduplication, encryption, and server-side integrity verification, Proxmox Backup Server is the better tool — it can also use an NFS share as its datastore backing, though local NVMe or a ZFS dataset gives better PBS performance for dedup index operations.
A practical combination that works well: VZDump to NFS for rapid daily snapshots, and PBS on a separate host for deduplicated, encrypted long-term retention with offsite replication.
Common NFS Gotchas on Proxmox
root_squash Blocks Disk Image Creation
This is the most common first-time issue. If you get a Permission denied error when creating a VM disk on NFS storage, the export is almost certainly using root_squash. Proxmox writes disk images as root; root_squash maps root to nobody, which has no write access.
Fix on TrueNAS: uncheck Enable Root Squash in the NFS share settings.
Fix on Linux: change root_squash to no_root_squash in /etc/exports and run:
exportfs -ra
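A quick way to confirm the fix took effect is a root write test against the Proxmox-managed mountpoint:

```bash
# succeeds silently once root squash is disabled; "Permission denied" means it is still active
touch /mnt/pve/nas-proxmox/root-write-test && rm /mnt/pve/nas-proxmox/root-write-test
```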
NFSv3 File Locking in a Cluster
NFSv3 uses a separate statd/lockd protocol for file locking. Under high concurrency — two Proxmox nodes creating disk images simultaneously — stale lock files can accumulate and cause flock failures. NFSv4.1 (vers=4.1) handles locking natively and eliminates this class of problem entirely.
NFS Storage Shows as Unavailable After a NAS Reboot
Proxmox marks NFS storage as inactive if the mount times out while the NAS is down or rebooting. Once the NAS is back online, pvestatd normally re-mounts the share on its own within a minute or so. If it stays inactive, mount it manually using the full server:export path (a bare mount against /mnt/pve/nas-proxmox fails, because Proxmox-managed storage has no /etc/fstab entry):
mount -t nfs -o vers=4.1 192.168.10.50:/mnt/tank/proxmox-nfs /mnt/pve/nas-proxmox
If your NAS routinely boots slower than your Proxmox nodes, expect the storage to show as unavailable in the UI until the export is reachable again; it recovers without manual intervention once it is.
QCOW2 and LXC Container Root Filesystems
QCOW2 disk images on NFS work fine for KVM VMs, but LXC containers cannot use QCOW2 for their root filesystems — LXC needs raw block devices or directory-backed storage. If you want LXC container data on NFS, use the dir storage plugin pointing at the NFS mountpoint rather than the native nfs plugin. This distinction is worth knowing before you try to migrate a container and get an unexpected error.
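A minimal sketch of that workaround, assuming the NFS share is already mounted at /mnt/pve/nas-proxmox (the subdirectory name and storage ID below are arbitrary examples):

```bash
# create a subdirectory on the existing NFS mount and register it as dir storage for containers
mkdir -p /mnt/pve/nas-proxmox/ct-data
pvesm add dir nas-ct-data --path /mnt/pve/nas-proxmox/ct-data --content rootdir,vztmpl
```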
ISO Upload Permissions
When uploading an ISO through the web UI, Proxmox writes it as www-data (UID 33). If you also mount the same NFS share from another client and try to delete or modify those files directly, you will hit permission errors. The clean rule: manage ISO files exclusively through the Proxmox UI or pvesm — do not mix access methods on the same export.
Conclusion
NFS is the lowest-friction way to give every Proxmox cluster node shared access to ISO libraries, container templates, and VZDump archives without deploying Ceph. Add the storage once at the Datacenter level, set no_root_squash on the export, and include vers=4.1,hard,nconnect=4 in your mount options for solid performance on modern hardware. The natural next step is setting up a scheduled VZDump job targeting this storage, then layering Proxmox Backup Server on top for deduplication and encryption.