Proxmox VM Storage Migration: Move Disks Between Pools
Learn how to move Proxmox VM disks and LXC containers between storage pools using GUI and CLI — including tips for live migration readiness.
Running out of space on your local storage? Upgrading to faster NVMe drives? Or maybe you're restructuring your homelab from local LVM storage to a proper ZFS pool — whatever the reason, moving VM disks and LXC containers between storage backends is one of those tasks every Proxmox admin eventually faces.
The good news is Proxmox makes this surprisingly straightforward, with both GUI and CLI options available. This guide covers every method you'll need: individual disk moves, full VM migrations, and container storage relocation.
Why You'd Need to Migrate Proxmox Storage
There are plenty of legitimate reasons to move VMs or disks between storage pools:
- Upgrading hardware — You've added a faster NVMe drive or a new ZFS pool and want to move workloads to benefit from it.
- Reorganizing storage tiers — Separating VMs by performance needs (fast NVMe for databases, slower HDD for archives).
- Enabling live migration — Standard live migration between nodes requires VMs to be on shared storage; VMs on local storage can only migrate by copying their disks across the network.
- ZFS pool replacement — Rebuilding or replacing a degraded ZFS pool requires temporary migration.
- Running out of space — A storage pool is full and you need to redistribute VMs.
Which method to use depends on whether you're moving individual disks, full VMs, or LXC containers.
Before You Start: Key Concepts
Storage Compatibility
Not all storage migrations are format-compatible. Proxmox handles format conversion in most cases, but it's worth knowing what you're working with:
- raw — Plain raw disk image; supported by every storage backend
- qcow2 — QEMU Copy-On-Write image; file-based (directory) storage only
- vmdk — VMware format; importable, but not preferred as a target
- subvol — ZFS dataset used for LXC containers
- lvmthin — LVM-Thin logical volume (raw data, thin-provisioned)
When moving from LVM-Thin to a ZFS pool, Proxmox automatically converts the format. The move operation handles this transparently.
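To see what the mover is working with, the storage ID and volume name can be read straight out of a disk line from qm config. A minimal sketch, assuming a hypothetical local-lvm disk line:

```shell
# Hypothetical disk line as printed by `qm config <vmid>`.
line='scsi0: local-lvm:vm-100-disk-0,size=32G'

# In the value, the first colon-separated field is the storage ID,
# the second (up to the comma) is the volume name.
storage=$(echo "$line" | awk '{print $2}' | cut -d: -f1)
volume=$(echo "$line" | awk '{print $2}' | cut -d: -f2 | cut -d, -f1)
echo "$storage $volume"   # prints: local-lvm vm-100-disk-0
```

After a move to a ZFS pool, the same line would reference the new storage ID instead, e.g. local-zfs:vm-100-disk-0.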
Snapshot Considerations
This is the most common gotcha: if a VM disk has snapshots, you cannot move it using the standard move-disk method. You'll need to delete snapshots first, or use the backup-and-restore approach covered later.
Check for snapshots before attempting any migration:
qm listsnapshot <vmid>
Method 1: Move a VM Disk via the Web GUI
This is the easiest approach for moving individual disks, and works well for most single-disk VMs.
Steps
1. Shut down or pause the VM — For a clean move, stop the VM first. Proxmox does support moving disks on running VMs, but the VM briefly pauses during the final stage.
2. Open the VM hardware tab — In the Proxmox web UI, click your VM, then Hardware.
3. Select the disk — Click the disk you want to move (e.g., scsi0 or virtio0).
4. Click Move Disk — This opens the move dialog with a dropdown for target storage and an option to delete the source after the move.
5. Choose your target storage — Select the destination storage pool. Only compatible backends appear in the list.
6. Enable Delete source — Unless you want to keep the original as a temporary backup, check this box to avoid wasted space.
7. Click Move disk — Proxmox starts the migration. Monitor progress in the task log at the bottom of the screen.
For a 50GB disk on NVMe-to-NVMe, expect the move to complete in under two minutes. Spinning disk to NVMe can take 10–30 minutes depending on size and utilization.
Method 2: Move VM Disks via CLI
The CLI approach gives you more control and is scriptable for batch operations.
Basic Syntax
qm move-disk <vmid> <disk> <storage> [OPTIONS]
Common Examples
Move the primary disk of VM 100 to the fast-nvme storage pool:
qm move-disk 100 scsi0 fast-nvme
Move the disk and delete the source after completion:
qm move-disk 100 scsi0 fast-nvme --delete 1
Move the disk and explicitly set the output format:
qm move-disk 100 scsi0 local-zfs --format raw --delete 1
Moving Multiple Disks
For VMs with multiple disks, you can loop over them automatically:
VMID=100
TARGET_STORAGE="fast-nvme"
for disk in $(qm config $VMID | grep -E '^(scsi|virtio|ide|sata)[0-9]+:' | awk -F: '{print $1}'); do echo "Moving $disk to $TARGET_STORAGE..." qm move-disk $VMID $disk $TARGET_STORAGE --delete 1 done
This extracts disk names from the VM config and moves each one. It won't match CD-ROM or cloud-init drives since those don't produce actual disk images.
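One caveat: CD-ROM and cloud-init entries (typically on an ide key) match the same key pattern as real disks, so it's safer to filter them out before looping. A quick check of that filter against a sample config (the config contents below are hypothetical):

```shell
# Sample `qm config` output (hypothetical) written to a temp file.
cat <<'EOF' > /tmp/sample-vm-config
boot: order=scsi0
ide2: local:iso/debian-12.iso,media=cdrom
scsi0: local-lvm:vm-100-disk-0,size=32G
scsi1: local-lvm:vm-100-disk-1,size=8G
EOF

# Match disk keys, then drop CD-ROM and cloud-init entries.
grep -E '^(scsi|virtio|ide|sata)[0-9]+:' /tmp/sample-vm-config \
  | grep -vE 'media=cdrom|cloudinit' \
  | awk -F: '{print $1}'
# prints:
# scsi0
# scsi1
```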
Method 3: Move LXC Container Storage
LXC containers use pct move-volume instead of qm move-disk. The syntax is nearly identical.
Via CLI
Move the root filesystem of container 200 to local-zfs:
pct move-volume 200 rootfs local-zfs --delete 1
Move a mounted volume (e.g., mp0):
pct move-volume 200 mp0 local-zfs --delete 1
Via the GUI
Select your LXC container, go to Resources, click the rootfs or mount point, then click Move Volume. The dialog works the same as the VM disk move.
Important: Stop the container before moving its rootfs. Unlike VMs, LXC containers cannot have their root filesystem moved while running.
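Putting that together, a rootfs move is a stop/move/start sequence. A sketch assuming container 200 and a local-zfs target:

```shell
# Stop the container, move its root filesystem, then start it again.
# --delete 1 removes the old volume once the copy succeeds.
pct stop 200
pct move-volume 200 rootfs local-zfs --delete 1
pct start 200
```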
Method 4: Full VM Migration via Backup and Restore
When you need to move a VM with snapshots, or want a clean migration with a controlled downtime window, backup-and-restore is the most reliable approach.
Step 1: Back Up the VM
Create the backup with vzdump. Basic syntax:
vzdump <vmid> --storage <backup-storage> [OPTIONS]
Example — back up VM 100 to the pbs-backups storage:
vzdump 100 --storage pbs-backups --compress zstd --mode snapshot
For a stopped VM (cleanest, no risk of inconsistent state):
vzdump 100 --storage pbs-backups --compress zstd --mode stop
Step 2: Restore to New Storage
Once the backup completes, restore it targeting the new storage pool (the path below assumes the backup landed on directory-backed storage):
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2026_04_18-12_00_00.vma.zst 101 --storage fast-nvme
For LXC containers, use pct restore:
pct restore 201 /var/lib/vz/dump/vzdump-lxc-200-2026_04_18-12_00_00.tar.zst --storage fast-nvme
Step 3: Verify and Clean Up
Boot the new VM or container, verify everything works, then remove the original:
qm destroy 100
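To make the cleanup a little safer, you can gate the destroy on the restored VM's status. A sketch using the 100 → 101 IDs from the example above:

```shell
# Destroy the old VM only if the restored one is actually running.
if qm status 101 | grep -q running; then
  qm destroy 100 --purge   # --purge also removes it from backup/HA job configs
else
  echo "VM 101 is not running; keeping VM 100 as a fallback" >&2
fi
```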
Method 5: Clone to a Different Storage Pool
Cloning creates a full independent copy of a VM on the target storage. This is useful when you want to keep the original running while setting up the migrated version.
qm clone <source-vmid> <new-vmid> --full --storage <target-storage> --name "migrated-vm"
Example:
qm clone 100 150 --full --storage local-zfs --name "webserver-zfs"
The --full flag creates a complete clone that's independent of the source. Without it, you get a linked clone that still references the original disk — not what you want for a permanent migration.
After verifying the clone boots correctly, shut down the original and destroy it.
Moving VMs to Shared Storage for Live Migration
If your goal is to enable live migration between Proxmox nodes, VMs must be on shared storage — Ceph, NFS, or iSCSI. VMs on local storage (including local ZFS) can only migrate by copying their disks over the network on every move, which is slow and defeats the purpose.
Here's a practical workflow for moving a VM from local storage to NFS shared storage:
# Add NFS storage to Proxmox (if not already added)
pvesm add nfs shared-nfs --server 192.168.1.100 --export /mnt/proxmox --content images,rootdir

# Move the VM disk
qm move-disk 100 scsi0 shared-nfs --delete 1

# Verify the VM config references the new storage
qm config 100 | grep scsi0
After the move, that VM is eligible for live migration to any node that has the shared-nfs storage mounted and configured with the same name.
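Once the disk sits on shared-nfs, the migration itself is a single command. A sketch with a hypothetical target node name pve2:

```shell
# Live-migrate VM 100 to node pve2 while it keeps running.
qm migrate 100 pve2 --online
```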
Verifying the Migration
After moving disks, always confirm the VM config references the correct storage:
qm config <vmid>
Your disk lines should reference the new storage:
scsi0: fast-nvme:vm-100-disk-0,size=50G
If the original storage still shows volumes for that VM, they weren't deleted. Clean them up via the GUI (Datacenter > Storage > [pool] > Content) or via CLI:
pvesm free <storage>:<volume-name>
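To find leftovers without clicking through the GUI, pvesm can list volumes per VM. A sketch assuming the old pool was named local-lvm:

```shell
# List volumes on the old pool still owned by VM 100...
pvesm list local-lvm --vmid 100

# ...then free a leftover volume explicitly.
pvesm free local-lvm:vm-100-disk-0
```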
Boot Test Checklist
Always boot the VM and do a quick sanity check after migration:
- VM boots without errors
- Disk shows the expected size (df -h inside the VM)
- No filesystem errors in dmesg
- Application services start correctly
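If the QEMU guest agent is installed and enabled in the VM, the first two checks can be run straight from the host (VM 100 as the example ID):

```shell
# Confirm the agent responds, then check the root filesystem size from outside.
qm agent 100 ping && qm guest exec 100 -- df -h /
```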
Common Issues and Fixes
Cannot move disk: has snapshot(s)
Delete all snapshots for the VM first:
# List snapshots
qm listsnapshot <vmid>

# Delete a specific snapshot
qm delsnapshot <vmid> <snapname>
Storage Not Appearing in the Move Dialog
The target storage must be configured with the images content type. Check it in Datacenter > Storage, select the pool, and enable Disk image if it's missing.
No Space Left on Device Mid-Migration
The move operation writes the full disk to the destination before deleting the source. Make sure the target has at least as much free space as the disk being moved — and add a buffer for thin-provisioned disks that are nearly full.
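A back-of-the-envelope check before moving: parse the disk's size= attribute and compare it against the target's free space. The numbers below are hypothetical placeholders; read the real availability figure from pvesm status on your host:

```shell
# Parse the GiB size out of a (hypothetical) disk line...
disk_line='scsi0: local-lvm:vm-100-disk-0,size=50G'
need_gib=$(echo "$disk_line" | grep -o 'size=[0-9]*' | cut -d= -f2)

# ...and compare against the target's free space in GiB
# (hypothetical value; get the real one from `pvesm status`).
avail_gib=100

if [ "$avail_gib" -gt "$need_gib" ]; then
  echo "enough space on target"
else
  echo "target too small for a full copy"
fi
```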
Disk Format Mismatch Errors
If you see format errors, explicitly set the target format:
qm move-disk 100 scsi0 local-zfs --format raw
ZFS storage uses raw format. Directory storage supports both raw and qcow2. LVM-Thin is raw only.
Conclusion
Proxmox gives you solid tooling for storage migration whether you prefer the GUI or command line. The move-disk command handles the vast majority of single-disk moves cleanly, while backup-and-restore is your fallback for anything with snapshots or complex configurations.
The key things to remember: check for snapshots before moving, ensure enough free space on the target, and always run a boot test after migration. For homelabs doing gradual hardware upgrades, moving VMs piecemeal with qm move-disk is the lowest-risk path — the migration is atomic, you can keep the VM running, and there's no lengthy downtime window to coordinate around.