LVM-Thin Pools on Proxmox for VM Snapshots Without ZFS
Set up LVM-thin pools on Proxmox VE 9.1 for copy-on-write snapshots without ZFS memory overhead. Works on any block device with no ECC RAM required.
LVM-thin provisioning gives you copy-on-write snapshots on virtually any block device — spinning rust, SATA SSD, or NVMe — without the ECC RAM requirement or memory overhead that ZFS demands. If you're running a homelab node with 16–32 GB of system RAM and need live snapshots for VMs and containers, LVM-thin is the right answer. By the end of this guide you'll have an LVM-thin pool configured as a Proxmox storage backend, know how to snapshot and roll back VMs in seconds, and understand exactly where this approach beats ZFS and where it falls short.
Key Takeaways
- No ECC tax: LVM-thin snapshots work on any block device with no special RAM requirements.
- Copy-on-write: Snapshots consume space only for changed blocks, not a full clone of the disk.
- Proxmox-native: LVM-thin is a first-class storage type in Proxmox VE 9.1 — no plugins or patches required.
- Snapshot chains: Performance degrades noticeably past 3–4 chained snapshots per volume; keep chains short.
- Best fit: Ideal for single-node homelabs, dedicated SSDs, and dev/test environments where ZFS overhead isn't justified.
Why Choose LVM-Thin Over ZFS?
ZFS is excellent for the right workload. But ZFS is memory-hungry by design — the ARC cache eats RAM aggressively, and on a node with 16 or 32 GB that's a real constraint when you also want to run ten or more VMs. ZFS also strongly prefers ECC RAM for its data integrity guarantees, and ECC-capable consumer motherboards cost meaningfully more.
LVM-thin sits at the other end of the spectrum. It's a Linux kernel feature (dm-thin), runs on any block device, uses almost no RAM overhead, and gives you the one ZFS feature most admins actually need day-to-day: copy-on-write snapshots.
Here's how the main Proxmox storage backends compare for VM workloads:
| Storage Type | Snapshots | Thin-Provisioned | RAM Overhead | Hardware Requirement |
|---|---|---|---|---|
| LVM (thick) | No | No | Minimal | Any block device |
| LVM-Thin | Yes (CoW) | Yes | Minimal | Any block device |
| ZFS | Yes (CoW) | Yes | High (ARC) | ECC RAM preferred |
| Directory (qcow2) | Yes (file) | Yes | Minimal | Any filesystem |
| Ceph (RBD) | Yes (CoW) | Yes | Moderate | 3+ nodes |
Directory storage with qcow2 also supports snapshots, but qcow2 performance degrades under heavy or concurrent I/O because writes that allocate new clusters must also update the format's internal metadata on top of the host filesystem. LVM-thin avoids that extra layer: snapshots are tracked by the kernel block layer, and raw volumes keep near-native sequential write speed.
Prerequisites and Disk Selection
You need unallocated space to dedicate to the pool: a whole unformatted disk, a spare partition, or free extents in an existing volume group. A dedicated SSD or NVMe is the right choice. Don't carve LVM-thin out of the same disk as your Proxmox OS root; contention from OS writes will hurt VM I/O latency under load.
For this guide I'll use /dev/sdb, a 500 GB SATA SSD added to an existing Proxmox VE 9.1 node. Adjust device paths to match your hardware. If you're still selecting hardware, How to Install Proxmox VE on Any Hardware covers what to look for in drives and whether consumer SSDs hold up in always-on roles.
Check the disk is clean before touching it:
lsblk -f /dev/sdb
wipefs -a /dev/sdb # Wipe leftover filesystem signatures if present
Step 1: Create the Physical Volume and Volume Group
pvcreate /dev/sdb
vgcreate vg-thin /dev/sdb
Verify:
pvs
vgs
Expected output from vgs:
VG #PV #LV #SN Attr VSize VFree
pve 1 17 0 wz--n- <476.94g <96.00g
vg-thin 1 0 0 wz--n- <465.76g <465.76g
The pve VG is your existing Proxmox install. vg-thin is the new one, ready for the pool.
Step 2: Create the Thin Pool Logical Volume
Allocate 95% of the VG to the pool and leave the remaining 5% unallocated so LVM can grow the pool and its metadata later. Thin pools need that headroom: if the pool itself fills to 100%, every volume in it hits hard I/O errors simultaneously.
lvcreate \
--type thin-pool \
--name pool0 \
--extents 95%VG \
vg-thin
Verify the result:
lvs -a vg-thin
You'll see the pool LV plus its hidden component volumes ([pool0_tdata], [pool0_tmeta], and a small metadata spare). That's expected: LVM manages data and metadata allocation internally, and the brackets indicate hidden helper volumes.
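If you want to sanity-check how the pool was laid out, lvs can also report the chunk size and the size of the metadata volume. The column names below should be standard lvs report fields (lvs -o help lists the exact names if your version differs); for very snapshot-heavy pools you can also pass --poolmetadatasize to lvcreate up front instead of relying on autoextend.
lvs -o lv_name,lv_size,chunk_size,lv_metadata_size,metadata_percent vg-thin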
Step 3: Register the Thin Pool in Proxmox Storage
You can do this via the web UI or directly with pvesm.
Web UI Method
- Open Datacenter → Storage → Add → LVM-Thin
- Set ID: ssd-thin
- Set Volume Group: vg-thin
- Set Thin Pool: pool0
- Set Content: Disk image, Container (add Snippets if needed)
- Click Add
CLI Method
pvesm add lvmthin ssd-thin \
--vgname vg-thin \
--thinpool pool0 \
--content images,rootdir
Verify the storage is active:
pvesm status
You should see ssd-thin listed with active status and the available capacity reported correctly.
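Either method writes the definition to /etc/pve/storage.cfg. Based on the options above, the entry should look roughly like this:
lvmthin: ssd-thin
        thinpool pool0
        vgname vg-thin
        content images,rootdir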
Step 4: Create VMs and Containers on LVM-Thin
When creating a VM in the web UI, select ssd-thin from the storage dropdown for the disk. Via CLI:
qm create 200 \
--name test-vm \
--memory 2048 \
--cores 2 \
--net0 virtio,bridge=vmbr0
qm set 200 \
--scsi0 ssd-thin:32 \
--ide2 ssd-thin:cloudinit \
--boot order=scsi0
The ssd-thin:32 syntax allocates a 32 GB thin-provisioned volume. The pool doesn't pre-allocate 32 GB — it consumes actual disk space only as data is written. For LXC containers, the rootdir content type enables the same thin allocation for container root filesystems.
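For a quick container example, the same storage ID goes into the rootfs option. A minimal sketch, assuming a Debian template has already been downloaded to local storage (the template filename is illustrative; substitute whatever pveam list local shows on your node):
pct create 210 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname test-ct \
  --memory 1024 \
  --rootfs ssd-thin:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp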
How to Take and Roll Back LVM-Thin Snapshots
A snapshot on LVM-thin is a new thin volume that shares blocks with the origin. When either volume writes to a block, the dm-thin kernel driver copies the original block before overwriting. No data is duplicated at snapshot time — only divergences accumulate going forward.
Take a snapshot of VM 200:
qm snapshot 200 pre-upgrade \
--description "Before kernel 6.12 upgrade" \
--vmstate 0
The --vmstate 0 flag skips saving RAM state, making the snapshot near-instant and much smaller. For an upgrade-and-rollback workflow, a disk-only snapshot is almost always sufficient — run sync inside the guest first to flush pending writes to disk.
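If the QEMU guest agent is installed and enabled for the VM, you can trigger that flush from the host instead of logging into the guest; a rough sketch:
qm guest exec 200 -- sync   # requires qemu-guest-agent running inside VM 200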
List snapshots:
qm listsnapshot 200
Rollback:
qm rollback 200 pre-upgrade
Rollback is instant regardless of how much data changed between snapshot and rollback. The thin pool reassigns block mappings without moving any data.
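Once the upgrade checks out and you no longer need the rollback point, delete the snapshot rather than letting a chain build up (see the note on chain depth above):
qm delsnapshot 200 pre-upgrade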
Monitoring Pool Usage Before It Causes Problems
A full thin pool is a hard failure — all volumes go read-only at once. Monitor usage proactively:
lvs -o +data_percent,metadata_percent vg-thin/pool0
Sample output:
LV VG Attr LSize Pool Origin Data% Meta%
pool0 vg-thin twi-aotz-- 440.00g 23.47 1.82
Configure LVM autoextend in /etc/lvm/lvm.conf as a safety net:
activation {
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 20
}
This grows the pool by 20% when it hits 80% full — provided unallocated space exists in the VG. That's exactly why we left the 5% reserve during pool creation.
Gotcha from the field: If you snapshot frequently and then delete the parent volumes without removing the snapshots first, the metadata volume grows faster than the data volume. Watch Meta% separately; the metadata pool is much smaller and will surprise you at an inconvenient time.
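If you'd rather not remember to run lvs by hand, a minimal cron-able check along these lines works, assuming a local MTA is available for mail (swap in your own notification method otherwise):
#!/bin/sh
# Warn root when pool0 data or metadata usage crosses 85%.
THRESHOLD=85
usage=$(lvs --noheadings --nosuffix -o data_percent,metadata_percent vg-thin/pool0)
set -- $usage                      # word-split into data% and meta%
data=${1%.*}; meta=${2%.*}         # drop decimals for integer comparison
if [ "${data:-0}" -ge "$THRESHOLD" ] || [ "${meta:-0}" -ge "$THRESHOLD" ]; then
    echo "vg-thin/pool0 usage: data ${data}% meta ${meta}%" \
        | mail -s "Thin pool usage warning on $(hostname)" root
fi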
Using LVM-Thin With Proxmox Backup Server
LVM-thin and Proxmox Backup Server integrate cleanly. PBS uses its own change-block-tracking (dirty-bitmap) mechanism for incremental backups, independent of LVM snapshots — it doesn't consume or require LVM snapshots internally. You can chain them yourself: take an LVM-thin snapshot before a PBS backup run to guarantee a consistent source while the VM continues running.
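A rough sketch of that chained workflow, assuming a PBS-backed storage is already registered in Proxmox under the ID pbs-store (the ID is illustrative):
qm snapshot 200 pre-backup --vmstate 0           # instant rollback point
vzdump 200 --storage pbs-store --mode snapshot
qm delsnapshot 200 pre-backup                    # remove once the backup succeeds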
For backup scheduling and retention policy configuration, Automated Backups with Proxmox Backup Server walks through the full PBS setup — everything there applies equally to LVM-thin-backed VMs and containers.
When LVM-Thin Is Not the Right Choice
LVM-thin is not a silver bullet. Here's where I'd choose something else:
- Silent corruption protection: ZFS checksums catch bitrot during scrubs; LVM-thin does not checksum data. For NAS workloads or long-lived archival data, ZFS wins.
- Multi-node shared storage: LVM-thin is strictly local to one node. For clusters requiring live migration with shared disk, Ceph RBD is the correct backend.
- Deep snapshot chains: LVM-thin degrades past 3–4 chained snapshots per volume. ZFS handles deep chains more gracefully, and qcow2 can technically go deeper too.
- High-RAM ECC servers: If you have 64+ GB ECC RAM and production workloads, ZFS overhead amortizes well and you gain checksumming plus native compression.
For homelab nodes where RAM is limited and snapshot capability matters more than byte-level integrity, LVM-thin is the correct default.
How to Migrate Existing VMs to LVM-Thin
If you have VMs on directory storage or thick LVM and want to move them to the thin pool, Proxmox handles it live without stopping the VM:
qm move-disk 100 scsi0 ssd-thin --delete 1
The --delete 1 flag removes the source disk after the move completes successfully. Expect the move to finish in under two minutes for a 50 GB disk on NVMe-to-NVMe; SATA-to-SATA will be closer to five minutes for the same size. Proxmox uses an internal mirroring approach — the VM stays online throughout.
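Containers have an equivalent command. A hedged sketch, assuming container 101 currently keeps its root filesystem on another storage; the flags mirror the VM command as far as I know, and container volume moves may require the container to be stopped first:
pct move-volume 101 rootfs ssd-thin --delete 1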
If you're building out a broader homelab architecture with multiple storage tiers, Build a Private Cloud at Home with Proxmox VE covers how LVM-thin fits alongside ZFS and Ceph on multi-role nodes.
Conclusion
LVM-thin is the practical middle ground for Proxmox storage: copy-on-write snapshots, thin allocation, and solid I/O performance with no RAM overhead and no ECC requirement. Set it up on a dedicated SSD, register it in Proxmox as a storage backend, and you have a snapshot-capable layer that works on commodity hardware. Next step: configure Proxmox Backup Server to target this pool and add scheduled retention-based backups so your LVM-thin VMs are protected automatically.