Automated Backups with Proxmox Backup Server

Set up Proxmox Backup Server for automated VM and container backups. Covers installation, datastore setup, backup jobs, pruning, and verification schedules.

Proxmox Pulse
13 min read
Tags: proxmox, backup, pbs, disaster-recovery, automation

Why Not Just Use vzdump?

If you've been running Proxmox for any length of time, you've probably set up vzdump backup jobs through the PVE web UI. They work. But once you're managing more than a handful of VMs — or once you've stared at a 400GB backup file wondering why it takes four hours to back up a VM that only changed 200MB of data since yesterday — you start looking for something better.

Proxmox Backup Server (PBS) is that something better. The key difference comes down to three things:

Deduplication. PBS breaks backup data into variable-length chunks and deduplicates them across all backups in a datastore. I've seen a datastore with 14 VMs and 90 days of retention use about 1.2TB of actual disk space for what would have been over 8TB of vzdump .vma.zst files. That's not a typo.

Incremental backups. After the initial full backup, PBS only transfers changed chunks. A daily backup of a 100GB VM that changes maybe 2-3GB per day takes under a minute instead of 20+ minutes with vzdump. Network load drops dramatically.

Built-in verification. PBS can cryptographically verify that backup data is intact and restorable — something vzdump simply can't do. With vzdump, you find out your backup is corrupted when you try to restore it. With PBS, you find out during the scheduled verification job at 3am on a Tuesday, when you can actually do something about it.

Dedicated Machine or VM?

This is the first decision you'll face, and I have opinions.

Dedicated physical machine is the right answer for production. The whole point of backups is surviving hardware failure. If PBS runs as a VM on the same Proxmox host it's backing up, you've got a single point of failure that defeats the purpose. A used Dell Optiplex with a couple of large drives in ZFS mirror runs PBS beautifully and costs under $200.

PBS as a VM works fine for homelabs and dev environments. I ran it this way for about a year before moving to dedicated hardware. It's a perfectly reasonable starting point — just understand the limitation. If the host dies, your backups go with it unless you're syncing to a remote target.

For this guide, I'll assume a dedicated machine at 192.168.1.50.

Installing PBS

Grab the ISO from the Proxmox downloads page — you want "Proxmox Backup Server ISO Installer." As of PBS 3.3, the installer is straightforward.

Boot from USB, pick your target disk, set the timezone and password. The installer will set up the system and drop you at a login prompt. The web UI lives at:

https://192.168.1.50:8007

Note the port — 8007, not 8006 like PVE. Log in as root with the password you set during install.

First thing after install, update everything:

apt update && apt full-upgrade -y

If you don't have an enterprise subscription (most homelabbers don't), switch to the no-subscription repository: comment out the enterprise line in /etc/apt/sources.list.d/pbs-enterprise.list, then add the no-subscription repo:

echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" > /etc/apt/sources.list.d/pbs-no-subscription.list

Run apt update again after making that change.

Creating a Datastore

A datastore is where PBS actually stores backup data. You need at least one. The backing storage should ideally be a separate filesystem from the OS — ZFS is the natural choice here.

Assuming two drives (/dev/sdb and /dev/sdc) are available for backup storage (for a long-lived pool, prefer the stable /dev/disk/by-id/ names so device reordering can't bite you):

zpool create -o ashift=12 backup-pool mirror /dev/sdb /dev/sdc
zfs create backup-pool/pbs-store
zfs set compression=lz4 backup-pool/pbs-store
zfs set atime=off backup-pool/pbs-store
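Before handing the dataset to PBS, it's worth a quick sanity check that the mirror is healthy and the properties stuck. These are standard ZFS inspection commands, run on the PBS host against the pool created above:

```shell
# Run on the PBS host after creating the pool and dataset.
zpool status backup-pool                      # mirror ONLINE, no errors
zfs get compression,atime,mountpoint backup-pool/pbs-store
```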

Now create the datastore in the PBS web UI: Datastore → Add Datastore

  • Name: main-store
  • Backing Path: /backup-pool/pbs-store
  • GC Schedule: Daily at 03:30 (the default is fine)

Or via CLI:

proxmox-backup-manager datastore create main-store /backup-pool/pbs-store

The datastore will initialize its chunk store structure. You'll see .chunks/ directories appear — this is where deduplicated data lives.

Storage Space Planning

Here's a rough calculation I use. Take the total allocated disk space across all VMs you plan to back up. Actual usage is usually 40-60% of allocated, and dedup ratio for typical Linux VMs hovers around 3:1 to 5:1 for reasonable retention periods.

For example: 10 VMs with 100GB allocated each = 1TB allocated. Assume 500GB actual usage. At 4:1 dedup, that's 500GB ÷ 4 ≈ 125GB of datastore for a 30-day retention window; call it 125-150GB once daily changed chunks accumulate. In practice I'd plan for 2x that to leave headroom, so about 300GB.

That said, these numbers vary wildly. Windows VMs deduplicate poorly. Database servers with large, frequently-changing tablespaces will eat more space than web frontends. Monitor your actual usage for the first couple weeks and adjust.
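The rule of thumb above is easy to wrap in a tiny helper so you can re-run it as the fleet grows. Every input is an assumption carried over from the worked example (50% usage, 4:1 dedup, 2x headroom), and the integer math lands at the low end of the estimate:

```shell
# Rough datastore sizing: allocated * usage% / dedup * headroom.
# All inputs are assumptions - plug in your own fleet's numbers.
estimate_datastore_gb() {
  # args: allocated_gb usage_pct dedup_ratio headroom_factor
  echo $(( $1 * $2 / 100 / $3 * $4 ))
}
estimate_datastore_gb 1000 50 4 2
```

For 10 VMs at 100GB allocated, this prints 250 (GB), in the same ballpark as the ~300GB planned above.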

Adding PBS as Storage in PVE

On your Proxmox VE node (not PBS), go to Datacenter → Storage → Add → Proxmox Backup Server:

  • ID: pbs-main
  • Server: 192.168.1.50
  • Port: 8007
  • Username: backup@pbs (create a dedicated user — don't use root)
  • Password: the password for that user
  • Datastore: main-store
  • Fingerprint: grab this from PBS under Dashboard → Show Fingerprint

The fingerprint looks something like:

A2:B4:7C:91:...remaining hex...

You'll need it to establish trust on the first connection. After adding, pbs-main shows up in the storage list under each node.

Creating a Dedicated Backup User

Don't back up using root. On the PBS side:

proxmox-backup-manager user create backup@pbs --comment "PVE backup user"
proxmox-backup-manager user update backup@pbs --password "YourStrongPasswordHere"

Then set permissions. The user needs DatastoreBackup role on the datastore:

proxmox-backup-manager acl update /datastore/main-store DatastoreBackup --auth-id backup@pbs

This gives the user permission to create and manage their own backups but not delete others' backups or modify datastore settings. Principle of least privilege.
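You can confirm the user and its grant from the PBS shell; both subcommands simply print the current configuration:

```shell
# Run on the PBS host: confirm the backup user exists and that the
# DatastoreBackup role is bound to /datastore/main-store.
proxmox-backup-manager user list
proxmox-backup-manager acl list
```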

Configuring Backup Jobs

Here's where it gets good. In PVE, go to Datacenter → Backup → Add.

I typically set up two backup jobs:

Daily Incremental — All VMs

  • Storage: pbs-main
  • Schedule: 01:00 (daily at 1 AM)
  • Selection mode: All (or exclude specific VMs by ID)
  • Mode: Snapshot
  • Compression: ZSTD
  • Mail to: admin@yourdomain.com
  • Mail notification: Always

This hits every VM and container nightly. Thanks to incremental backups, the second run onward will be fast. My cluster of 14 VMs completes the full nightly run in about 8 minutes.

Weekly Full — Critical VMs Only

For truly critical VMs (domain controllers, databases, config management), I add a second job:

  • Storage: pbs-main
  • Schedule: sun 03:00
  • Selection mode: Include specific VMs (100, 101, 105)
  • Mode: Stop (for maximum consistency on databases)
  • Compression: ZSTD

The stop mode causes a brief outage but guarantees filesystem consistency. For database VMs, this is worth the 30-second interruption at 3 AM on Sunday.

The Schedule Syntax

PBS and PVE use systemd calendar event syntax. Some useful patterns:

*-*-* 01:00:00        # Daily at 1 AM
mon..fri 23:00        # Weeknights at 11 PM
sat 02:00             # Saturday at 2 AM
*-*-01 04:00          # First of every month at 4 AM
mon,wed,fri 01:00     # MWF at 1 AM

I've found that staggering backup jobs by 15-30 minutes prevents I/O contention if you have multiple datastores or multiple PVE nodes backing up to the same PBS.
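Since these are systemd calendar events, you can validate an expression before saving a job with it. A small sketch using systemd-analyze (present on any systemd host, including PVE and PBS nodes); the helper name is mine:

```shell
# Parse a schedule string with systemd-analyze; it rejects anything
# that isn't valid systemd calendar syntax.
check_schedule() {
  if systemd-analyze calendar "$1" >/dev/null 2>&1; then
    echo "OK:  $1"
  else
    echo "BAD: $1"
  fi
}
check_schedule "mon..fri 23:00"
check_schedule "every tuesday-ish"
```

systemd-analyze also prints the next few trigger times, which is handy for double-checking that `*-*-01 04:00` really means the first of the month.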

Pruning: The Retention Policy

Backups without pruning will eat your storage alive. PBS has a flexible pruning system with these retention parameters:

Parameter      What It Keeps
keep-last      The N most recent backups
keep-daily     One backup per day for N days
keep-weekly    One backup per week for N weeks
keep-monthly   One backup per month for N months
keep-yearly    One backup per year for N years

Here's the retention policy I've settled on after some trial and error:

keep-last: 3
keep-daily: 7
keep-weekly: 4
keep-monthly: 6
keep-yearly: 1

This translates to: always keep the 3 most recent snapshots, one per day for the last week, one per week for the last month, one per month for the last 6 months, and one per year. In practice, this means about 20-21 backup snapshots per VM at any given time.
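A quick way to sanity-check a policy's footprint is to sum the keep counts. The categories overlap (the newest daily snapshot is usually also among keep-last), so the real count sits a little under this upper bound:

```shell
# Upper bound on snapshots retained per guest for a given policy.
retained_max() {
  # args: keep-last keep-daily keep-weekly keep-monthly keep-yearly
  echo $(( $1 + $2 + $3 + $4 + $5 ))
}
retained_max 3 7 4 6 1
```

For the policy above this prints 21, which matches the 20-21 snapshots seen in practice once the overlap is accounted for.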

Set this on the datastore under Datastore → main-store → Prune & GC → Prune Jobs → Add.

Schedule pruning to run after your backup window completes:

Schedule: 05:00 daily

Watch out for this gotcha: pruning marks chunks as unused, but garbage collection is what actually frees the disk space. GC runs separately — make sure it's scheduled too, typically 30-60 minutes after pruning.

Prune:  05:00
GC:     06:00
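The same policy can be created from the CLI. This is a sketch assuming the prune-job subcommand shipped in recent PBS releases (the job id daily-prune is arbitrary); check `proxmox-backup-manager prune-job help` on your version before relying on the exact flags:

```shell
# Hypothetical job id "daily-prune"; flags mirror the GUI fields above.
proxmox-backup-manager prune-job create daily-prune \
  --store main-store \
  --schedule "05:00" \
  --keep-last 3 --keep-daily 7 --keep-weekly 4 \
  --keep-monthly 6 --keep-yearly 1
```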

Checking Actual Savings

After a couple weeks of operation, check your dedup ratio:

proxmox-backup-manager datastore list
┌────────────┬────────────────────────┬────────────┬────────────┬──────────────┐
│ name       │ path                   │ used       │ avail      │ dedup-factor │
├────────────┼────────────────────────┼────────────┼────────────┼──────────────┤
│ main-store │ /backup-pool/pbs-store │ 247.83 GiB │ 1.65 TiB   │ 3.72         │
└────────────┴────────────────────────┴────────────┴────────────┴──────────────┘

A dedup factor of 3.72 means you're storing 3.72x more logical data than physical disk used. Anything above 2.0 is a win. I've seen ratios as high as 8.x on clusters running identical OS images.
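If you want the headline number, physical usage times the dedup factor gives the logical data the datastore represents; a one-liner with awk handles the floating-point math:

```shell
# Logical data represented = physical used x dedup factor.
logical_gib() { awk -v u="$1" -v d="$2" 'BEGIN { printf "%.0f\n", u * d }'; }
logical_gib 247.83 3.72
```

For the datastore above, that's roughly 922 GiB of logical backups held in under 250 GiB of disk.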

Backup Verification

This is the feature that sold me on PBS. Unverified backups are Schrödinger's backups — they might be restorable, or they might not. You won't know until the worst possible moment.

PBS can verify backup integrity by reading every chunk referenced by a backup and checking its cryptographic checksum. Set up a verification job under Datastore → main-store → Verify Jobs → Add:

  • Schedule: sat 08:00 (weekly, during low-usage hours)
  • Check new: Verify only backups not yet verified
  • Outdated after: 30 days (re-verify if older than this)

The verification job will read through backup data and flag anything that doesn't match. You'll see the status in the datastore's content view — each backup gets a verification timestamp and status.

For critical environments, I schedule verification more aggressively — every 3 days with a 14-day outdated threshold. The I/O load is noticeable but manageable on modern hardware.

proxmox-backup-manager verify-job create weekly-verify \
  --store main-store \
  --schedule "sat 08:00" \
  --outdated-after 30

Encryption for Off-Site Backups

If you're syncing backups off-site (and you should be), encryption is non-negotiable. PBS supports client-side encryption with AES-256-GCM. The key never leaves your PVE node.

Generate an encryption key on the PVE side:

proxmox-backup-client key create /etc/pve/priv/pbs-encryption-key.json

It will prompt for a password to protect the key file. Store this password somewhere safe — a password manager, a printed copy in a fire safe, whatever works for your threat model. If you lose this key and password, encrypted backups are irrecoverable. Full stop.

Add the key when configuring PBS storage in PVE:

Datacenter → Storage → pbs-main → Edit → Encryption Key: upload or paste the key.

After this, all new backups to that storage will be encrypted before leaving the PVE node. Existing unencrypted backups remain as-is — you'll want to run a new full backup to create encrypted copies.

Key Backup Strategy

I cannot stress this enough: back up your encryption key separately from your backups. I keep copies in:

  1. The PVE node itself (/etc/pve/priv/)
  2. A USB drive in a fire safe
  3. My password manager (the key JSON contents)
  4. A printed paper copy of the key and password

Paranoid? Maybe. But I've seen people lose encryption keys, and there is absolutely no recovery path.
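For the printed copy, there's no need to transcribe the JSON by hand. This sketch assumes the key paperkey subcommand available in recent proxmox-backup-client versions, which renders the key as printable text with QR codes:

```shell
# Render the encryption key as a printable "paper key".
# Assumes the key paperkey subcommand of recent client versions.
proxmox-backup-client key paperkey /etc/pve/priv/pbs-encryption-key.json \
  --output-format text
```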

Sync Jobs: The 3-2-1 Rule

The 3-2-1 backup rule says: 3 copies, 2 different media, 1 off-site. PBS-to-PBS sync jobs make the off-site part straightforward.

Let's say you have a second PBS at a remote site (could be a friend's house, a colo, a VPS with enough storage). On your local PBS, set up a sync job:

Datastore → main-store → Sync Jobs → Add

  • Remote: Add the remote PBS first under Configuration → Remotes
  • Remote Datastore: offsite-store
  • Schedule: 06:00 (after prune and GC complete)
  • Remove vanished: Yes (keeps remote in sync with local pruning)

The sync only transfers chunks that don't already exist on the remote side, so after the initial sync, daily transfers are small. Over a typical residential upload connection (10-20 Mbps), I've found nightly syncs for a small homelab complete well within a few hours.

proxmox-backup-manager remote create offsite \
  --host 203.0.113.50 \
  --port 8007 \
  --auth-id sync@pbs \
  --password 'SyncUserPasswordHere' \
  --fingerprint "CF:5A:..."

proxmox-backup-manager sync-job create offsite-sync \
  --store main-store \
  --remote offsite \
  --remote-store offsite-store \
  --schedule "06:00" \
  --remove-vanished true

Restore Testing

Backups you've never tested restoring are not backups. They're hopes. I schedule a manual restore test quarterly — pick a random VM, restore it to a temporary ID, verify it boots and the application works, then delete it.

Restoring from PBS in PVE:

  1. Go to Storage → pbs-main → Backups
  2. Select the VM backup snapshot you want
  3. Click Restore
  4. Target VM ID: use a temporary ID (9000+)
  5. Storage: pick your local storage
  6. Start the restored VM and verify

The restore speed from PBS is noticeably faster than from vzdump files, especially for large VMs. A 100GB VM restores in about 4-5 minutes from a local PBS over a 10Gbit link. Not bad at all.

For automated restore testing, you can script it:

#!/bin/bash
# restore-test.sh - Restore a VM for testing, then clean up.
# Assumes the guest runs qemu-guest-agent, and that booting a clone of
# VM 101 won't conflict with the original (watch for duplicate IPs and
# MACs - keep the test VM off the production bridge if in doubt).
VMID=101
TEMP_VMID=9901
STORAGE="pbs-main"
NODE="pve1"

# Find the latest backup of the VM on the PBS storage
BACKUP=$(pvesh get /nodes/$NODE/storage/$STORAGE/content \
  --vmid $VMID --output-format json | \
  jq -r 'sort_by(.ctime) | last | .volid')

echo "Restoring $BACKUP to VM $TEMP_VMID..."
qmrestore "$BACKUP" $TEMP_VMID --storage local-zfs

# Start and give the guest time to boot
qm start $TEMP_VMID
sleep 60

# Health check via the guest agent; the guest command's exit code is in
# the JSON output, not in qm's own exit status
STATUS=$(qm guest exec $TEMP_VMID -- systemctl is-system-running \
  | jq -r '.exitcode')

# Cleanup
qm stop $TEMP_VMID
qm destroy $TEMP_VMID --purge

if [ "$STATUS" = "0" ]; then
  echo "Restore test PASSED for VM $VMID"
else
  echo "Restore test FAILED for VM $VMID" | mail -s "Backup Alert" admin@yourdomain.com
fi

Monitoring PBS Health

A few things to keep an eye on:

Datastore usage. Set up email notifications in PBS under Configuration → Notifications. You want alerts when usage crosses 70% and 85%.
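As a backstop to the built-in notifications, a cron-able usage check is a few lines of shell. The mount point, threshold, and the idea of mailing the output are all assumptions to adjust:

```shell
# Alert when datastore usage crosses a threshold. The mount point and
# threshold are assumptions - point them at your own datastore.
check_datastore_usage() {
  # args: used_percent threshold_percent
  if [ "$1" -ge "$2" ]; then
    echo "ALERT: datastore at ${1}% (threshold ${2}%)"
  else
    echo "OK: datastore at ${1}%"
  fi
}
used=$(df --output=pcent /backup-pool/pbs-store 2>/dev/null | tail -1 | tr -dc '0-9')
check_datastore_usage "${used:-0}" 85
```

Pipe the ALERT line into `mail -s` from cron and you have a second pair of eyes on disk usage.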

Verify failures. Any failed verification is a red flag. Investigate immediately — it could be a bad disk sector, memory corruption during backup, or filesystem issues.

Sync job failures. Network issues between PBS nodes will cause sync failures. Check logs at:

journalctl -u proxmox-backup-proxy -f

GC status. If garbage collection takes longer and longer each run, your chunk store may be getting fragmented. Check with:

proxmox-backup-manager garbage-collection status main-store

Final Thoughts

PBS transformed my backup workflow from "I hope this works" to "I know this works." The deduplication alone pays for the effort of setting it up — my backup storage usage dropped by about 70% compared to vzdump files. Add in incremental transfers, verification, and sync jobs, and you've got a backup system that you can actually trust.

The one thing I'd emphasize: don't skip the verification jobs and don't skip restore testing. A backup system that's never been tested is just a very elaborate way to waste disk space. Set up verification, schedule quarterly restore tests, and sleep better at night.

Start with a single datastore and daily backups. Get comfortable with the workflow, monitor your dedup ratios and storage consumption for a few weeks, then layer on encryption, sync jobs, and more aggressive retention policies as your confidence grows.

Written by Proxmox Pulse
Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.