Proxmox Backup Server: Replicate Backups Offsite

Learn how to replicate Proxmox Backup Server backups to a remote PBS instance or S3-compatible storage for true offsite protection and disaster recovery.


Storing your Proxmox backups only on the same node — or even on a separate local NAS — is not a backup strategy. It's a single point of failure waiting to embarrass you. A house fire, a NIC that fries your ZFS pool, a ransomware payload that encrypts everything on the LAN: any of these turns your local-only backups into worthless data. The 3-2-1 rule exists for a reason: three copies, two media types, one offsite.

Proxmox Backup Server has first-class support for offsite replication through its Sync Jobs feature. You can push backup data to a remote PBS instance, or pull from one. Combined with PBS's built-in deduplication and encryption, offsite replication is both bandwidth-efficient and secure. This guide walks you through every step — from setting up trust between PBS nodes to scheduling automated sync jobs and verifying your offsite copies.

Why PBS Sync Jobs Beat Manual Rsync

Before PBS had sync jobs, the common approach was rsync over SSH or periodic zfs send to a remote host. Both work, but neither understands PBS's chunk-based datastore format.

PBS stores backups as deduplicated chunks (fixed-size for block-level VM backups, dynamically sized for file-level backups) referenced by a manifest. If you rsync the raw datastore directory, you copy everything — including chunks that may already exist on the remote side. You also bypass PBS's integrity checking, so you might replicate corrupted data without knowing.

PBS sync jobs are chunk-aware. They:

  • Only transfer chunks that don't already exist on the remote datastore
  • Verify chunk integrity during transfer
  • Preserve snapshot metadata and ownership
  • Support encryption (the remote side never sees plaintext if client-side encryption is used)
  • Run incrementally, so bandwidth usage drops sharply after the initial sync

The result: after your first full sync, ongoing jobs typically transfer only a fraction of the data.
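The content-addressed idea behind this can be sketched in a few lines of shell: chunks are stored under their own SHA-256 digest, so identical data is only ever written once. This is a toy illustration, not PBS's actual on-disk layout:

```shell
#!/bin/sh
# Toy content-addressed chunk store: each chunk is saved under its
# SHA-256 digest, so re-adding identical data writes nothing new.
# Illustration only; PBS's real datastore layout differs.
store=$(mktemp -d)

put_chunk() {
  digest=$(printf '%s' "$1" | sha256sum | cut -d' ' -f1)
  # Skip the write if a chunk with this digest already exists
  [ -e "$store/$digest" ] || printf '%s' "$1" > "$store/$digest"
}

put_chunk "chunk A"
put_chunk "chunk B"
put_chunk "chunk A"   # duplicate: no new file is created

unique=$(ls "$store" | wc -l)
echo "unique chunks stored: $unique"
```

This is why a re-sync of mostly-unchanged VMs transfers so little: the remote side already holds most of the digests, and only genuinely new chunks cross the wire.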

Understanding the Two Sync Directions

PBS sync jobs can run in two modes:

Pull mode — The local PBS instance connects to a remote PBS and pulls backup data in. This is the most common setup. Your offsite PBS periodically reaches out to your local PBS and downloads new snapshots.

Push mode — Available since PBS 3.3, push mode lets your local PBS send backups directly to a remote. This is useful when your local node initiates all outbound connections (common in NAT or firewalled environments).

For most homelab setups, pull mode is simpler: you configure the offsite PBS with credentials to read your local PBS, and it handles the rest.

Prerequisites

You'll need:

  • A local PBS instance with at least one datastore containing backups
  • A remote PBS instance — this can be a VPS, a friend's homelab, a second location you control, or a cloud VM
  • Network connectivity between the two (direct, VPN, or WireGuard tunnel)
  • PBS version 2.4 or later on both sides for best compatibility (3.x recommended)

If you don't have a remote PBS yet, a cheap VPS with 500GB–1TB storage works well. Debian 12 with PBS installed takes about 15 minutes to set up.

Step 1: Install PBS on the Remote Node

If your remote server is running Debian 12, add the PBS repository and install:

# Add Proxmox repository key
curl -o /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg \
  https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg

# Add no-subscription repo
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" \
  > /etc/apt/sources.list.d/pbs-no-subscription.list

# Install PBS
apt update && apt install -y proxmox-backup-server

After installation, access the PBS web UI at https://<remote-ip>:8007 and log in as root@pam.

Step 2: Create a Datastore on the Remote PBS

Before syncing, you need a datastore on the remote PBS to receive backups.

  1. In the remote PBS web UI, go to Datastore → Add Datastore
  2. Give it a name (e.g., offsite-replica)
  3. Select or create a directory path (e.g., /mnt/backup-disk/offsite-replica)
  4. Set garbage collection schedule — daily is fine
  5. Click Add

Make sure the underlying disk has enough space. PBS datastores benefit from thin-provisioned storage since deduplication keeps actual usage below the nominal backup size.
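Behind the scenes, the wizard writes an entry to /etc/proxmox-backup/datastore.cfg. Assuming the names used above, it looks roughly like this (my reading of PBS's section-config format; verify against the file on your own node):

```
datastore: offsite-replica
	path /mnt/backup-disk/offsite-replica
	gc-schedule daily
```

Knowing where this lives is handy when you manage several datastores or want to version-control the config.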

Step 3: Create a Remote User on the Local PBS

For pull-mode sync, the remote PBS needs read access to your local PBS datastore. Create a dedicated user with minimal permissions.

On your local PBS:

# Create sync user
proxmox-backup-manager user create sync-remote@pbs \
  --comment "Remote PBS sync user"

# Set a strong password
proxmox-backup-manager user update sync-remote@pbs \
  --password 'YourStrongPasswordHere'

Then assign read-only permissions on the datastore:

# Grant DatastoreReader role on your local datastore
proxmox-backup-manager acl update /datastore/your-local-datastore \
  DatastoreReader \
  --auth-id sync-remote@pbs

Verify the ACL was applied:

proxmox-backup-manager acl list

You should see an entry showing sync-remote@pbs with DatastoreReader on /datastore/your-local-datastore.

Step 4: Add the Local PBS as a Remote on the Offsite Node

Now configure the remote PBS to know about your local PBS. In the remote PBS web UI:

  1. Go to Configuration → Remotes → Add
  2. Fill in the details:
    • ID: homelab-local (or any name)
    • Hostname: Your local PBS IP or hostname
    • Port: 8007 (default)
    • Username: sync-remote@pbs
    • Password: The password you set above
  3. Click Fetch Fingerprint — PBS will connect and retrieve the TLS certificate fingerprint
  4. Verify the fingerprint matches what you see on your local PBS under Configuration → Certificates
  5. Click Add

The fingerprint verification step is important. It prevents man-in-the-middle attacks on the sync connection.

You can also add a remote via CLI on the remote PBS:

proxmox-backup-manager remote add homelab-local \
  --host 192.168.1.10 \
  --userid sync-remote@pbs \
  --password 'YourStrongPasswordHere' \
  --fingerprint AA:BB:CC:... \
  --port 8007

Step 5: Create a Sync Job

With the remote configured, create the sync job on the remote PBS that will pull backups from your local PBS.

In the remote PBS web UI:

  1. Go to Datastore → offsite-replica → Sync Jobs → Add
  2. Configure the sync job:
    • ID: pull-from-homelab
    • Remote: homelab-local
    • Remote Store: your-local-datastore (the datastore on your local PBS)
    • Local Store: offsite-replica
    • Schedule: e.g., daily or 0 3 * * * (3 AM daily)
    • Remove Vanished: Enable this to delete snapshots from the remote that were pruned locally
    • Transfer Last: Leave blank to sync everything, or set a number to transfer only the N most recent snapshots per group
  3. Click Add

The Remove Vanished option deserves attention. When enabled, snapshots deleted from your local PBS (via prune jobs) are also removed from the offsite copy after the next sync. This prevents unbounded growth on the remote side. Disable it if you want the offsite node to retain everything regardless of local pruning.
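For reference, the equivalent entry in /etc/proxmox-backup/sync.cfg on the offsite node looks roughly like this (field names are my reading of PBS's section-config format; check them against your own file):

```
sync: pull-from-homelab
	remote homelab-local
	remote-store your-local-datastore
	store offsite-replica
	schedule daily
	remove-vanished true
```

Having the job in a plain config file also means you can diff or back it up alongside the rest of your infrastructure config.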

Step 6: Run the First Sync and Monitor

Trigger the first sync manually to verify everything works before relying on the schedule:

# On the remote PBS, run the sync job immediately
proxmox-backup-manager sync-job run pull-from-homelab

Or click Run Now in the web UI.

Watch the task log in real time:

  1. Go to Administration → Tasks
  2. Find the sync job task and click the log icon

A successful first sync looks like:

INFO: sync remote 'homelab-local' datastore 'your-local-datastore'
INFO: processing group 'vm/100'
INFO: snapshot vm/100/2026-03-27T02:00:00Z already exists
INFO: snapshot vm/100/2026-03-28T02:00:00Z: adding
INFO: transferred 4.2 GiB of chunks (12.8 GiB total, 67.2% deduplication)
INFO: sync job finished successfully

The deduplication percentage will be low on the first run and climb sharply on subsequent runs as the chunk store builds up.
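The arithmetic behind that percentage is worth internalizing: deduplication is the share of referenced data that did not need to be sent. Plugging in the (rounded) numbers from the sample log:

```shell
#!/bin/sh
# Dedup percentage = (total referenced - actually transferred) / total.
# Values in tenths of a GiB, taken from the sample log: 4.2 of 12.8 GiB sent.
transferred=42
total=128
saved=$(( (total - transferred) * 100 / total ))
echo "deduplication: ~${saved}%"   # ~67%, matching the log line
```

So two-thirds of the snapshot's chunks were already present offsite, even on a fairly early sync.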

Step 7: Verify Backup Integrity on the Remote

Syncing data is only half the job. You need to verify the remote copies are actually intact. PBS has a built-in verify function:

# Verify all snapshots in the offsite datastore
proxmox-backup-manager verify offsite-replica

Or schedule a verification job in the web UI:

  1. Go to Datastore → offsite-replica → Verify Jobs → Add
  2. Set a schedule (weekly is reasonable)
  3. Enable Ignore Verified Snapshots to skip snapshots that were verified recently

Verification reads every chunk, checks its SHA-256 hash, and reports any corruption. Run it at least monthly.

Syncing to S3-Compatible Storage

If you don't have a second PBS node, you can replicate backups to an S3-compatible object store (Backblaze B2, Wasabi, Cloudflare R2, or self-hosted MinIO). PBS doesn't natively support S3 as a sync target, but you can mount an S3 bucket as a local filesystem using rclone mount or s3fs, then point a PBS datastore at it.

Mount Backblaze B2 with rclone

# Install rclone
apt install -y rclone fuse3

# Configure B2 credentials
# (follow prompts: New remote → b2 → enter account ID and app key)
rclone config

# Create mount point
mkdir -p /mnt/b2-backup

# Mount (run as a systemd service in production)
rclone mount b2:your-bucket-name /mnt/b2-backup \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --allow-non-empty \
  --daemon

Then create a PBS datastore pointing to /mnt/b2-backup/pbs-offsite. PBS will treat it like local storage, but writes flow to B2.

Important caveats for S3-backed datastores:

  • Performance is slower than local disk, especially for reads during restore
  • VFS cache is critical — without it, PBS's random-access chunk writes will be extremely slow
  • Egress costs apply when restoring — keep this in mind for cost planning
  • Stick to sync-then-forget: write backups, don't run garbage collection against S3 frequently

Create a Systemd Unit for the rclone Mount

# /etc/systemd/system/rclone-b2.service
[Unit]
Description=rclone B2 mount for PBS offsite
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount b2:your-bucket-name /mnt/b2-backup \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --allow-non-empty
ExecStop=/bin/fusermount -u /mnt/b2-backup
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start the mount:

systemctl enable --now rclone-b2

Setting Up Client-Side Encryption Before Syncing

If you're syncing to a remote you don't fully control — a VPS, a friend's server, or cloud storage — encrypt your backups before they leave your network. PBS supports client-side encryption using AES-256.

Encryption is configured when the backup is created, not at sync time. If your backups are already encrypted on the local PBS, they'll arrive at the remote in encrypted form — the remote never has access to the key.

To enable encryption on a Proxmox VE node:

# Generate an encryption key
proxmox-backup-client key create /etc/pbs-encryption.key

# Optionally protect the key file with a passphrase
proxmox-backup-client key change-passphrase /etc/pbs-encryption.key

In the Proxmox VE web UI, go to Datacenter → Storage → your PBS storage → Edit and set the encryption key path.

Store your encryption key somewhere completely separate from your PBS infrastructure — a password manager, a printed copy in a fireproof safe, or a hardware security key. Losing the key means losing your backups permanently.
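One low-tech way to keep an offsite copy of the key itself is a passphrase-protected file you can stash anywhere. This sketch uses openssl with an inline demo passphrase and a temp file purely for illustration; in real use, point it at your actual key path and enter the passphrase interactively:

```shell
#!/bin/sh
# Demo: round-trip a key file through passphrase-based encryption.
# "demo-passphrase" and the temp file are placeholders for illustration.
keyfile=$(mktemp)
echo "example-key-material" > "$keyfile"

# Encrypt a copy you can store outside your PBS infrastructure
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass pass:demo-passphrase -in "$keyfile" -out "$keyfile.enc"

# Prove the protected copy decrypts back to the original
decrypted=$(openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:demo-passphrase -in "$keyfile.enc")
echo "$decrypted"
```

The encrypted copy is safe to park in cloud storage or email to yourself; only the passphrase needs to stay in your head or password manager.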

Monitoring Sync Job Health

Don't just set up sync jobs and forget them. A sync job that silently fails for three months is worse than no sync job at all.

Check via CLI

# List all sync jobs and their last status
proxmox-backup-manager sync-job list

Output includes the last run time and status. Any status other than OK warrants investigation.

Email Notifications

Configure PBS to email you on task failures:

  1. In PBS web UI, go to Configuration → Notifications
  2. Add an SMTP endpoint (Gmail, Fastmail, your mail server, etc.)
  3. Create a notification matcher: Match on task type sync, condition failed
  4. Route to your email endpoint

Now any failed sync job triggers an immediate email. This is the minimum viable monitoring for a homelab offsite backup setup.
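As a belt-and-braces addition, a small cron script can scan the job list for failures. The JSON below stands in for the output of `proxmox-backup-manager sync-job list --output-format json`; the `last-run-state` field name is an assumption for illustration, so check your own CLI output before relying on it:

```shell
#!/bin/sh
# Sketch of a sync-job failure check. $json is sample data standing in
# for real CLI output; the "last-run-state" field name is an assumption.
json='[{"id":"pull-from-homelab","last-run-state":"OK"},
{"id":"pull-b2","last-run-state":"ERROR"}]'

# Count entries whose last run did not end OK
failed=$(printf '%s\n' "$json" | tr ',' '\n' | grep -c '"last-run-state":"ERROR"')
echo "failed sync jobs: $failed"
# A cron wrapper would mail or page you whenever $failed is nonzero
```

Crude as it is, a check like this catches the "silently failing for three months" scenario even if email delivery from PBS itself breaks.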

Prometheus Integration

If you're running Grafana and Prometheus (which you should be if you're serious about homelab monitoring), PBS exposes metrics at:

https://<pbs-host>:8007/metrics

Add it as a scrape target in your prometheus.yml:

scrape_configs:
  - job_name: 'pbs-offsite'
    scheme: https
    tls_config:
      insecure_skip_verify: true  # Or add the PBS cert to your CA bundle
    static_configs:
      - targets: ['remote-pbs-host:8007']
    metrics_path: /metrics
    basic_auth:
      username: 'admin@pbs'
      password: 'your-password'

Metrics include datastore usage, task counts, and GC statistics — enough to build a meaningful dashboard.
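At minimum, alert when the scrape itself stops working. The standard Prometheus up metric needs no PBS-specific metric names, so this rule is safe to drop in as-is (a sketch; adjust the job label to match your scrape config):

```yaml
# Prometheus alerting rule: fire if the offsite PBS stops answering scrapes
groups:
  - name: pbs-offsite
    rules:
      - alert: PBSOffsiteUnreachable
        expr: up{job="pbs-offsite"} == 0
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Offsite PBS has not answered scrapes for 10 minutes"
```

An unreachable offsite node usually means the VPN tunnel, the host, or the PBS service is down, and all three break your replication.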

Pruning Strategy for Offsite Backups

Your offsite PBS doesn't need to keep the same retention schedule as your local PBS. For offsite, longer retention makes sense:

Retention   Local PBS   Offsite PBS
Last N      3           7
Daily       7           30
Weekly      4           12
Monthly     3           12

Configure prune jobs on the offsite datastore to match your desired offsite retention:

# On the remote PBS (the job ID "offsite-retention" is a name you choose)
proxmox-backup-manager prune-job create offsite-retention \
  --store offsite-replica \
  --schedule daily \
  --keep-daily 30 \
  --keep-weekly 12 \
  --keep-monthly 12

Disable Remove Vanished on the sync job if you want the offsite copy to retain snapshots longer than the local copy. Enable it if you want both sides to stay in sync with the same retention.

Testing Your Restore Process

A backup you've never restored from is an untested hypothesis. Schedule quarterly restore tests:

  1. Pick a non-critical VM from your offsite PBS
  2. Restore it to a temporary storage location
  3. Boot it and verify the data looks correct
  4. Destroy the temporary restore

From the Proxmox VE node, you can restore directly from a remote PBS:

# List snapshots on the remote PBS
proxmox-backup-client list \
  --repository sync-remote@pbs@remote-pbs-host:offsite-replica

# Restore a specific snapshot to a new VM ID
qmrestore \
  pbs:sync-remote@pbs@remote-pbs-host:offsite-replica:vm/100/2026-03-28T02:00:00Z \
  101 \
  --storage local-lvm

Document your restore procedure so you're not figuring it out under pressure during an actual incident.

Conclusion

Offsite backup replication with Proxmox Backup Server is genuinely straightforward once you understand the moving parts. The sync job system handles chunk-level deduplication automatically, which means after the first full sync your bandwidth requirements drop dramatically. A 10 Mbps upload connection can maintain an offsite replica of a moderately busy homelab without breaking a sweat.

The key things to get right: use a dedicated read-only user on your local PBS, verify the TLS fingerprint when adding the remote, enable email notifications so you know immediately if a sync fails, and actually test restoring from the offsite copy at least once. Client-side encryption before sync is strongly recommended if the remote node isn't one you physically control.

With a proper offsite strategy in place, you've moved from "I think I have backups" to "I have verified, encrypted, geographically separated backups" — which is a fundamentally different level of confidence when something eventually goes wrong.


Written by

Proxmox Pulse

Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.
