Proxmox Backup Server 4.2 S3 Storage Backend Setup
Configure PBS 4.2's S3 storage backend to sync backups to Backblaze B2, Wasabi, or Cloudflare R2. Covers sync jobs, retention propagation, encryption, and cost per TB.
Proxmox Backup Server 4.2 adds native S3-compatible object storage as a remote sync target, eliminating the need for a second PBS instance just to get backups off-site. Configure a remote once, point it at Backblaze B2, Wasabi, Cloudflare R2, or your own MinIO, and PBS handles chunk-level sync with full deduplication awareness. By the end of this guide you will have your backups replicating to object storage on a schedule with retention enforced at the S3 end, no extra hardware required.
Key Takeaways
- New remote type: PBS 4.2 introduces a native S3 backend under Remotes — no relay server needed
- Chunk-aware sync: Only new 4 MB chunks transfer after the first run; unchanged data never re-uploads
- Any S3-compatible endpoint: Backblaze B2, Wasabi, Cloudflare R2, MinIO, and AWS S3 all work with the same config
- Storage overhead: Budget 15-25% more S3 usage than your local datastore due to manifests and index metadata
- Cost: 1 TB of offsite retention runs $6-7/month on B2 or Wasabi; R2 charges zero egress fees
S3 Sync vs Running a Second PBS Instance
Before PBS 4.2, the standard offsite approach was pulling backups to a second PBS node via the built-in replication protocol. That works well — but it means a second machine, potentially a second enterprise subscription, and another piece of infrastructure to patch and monitor.
S3 sync trades hardware cost for a monthly per-GB fee. Whether that is the right tradeoff depends on your datastore size:
| | Second PBS Node | S3 Sync (PBS 4.2) |
|---|---|---|
| Hardware cost | $150–500+ one-time | $0 |
| Monthly operating cost | Electricity (~$10–15) | $6–7/TB |
| Restore speed | Full PBS API, fast | Pull chunks from S3 first |
| Deduplication awareness | Full (native) | Chunk-level (native in 4.2) |
| Disaster recovery | Needs second machine up | S3 is always available |
| Break-even point | ~2–3 TB stored | Under 2 TB stored |
For homelab setups under 2 TB — and any small-business scenario where the second PBS machine sits idle most of the time — S3 sync is cheaper and simpler. Above 2 TB, the monthly S3 cost starts approaching the electricity cost of a dedicated machine.
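The break-even comparison above can be sketched as a quick shell calculation. The wattage and electricity price below are illustrative assumptions, not measurements — plug in your own figures:

```shell
# Compare monthly S3 cost against a second PBS node's electricity.
# All figures are example assumptions; adjust for your setup.
TB=2                  # datastore size in TB
S3_RATE=6.50          # $/TB/month, midpoint of B2/Wasabi pricing
WATTS=100             # assumed average draw of a small second node
KWH_PRICE=0.15        # assumed $/kWh

s3_monthly=$(awk -v tb="$TB" -v r="$S3_RATE" 'BEGIN { printf "%.2f", tb * r }')
node_monthly=$(awk -v w="$WATTS" -v p="$KWH_PRICE" \
  'BEGIN { printf "%.2f", w / 1000 * 24 * 30 * p }')

echo "S3 sync:    \$${s3_monthly}/month"
echo "Second PBS: \$${node_monthly}/month (electricity only, hardware excluded)"
```

At these example rates the second node's electricity alone is cheaper above 2 TB, which is where the hardware purchase starts paying for itself.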
If you are setting up PBS for the first time, the Automated Backups with Proxmox Backup Server guide covers the datastore and backup job fundamentals before you layer on S3 sync.
Prerequisites
You will need:
- PBS 4.2 or later — run `proxmox-backup-manager version` to check; if on 4.1 or earlier, upgrade first with `apt update && apt full-upgrade`
- An account with Backblaze B2, Wasabi, Cloudflare R2, MinIO, or AWS S3
- A bucket created in your chosen provider (covered below)
- Network access from your PBS host to the S3 endpoint — no NAT hairpins, no intercepting proxies without `HTTP_PROXY` configured
PBS 4.2 requires Debian 13 Trixie as the base OS. If your PBS runs on Bookworm (PBS 3.x), the upgrade path requires a full OS upgrade before you can reach PBS 4.2.
Creating a Bucket and Access Keys
Backblaze B2
B2 is the most common homelab choice at $6/TB/month with no minimum storage term and free egress up to 3x your average stored data per month.
- Log into your B2 account and go to Buckets → Create a Bucket
- Name the bucket (e.g., `pbs-offsite-2026`) — this string appears in your endpoint URL
- Set Files in Bucket to Private
- Go to App Keys → Add a New Application Key
- Scope it to your bucket, enable Read and Write, and save the `keyID` and `applicationKey`
B2's S3-compatible endpoint format — find your exact region on the bucket detail page:
https://s3.us-west-004.backblazeb2.com
Cloudflare R2
R2 charges zero egress fees, making it the right pick if you do frequent restores from S3. The first 10 GB of storage per month is free.
- In the Cloudflare dashboard go to R2 → Create bucket
- Choose a location hint near your PBS host
- Go to R2 → Manage R2 API Tokens → Create API Token, grant Object Read and Write scoped to your bucket
- Note your Account ID from the R2 overview page
R2 endpoint format — substitute your 32-character account ID:
https://<ACCOUNT_ID>.r2.cloudflarestorage.com
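As a sanity check, you can assemble the endpoint in the shell before pasting it into PBS — the account ID below is a placeholder, not a real value:

```shell
# Build the R2 endpoint from your account ID (placeholder shown here).
ACCOUNT_ID="0123456789abcdef0123456789abcdef"   # hypothetical 32-char ID
R2_ENDPOINT="https://${ACCOUNT_ID}.r2.cloudflarestorage.com"

# R2 account IDs are 32 hex characters; a wrong length means you copied
# the wrong value from the dashboard.
echo "Account ID length: ${#ACCOUNT_ID}"
echo "$R2_ENDPOINT"
```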
Wasabi
Wasabi matches B2 at $6.99/TB/month but enforces a 90-day minimum storage policy. Deleting objects stored less than 90 days still bills for the full 90 days. Do not use Wasabi with prune schedules shorter than 90 days or you will pay for data you no longer hold.
Wasabi regional endpoint format:
https://s3.us-east-1.wasabisys.com
How to Add an S3 Remote in PBS 4.2
Via the Web UI
- Open the PBS web UI at `https://<pbs-ip>:8007`
- Navigate to Configuration → Remotes
- Click Add and select Type: S3
- Fill in the fields:
  - ID: a short name (e.g., `b2-offsite`)
  - Endpoint: your full S3 endpoint URL
  - Bucket: your bucket name
  - Region: the bucket's region string
  - Access Key: your `keyID`
  - Secret Key: your `applicationKey`
- Click Test Connection — PBS runs a list operation against the bucket and surfaces auth errors immediately before you save
Via CLI
proxmox-backup-manager remote create b2-offsite \
--type s3 \
--endpoint "https://s3.us-west-004.backblazeb2.com" \
--bucket "pbs-offsite-2026" \
--region "us-west-004" \
--access-key "your-keyID-here" \
--secret-key "your-applicationKey-here"
Verify the remote saved correctly:
proxmox-backup-manager remote list
Expected output:
┌─────────────┬──────┬───────────────────────────────────────────────┐
│ name │ type │ endpoint │
╞═════════════╪══════╪═══════════════════════════════════════════════╡
│ b2-offsite │ s3 │ https://s3.us-west-004.backblazeb2.com │
└─────────────┴──────┴───────────────────────────────────────────────┘
Setting Up Sync Jobs and Retention
Creating the Sync Job
Via the web UI:
- Go to Datastore → <your-datastore> → Sync Jobs
- Click Add Sync Job
- Set Remote to your S3 remote
- Remote Store: the namespace path to use in the S3 bucket — use your datastore name (e.g., `backups`)
- Schedule: `daily` or a calendar event like `02:00` for 2 AM daily (PBS schedules use systemd calendar-event syntax, not cron expressions)
- Remove Vanished: enable this — it deletes S3 objects for snapshots that have been pruned locally
Via CLI:
proxmox-backup-manager sync-job create daily-s3-sync \
--store backups \
--remote b2-offsite \
--remote-store backups \
--schedule "02:00" \
--remove-vanished true
Trigger the first sync manually before relying on the schedule:
proxmox-backup-manager sync-job run daily-s3-sync
Watch progress under Administration → Task History. For a 500 GB datastore over a 100 Mbit/s uplink, expect the initial upload to take roughly 11–12 hours at full line rate (100 Mbit/s moves about 45 GB per hour). Incremental syncs after that transfer only new chunks — typically 2–8 GB/day for a homelab with 3–4 VMs running daily backups.
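A rough way to estimate your own initial-sync window — decimal units, full line rate, no allowance for protocol overhead or throttling:

```shell
# Initial-sync time estimate: size_GB * 8000 gives megabits,
# divide by uplink Mbit/s for seconds, then by 3600 for hours.
GB=500      # datastore size (example)
MBIT=100    # uplink speed in Mbit/s (example)

hours=$(awk -v gb="$GB" -v mb="$MBIT" \
  'BEGIN { printf "%.1f", gb * 8000 / mb / 3600 }')
echo "~${hours} hours at full line rate"
```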
How Retention Propagates to S3
PBS does not enforce retention directly on S3. Pruning happens locally first, then the remove-vanished flag propagates deletions on the next sync cycle. The sequence:
- Local prune job runs and removes old snapshots from the local datastore
- Next sync job runs with `remove-vanished: true`
- PBS compares the S3 object list against local state and deletes orphaned chunk files
Your S3 bucket will lag local retention by up to one sync cycle — 24 hours at a daily schedule. On Wasabi specifically: a snapshot pruned at day 89 still triggers the full 90-day minimum charge because Wasabi sees an early deletion. Set your Wasabi prune schedule to keep at least 90 days minimum.
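To see what an early prune actually costs on Wasabi, here is a back-of-envelope sketch — the 200 GB and 60-day figures are illustrative, not from the source:

```shell
# Wasabi bills deleted objects up to its 90-day minimum.
# Example: 200 GB pruned after 60 days still accrues 30 more
# days of storage charges as an early-delete fee.
GB=200            # data pruned early (example)
DAYS_STORED=60    # age at deletion (example)
MIN_DAYS=90       # Wasabi minimum storage term
RATE=6.99         # $/TB/month

penalty=$(awk -v gb="$GB" -v d="$DAYS_STORED" -v m="$MIN_DAYS" -v r="$RATE" \
  'BEGIN { printf "%.2f", gb / 1000 * r * (m - d) / 30 }')
echo "Early-delete charge: \$${penalty}"
```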
What PBS Actually Uploads to S3
This is where the most common misconception lives. PBS stores backups as content-addressed 4 MB chunk files. When syncing to S3, it uploads:
- Chunk files (`.blob`) — deduplicated backup data, already compressed
- Snapshot manifests (`.manifest`) — links each snapshot to its chunk list
- Index files (`.fidx`, `.didx`) — mapping tables for file-level and block-device backups
It does not sync the task log or the searchable catalog. This means you can restore data from S3 on a fresh PBS instance by registering the S3 remote as a source datastore, but you will need to rebuild the catalog afterward:
proxmox-backup-client catalog rebuild --repository <user>@<pbs-host>:backups
The 15–25% storage overhead estimate comes from these metadata files. A local datastore holding 800 GB of deduplicated backup data will occupy roughly 920 GB to 1 TB on S3.
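Applying that overhead band to your own datastore size is a one-liner:

```shell
# Project S3 usage from local datastore size using the
# 15-25% metadata overhead band described above.
LOCAL_GB=800   # local deduplicated datastore size (example)

low=$(awk -v g="$LOCAL_GB" 'BEGIN { printf "%.0f", g * 1.15 }')
high=$(awk -v g="$LOCAL_GB" 'BEGIN { printf "%.0f", g * 1.25 }')
echo "Expect roughly ${low}-${high} GB on S3"
```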
Provider Cost Comparison
| Provider | Storage/TB/month | Egress | Min. term | Best for |
|---|---|---|---|---|
| Backblaze B2 | $6.00 | Free (up to 3x stored/month) | None | Low-restore-frequency homelabs |
| Wasabi | $6.99 | Free | 90 days | Long-retention cold storage |
| Cloudflare R2 | $15.00 | Free | None | Frequent restores, zero surprise bills |
| MinIO (self-hosted) | Hardware only | Free | None | Air-gapped or LAN backup copies |
| AWS S3 Standard | $23.00 | $0.09/GB | None | Avoid for homelab-scale volumes |
For a homelab with 2 TB stored, daily incrementals, and 30-day retention: B2 costs ~$12/month, Wasabi ~$14/month, R2 ~$30/month. R2's zero-egress advantage only makes financial sense if you are pulling hundreds of GB in restores per month. Most homelabs run few enough restores that B2 wins on total cost.
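The same arithmetic works for any datastore size — the rates below are the per-TB figures from the table, and R2's 10 GB free tier is ignored for simplicity:

```shell
# Monthly storage cost at a given size, per provider.
TB=2   # stored data in TB (example)

for entry in "B2:6.00" "Wasabi:6.99" "R2:15.00"; do
  name=${entry%%:*}   # provider label
  rate=${entry##*:}   # $/TB/month
  cost=$(awk -v t="$TB" -v r="$rate" 'BEGIN { printf "%.2f", t * r }')
  echo "${name}: \$${cost}/month"
done
```

Egress charges are excluded; for B2 and R2 restores within the free allowance they stay at zero anyway.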
Securing Credentials and Encrypting the Datastore
PBS stores S3 credentials in /etc/proxmox-backup/remotes.cfg, readable only by root. For most setups that is sufficient. If your PBS runs as a VM on a shared Proxmox host with other administrators, enabling datastore-level encryption means data pushed to S3 is client-side encrypted before it leaves your network — your S3 provider stores only ciphertext.
Enable encryption in PBS: Datastore → <name> → Encryption → Generate Key. Store the key file somewhere other than the PBS host itself — a password manager or a dedicated secrets store. Lose the key and your S3 backups become permanently unrecoverable, so treat it with the same discipline as your SSH private keys.
The Build a Private Cloud at Home with Proxmox VE guide covers the broader architecture behind layered backup strategies on Proxmox, including where PBS fits alongside Proxmox's native VM snapshot tooling. For the host-level hardening that protects PBS credentials in the first place, the Hardening Proxmox VE: Firewall, fail2ban, and SSH Security guide is the companion read.
Troubleshooting Common Sync Errors
AuthorizationHeaderMalformed
The region string does not match what the provider expects. B2 regions look like us-west-004, not us-west-1. Copy the exact region string directly from the bucket detail page in the B2 console.
403 Forbidden on test connection
Your API key lacks list permission on the bucket. For B2, verify the application key includes listBuckets, readFiles, and writeFiles capabilities. For R2, confirm the API token is scoped to the correct bucket.
Sync shows 0 bytes transferred
Normal if no new backups have been created since the last sync. PBS only transfers chunks absent from the remote. Confirm backups are running:
proxmox-backup-client snapshots --repository <user>@<pbs-host>:backups
Initial sync is stuck at very low throughput
Check for a rate limit under Configuration → Bandwidth Limits in the PBS web UI. Also check whether B2's free API tier cap (2,500 Class B operations per day) has been hit — the client applies exponential backoff when rate-limited. For large datastores, upgrade to a paid B2 API tier or spread the initial sync over several days using bandwidth throttling.
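When picking a throttle rate to spread the initial sync over several days, it helps to estimate the duration at a candidate cap first — the 2 MB/s figure below is just an example:

```shell
# How long a throttled initial sync takes:
# size in MB divided by the rate cap gives seconds, then days.
GB=500     # datastore size (example)
MBPS=2     # candidate rate limit in MB/s (example)

days=$(awk -v gb="$GB" -v m="$MBPS" \
  'BEGIN { printf "%.1f", gb * 1000 / m / 86400 }')
echo "~${days} days at ${MBPS} MB/s"
```

If the result is longer than your backup retention window, raise the cap or the initial sync will never catch up with new chunks.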
Conclusion
PBS 4.2's S3 backend turns any S3-compatible bucket into a fault-tolerant off-site backup copy for $6–7/month per TB with no additional hardware. Set up the remote, create a daily sync job with remove-vanished enabled, and your local prune policy propagates to S3 automatically. The step most people skip: do a test restore from S3 before you need it — register your S3 bucket as a source on a temporary PBS instance or spare VM, browse the snapshots, and pull one back. That is the only confirmation that your offsite copy actually works.