Proxmox VE 8 to 9 Upgrade: Step-by-Step Guide

Safely upgrade Proxmox VE 8 (Debian 12) to PVE 9 (Debian 13) with pre-flight checks, package source migration, cluster upgrade strategy, and rollback options.

Proxmox Pulse

Proxmox VE 9 is out, and if you've been running a stable PVE 8 cluster, the upgrade question is inevitable. The good news: Proxmox provides an official, well-tested upgrade path from PVE 8 (Debian 12 Bookworm) to PVE 9 (Debian 13 Trixie). The less good news: it's a major version jump that demands careful preparation if you want zero downtime and no surprises.

This guide walks you through every step — from pre-upgrade checks to verifying that your cluster is healthy after reboot. Follow it in order and the upgrade is uneventful. Skip steps and you're asking for trouble.

What's New in Proxmox VE 9

Before diving into the upgrade process, here's what you're gaining — and why it's worth the effort:

  • Debian 13 (Trixie) base — updated glibc, newer userspace tools, longer support lifecycle
  • Kernel 6.14 — improved hardware support, better scheduler, enhanced PCIe power management
  • QEMU 10.x — performance improvements for Windows VMs, improved virtio-gpu and USB device emulation
  • OCI-based LXC — run any Docker/OCI container image directly as an LXC container (no Docker daemon required)
  • SDN improvements — EVPN route leaking, improved DHCP integration, better multi-zone routing
  • nftables firewall — the Proxmox firewall migrates from iptables to nftables for better performance and rule management
  • Ceph Squid — updated Ceph release for hyper-converged storage setups

For homelab users, the OCI LXC feature alone is a compelling reason to upgrade. For production clusters, the kernel and QEMU updates translate to real performance improvements.

Before You Begin: Pre-Flight Checklist

Never start a major OS upgrade without completing every item on this list. Each one exists because someone, somewhere, had a bad day skipping it.

1. Take Full Backups of Everything

This is non-negotiable. Back up every VM and container before touching a single package.

# Backup all VMs and containers to Proxmox Backup Server
vzdump --all 1 --compress zstd --storage pbs-backup

# Or back up to local storage
vzdump --all 1 --compress zstd --storage local

If you're on PBS, verify the backups are accessible and not corrupted:

proxmox-backup-client list
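The client needs a repository to query. A minimal sketch, with the user, host, and datastore names as placeholders for your own setup:

```shell
# Repository string format: user@realm@host:datastore (values here are examples)
export PBS_REPOSITORY='root@pam@pbs.example.lan:backups'

# List the backup groups in that datastore
proxmox-backup-client list
```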

Do not proceed until you have confirmed backups. This is your only real rollback option.

2. Fully Update Proxmox VE 8 First

You must be on the latest PVE 8.x release before upgrading. Attempting the jump from an outdated 8.x is a known source of issues.

apt update && apt full-upgrade -y
reboot

After reboot, verify your current version:

pveversion
# Example output: pve-manager/8.4.1/...

If you're not on the latest 8.x patch release, do not continue until you are.

3. Run the Official pve8to9 Upgrade Checker

Proxmox ships an official readiness script that catches the most common upgrade blockers before you change anything.

# Included in up-to-date pve-manager 8.x packages, no separate install needed
pve8to9 --full

The checker scans for:

  • Incompatible BIOS/UEFI configurations
  • Legacy storage formats that won't work in PVE 9
  • Corosync version issues in clusters
  • Third-party repositories that may conflict with Trixie packages
  • Deprecated configuration options in Proxmox config files

Do not proceed if the checker reports any ERRORS. Warnings are acceptable if you understand the implications — errors are blockers that must be resolved first.

4. Audit Third-Party Repositories

Any non-Proxmox apt repository that doesn't have Debian 13 (Trixie) packages will cause failures during the upgrade.

# List all active repositories
find /etc/apt/sources.list.d/ -name "*.list" | xargs grep -l "^deb"
cat /etc/apt/sources.list

For each third-party repo, check whether the vendor supports Debian 13. If they don't yet, comment it out before upgrading:

# Disable a third-party repo temporarily
sed -i 's/^deb /# deb /' /etc/apt/sources.list.d/third-party.list

You can re-enable it after upgrading once the vendor publishes Trixie packages.

5. Check Cluster Health (Clusters Only)

If you're running a multi-node cluster, verify all nodes are online and the cluster is healthy before starting:

pvecm status
# All nodes should show: online
# Quorum should show as achieved

If any node is degraded or unreachable, fix that first. Upgrading with a degraded cluster risks losing quorum during the process.

Critical rule for clusters: upgrade one node at a time. Never run the upgrade on multiple nodes simultaneously.

Step 1: Switch Package Sources to Proxmox VE 9

The upgrade works by pointing apt at the Trixie (Debian 13) repositories instead of Bookworm (Debian 12). The source edits themselves are reversible, but the dist-upgrade you run against them is not, so make sure your backups are done.

Update /etc/apt/sources.list

# Back up existing sources first
cp /etc/apt/sources.list /etc/apt/sources.list.bak

# Open for editing
nano /etc/apt/sources.list

Replace the Bookworm lines with Trixie:

deb http://deb.debian.org/debian trixie main contrib
deb http://deb.debian.org/debian trixie-updates main contrib
deb http://security.debian.org/debian-security trixie-security main contrib

Update the Proxmox VE Repository

nano /etc/apt/sources.list.d/pve-install-repo.list

For the no-subscription repository, change from:

deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription

To:

deb [arch=amd64] http://download.proxmox.com/debian/pve trixie pve-no-subscription

If you have a paid subscription (enterprise repository):

nano /etc/apt/sources.list.d/pve-enterprise.list
# Change bookworm → trixie on the deb line

Update the Ceph Repository (If Applicable)

If you use Ceph, update it to the Squid release on Trixie:

nano /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-squid trixie no-subscription
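All of these bookworm to trixie edits are mechanical, so you can preview and apply them with sed. A sketch; verify the result before trusting it, and keep the .bak copies made above:

```shell
# Preview the substitution on a sample repository line
echo 'deb http://deb.debian.org/debian bookworm main contrib' \
  | sed 's/bookworm/trixie/g'

# Then apply the same edit in place to the real files:
# sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
```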

Step 2: Run the Distribution Upgrade

With sources updated, run the full upgrade. This step downloads and installs a significant number of packages — expect it to take 15–45 minutes depending on your connection speed.

apt update
apt dist-upgrade -y

During the upgrade, you may be prompted about configuration files with changes. The standard advice is:

  • apt never prompts for anything under /etc/pve/, since that tree lives on the cluster filesystem outside dpkg's control
  • For Debian configs you have customized (e.g. /etc/default/grub, /etc/network/interfaces): keep the existing version and review the diff
  • For files you never touched: accepting the package maintainer's version is usually safe

Watch for error messages. Common ones you might see:

"Package X has no installation candidate" — A package doesn't exist in Trixie. If it's a third-party package, disable that repository and retry apt dist-upgrade.

DKMS build failures — Out-of-tree kernel modules (GPU drivers, custom NIC drivers) often fail to build against the new kernel. This is normal — they'll be rebuilt after the reboot once the new kernel headers are in place.

Dependency conflicts — Usually resolved by apt dist-upgrade automatically. If not, apt -f install often clears them.

Step 3: Reboot into the New Kernel

Once apt dist-upgrade completes successfully, reboot the node:

reboot

During boot, the GRUB menu should show the new Proxmox kernel (6.14.x) as the default entry. After reboot, verify:

uname -r
# Expected: 6.14.x-x-pve (or newer)

pveversion
# Expected: pve-manager/9.x.x/...

If the kernel version still shows the old 6.8.x PVE 8 kernel, the upgrade may not have completed fully. Check /boot/ for available kernel images:

ls /boot/vmlinuz-*

And manually update the GRUB default if needed.
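On installs that boot via proxmox-boot-tool (ZFS installs and most recent setups), the tool can list and pin kernels without editing GRUB by hand. The version string below is illustrative:

```shell
# Show kernels available to the bootloader
proxmox-boot-tool kernel list

# Pin a specific kernel as the default boot entry (example version)
proxmox-boot-tool kernel pin 6.14.8-2-pve

# Write the updated configuration to all boot partitions
proxmox-boot-tool refresh
```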

Step 4: Post-Upgrade Verification

Don't declare victory until you've verified the critical services and your VM workloads.

Verify Core Services

systemctl status pveproxy pvedaemon pvestatd corosync

All should show active (running). If any are failed:

journalctl -u pveproxy -n 50
# Review logs for the specific error

Check Cluster Status (Clusters Only)

pvecm status
# All nodes should be online
# Quorum should be achieved

Verify Storage Pools

pvesm status

All storage should show active. If you use ZFS, check pool health explicitly:

zpool status
# All pools should show: state: ONLINE

ZFS pools survive the upgrade intact — Proxmox VE 9 ships with a newer ZFS version that reads your existing pools without any migration step needed.

Test VM and Container Operations

Start a VM that was stopped during the upgrade and verify it boots:

qm start 100
qm status 100
# Should show: status: running

For containers:

pct start 200
pct status 200

Also verify that live migration still works if you're in a cluster — migrate a test VM to another node and back.

Check the Web UI

Log into the Proxmox web interface and confirm:

  • The version in the footer shows PVE 9.x
  • All VMs and containers are visible
  • Storage pools and backup jobs appear correctly
  • No persistent red error banners in the cluster summary view

Upgrading a Cluster: The Node-by-Node Strategy

For multi-node clusters, the upgrade requires a methodical approach. The goal is to keep workloads running throughout the entire process using live migration.

The Upgrade Sequence:

  1. Migrate all VMs off Node 1 to other nodes in the cluster
  2. Upgrade Node 1 following all steps above
  3. Reboot Node 1 and wait for it to rejoin the cluster
  4. Verify Node 1 is healthy in pvecm status
  5. Optionally migrate some VMs back to Node 1 to balance load
  6. Repeat for Node 2, Node 3, and so on

Live migrate VMs before taking the node down for upgrade:

# Migrate VM 100 to node2 with zero downtime
qm migrate 100 node2 --online

# Migrate container 200 to node2. Containers cannot truly live-migrate;
# --restart performs a restart migration (stop, move, start on target)
pct migrate 200 node2 --restart

Live migration between PVE 8 and PVE 9 nodes is supported for standard KVM VMs. Containers always incur a brief outage: a restart migration stops the container, transfers it, and starts it again on the target node. Check the task log for any warnings.

Do not upgrade the next node until the previous one has fully rejoined the cluster and you've verified it's healthy. Upgrading two nodes simultaneously risks dropping below quorum and freezing cluster operations.
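Draining a busy node one qm migrate at a time gets tedious. A hedged sketch of a bulk evacuation loop, run on the node being drained; the target node name is an example, and it assumes every running VM is capable of online migration:

```shell
# Evacuate every running VM from this node to node2 (example target name)
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
  echo "Migrating VM $vmid to node2..."
  qm migrate "$vmid" node2 --online
done
```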

Rollback Options If Things Go Wrong

If the upgrade fails or your environment is broken after reboot, you have a few options depending on how your Proxmox root filesystem is configured.

Option 1: Restore VMs from Proxmox Backup Server

If the VMs are intact but Proxmox itself is broken, restore individual VMs and containers from your pre-upgrade PBS backups. This doesn't roll back the OS, but it recovers your workloads.

Option 2: ZFS Boot Environment (ZFS Root Users)

If your Proxmox root disk is on ZFS (a common install option) and you took a snapshot of the root dataset before upgrading, you can boot back into it:

# Check for existing root snapshots
zfs list -t snapshot | grep rpool/ROOT

If snapshots exist, you can boot from a previous ZFS dataset by selecting it in the GRUB/ZFS boot menu at startup. This effectively rolls back the entire OS to its pre-upgrade state.
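ZFS does not snapshot the root dataset automatically, so the take-then-rollback workflow has to be explicit. A sketch assuming the default rpool/ROOT/pve-1 root dataset name:

```shell
# BEFORE upgrading: snapshot the root dataset
zfs snapshot rpool/ROOT/pve-1@pre-pve9

# AFTER a failed upgrade (from a rescue shell or an older boot entry):
# discard everything written since the snapshot
zfs rollback -r rpool/ROOT/pve-1@pre-pve9
```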

Option 3: Full Reinstall from Scratch

In the worst case, reinstall Proxmox VE 9 fresh from the ISO and restore all VMs from your backup. This is exactly why the pre-upgrade backup step is treated as a hard requirement, not a suggestion.

Common Post-Upgrade Issues and Quick Fixes

Web UI won't load or shows stale content:

systemctl restart pveproxy

VM console (SPICE/VNC) fails to connect:

systemctl restart spiceproxy

DKMS modules failed to build (GPU passthrough users take note):

dkms autoinstall
reboot

If a specific module fails, identify the package and reinstall after the new kernel headers are confirmed present:

apt install --reinstall <dkms-package-name>

nftables firewall rules not matching expectations:

PVE 9 migrates the Proxmox firewall from iptables to nftables. If you have custom iptables rules in /etc/network/interfaces post-up scripts, audit them:

# Check current nftables ruleset
nft list ruleset

# Check if legacy iptables rules are being ignored
iptables -L

Custom rules may need to be rewritten in nftables syntax if they were using iptables-specific features.

Conclusion

Upgrading from Proxmox VE 8 to 9 is a standard Debian distribution upgrade wrapped in Proxmox's tooling — change your package sources, run apt dist-upgrade, reboot. The official pve8to9 checker is what makes this upgrade feel manageable, because it surfaces real blockers before you change anything.

The keys to a clean upgrade are the same as any critical system change: full backups before you start, running the readiness checker, updating sources carefully, and — for clusters — migrating workloads off each node before upgrading it. Do those things and you'll be running Proxmox VE 9 with its new kernel, QEMU 9.x, and OCI LXC support before the end of the day.
