Proxmox VE 9: New Features Every Admin Should Enable

Proxmox VE 9 ships with major SDN upgrades, OCI LXC support, and UI improvements. Here's what to enable first and how to configure each feature.


Proxmox VE 9 landed with more substance than a typical point release. Between revamped SDN networking, native OCI-based LXC container deployment, and a long list of quality-of-life improvements, there's a lot worth enabling — but most admins upgrade and then never dig into what changed. This guide cuts through the release notes and focuses on the features that will actually improve your day-to-day workflow, with practical steps to get each one working.

What's New in Proxmox VE 9 at a Glance

Before diving into configuration, here's a quick summary of the headline changes:

  • OCI-based LXC deployment — pull and run any OCI container image as an LXC container
  • SDN improvements — simpler VXLAN zones, improved EVPN support, and a new BGP zone type
  • QEMU 10.0 and Linux kernel 6.14 — better hardware support and VM performance
  • Improved UI — bulk actions, better task logging, and a refreshed summary dashboard
  • Ceph Squid — updated Ceph release with improved performance and new features
  • Proxmox Datacenter Manager integration — early multi-cluster management support

Not everything requires manual activation, but several of these features are off by default or need a few configuration steps to unlock their full potential.

OCI-Based LXC Containers

This is the most significant new capability in VE 9.1+. Instead of being limited to Proxmox-curated LXC templates, you can now pull and run any OCI-compatible container image directly as an LXC container. Think Docker Hub images running as native LXC — with all the performance and integration benefits that brings.

How to Enable OCI LXC

First, make sure you're on Proxmox VE 9.1 or later. Check your version:

pveversion

To create an OCI-based LXC container via the CLI, use pct create with the new --ostype unmanaged flag and an OCI image URL:

pct create 200 oci:docker.io/library/nginx:latest \
  --ostype unmanaged \
  --rootfs local-lvm:8 \
  --memory 1024 \
  --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --hostname nginx-oci \
  --unprivileged 1

The oci: prefix tells Proxmox to treat the source as an OCI image rather than a local template. It downloads the image layers, converts them to an LXC rootfs, and creates the container.

What OCI LXC Is Good For

  • Running upstream application images without maintaining custom templates
  • Deploying services that don't have official Proxmox templates
  • Faster iteration — pull a new image version and recreate the container in seconds

Keep in mind that OCI containers in LXC mode aren't Docker. You're running the image's filesystem inside a Linux container, not inside a container runtime. Init processes, systemd services, and multi-process containers may behave differently than expected. For most single-service images (nginx, redis, postgres), it works exactly as you'd hope.

SDN: The Features Worth Enabling Now

Proxmox SDN has existed since VE 7, but VE 9 makes it significantly more capable and polished. If you've been putting off SDN because it felt like an experimental add-on, VE 9 is the version to actually commit to it.

Enable SDN in the GUI

SDN ships disabled by default. To enable it, you need the required packages:

apt install libpve-network-perl ifupdown2

Then reload the web UI. You'll see a new SDN section under Datacenter. If you're on a cluster, every node needs the packages installed.
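With the packages in place, a quick sanity check from the CLI confirms the SDN API is responding (the zone list will simply be empty until you create one):

```shell
# Query the cluster SDN API for defined zones.
# On a fresh setup this returns an empty list rather than an error.
pvesh get /cluster/sdn/zones
```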

Creating a Simple VXLAN Zone

The most useful SDN configuration for homelabs is a Simple zone — a VXLAN overlay that lets VMs on different nodes communicate over the same logical network without manual VLAN trunk configuration.

In the Proxmox UI:

  1. Go to Datacenter → SDN → Zones and click Add → Simple
  2. Give it an ID like homelab-zone
  3. Under Nodes, select all nodes in your cluster
  4. Click Create

Next, create a VNet:

  1. Go to Datacenter → SDN → VNets and click Add
  2. Set the Zone to homelab-zone
  3. Give it a name like vnethome (VNet IDs are short alphanumeric identifiers)
  4. Set a VLAN tag if needed
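The same zone and VNet can be created from the CLI through the cluster API, if you prefer scripting it. Node names pve1 and pve2 and the VLAN tag below are placeholders for your environment:

```shell
# Create a Simple zone spanning the listed cluster nodes
pvesh create /cluster/sdn/zones --type simple --zone homelab-zone --nodes pve1,pve2

# Create a VNet inside that zone; --tag (VLAN) is optional
pvesh create /cluster/sdn/vnets --vnet vnethome --zone homelab-zone --tag 100
```

As in the GUI, nothing goes live until the configuration is applied in the next step.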

Finally, apply the SDN configuration:

pvesh set /cluster/sdn

Or click Apply in the SDN UI. Your VNet now appears as a bridge option when creating VMs and containers on any node in the cluster.

BGP Zone (New in VE 9)

For more advanced networking, VE 9 adds a BGP zone type that integrates with your existing routing infrastructure. This is aimed at home labs with a dedicated router VM (like OPNsense or VyOS) acting as a BGP peer.

Configure it by selecting BGP as the zone type and providing your peer's ASN and IP. This lets you advertise VM network ranges upstream without static routes.

QEMU 10.0: VM Improvements to Take Advantage Of

Proxmox VE 9 ships QEMU 10.0, which brings several improvements worth knowing about.

Enable Newer Machine Types

Existing VMs stay on their current machine type for compatibility. For new VMs, or when you're ready to update existing ones, set the machine type to q35 with the latest version:

In the VM's Hardware tab, set Machine to q35 and pick the latest machine version from the Version dropdown. For Windows VMs, q35 is strongly preferred. For Linux VMs, it enables modern PCIe device support.

To update an existing VM via CLI:

qm set 101 --machine q35

Restart the VM for the change to take effect.
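To check which machine type a VM currently uses, inspect its config (101 is the example VM ID from above; no machine line means the default i440fx type):

```shell
# Print the machine line from the VM config, if one is set
qm config 101 | grep -i '^machine'
```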

VirtIO Improvements

QEMU 10.0 includes updates to VirtIO-net and VirtIO-blk that improve throughput under heavy I/O. Make sure your VMs are using:

  • virtio-scsi-single or virtio-scsi for disk controllers (not IDE or SATA)
  • virtio for network interfaces
  • virtio-gl for display if using SPICE with GPU acceleration

Check and update via the Hardware tab or CLI:

qm set 101 --scsihw virtio-scsi-single
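The NIC model is part of the net device definition rather than a separate option, so switching an interface to virtio looks like this (vmbr0 is an example bridge; re-specify any VLAN or MAC settings you want to keep, since this rewrites the whole net0 entry):

```shell
# Recreate net0 with the virtio model on bridge vmbr0
qm set 101 --net0 virtio,bridge=vmbr0
```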

Linux Kernel 6.14: Hardware Support Wins

Kernel 6.14 ships as the default in VE 9. For most users this is transparent, but there are specific hardware scenarios where it matters.

Intel N-series and AMD Ryzen 8000-series Support

Kernel 6.14 includes updated drivers for Intel's N100/N200/N305 mini PC CPUs and AMD's newer integrated GPU lineup. If you were running VE 8 on N100 hardware with quirks around power management or iGPU passthrough, VE 9 resolves most of those issues.

For Intel N100 iGPU passthrough to LXC containers, the device should now enumerate cleanly:

ls /dev/dri/
# card0  renderD128

Add it to an LXC container:

pct set 200 --dev0 /dev/dri/renderD128
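You can verify the device is visible from inside the container with pct exec (container 200 as in the example above):

```shell
# List DRI render devices from inside container 200
pct exec 200 -- ls -l /dev/dri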

Checking Your Kernel Version

uname -r
# 6.14.x-x-pve

If you see an older kernel after upgrading, check that you're booting from the correct entry:

grub-set-default 0
update-grub

UI Improvements Worth Knowing

The Proxmox web UI received several practical improvements in VE 9 that make daily administration faster.

Bulk VM and Container Actions

You can now select multiple VMs or containers and perform bulk actions — start, stop, shutdown, or delete — without touching each one individually. Hold Ctrl or Shift in the resource tree to multi-select, then right-click for the action menu.

This is especially useful when running a cluster and wanting to migrate a group of VMs to another node before maintenance.

Improved Task Log Viewer

Task logs in VE 9 now show real-time streaming output for long-running operations like migrations and backups. You no longer need to watch a blank progress bar and guess what's happening. Open any running task from Datacenter → Tasks to see live output.

Summary Dashboard

The node summary page has been reorganized to surface more useful metrics by default — CPU steal time, memory ballooning status, and storage I/O. Check it via Node → Summary.

Ceph Squid: What Changed

If you're running Proxmox Ceph (hyper-converged storage), VE 9 upgrades you to Ceph Squid (19.x). The key improvements:

  • Crimson OSD (experimental) — a rewritten OSD implementation with better multi-core scaling
  • Improved BlueStore performance — especially for small random writes
  • New ceph orch commands — simplified cluster management

For most users, the upgrade is automatic when you upgrade Proxmox. Verify Ceph health post-upgrade:

ceph health detail
ceph osd tree

Don't enable experimental Crimson OSDs in production. They're worth testing in a dedicated lab environment.

Proxmox Datacenter Manager Integration

Proxmox VE 9 adds early integration with Proxmox Datacenter Manager (PDM), a separate product for managing multiple independent Proxmox clusters from a single pane of glass. It's still in early stages, but if you run more than one cluster, it's worth setting up.

PDM is installed separately on a dedicated node or VM:

apt install proxmox-datacenter-manager

Then access it at https://<pdm-host>:8443. Add your clusters by connecting each one's API endpoint. You can view resource usage, VMs, and tasks across all clusters without logging into each one individually.

PDM doesn't yet support cross-cluster migration or centralized backup management, but those are on the roadmap.

Post-Upgrade Checklist for VE 9

After upgrading from VE 8, run through this checklist to make sure you're getting the most out of VE 9:

  1. Install SDN packages — apt install libpve-network-perl ifupdown2
  2. Update VM machine types — switch new VMs to q35 and enable latest QEMU machine version
  3. Verify kernel version — confirm you're on 6.14.x-pve
  4. Check Ceph health (if applicable) — ceph health detail
  5. Try OCI LXC — pull a test container from Docker Hub using the oci: prefix
  6. Review SDN zones — consider migrating your VLAN setup to SDN VNets for better cluster-wide networking
  7. Enable bulk actions — test multi-select in the UI for your most-used VM management tasks
  8. Check IOMMU groups — if doing GPU passthrough, verify device groupings still look correct after the kernel upgrade with find /sys/kernel/iommu_groups/ -type l
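For the IOMMU check in step 8, a short loop gives more readable output than the raw find command, pairing each group with the PCI devices it contains (needs lspci from pciutils):

```shell
#!/usr/bin/env bash
# Print each IOMMU group with the PCI devices it contains.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # lspci -nns identifies the device at this PCI address with vendor:device IDs
        lspci -nns "${dev##*/}"
    done
done
```

Devices that share a group must be passed through together, so pay attention to anything grouped with your GPU.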

Conclusion

Proxmox VE 9 isn't just a maintenance release — OCI LXC support alone changes how you can approach running containerized services, and the SDN improvements make network segmentation genuinely accessible without enterprise hardware. The best approach is to start with what solves an immediate problem: if you've been fighting with LXC templates, try OCI LXC this week. If your cluster networking feels fragile, invest a Saturday in SDN zones. You don't need to enable everything at once, but there's enough here that leaving VE 9 running in pure VE-8-compatibility mode is leaving a lot on the table.


Written by

Proxmox Pulse

Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.
