Proxmox VE on Raspberry Pi 5: Full Setup Guide

Install Proxmox VE on a Raspberry Pi 5 with this complete guide covering ARM quirks, storage setup, VM limits, and real-world performance tips.

Proxmox Pulse
10 min read

Running a full hypervisor on a credit-card-sized computer sounds like a meme, but with the Raspberry Pi 5's quad-core Cortex-A76 CPU and up to 8 GB of RAM, Proxmox VE is genuinely usable for lightweight homelab workloads. It won't replace your x86 server, but as a low-power always-on node for a few LXC containers or a small VM cluster, it punches above its weight. This guide walks through everything—flashing, first boot, the ARM-specific gotchas, and what you can realistically run.

What You'll Need

Before you start, gather your hardware:

  • Raspberry Pi 5 (4 GB minimum, 8 GB recommended)
  • Active cooler — the Pi 5 runs hot under sustained load; the official Raspberry Pi Active Cooler is worth the few dollars
  • NVMe SSD via M.2 HAT (strongly recommended over SD card or USB)
  • A way to write the image: Raspberry Pi Imager or dd
  • A network cable — Wi-Fi works but adds instability for a hypervisor

SD cards are fine for experimenting, but they will fail under the write load of a hypervisor. An NVMe drive via the Pi 5's PCIe slot is the right move for anything beyond a one-afternoon test.

Why Proxmox on ARM Is Unusual

Proxmox VE is not officially released for ARM. What the homelab community uses is a community-maintained port called Proxmox Port for Raspberry Pi (sometimes called pimox or the Debian-based port maintained at https://github.com/jiangcuo/Proxmox-Port). It tracks the upstream Proxmox packages and builds them against arm64 Debian.

Key things to understand going in:

  • KVM is supported — the Pi 5 has hardware virtualization, so KVM VMs work
  • Only 64-bit ARM guests run efficiently; x86 VMs require QEMU software emulation and are extremely slow
  • LXC containers work great and are the primary workload you'll want to run
  • No EFI/UEFI by default for Pi firmware — some VMs need workarounds
  • Proxmox Cluster works, but mixing ARM and x86 nodes requires careful planning (no live migration across architectures)

If you need Windows VMs, x86 Linux, or anything that demands x86-only software, this is not the right platform. If you need a lightweight node running a dozen Alpine or Debian LXC containers, it's excellent.

Installing the Base System

Step 1: Flash Raspberry Pi OS 64-bit (Lite)

The ARM Proxmox port builds on top of Debian, so we start with Raspberry Pi OS Lite (64-bit). Open Raspberry Pi Imager, select Raspberry Pi OS Lite (64-bit), and write it to your NVMe or SD card.

Before writing, use the gear icon in Imager to:

  • Set a hostname (e.g., pve-pi)
  • Enable SSH with a password or key
  • Set a username and password (recent Raspberry Pi OS images no longer ship a default user, so this must be configured before first boot)
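If you skip Imager's GUI, the same provisioning can be done by dropping two files onto the flashed boot partition. A sketch, assuming the partition is mounted somewhere writable — the /tmp path below is only so the example runs anywhere; on a real card it would be the mounted boot partition:

```shell
# Point this at the mounted boot partition (e.g. /media/$USER/bootfs);
# /tmp/boot-demo is used here purely for illustration.
BOOT=/tmp/boot-demo
mkdir -p "$BOOT"

# An empty file named "ssh" enables the SSH server on first boot
touch "$BOOT/ssh"

# userconf.txt seeds the first user as "name:crypted-password";
# openssl produces a SHA-512 crypt hash of the chosen password
echo "pi:$(openssl passwd -6 'changeme')" > "$BOOT/userconf.txt"
```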

Step 2: First Boot and Update

Boot the Pi, SSH in, and update fully before adding Proxmox:

sudo apt update && sudo apt full-upgrade -y
sudo reboot

After reboot, check your architecture:

uname -m
# Should output: aarch64

Step 3: Add the Proxmox ARM Repository

The community port is hosted separately from the official Proxmox repositories. Add it:

# Add the GPG key
curl -L https://mirrors.apqa.cn/proxmox/debian/pveport.gpg | sudo tee /usr/share/keyrings/pveport.gpg > /dev/null

# Add the repository (bookworm = Debian 12)

echo "deb [arch=arm64 signed-by=/usr/share/keyrings/pveport.gpg] https://mirrors.apqa.cn/proxmox/debian/pve bookworm port" | sudo tee /etc/apt/sources.list.d/pveport.list

Update and install the Proxmox VE kernel and packages:

sudo apt update
sudo apt install proxmox-ve postfix open-iscsi -y

During the postfix installation, select Local only unless you need outbound mail. This is just for system notifications.

Step 4: Configure the Hostname and /etc/hosts

Proxmox requires the hostname to resolve to the node's IP address. Edit /etc/hosts:

sudo nano /etc/hosts

Make sure you have a line like this (use your actual static IP):

192.168.1.50 pve-pi.local pve-pi

Remove or comment out any line that maps 127.0.1.1 to your hostname — Proxmox wants the real IP, not loopback.
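A quick way to confirm the mapping before installing anything further:

```shell
# Should print the node's LAN IP (e.g. 192.168.1.50),
# not 127.0.0.1 or 127.0.1.1
hostname --ip-address
```

If this still prints a loopback address, recheck /etc/hosts before continuing — the Proxmox installation will misbehave otherwise.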

Step 5: Set a Static IP

A hypervisor must have a static IP. Recent Raspberry Pi OS (Bookworm) images use NetworkManager, so set the address with nmcli; older images use dhcpcd.

With NetworkManager (Bookworm):

sudo nmcli con mod "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1

With dhcpcd, edit /etc/dhcpcd.conf:

sudo nano /etc/dhcpcd.conf

Add at the bottom:

interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1

Reboot to apply.
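After the reboot, it's worth verifying the address and default route actually took:

```shell
# Confirm the static address is assigned to the wired interface
ip -4 addr show eth0

# Confirm the default route points at your router
ip route | grep default
```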

Step 6: Reboot into the Proxmox Kernel

After installation completes, reboot. The Proxmox kernel should load automatically:

sudo reboot

SSH back in and verify:

uname -r
# Should show something like: 6.8.12-1-pve or similar

Now open a browser and navigate to https://192.168.1.50:8006. You'll get a TLS warning (self-signed cert) — accept it. Log in with root and your password.
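Before touching the web UI, you can confirm from the shell that the port's packages installed cleanly:

```shell
# Lists the pve-manager version and the versions of the core Proxmox packages
pveversion -v | head -n 5
```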

Post-Install Configuration

Remove the Subscription Notice

Just like on x86, the web UI will nag about a missing subscription. Dismiss it or apply the community no-subscription fix:

sudo sed -i.backup -z "s/res === null || res === undefined || \!res || res\n\t\t\t.data.status.toLowerCase() \!== 'active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
sudo systemctl restart pveproxy

Configure Storage

On an x86 ISO install, Proxmox defaults to the local directory for ISOs and LXC templates and local-lvm for VM disks. Because this install sits on top of Raspberry Pi OS, there's no LVM layout, so directory-based storage is the practical choice:

  1. Go to Datacenter → Storage
  2. Check what's configured — you may only have local
  3. For a simple setup, the local directory at /var/lib/vz handles everything
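If you'd rather script storage setup than click through the UI, pvesm can add a directory storage from the shell. The storage name and path below are examples — substitute your own mount point:

```shell
# "data" and /mnt/data are example names — use your own mount point
pvesm add dir data --path /mnt/data --content iso,vztmpl,backup,rootdir,images

# List configured storages and their free space
pvesm status
```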

For better performance with ZFS on the NVMe (if your HAT supports it):

# Check your NVMe device name first
lsblk

# Create a ZFS pool — this erases the drive, so it must NOT be the
# disk the OS booted from (replace nvme0n1 with your device)

zpool create -f -o ashift=12 rpool /dev/nvme0n1

Then add it in Datacenter → Storage → Add → ZFS.
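The UI step has a CLI equivalent; the storage name below is an example, and the pool name matches the zpool created above:

```shell
# Register the ZFS pool as Proxmox storage for VM disks and container rootfs
pvesm add zfspool rpool-vm --pool rpool --content images,rootdir
```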

Disable the Enterprise Repo

The enterprise repository requires a paid subscription. Disable it:

sudo nano /etc/apt/sources.list.d/pve-enterprise.list
# Comment out the line by adding # at the start
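The same edit can be scripted with sed. The demo below runs on a scratch copy so you can see the effect anywhere; on the node, point sed at /etc/apt/sources.list.d/pve-enterprise.list instead:

```shell
# Demo on a scratch copy of the repo file
echo "deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise" > /tmp/pve-enterprise.list

# Prefix every active "deb" line with "#" — same effect as the manual edit
sed -i 's/^deb/# deb/' /tmp/pve-enterprise.list

cat /tmp/pve-enterprise.list
```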

Running LXC Containers

LXC containers are the sweet spot for Proxmox on Pi. They're lightweight, fast to create, and don't require hardware virtualization overhead.

Download an ARM64 Template

In the web UI:

  1. Go to local → CT Templates
  2. Click Templates
  3. Search for debian or ubuntu — and notice that every result is an amd64 build; ARM64 templates don't appear in the standard list

The built-in template list only offers x86 templates. For ARM64, download manually:

# Download an ARM64 Debian template
wget https://images.linuxcontainers.org/images/debian/bookworm/arm64/default/$(curl -s https://images.linuxcontainers.org/images/debian/bookworm/arm64/default/ | grep -oP '\d{8}_\d{2}:\d{2}' | tail -1)/rootfs.tar.xz -O /var/lib/vz/template/cache/debian-12-arm64.tar.xz

Alternatively, use pveam with a custom URL or upload the template manually through the web UI.

Create a Container

In the web UI, click Create CT:

  • Template: select your downloaded ARM64 template
  • Disk: 4–8 GB is fine for most containers
  • CPU: 1–2 cores
  • Memory: 256–512 MB for lightweight services
  • Network: bridge to vmbr0

Start it and open the console. It should boot in under 5 seconds.
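The same container can be created from the shell with pct. The VMID, hostname, and sizes below are arbitrary examples, assuming the ARM64 template downloaded earlier and the local directory storage:

```shell
# Create a small container from the ARM64 template on "local" storage
pct create 101 local:vztmpl/debian-12-arm64.tar.xz \
  --hostname demo-ct --cores 2 --memory 512 \
  --rootfs local:4 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 101

# A working ARM64 container reports aarch64
pct exec 101 -- uname -m
```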

Running KVM Virtual Machines

KVM VMs work on the Pi 5 but come with significant constraints.

What Works

  • 64-bit ARM Linux VMs (Ubuntu ARM, Debian ARM, Alpine ARM)
  • Lightweight server workloads — DNS, web servers, small databases
  • ARM-compiled Docker images inside VMs

What Doesn't Work Well

  • x86 VMs — these fall back to QEMU software emulation, making them 10–50x slower
  • Windows — no ARM Windows support in the standard VM path (Windows on ARM exists but setup is complex and not officially supported here)
  • Memory-heavy VMs — the Pi 5 tops out at 8 GB total, leaving maybe 5–6 GB for VMs after the OS and Proxmox overhead

Creating an ARM Linux VM

When creating a VM:

  • Set Machine to virt (not i440fx or q35 — those are x86)
  • Set BIOS to OVMF (UEFI) with an EFI disk
  • Use a virtio disk and network adapter
  • Boot from an ARM64 ISO (Ubuntu Server for ARM, etc.)
# Download Ubuntu Server ARM64 ISO
wget https://cdimage.ubuntu.com/releases/24.04/release/ubuntu-24.04-live-server-arm64.iso
# Upload via the web UI: local → ISO Images → Upload
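The VM creation steps above can also be done with qm. The VMID, name, and disk sizes are examples, and the exact option set may vary slightly across versions of the ARM port — treat this as a sketch, not gospel:

```shell
# ARM64 VM: virt machine type, UEFI firmware, virtio disk and NIC
qm create 200 --name ubuntu-arm --machine virt --bios ovmf \
  --efidisk0 local:1 --scsihw virtio-scsi-pci --scsi0 local:16 \
  --net0 virtio,bridge=vmbr0 --cores 2 --memory 2048 \
  --cdrom local:iso/ubuntu-24.04-live-server-arm64.iso

qm start 200
</imports>```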

Real-World Performance Expectations

Here's what you can realistically run on a Pi 5 with 8 GB RAM:

| Workload | Performance |
| --- | --- |
| 10–15 LXC containers (Alpine/Debian) | Excellent |
| 2–3 small ARM64 VMs | Good |
| Pi-hole + Unbound (LXC) | Excellent |
| Home Assistant (LXC) | Good |
| Gitea / Forgejo (LXC) | Good |
| Jellyfin with transcoding | Poor (no hardware transcode) |
| Any x86 VM | Very poor |

The Pi 5's PCIe 2.0 x1 lane means NVMe throughput is capped around 400–500 MB/s sequential, which is still dramatically better than SD card or USB storage.
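To check whether your drive actually reaches that ceiling, a quick fio run works. fio must be installed first, and the numbers will vary by drive and HAT — point --filename at a path on the storage you want to measure:

```shell
sudo apt install fio -y

# 1 GiB sequential read with direct I/O; on PCIe 2.0 x1
# expect roughly 400-500 MB/s from a decent NVMe drive
fio --name=seqread --filename=/tmp/fiotest --size=1G \
    --rw=read --bs=1M --direct=1 --ioengine=libaio \
    --runtime=30 --group_reporting

rm -f /tmp/fiotest
```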

Thermal Management

The Pi 5 will thermal throttle without adequate cooling. Under sustained VM or container load, you'll hit 85°C without a heatsink.

Install the official active cooler and verify temps are sane:

# Check CPU temp
cat /sys/class/thermal/thermal_zone0/temp
# Divide by 1000 for Celsius — e.g., 52000 = 52°C

You can also monitor via the Proxmox web UI under Node → Summary if the sensors package is installed:

sudo apt install lm-sensors -y
sudo sensors-detect --auto
sensors

Adding the Pi to an Existing Proxmox Cluster

You can join the Pi node to an existing x86 Proxmox cluster, but be aware of the limitations:

  • No live migration between ARM and x86 nodes (architecture mismatch)
  • Shared storage must be network-based (NFS, Ceph) — a disk attached to one node isn't visible to the others
  • The Pi can host ARM workloads in a cluster alongside x86 nodes

To join an existing cluster:

# On the Pi node
pvecm add 192.168.1.100  # IP of an existing cluster node

This works, but think carefully about quorum — a two-node cluster (one x86, one Pi) has quorum problems. Either use three nodes or add a QDevice.
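pvecm provides both a quorum view and built-in QDevice setup. The IP below is a hypothetical external host (any always-on Linux box works), and the qnetd/qdevice packages must be installed first:

```shell
# Show current votes and quorum state
pvecm status

# On the external QDevice host: apt install corosync-qnetd
# On each cluster node:         apt install corosync-qdevice

# Register the external vote (hypothetical QDevice IP)
pvecm qdevice setup 192.168.1.200
```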

Troubleshooting Common Issues

Web UI Won't Load

Check that the Proxmox services are running:

systemctl status pveproxy
systemctl status pvedaemon
systemctl restart pveproxy pvedaemon

Container Won't Start — "arch" Error

If you accidentally downloaded an x86 template, the container will fail to start with an architecture error. Delete it and use an ARM64 template.

Poor NVMe Performance

Check the HAT you're using. Some third-party M.2 HATs have firmware or power issues. The official Raspberry Pi M.2 HAT+ is the most reliable option. Verify the drive is running at PCIe Gen 2 speed:

lspci -vvv | grep -A 10 NVMe

kvm Module Not Loaded

lsmod | grep kvm
# May print nothing — on many arm64 kernels KVM is built in rather than loaded as a module

The reliable check is the device node:

ls -l /dev/kvm
# If /dev/kvm exists, hardware virtualization is available to QEMU

If /dev/kvm is missing, confirm you booted the Proxmox kernel (uname -r should show a -pve suffix) and try loading the module:

sudo modprobe kvm

Add kvm to /etc/modules to persist across reboots.

Is It Worth It?

For a primary production hypervisor — no. The Pi 5 has real constraints: no x86 support, 8 GB RAM ceiling, limited PCIe bandwidth, and community (not official) Proxmox support.

For a low-power homelab node running a handful of LXC containers? Absolutely. At 5–10 watts idle versus 50–100 watts for a mini PC or server, it's an excellent always-on node for Pi-hole, Home Assistant, Gitea, or a personal VPN endpoint. The Proxmox web UI makes managing those containers significantly easier than raw LXC commands.

The sweet spot is treating the Pi 5 Proxmox node as a container host, not a VM host. Lean into LXC, avoid x86 VMs, and it'll serve you well.

Conclusion

Proxmox VE on a Raspberry Pi 5 is a legitimate homelab setup when you work with its constraints rather than against them. The community ARM port is actively maintained, KVM hardware virtualization works for ARM64 guests, and LXC containers run with near-native performance. The installation process is more manual than the x86 ISO path, but it's not difficult — flash Raspberry Pi OS, add the ARM Proxmox repo, install the packages, and you have a functional hypervisor in under an hour. Pair it with an NVMe drive, an active cooler, and realistic expectations about workloads, and the Pi 5 becomes a surprisingly capable low-power node in your homelab arsenal.
