How to Install Proxmox VE on Any Hardware

Step-by-step guide to installing Proxmox VE 8.x on bare metal hardware, from creating a bootable USB to your first login at the web interface.

Proxmox Pulse

There's something deeply satisfying about wiping a machine and installing a hypervisor from scratch. No cloud provider between you and the metal. No monthly bill ticking up every time you spin up a VM. Just you, a USB stick, and a box that's about to become the backbone of your homelab.

I've installed Proxmox VE on everything from retired Dell OptiPlex desktops to dual-socket Supermicro servers, and the process is remarkably consistent. This guide walks through the entire installation of Proxmox VE 8.x, including the decisions that trip up newcomers and the gotchas I wish someone had told me about on my first install.

Hardware Requirements

Let's get the specs out of the way. Proxmox will technically boot on surprisingly modest hardware, but "boots" and "runs well" are different things.

Minimum (it'll work, barely):

  • 64-bit CPU with Intel VT-x or AMD-V support
  • 2 GB RAM
  • 32 GB storage
  • One NIC

What you actually want:

  • 4+ core CPU with VT-x, VT-d, and AES-NI
  • 32 GB+ RAM (64 GB if you're running more than a handful of VMs)
  • 256 GB+ SSD for the OS, separate storage for VMs
  • At least two NICs (one for management, one for VM traffic)

I've found that the sweet spot for a homelab node is something like a used Dell PowerEdge R730 or an HP ProLiant DL380 Gen9. You can grab these for $200-400 on eBay and they'll run circles around most consumer hardware for virtualization workloads. If noise is a concern, the Dell OptiPlex Micro series or an Intel NUC works well for a quieter setup, just don't expect to run 20 VMs on 16 GB of RAM.

One thing worth checking before you buy: make sure the NIC is supported. Intel NICs (I210, I350, X550) are basically guaranteed to work. Realtek consumer NICs usually work but can be flaky under heavy load. Broadcom is hit or miss depending on the specific model. Check the Proxmox hardware compatibility list if you're unsure.
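If you can boot the machine from any live Linux USB before committing, identifying the NIC takes one command:

```shell
# List PCI network controllers with vendor/device IDs
lspci -nn | grep -i ethernet
```

Match the output against the compatibility list — a line mentioning an Intel I210 or I350 is a good sign.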

Getting the ISO

Head to proxmox.com/en/downloads and grab the latest Proxmox VE ISO. As of this writing, that's Proxmox VE 8.3. The download is around 1.2 GB.

Always verify the checksum. It takes ten seconds and saves you from chasing phantom installer bugs caused by a corrupt download:

sha256sum proxmox-ve_8.3-1.iso
# Compare against the SHA256 listed on the download page
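If you'd rather not eyeball a 64-character hex string, let sha256sum do the comparison for you. The hash below is a placeholder — paste the real one from the download page:

```shell
# Prints "proxmox-ve_8.3-1.iso: OK" if the hash matches, fails otherwise.
# Note the two spaces between the hash and the filename - the check format requires them.
echo "<paste-sha256-from-download-page>  proxmox-ve_8.3-1.iso" | sha256sum -c -
```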

Creating a Bootable USB

You've got options here, and they all work. Pick whichever you're comfortable with.

Option 1: dd (Linux/macOS)

The classic. Find your USB device first — get this wrong and you'll wipe the wrong disk:

lsblk
# Identify your USB drive - let's say it's /dev/sdb

sudo dd bs=4M if=proxmox-ve_8.3-1.iso of=/dev/sdb conv=fsync status=progress

The status=progress flag is a sanity saver. Without it, dd just sits there silently and you're left wondering if it's working or if your USB is dead.
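One optional sanity check after dd finishes: read back exactly as many bytes as the ISO is long and compare hashes. A sketch, using the same /dev/sdb example device:

```shell
# Hash the ISO, then hash the same number of bytes read back
# from the USB device - the two hashes should match
iso_size=$(stat -c%s proxmox-ve_8.3-1.iso)
sha256sum proxmox-ve_8.3-1.iso
sudo head -c "$iso_size" /dev/sdb | sha256sum
```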

Option 2: Ventoy

This is what I use these days. Install Ventoy on a USB drive once, then just copy ISO files onto it. You can keep multiple ISOs on the same stick and pick which one to boot. Incredibly handy when you're juggling Proxmox, Ubuntu Server, and a rescue ISO.

# Install Ventoy to USB (one-time setup; the script may be named
# Ventoy2Disk.sh depending on how you installed it)
sudo ventoy -i /dev/sdb

# Then copy the ISO to wherever the Ventoy partition mounts
cp proxmox-ve_8.3-1.iso /media/ventoy/

Option 3: Rufus (Windows)

If you're on Windows, Rufus is the go-to. Select the ISO, select the USB drive, use DD mode (not ISO mode — this matters), and write. Takes about 3 minutes.

BIOS Configuration

Before booting from USB, there are a few BIOS settings you need to get right. Miss these and you'll either fail to boot or lose out on hardware virtualization features.

Must enable:

  • VT-x (Intel Virtualization Technology) — without this, KVM won't work at all
  • VT-d (Intel VT for Directed I/O) — needed for PCI passthrough
  • AMD-Vi if you're on AMD (the equivalent of VT-d)

Should configure:

  • Boot order — set USB as first boot device
  • UEFI mode — Proxmox supports both UEFI and legacy BIOS, but UEFI is recommended for new installs. You'll need GPT partition tables either way on drives over 2 TB.
  • Secure Boot — Proxmox VE 8.1 and later ship a signed shim, so new installs can leave it enabled. On older releases, disable it; getting them working with custom keys isn't worth the hassle for a hypervisor that sits behind your firewall.

On server hardware like Dell PowerEdge, you'll find these under "Processor Settings" and "Boot Settings" in the iDRAC/BIOS. On consumer boards, it's usually under "Advanced" or "CPU Configuration."

The Installer Walkthrough

Boot from your USB and you'll see the Proxmox VE bootloader menu. Select "Install Proxmox VE (Graphical)" unless you're installing on a headless machine over serial console.

License Agreement

Click "I agree." Not much to decide here.

Target Disk Selection

This is where your first real decision comes in. The installer asks you to select a target disk and filesystem. Click "Options" to see the full list.

ext4 — The safe choice. Battle-tested, fast, low overhead. If you're installing on a single disk or you just want things to work without thinking about it, pick ext4. I use ext4 on all my single-disk nodes.

XFS — Slightly better performance for large files and high-throughput workloads. Honestly, the difference between ext4 and XFS is negligible for most homelab use cases. Pick whichever you're more comfortable troubleshooting.

ZFS (RAID0, RAID1, RAIDZ, etc.) — This is where it gets interesting. If you have multiple disks and want built-in redundancy, checksumming, and snapshots without a separate storage layer, ZFS is excellent. But it comes with caveats:

  • ZFS wants RAM. The rule of thumb is 1 GB of RAM per 1 TB of storage, plus whatever your VMs need. On a 32 GB system with 8 TB of storage, that's 8 GB just for ZFS ARC cache.
  • Don't use RAIDZ1 on drives larger than 2 TB. The rebuild times are brutal and your chance of a second drive failure during rebuild is non-trivial. Use RAIDZ2 or mirrors.
  • If you're using ZFS on the boot drive, you can't easily resize partitions later.
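If you do go ZFS and RAM is tight, you can cap the ARC instead of letting it grab up to half your memory. A minimal sketch, assuming you want an 8 GiB ceiling (the value is in bytes: 8 × 1024³):

```shell
# Cap the ZFS ARC at 8 GiB; takes effect after an initramfs rebuild and reboot
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```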

For a first install, I'd recommend ext4 on a single SSD and adding ZFS storage pools later through the GUI. Keep the OS install simple.

The installer also lets you configure disk size, swap size, and maximum root filesystem size. The defaults are generally fine, but I like to limit maxroot to about 60 GB and leave the rest for local-lvm storage:

hdsize: 256
swapsize: 8
maxroot: 60
minfree: 16

Leaving maxvz unset hands all remaining space to the LVM-thin data volume, which is what you want for VM disk images. Careful with maxvz: 0, though — that disables the data volume entirely, so no local-lvm storage gets created.

Network Configuration

The installer will detect your network interfaces. Pick the one you want as your management interface. For a homelab, this is usually whatever's plugged into your main network.

Fill in:

  • Hostname (FQDN): Something like pve1.homelab.local. Must be a fully qualified domain name — pve1 alone won't work.
  • IP Address: Pick a static IP on your LAN. I typically use something like 192.168.1.50/24 for my first node.
  • Gateway: Your router, usually 192.168.1.1
  • DNS Server: Your router or a dedicated DNS server. 192.168.1.1 works, or use 1.1.1.1 if you want Cloudflare's resolver.

Don't use DHCP for a hypervisor. Ever. If your DHCP lease expires or the server reboots and gets a different IP, you lose access to the management interface and every VM bridge goes sideways.
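For reference, the installer turns these answers into a bridge stanza in /etc/network/interfaces, roughly like the sketch below (the physical interface name eno1 is an assumption — yours might be enp3s0 or similar):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Knowing where this lives means you can fix a botched IP from the local console later — edit the file and run ifreload -a (or reboot).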

Password and Email

Set a root password. Make it strong — this is the admin account for your entire virtualization platform. The email address is used for system notifications (cron job output, ZFS scrub results, etc.). Use a real email if you've set up SMTP, otherwise any placeholder works.

Summary and Install

Review everything, click "Install," and wait. On an SSD, the install takes about 3-5 minutes. On a spinning disk, maybe 10.

When it finishes, remove the USB drive and let it reboot.

First Login

After reboot, you'll see a console login prompt with the IP address of the web interface displayed:

------------------------------------------------------------------------------
Welcome to the Proxmox Virtual Environment. Please use your web browser to
configure this server - connect to:

  https://192.168.1.50:8006/

------------------------------------------------------------------------------

Open a browser and navigate to https://192.168.1.50:8006. You'll get a certificate warning because Proxmox uses a self-signed cert by default. Accept it and proceed.

Log in with:

  • Username: root
  • Realm: Linux PAM standard authentication
  • Password: whatever you set during install

The "No Valid Subscription" Popup

First thing you'll see after login is a popup saying "No valid subscription." This shows up every time you log in on the free version. It's not a trial expiration — Proxmox VE is fully functional without a subscription. The subscription is for enterprise support and access to the stable enterprise repository.

To suppress this popup, SSH into your node and edit the JavaScript file that triggers it:

ssh root@192.168.1.50

# Find the subscription check
sed -Ei.bak "s/NotFound/Active/g; s/notfound/active/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js

# Restart the web proxy
systemctl restart pveproxy.service

After that, clear your browser cache or do a hard refresh (Ctrl+Shift+R) and the nag is gone. Note that this change will be overwritten on Proxmox package updates, so you'll need to reapply it after major upgrades.

Verifying the Installation

Before you start creating VMs, run through a few quick checks to make sure everything is healthy.

Check Virtualization Support

# Should return a number greater than 0
egrep -c '(vmx|svm)' /proc/cpuinfo
8

# Check if KVM module is loaded
lsmod | grep kvm
kvm_intel            458752  0
kvm                 1327104  1 kvm_intel

If egrep returns 0, VT-x isn't enabled in your BIOS. Go back and fix that.

Check Storage

pvesm status
Name             Type     Status           Total            Used       Available        %
local            dir      active        59295828        3842264        52401704    6.48%
local-lvm  lvmthin      active       192675840               0       192675840    0.00%

You should see at least local (for ISOs and templates) and local-lvm (for VM disks). If local-lvm is missing or shows 0 total, something went wrong with the LVM setup during install.

Check Network

ip addr show vmbr0
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:2b:3c:4d:5e:6f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.50/24 scope global vmbr0
       valid_lft forever preferred_lft forever

vmbr0 is the Linux bridge that Proxmox creates during installation. This is what your VMs will connect to for network access. Make sure it's UP and has your expected IP.

Update the System

Even on a fresh install, there are usually pending updates:

apt update && apt full-upgrade -y

You might hit repository errors at this point if you haven't switched from the enterprise repo to the no-subscription repo yet. We'll cover that in the post-install checklist, but the quick fix is:

# Disable enterprise repo (requires subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt full-upgrade -y

Troubleshooting Common Install Issues

A few problems I've run into over the years that are worth mentioning:

Installer hangs on "Detecting country" — This happens when the installer can't reach the internet to do a GeoIP lookup. It'll time out eventually (about 30 seconds), but if your network isn't connected during install, just wait it out.

Installer doesn't see NVMe drives — Some older BIOS versions don't expose NVMe drives properly. Update your BIOS/UEFI firmware. On Dell servers, this is usually available through the iDRAC lifecycle controller.

Black screen after install — If you're using a GPU that doesn't play nice with kernel modesetting, you might get a black screen once the kernel takes over. Boot into the GRUB menu, edit the boot entry, and add nomodeset to the kernel command line. Then make it permanent in /etc/default/grub.
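Making nomodeset permanent is one sed plus a GRUB rebuild — a sketch assuming the stock GRUB_CMDLINE_LINUX_DEFAULT="quiet" line is still in place:

```shell
# Append nomodeset to the default kernel command line, then regenerate grub.cfg
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"/' /etc/default/grub
update-grub
```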

Web UI not accessible after install — Check that pveproxy is running (systemctl status pveproxy), that your IP is correct (ip addr), and that you're using https not http. Port 8006, not 80 or 443.

Wrapping Up

That's Proxmox installed and accessible. The whole process takes about 15 minutes once you've done it a couple of times. The next step is the post-install configuration — switching repositories, hardening the system, setting up storage, and all the other tweaks that turn a fresh install into a production-ready hypervisor.

The beauty of Proxmox is that the installer gets out of your way. It doesn't try to be clever or ask you questions you don't understand yet. Pick a disk, set a password, assign an IP, and you're running. Everything else can be configured after the fact through the web UI or the command line.

If you're coming from ESXi, you'll feel right at home with the web interface. If you're coming from bare KVM/QEMU, you'll appreciate not having to write XML files to create a VM. Either way, welcome to Proxmox.
