Migrating from VMware ESXi to Proxmox VE

Practical guide to migrating VMs from VMware ESXi to Proxmox VE, covering VMDK conversion, VirtIO drivers, and post-migration performance tuning.

Proxmox Pulse · 14 min read

The VMware-to-Proxmox migration has gone from a niche hobby project to something I'm seeing everywhere. Since Broadcom's acquisition of VMware and the subsequent licensing overhaul — killing perpetual licenses, bundling everything into expensive subscription tiers, and sunsetting the free ESXi hypervisor — a lot of admins are looking at their renewal quotes and deciding it's time to move on.

I migrated a 12-VM ESXi environment to Proxmox last year and have since helped several others do the same. The process isn't difficult, but there are enough gotchas that going in blind will cost you a weekend. This guide covers what I've learned.

Before You Start: Planning the Migration

Don't just start exporting VMs. Take an inventory first.

For each VM on your ESXi host, document:

  • OS type and version
  • Disk format (thin or thick provisioned)
  • Number of disks and their sizes
  • Network configuration (static IPs, multiple NICs, VLANs)
  • Boot mode (BIOS or UEFI)
  • Any USB, serial, or PCI passthrough devices
  • Snapshots (you'll want to consolidate these before export)

I keep a spreadsheet. It sounds tedious, but it saves you from discovering halfway through that your critical database server was UEFI-booted with a second data disk on a different datastore.

Also: take snapshots of everything in ESXi before you start. If something goes wrong during migration, you want to be able to roll back to a known-good state.

Consolidate Snapshots First

This is the one people forget. If a VM has snapshots in ESXi, the disk files are split into a chain of delta files. Exporting a VM with snapshots either fails or produces an incomplete image.

In the vSphere client: right-click the VM > Snapshots > Delete All. Wait for the consolidation to finish — on large VMs with deep snapshot chains, this can take a while and will temporarily spike your storage I/O.
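If you'd rather script it, the same "Delete All" can be driven from the ESXi shell over SSH. A sketch using vim-cmd (which exists only on ESXi, hence the guard); "vm-name" and vmid 12 are example values:

```shell
# Guarded so it only acts on an actual ESXi host.
if command -v vim-cmd >/dev/null 2>&1; then
    vim-cmd vmsvc/getallvms | grep vm-name   # first column is the numeric vmid
    vim-cmd vmsvc/snapshot.removeall 12      # consolidate all snapshots of vmid 12
else
    echo "vim-cmd not found; run this over SSH on the ESXi host"
fi
```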

Method 1: OVF Export + Convert

This is the cleanest approach and what I recommend for most migrations.

Step 1: Export from ESXi

You can export via the vSphere client GUI (right-click VM > Export OVF Template) or use ovftool from the command line, which is faster for batch exports:

# Install ovftool (download from VMware/Broadcom)
# Export a single VM
ovftool \
  --noSSLVerify \
  vi://root@10.0.1.20/vm-name \
  /export/vm-name/

# This creates:
# /export/vm-name/vm-name.ovf
# /export/vm-name/vm-name-disk1.vmdk
# /export/vm-name/vm-name-disk2.vmdk (if multiple disks)

The --noSSLVerify flag skips certificate validation for ESXi's self-signed cert. If you've got a lot of VMs, script it:

#!/bin/bash
ESXI_HOST="10.0.1.20"
ESXI_USER="root"
EXPORT_DIR="/export"

VMS=("webserver" "database" "mailserver" "monitoring" "wiki")

for vm in "${VMS[@]}"; do
    echo "Exporting ${vm}..."
    ovftool --noSSLVerify \
        "vi://${ESXI_USER}@${ESXI_HOST}/${vm}" \
        "${EXPORT_DIR}/${vm}/"
done

For the GUI approach, the vSphere web client downloads the files to your local machine, which is painfully slow for large VMs. I strongly prefer ovftool running on a machine with fast network access to the ESXi host.

Step 2: Transfer to Proxmox

Get the VMDK files onto your Proxmox host. SCP, rsync, NFS share — whatever is fastest:

# rsync is ideal for large files (restartable, shows progress)
rsync -avP /export/ root@192.168.1.50:/tmp/migration/

If your VMDKs are large (hundreds of GB), consider skipping the copy entirely: export the directory over NFS from the machine holding the files and mount it on the Proxmox host:

# On Proxmox
mount -t nfs 10.0.1.100:/export /mnt/migration

Step 3: Convert VMDK to qcow2

Proxmox can work with VMDK files directly, but converting to qcow2 gives you better snapshot support and is the native format for QEMU/KVM:

# Basic conversion
qemu-img convert -f vmdk -O qcow2 vm-name-disk1.vmdk vm-name-disk1.qcow2

# With progress output and preallocation for better write performance
qemu-img convert -p -f vmdk -O qcow2 \
  -o preallocation=falloc \
  vm-name-disk1.vmdk \
  vm-name-disk1.qcow2

Conversion speed depends on your storage. On an NVMe SSD, a 100 GB VMDK takes about 5-8 minutes. On spinning disks, multiply that by 3-4x.

If you're using LVM-thin storage (the default local-lvm), you should convert to raw format instead of qcow2, since LVM-thin handles thin provisioning at the block level:

qemu-img convert -p -f vmdk -O raw \
  vm-name-disk1.vmdk \
  vm-name-disk1.raw
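With a dozen VMs it's worth looping the conversion. A minimal sketch, assuming the exports landed under /tmp/migration (both paths are assumptions; swap -O qcow2 for -O raw if you're headed for LVM-thin):

```shell
#!/bin/bash
# Batch-convert every exported VMDK found under SRC.
SRC="/tmp/migration"
DST="/tmp/migration/converted"

mkdir -p "${DST}"
for vmdk in "${SRC}"/*/*.vmdk; do
    [ -f "${vmdk}" ] || continue   # glob matched nothing
    out="${DST}/$(basename "${vmdk%.vmdk}").qcow2"
    echo "Converting ${vmdk} -> ${out}"
    qemu-img convert -p -f vmdk -O qcow2 "${vmdk}" "${out}"
done
```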

Step 4: Create the VM in Proxmox

Create a new VM shell that matches the original VM's specs. You can do this via the GUI or CLI. I prefer CLI because it's scriptable:

# Create VM with ID 100
qm create 100 \
  --name webserver \
  --memory 4096 \
  --cores 4 \
  --sockets 1 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --ostype l26 \
  --scsihw virtio-scsi-single \
  --bios seabios

# For UEFI VMs, use ovmf instead:
# --bios ovmf --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1

Step 5: Import the Disk

Now import the converted disk into the VM:

# For qcow2 into file-based storage ("local" keeps the image as qcow2)
qm importdisk 100 vm-name-disk1.qcow2 local

# For raw into LVM-thin (stored as a raw logical volume)
qm importdisk 100 vm-name-disk1.raw local-lvm

You'll see output like:

Successfully imported disk as 'unused0:local-lvm:vm-100-disk-0'

The disk is imported but not attached yet. Attach it:

qm set 100 --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0

For VMs with multiple disks, repeat the import for each VMDK and attach them as scsi1, scsi2, etc.
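For a multi-disk VM, the whole import-and-attach sequence can be generated in a loop. This sketch only prints the commands (VM ID 100, local-lvm, and the source path are assumptions); review the output, then pipe it to sh on the Proxmox host:

```shell
#!/bin/bash
# Emit the qm commands to import and attach each converted disk in order.
VMID=100
STORAGE="local-lvm"
i=0
for disk in /tmp/migration/webserver/*.qcow2; do
    [ -f "${disk}" ] || continue   # glob matched nothing
    echo "qm importdisk ${VMID} ${disk} ${STORAGE}"
    echo "qm set ${VMID} --scsi${i} ${STORAGE}:vm-${VMID}-disk-${i}"
    i=$((i + 1))
done
echo "qm set ${VMID} --boot order=scsi0"
```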

Method 2: Direct VMDK Copy (Quick and Dirty)

If you don't want to deal with OVF exports, you can SCP the VMDK files directly from the ESXi datastore:

# SSH into ESXi (enable SSH in Host > Actions > Services > Enable SSH)
# Find the VMDK files
ls /vmfs/volumes/datastore1/vm-name/
vm-name.vmdk          # Descriptor file
vm-name-flat.vmdk     # Actual data (this is the big one)

# From your Proxmox host, pull the flat VMDK
scp root@10.0.1.20:/vmfs/volumes/datastore1/vm-name/vm-name-flat.vmdk /tmp/migration/

Important: you want the -flat.vmdk file, not just the descriptor .vmdk. The descriptor is a tiny text file that points at the flat file and is useless on its own. Note that the flat file is raw disk data with no VMDK header, so when converting it, tell qemu-img the source format explicitly with -f raw; only if you copied both files together should you point qemu-img at the descriptor with -f vmdk.

Then convert and import as described above.

Handling UEFI vs BIOS Boot

This is where a lot of migrations fail silently. If your ESXi VM was using EFI firmware and you create the Proxmox VM with SeaBIOS (the default), it won't boot. You'll see the BIOS looking for a boot device and finding nothing.

Check the VM's firmware type in ESXi: Edit Settings > VM Options > Boot Options > Firmware. If it says "EFI," you need to set up OVMF in Proxmox:

qm set 100 --bios ovmf
qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0

The pre-enrolled-keys=0 setting skips Secure Boot keys. You can enable Secure Boot later if needed, but get the VM booting first.

Conversely, if you mistakenly set OVMF on a BIOS-booted VM, you'll get a UEFI shell instead of your OS. Switch back to seabios and it should boot.
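If you're not sure which firmware a Linux VM actually used, you can check from inside the guest before migrating: /sys/firmware/efi only exists when the kernel was booted via UEFI.

```shell
# Run inside the Linux guest on ESXi, before migration.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot: use --bios ovmf (plus an EFI disk) in Proxmox"
else
    echo "Legacy BIOS boot: the SeaBIOS default is fine"
fi
```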

The VirtIO Driver Situation

Here's the biggest conceptual difference between VMware and Proxmox: the virtual hardware. VMware uses PVSCSI for storage and VMXNET3 for networking. Proxmox/KVM uses VirtIO for both. These are fundamentally different paravirtualized drivers.

If you migrate a VM and set it up with VirtIO storage and network devices, the guest OS might not have VirtIO drivers installed, resulting in:

  • No boot disk found (storage controller not recognized)
  • No network connectivity

You have three options:

Option A: Install VirtIO Drivers Before Migration (Windows)

This is the cleanest approach for Windows VMs. While the VM is still running on ESXi:

  1. Download the VirtIO drivers ISO from Fedora's site
  2. Mount it in the ESXi VM
  3. Install the drivers — just run virtio-win-guest-tools.exe from the ISO. This installs VirtIO SCSI, network, balloon, and serial drivers.
  4. Reboot the VM on ESXi to verify it still works
  5. Then proceed with the migration

After importing to Proxmox, the VM will boot with VirtIO devices because the drivers are already installed.

Option B: Use IDE/E1000 First, Then Switch

If you can't install drivers before migration, create the Proxmox VM with legacy device types that every OS already supports:

qm create 100 \
  --name webserver \
  --memory 4096 \
  --cores 4 \
  --net0 e1000,bridge=vmbr0 \
  --scsihw lsi \
  --bios seabios

The lsi SCSI controller and e1000 NIC use drivers that are built into Windows and every Linux kernel. The VM will boot, but performance will be worse than VirtIO.

Once booted, install VirtIO drivers inside the guest, then switch the devices to VirtIO through the Proxmox GUI or CLI.
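The CLI version of that switch looks roughly like this; VM ID 100 and bridge vmbr0 are example values, the VM should be powered off first, and the sketch is guarded so it only runs where the qm tool exists:

```shell
# Swap the legacy devices for VirtIO once the guest has drivers installed.
if command -v qm >/dev/null 2>&1; then
    qm set 100 --delete net0                 # remove the e1000 NIC
    qm set 100 --net0 virtio,bridge=vmbr0    # re-add it as VirtIO
    qm set 100 --scsihw virtio-scsi-single   # switch the SCSI controller type
    qm set 100 --boot order=scsi0
else
    echo "qm not found; run these commands on the Proxmox host"
fi
```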

Option C: Linux VMs Usually Just Work

Modern Linux kernels (anything from the last decade) include VirtIO drivers in the kernel. You can use VirtIO storage and network devices from the start. I've migrated Ubuntu, Debian, CentOS, RHEL, and Arch VMs with zero driver issues.

The only exception I've hit is very old CentOS 6 installations that used a custom kernel without VirtIO modules compiled in. Use the IDE/E1000 approach for those.
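If you want to verify before migrating rather than find out at first boot, check the guest kernel's config for the VirtIO options (=y means built in, =m means module). A sketch to run inside the Linux guest:

```shell
cfg="/boot/config-$(uname -r)"
if [ -f "${cfg}" ]; then
    grep -E 'CONFIG_VIRTIO_(PCI|BLK|NET|SCSI)=' "${cfg}" \
        || echo "no VirtIO options in ${cfg}; plan on the IDE/E1000 approach"
else
    # Some distros ship no config file; probe loaded modules instead
    lsmod 2>/dev/null | grep -i virtio \
        || echo "no virtio modules loaded (they may still be built in)"
fi
```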

Network Adapter Changes

After migration, your VM's network interface name might change. On Linux, this means the old network config references an interface that no longer exists.

Typical scenario: the VM had ens192 (VMware's naming) and now has ens18 or enp0s18 (VirtIO naming). Your /etc/network/interfaces or Netplan config references ens192, so the network doesn't come up.

Fix it by booting the VM and updating the config:

# Find the new interface name
ip link show
1: lo: <LOOPBACK,UP,LOWER_UP>
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP>

# For Debian/Ubuntu with /etc/network/interfaces
sed -i 's/ens192/ens18/g' /etc/network/interfaces
systemctl restart networking

# For Ubuntu with Netplan
sed -i 's/ens192/ens18/g' /etc/netplan/*.yaml
netplan apply

# For RHEL/CentOS with NetworkManager
nmcli connection show
# Delete the old connection profile, create a new one for ens18
nmcli connection delete "ens192"
nmcli connection add con-name ens18 type ethernet ifname ens18 \
  ipv4.addresses 192.168.1.100/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns "192.168.1.1" \
  ipv4.method manual

On Windows, the new virtual NIC shows up as a new network adapter and the old VMware adapter disappears. Windows handles this gracefully — it assigns the new adapter to the same network but you lose any static IP configuration. Check and reconfigure via Network and Sharing Center.

Also check: if your VM used multiple NICs on ESXi, make sure you've created the corresponding number of NICs in Proxmox. Each --net0, --net1, etc. maps to a virtual NIC.

Handling Windows VMs Specifically

Windows VMs deserve their own section because they're always the fiddly ones.

Activation Issues

Changing the underlying hardware (even virtual hardware) can sometimes trigger Windows reactivation. This is especially true for OEM licenses tied to specific hardware identifiers. Have your license keys ready, or if you're using a KMS server, make sure the Proxmox VM can reach it.

Setting the SMBIOS UUID to match the ESXi VM's UUID can sometimes help:

# Get the UUID from the VMX file or ESXi
grep uuid.bios vm-name.vmx
# uuid.bios = "42 13 ab cd ..."

# Set it in Proxmox
qm set 100 --smbios1 uuid=4213abcd-...
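One wrinkle: the vmx file stores the UUID as space-separated hex bytes, while qm expects the canonical 8-4-4-4-12 form. A small reshuffle handles the conversion (the raw value below is a made-up example):

```shell
# Convert a vmx-style uuid.bios value into canonical UUID form.
raw="42 27 ae 7f 11 22 33 44-55 66 77 88 99 aa bb cc"   # example value
hex=$(echo "${raw}" | tr -d ' -')                        # strip spaces and the dash
uuid="${hex:0:8}-${hex:8:4}-${hex:12:4}-${hex:16:4}-${hex:20:12}"
echo "${uuid}"   # 4227ae7f-1122-3344-5566-778899aabbcc
# then: qm set 100 --smbios1 "uuid=${uuid}"
```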

SCSI Controller Detection

If the VM was using a PVSCSI controller on ESXi and you set the Proxmox VM to VirtIO SCSI, Windows won't find the boot disk because it doesn't have the driver loaded for the new controller.

The safest path: use --scsihw lsi for the initial boot. Then:

  1. Add the VirtIO ISO as a CD-ROM drive
  2. Boot the VM
  3. Run virtio-win-guest-tools.exe from the ISO, which stages the VirtIO SCSI driver even though no VirtIO device is present yet
  4. Alternatively, add a small secondary disk on the VirtIO Block bus and point Device Manager at the ISO when Windows asks for a driver (the SCSI controller type is a per-VM setting, so you can't mix LSI and VirtIO SCSI disks on one VM)
  5. After the drivers are installed, shut down
  6. Switch the SCSI controller type from LSI to VirtIO SCSI
  7. Boot and verify Windows sees the disk
  8. Remove the secondary disk and the ISO if you added them
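From the Proxmox CLI, the ISO attach and the final controller switch look roughly like this (VM 100 plus the storage and ISO names are assumptions; the sketch is guarded so it only runs where qm exists):

```shell
if command -v qm >/dev/null 2>&1; then
    # Attach the VirtIO driver ISO as a CD-ROM
    qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom
    # ...boot, install the drivers, shut the VM down, then:
    qm set 100 --scsihw virtio-scsi-single   # boot disk now presents as VirtIO SCSI
    qm set 100 --boot order=scsi0
else
    echo "qm not found; run these commands on the Proxmox host"
fi
```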

QEMU Guest Agent

Install the QEMU Guest Agent inside Windows for proper shutdown, IP reporting, and filesystem quiescing:

# Mount the VirtIO ISO in the VM
# Run the guest agent installer from D:\guest-agent\qemu-ga-x86_64.msi

On Linux, it's simpler:

apt install qemu-guest-agent    # Debian/Ubuntu
yum install qemu-guest-agent    # RHEL/CentOS

systemctl enable --now qemu-guest-agent

Then enable it on the Proxmox side:

qm set 100 --agent enabled=1,fstrim_cloned_disks=1

Performance Tuning After Migration

Migrated VMs often run noticeably slower until you tune a few things.

CPU Type

Don't use kvm64 (the old default; newer Proxmox releases default to x86-64-v2-AES, which is better but still generic). Set the CPU type to host to pass through your actual CPU features:

qm set 100 --cpu host

This gives the VM access to AES-NI, AVX, and other instruction sets that improve performance. The only downside is that live migration between hosts with different CPU models won't work, but for a homelab, this is rarely an issue.

Disk I/O

If you're using VirtIO SCSI (which you should be), enable IO thread and discard:

qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,discard=on,ssd=1

  • iothread=1 — gives the disk its own dedicated I/O thread instead of sharing with the main QEMU thread
  • discard=on — enables TRIM/UNMAP passthrough so the guest can free unused blocks
  • ssd=1 — tells the guest the disk is an SSD, which affects I/O scheduling in the guest OS

Ballooning

The VirtIO balloon driver lets Proxmox reclaim unused memory from VMs. Install the driver in the guest and enable it:

qm set 100 --balloon 2048

This sets the minimum memory to 2 GB while allowing the VM to use its full allocated memory when needed.

Network Performance

After switching to VirtIO networking, you should be getting near-native network performance. If throughput seems low, check:

# Inside the VM, verify the driver in use
ethtool -i ens18
driver: virtio_net

# Check for TX/RX offloading
ethtool -k ens18 | grep -i offload
tcp-segmentation-offload: on
generic-receive-offload: on

If you're seeing poor performance with the e1000 adapter (because you haven't switched to VirtIO yet), that's expected. E1000 emulates real hardware and tops out around 1 Gbps with high CPU usage. VirtIO will easily saturate a 10 Gbps link with minimal CPU overhead.

A Realistic Migration Timeline

Here's what a typical migration looks like in terms of time, based on my experience with a mix of 10-15 VMs:

  • Inventory and planning: 2-3 hours
  • Snapshot consolidation: 30 min - 2 hours (depends on snapshot depth)
  • OVF export: 1-4 hours (depends on total disk size)
  • Transfer to Proxmox: 1-3 hours (network dependent)
  • Convert and import: 30 min - 2 hours
  • Boot and fix each VM: 15-30 min per VM
  • VirtIO driver install + switch: 10-20 min per Windows VM
  • Testing: 2-4 hours

For a 10-VM environment with a mix of Linux and Windows, budget a full weekend. You can parallelize the export and conversion steps, but the per-VM boot-and-fix phase is sequential and tedious.

Rollback Strategy

Keep your ESXi environment running until you've verified every migrated VM works correctly in Proxmox. I typically run both environments in parallel for at least a week, with the ESXi VMs powered off but available.

Once you're confident everything works:

  1. Power off the ESXi VMs permanently
  2. Wait another week (just in case)
  3. Export the ESXi VMDKs to a backup location if you have the storage
  4. Repurpose the ESXi hardware

Don't wipe the ESXi host the same day you finish migration. Murphy's Law applies double to virtualization migrations.

Wrapping Up

The VMware-to-Proxmox migration is straightforward once you understand the moving parts. The biggest friction points are VirtIO drivers on Windows and the UEFI/BIOS mismatch — everything else is basically copying files and clicking buttons.

What surprised me most about the migration was the performance difference. Several of my VMs actually run faster on Proxmox than they did on ESXi, particularly the Linux ones. KVM's VirtIO stack is incredibly efficient, and not paying the VMware overhead tax on every I/O operation adds up.

The licensing situation was what pushed me to migrate, but the technology is what made me stay. Proxmox does everything I needed ESXi for, the web UI is comparable, and the command-line tools are significantly better. Plus, the fact that it's Debian underneath means every Linux tool and trick I already know just works.

If you're staring at a VMware renewal quote and feeling that sinking feeling, take the plunge. A weekend of migration work beats years of licensing costs.
