Migrate Hyper-V VMs to Proxmox VE Step by Step

Export Hyper-V VMs, convert VHDX to qcow2, and import into Proxmox VE 9 without reinstalling. Covers VirtIO drivers, Gen 2 UEFI VMs, and Windows activation.

Proxmox Pulse
hyper-v vm-migration vhdx qemu-img virtio

Migrating from Hyper-V to Proxmox doesn't require starting from scratch. Export your VMs from Hyper-V, convert the VHDX disk images with qemu-img, import them using qm importdisk, and boot. Most Linux guests come up on the first try; Windows VMs need VirtIO drivers and occasionally a licensing touchup. By the end of this guide you'll have your first Hyper-V workload running on Proxmox VE 9 with no data loss and no OS reinstall.

Key Takeaways

  • Export format: Hyper-V uses VHDX; Proxmox works with raw or qcow2 — qemu-img convert handles the translation.
  • Linux guests: Boot cleanly with no driver changes; switching to virtio-net is optional but improves throughput.
  • Windows guests: Require VirtIO drivers post-import; use the Fedora VirtIO ISO to install them inside the running guest.
  • Snapshots: Merge all Hyper-V checkpoints before export — an unmerged .avhd/.avhdx chain leaves you with an incomplete or corrupted disk image.
  • Generation 2 VMs: Use UEFI; match the firmware type when creating the Proxmox VM shell or the bootloader won't find the disk.

Why Hyper-V Admins Are Moving to Proxmox

The calculus has shifted. Running Hyper-V on bare metal still requires a Windows Server host OS, which means CALs, activation, and Windows Update on your hypervisor. Proxmox VE runs Debian under the hood, has no per-VM licensing, and gives you KVM virtualization plus LXC containers from a single web UI — no Microsoft dependency anywhere in the stack.

For mixed workloads, the integration model is tighter too. Running containers directly alongside VMs without nested virtualization is a significant operational improvement. Running Docker Inside LXC Containers on Proxmox shows what that looks like in practice, and it's one of the first things you'll want to set up after your VMs are migrated.

What You Need Before You Start

Before touching anything in production, confirm you have:

  • Proxmox VE 9 installed and reachable. If you're starting fresh, How to Install Proxmox VE on Any Hardware covers the full install from ISO to first login.
  • Disk space for exports: Budget for the full declared capacity of each VHDX — a dynamically expanding 500 GB disk usually exports at its current file size, but blocks freed inside the guest aren't reclaimed without compacting, so it can approach 500 GB even when only 80 GB is in use. Plan accordingly.
  • qemu-img on a Linux machine (or WSL2 on Windows). On Proxmox itself, it's pre-installed. On Debian/Ubuntu: sudo apt install qemu-utils.
  • A file transfer path from your Hyper-V host to the Proxmox node — SSH/scp, a shared NFS mount, or an external USB drive all work.
  • The VirtIO ISO if you're migrating Windows guests. Download details are below.

How to Export VMs from Hyper-V

Merge Snapshots First

This is the step most people skip and then regret. If your VM has checkpoints, they live as a chain of .avhd or .avhdx differencing disks alongside the base .vhdx. Exporting without merging produces an incomplete or corrupted image.

In Hyper-V Manager: right-click the VM → Checkpoints → delete all of them. Hyper-V merges the chain automatically when you delete — give it a few minutes per checkpoint on large disks. Once the checkpoint tree is empty, proceed with export.
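
The PowerShell equivalent, if you're scripting the prep (the VM name is an example):

# Deleting checkpoints triggers the same automatic merge as Hyper-V Manager;
# let the merge finish before you export
Get-VMCheckpoint -VMName "web-server-01" | Remove-VMCheckpoint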

Export via Hyper-V Manager

Right-click the VM → Export → choose a destination folder. Hyper-V writes a folder structure with:

  • Virtual Machines/ — the VM config XML
  • Virtual Hard Disks/ — the .vhdx disk files
  • Snapshots/ — should be empty after the merge step

Export via PowerShell

Export-VM -Name "web-server-01" -Path "D:\HyperV-Exports\web-server-01"

For bulk exports across all VMs on the host:

Get-VM | Export-VM -Path "D:\HyperV-Exports"

The export pauses disk I/O briefly for consistency. I recommend a clean shutdown for planned migrations rather than a live export — dirty exports are for disaster recovery, not deliberate moves.
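
For a planned move, the shutdown-then-export sequence looks like this (same example VM name as above):

Stop-VM -Name "web-server-01"
Export-VM -Name "web-server-01" -Path "D:\HyperV-Exports\web-server-01"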

Converting VHDX Disks to qcow2

Proxmox natively imports raw and qcow2 disk formats. qcow2 is the better choice: it supports thin provisioning (empty sectors don't waste space), snapshots, and live Proxmox Backup Server backups.

Convert on the Proxmox Node Directly

Copy the VHDX file to the Proxmox node first:

scp "user@hyperv-host:D:/HyperV-Exports/web-server-01/Virtual Hard Disks/web-server-01.vhdx" \
  root@proxmox:/tmp/

Then convert it on the node:

qemu-img convert -f vhdx -O qcow2 -p /tmp/web-server-01.vhdx /tmp/web-server-01.qcow2

The -p flag shows a progress bar. A 100 GB VHDX with 60 GB of actual data takes around 3–5 minutes on a SATA SSD. NVMe-to-NVMe cuts that to under two minutes. Verify the result:

qemu-img info /tmp/web-server-01.qcow2

You'll see virtual size (declared disk size) and disk size (actual space on disk) — the latter should match your used data, not the full VHDX allocation.
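
On a converted disk holding roughly 60 GB of data, the output looks something like this (values are illustrative):

image: /tmp/web-server-01.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 60.2 GiB
cluster_size: 65536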

Convert on Windows via WSL2

If moving the VHDX to Linux first isn't practical, WSL2 runs qemu-img directly:

# Inside WSL2 (Ubuntu 24.04)
sudo apt update && sudo apt install -y qemu-utils

qemu-img convert -f vhdx -O qcow2 -p \
  "/mnt/d/HyperV-Exports/web-server-01/Virtual Hard Disks/web-server-01.vhdx" \
  /tmp/web-server-01.qcow2

WSL2's I/O translation layer adds roughly 30–40% more time compared to native Linux. Fine for a one-time migration, slow for ten VMs.

How to Import the VM into Proxmox

Step 1: Create the VM Shell

In the Proxmox web UI, create a new VM and remove the default disk on the Disks tab of the wizard — you're importing an existing disk, not creating a new one. Set these fields based on the Hyper-V generation:

Setting            Linux Guest       Windows Gen 1      Windows Gen 2
OS Type            Linux 6.x         Windows 11/2022    Windows 11/2022
Machine type       q35               q35                q35
BIOS               SeaBIOS           SeaBIOS            OVMF (UEFI)
SCSI controller    VirtIO SCSI       VirtIO SCSI        VirtIO SCSI
EFI disk           No                No                 Yes

For UEFI (Gen 2) Windows VMs, also check Add EFI disk — Proxmox needs this to store NVRAM variables including Secure Boot state. Note the VM ID the wizard assigns; we'll call it 101.
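
If you prefer the CLI, a sketch of the same shell for a Gen 2 (UEFI) Windows guest looks roughly like this; the sizing and the local-lvm storage are examples, adjust to your environment:

# No disk is attached yet -- it gets imported in the next step
qm create 101 --name web-server-01 --memory 8192 --cores 4 \
  --ostype win11 --machine q35 --bios ovmf \
  --scsihw virtio-scsi-pci --efidisk0 local-lvm:1,efitype=4m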

Step 2: Import the Disk

qm importdisk 101 /tmp/web-server-01.qcow2 local-lvm --format qcow2

Replace local-lvm with your actual storage pool name. Check what's available:

pvesm status

On a ZFS-based setup, use local-zfs. Block-based storage (LVM-thin, ZFS) stores disks as raw volumes, so drop the --format qcow2 flag there; it only applies to file-based storage such as a directory or NFS share. After the import completes you'll see output like:

Successfully imported disk as 'unused0:local-lvm:vm-101-disk-0'

Step 3: Attach the Disk and Set Boot Order

In the web UI: VM 101 → Hardware → unused0 → click Edit → set the bus:

  • VirtIO Block for Linux guests
  • SCSI (with the VirtIO SCSI controller already configured) for Windows guests

Then configure boot order: Options → Boot Order → enable the new disk and move it to first position.
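
The CLI equivalent for a Windows guest, using the volume name from the import output above:

qm set 101 --scsi0 local-lvm:vm-101-disk-0    # use --virtio0 instead for a Linux guest
qm set 101 --boot order=scsi0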

Step 4: Add a Network Adapter

Hyper-V virtual NICs don't carry over. Add one in Hardware → Add → Network Device:

  • VirtIO (paravirtualized) for Linux guests
  • E1000 for Windows guests initially — switch to VirtIO after installing drivers inside the guest

Assign it to the correct Proxmox bridge (vmbr0 for your LAN, or whichever bridge serves that network).
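
Or from the CLI:

qm set 101 --net0 virtio,bridge=vmbr0    # Linux guest
qm set 101 --net0 e1000,bridge=vmbr0     # Windows guest, until VirtIO drivers are installed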

Fixing Windows Guests After Import

The BSOD Problem and How to Avoid It

If you import with a VirtIO disk and boot without drivers, Windows will bluescreen immediately with INACCESSIBLE_BOOT_DEVICE. Two ways around it:

Safe path: Import the disk as SCSI with the lsi controller type. Windows has inbox drivers for it, so the VM boots. Install VirtIO drivers from inside the running guest, then switch the disk and NIC to VirtIO.

Fast path: Mount the VirtIO ISO as a second CD-ROM before the first boot. When Windows hits the BSOD and drops into WinRE after a couple of failed boots, open the recovery command prompt and inject the VirtIO storage driver into the offline installation with dism, then boot normally.
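
A minimal sketch of that injection, assuming the Windows volume shows up as C: and the VirtIO ISO as E: inside WinRE (check with diskpart, the letters often differ):

rem Run from the WinRE command prompt; drive letters are assumptions
dism /image:C:\ /add-driver /driver:E:\vioscsi\w11\amd64 /recurse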

Download the VirtIO ISO directly to Proxmox:

wget -P /var/lib/vz/template/iso/ \
  https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
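
Then attach it to the VM as a second CD-ROM drive (VM ID and ISO storage match the examples in this guide):

qm set 101 --ide3 local:iso/virtio-win.iso,media=cdrom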

Installing VirtIO Drivers Inside the Guest

Once Windows is running, open Device Manager and point it at the mounted ISO. Install from these subdirectories (adjust for your Windows version):

vioscsi\w11\amd64\    → VirtIO SCSI storage driver (Windows 11)
NetKVM\w11\amd64\     → VirtIO network adapter
Balloon\w11\amd64\    → Memory balloon driver
qxl\w11\amd64\        → QXL display (optional, improves VNC performance)

For Windows Server 2022, use 2k22\amd64 in place of w11\amd64. After installing the storage and network drivers, shut down the VM, change the disk bus to VirtIO Block in Proxmox hardware, and change the NIC to VirtIO. It will come up on VirtIO from that point forward.
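
The same switch from the CLI, as a sketch using the storage and volume names from the import step (run it while the VM is shut down):

qm set 101 --delete scsi0                       # detach; the volume is kept and reappears as unused0
qm set 101 --virtio0 local-lvm:vm-101-disk-0    # reattach on the VirtIO Block bus
qm set 101 --boot order=virtio0
qm set 101 --net0 virtio,bridge=vmbr0           # switch the NIC from E1000 to VirtIO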

Windows Activation After Migration

Expect deactivation. The virtual BIOS fingerprint changed when you moved from Hyper-V to KVM, and Windows ties its activation state to that fingerprint. For volume or MAK licenses:

slmgr /ato

For UEFI OEM keys embedded in physical hardware, the key is tied to that machine's firmware — it won't transfer to a VM. Plan for this before migration day: either apply a volume key, set up a KMS server, or purchase a new license for the migrated instance.
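
If you're applying a volume key, the usual slmgr sequence is: install the key, optionally point at your KMS host, then activate (the key and hostname below are placeholders):

rem Run in an elevated command prompt inside the guest
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr /skms kms.example.com
slmgr /ato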

Networking and Storage Equivalents

Hyper-V virtual switches don't map directly to Proxmox, but the translation is straightforward:

Hyper-V                    Proxmox Equivalent
External virtual switch    Linux bridge (vmbr0) with physical NIC uplink
Internal virtual switch    Linux bridge without uplink
Private virtual switch     Isolated Linux bridge, no uplink
VLAN tagging on NIC        VLAN tag field on the VM network device
Storage Spaces mirror      ZFS mirror pool or LVM-thin
Hyper-V Replica            Proxmox Backup Server replication

If your Hyper-V VMs ran on VLANs, you'll need to recreate the VLAN config on Proxmox bridges before the VMs come online — otherwise the guests will boot into a black-hole network segment. Configuring VLANs on Proxmox with Linux Bridges has the full bridge and VLAN tag configuration walkthrough.
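
For reference, a VLAN-aware bridge in /etc/network/interfaces looks roughly like this; the NIC name, addresses, and VLAN range are examples, not values from this migration:

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

After an ifreload -a, tag the guest NIC itself, e.g. qm set 101 --net0 virtio,bridge=vmbr0,tag=20.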

Common Pitfalls

Secure Boot violations: Gen 2 Hyper-V VMs use Secure Boot. Proxmox's OVMF supports it, but whether it's active depends on how the EFI disk was created: with pre-enrolled keys the Microsoft certificates are loaded, without them Secure Boot is effectively off. If a Windows VM won't boot and shows a Secure Boot error in the VNC console, press Esc at the OVMF splash screen, navigate to Device Manager → Secure Boot Configuration, and either disable Secure Boot or enroll the Microsoft certificate database.

Dynamic Memory disappears: Hyper-V's Dynamic Memory doesn't automatically translate to KVM's balloon driver. After installing the VirtIO Balloon driver in the guest, enable ballooning on the VM in Proxmox (Hardware → Memory → Edit → tick Ballooning Device and set a minimum). Without it, whatever RAM you set at VM creation is fixed — the guest can't give memory back.
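
From the CLI, the rough equivalent of Dynamic Memory's minimum/maximum split would be (values are examples):

qm set 101 --memory 8192 --balloon 4096    # 8 GB ceiling, balloon can reclaim down to 4 GB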

Time sync drift: Hyper-V uses its own enlightenment-based time sync, which disappears after migration. Windows guests fall back to the Windows Time Service, and the virtual hardware clock is only seeded from the host at boot. Make sure your Proxmox node is configured to sync from a reliable NTP source so guests start with accurate time.
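
A quick health check on the node (Proxmox VE ships chrony as its NTP client):

timedatectl                  # look for "System clock synchronized: yes"
systemctl status chrony      # NTP servers are configured in /etc/chrony/chrony.conf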

Generation 1 vs Generation 2 mismatch: Gen 1 is MBR + legacy BIOS. Gen 2 is GPT + UEFI. If you create the Proxmox VM shell with SeaBIOS but the original was Gen 2, the bootloader won't find the disk. Check the Hyper-V VM's Firmware settings before you export — it's listed right there.

Validating Before Cutover

Don't update DNS or decommission the Hyper-V VM until you've confirmed:

  • The VM boots to login without errors in the VNC console
  • Network works: ping the gateway, ping 1.1.1.1, test internal DNS (a quick command sequence follows this list)
  • Application checks pass: database responds, web service returns HTTP 200, scheduled tasks run
  • Backup is configured and tested — Automated Backups with Proxmox Backup Server shows how to schedule and verify backups before you call the migration done
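
For a Linux guest, the network and application bullets boil down to something like this (addresses and hostname are examples):

ping -c 3 192.168.1.1                    # gateway
ping -c 3 1.1.1.1                        # outside world
nslookup intranet.example.local          # internal DNS
curl -sI http://localhost/ | head -n 1   # expect an HTTP 200 from the web service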

Keep the Hyper-V export on disk for at least one week after cutover. Disk space is cheap; an emergency rollback that takes 10 minutes beats one that takes 10 hours.

Conclusion

The migration from Hyper-V to Proxmox comes down to three commands — Export-VM, qemu-img convert, and qm importdisk — with most of the elapsed time spent on disk I/O rather than configuration. Linux VMs typically just boot; Windows VMs need an extra 30 minutes for VirtIO drivers and a licensing check. Once the first workload is running on Proxmox, set up Proxmox Backup Server for the migrated VMs before moving the next one — that's the right order of operations, not an afterthought.
