OPNsense on Proxmox: The Ultimate Firewall VM Guide
Deploy OPNsense as a Proxmox VM with NIC passthrough, VLAN trunking, and full firewall control. Better than bare metal — here's why and how.
Running OPNsense on bare metal is fine — until you want to snapshot your firewall config before a risky upgrade, clone it to a test environment, or recover it in minutes after a hardware failure. Virtualizing OPNsense on Proxmox gives you all of that, and with proper NIC passthrough or VLAN trunking, you won't sacrifice a single packet of performance. This guide walks you through the full setup from ISO download to a production-ready firewall VM.
Why Run OPNsense as a Proxmox VM?
The "it should run on dedicated hardware" crowd has a point — your firewall is critical infrastructure. But Proxmox changes the calculus entirely.
Virtualizing OPNsense means:
- Snapshots before upgrades — roll back a broken OPNsense update in 30 seconds
- Full config portability — export the VM and restore it on any Proxmox node
- Hardware consolidation — one beefy server replaces a dedicated firewall box
- Live migration — move OPNsense between cluster nodes without dropping connections (note: PCIe passthrough ties a VM to its host, so this one applies to all-VirtIO builds)
The one real concern is a "noisy neighbor" scenario where other VMs compete for CPU or I/O during heavy firewall load. The fix is simple: pin OPNsense to dedicated CPU cores and use NIC passthrough for the WAN interface. More on both below.
Hardware and Network Planning
Before touching Proxmox, sketch out your network topology. OPNsense needs at minimum two network interfaces:
- WAN — connects to your modem/ISP router
- LAN — connects to your internal switch
For a homelab or small office, a common layout looks like this:
```
ISP Modem → [WAN port] OPNsense VM [LAN port] → Managed Switch
                                                   ├── VLAN 10 (Trusted)
                                                   ├── VLAN 20 (IoT)
                                                   └── VLAN 30 (DMZ)
```
You have two hardware approaches:
**Option A: NIC Passthrough (recommended for WAN)**
Pass a physical NIC directly to the OPNsense VM using PCIe passthrough (IOMMU). The VM gets exclusive, near-native access to that NIC. Best for the WAN interface where latency matters most.
**Option B: VirtIO + Linux Bridge (flexible for LAN)**
Create a Proxmox Linux bridge connected to a physical port or trunk port on your switch. OPNsense sees a virtual NIC backed by the bridge. Works great for LAN and VLAN trunking.
For most setups, the sweet spot is NIC passthrough for WAN and a trunk bridge for LAN.
Enabling IOMMU for NIC Passthrough
Passthrough requires IOMMU support. If you haven't enabled it yet, here's the quick version.
Intel Systems
Edit the GRUB config:
nano /etc/default/grub
Find GRUB_CMDLINE_LINUX_DEFAULT and add intel_iommu=on iommu=pt:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
Update GRUB and reboot:
```bash
update-grub
reboot
```
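Note that hosts installed with ZFS-on-root boot via systemd-boot rather than GRUB; there the flags go on the single line in /etc/kernel/cmdline instead. A sketch (the same applies on AMD with its flags):

```bash
# Append the IOMMU flags to the one-line kernel cmdline, then sync the boot entries
sed -i '$ s/$/ intel_iommu=on iommu=pt/' /etc/kernel/cmdline
proxmox-boot-tool refresh
reboot
```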
AMD Systems
AMD IOMMU is usually enabled in BIOS (look for AMD-Vi or SVM Mode). Add iommu=pt to the kernel line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
Verify IOMMU is Active
After reboot, confirm it's working:
```bash
dmesg | grep -e DMAR -e IOMMU | head -20
```
You should see lines like DMAR: IOMMU enabled or AMD-Vi: AMD IOMMUv2 loaded. Also check IOMMU groups to confirm your NIC is in its own group:
for d in /sys/kernel/iommu_groups/*/devices/*; do
echo "Group $(basename $(dirname $d)): $(lspci -nns ${d##*/})"
done | grep -i eth
A NIC in a shared IOMMU group with other devices needs an ACS override patch — or just use the VirtIO bridge approach instead.
Creating the OPNsense VM
Download the OPNsense ISO
Grab the latest DVD ISO from the official OPNsense download page. Upload it to Proxmox:
# On Proxmox host, download directly to local storage
wget -O /var/lib/vz/template/iso/OPNsense-25.1-dvd-amd64.iso \
https://mirror.ams1.nl.leaseweb.net/opnsense/releases/25.1/OPNsense-25.1-dvd-amd64.iso
Or upload via the Proxmox web UI: Datacenter → your node → local storage → ISO Images → Upload.
VM Creation Settings
Create a new VM with these recommended settings:
| Setting | Value |
|---|---|
| OS Type | Other |
| BIOS | OVMF (UEFI) or SeaBIOS |
| CPU | 2–4 cores, type host |
| RAM | 2–4 GB (4 GB if using IDS/IPS) |
| Disk | 32 GB, VirtIO SCSI |
| Network | Add interfaces after creation |
Using CPU type host exposes the full instruction set to OPNsense, which matters for AES-NI crypto acceleration in VPN tunnels.
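If you prefer the CLI, here is a minimal sketch of the same settings; VMID 150 and the storage names (local-lvm, local) are assumptions, so adjust to your environment:

```bash
# Create the VM shell: UEFI, host CPU, VirtIO SCSI disk, installer ISO attached
# pre-enrolled-keys=0 skips Secure Boot keys, which FreeBSD-based guests don't want
qm create 150 --name opnsense --ostype other \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1,pre-enrolled-keys=0 \
  --cpu host --cores 4 --memory 4096 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --cdrom local:iso/OPNsense-25.1-dvd-amd64.iso
```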
Add the WAN Interface via PCIe Passthrough
In the VM's Hardware tab, click Add → PCI Device. Select your WAN NIC from the list. Enable:
- All Functions — passes all functions of the NIC (important for multi-port NICs)
- ROM-Bar — usually needed for proper passthrough
- Primary GPU — leave unchecked
For a single-port NIC, that's it. The OPNsense VM now owns that NIC completely.
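The same attachment can be scripted. A sketch, assuming VMID 150 and that the WAN NIC sits at PCI address 0000:03:00.0 (find yours first):

```bash
# Identify the WAN NIC's PCI address
lspci -nn | grep -i ethernet

# Pass it through; pcie=1 requires the q35 machine type, rombar=1 mirrors the GUI checkbox
qm set 150 --hostpci0 0000:03:00.0,pcie=1,rombar=1
```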
Add the LAN Interface via Linux Bridge
On your Proxmox host, you should have a bridge for your LAN trunk. If not, create one:
In System → Network on your Proxmox node, add a Linux Bridge:
- Name: `vmbr1`
- Bridge ports: your LAN-facing NIC (e.g., `enp3s0`)
- VLAN aware: checked ✓
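If you manage the host's network config by hand, the equivalent /etc/network/interfaces stanza looks like this — a sketch assuming `enp3s0` is the LAN-facing NIC; apply it with `ifreload -a`:

```bash
# VLAN-aware bridge for the LAN trunk
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```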
Back in the OPNsense VM hardware, add a Network Device:
- Bridge: `vmbr1`
- Model: VirtIO
- VLAN Tag: leave blank (trunk — OPNsense handles the tags)
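Or from the CLI (VMID 150 assumed); leaving out the tag option is what makes it a trunk:

```bash
# VirtIO NIC on the VLAN-aware bridge; no VLAN tag means all tags reach the guest
qm set 150 --net0 virtio,bridge=vmbr1
```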
Installing OPNsense
Start the VM and open the console. Boot from the ISO. You'll land at a login prompt — use installer / opnsense to start the setup wizard.
Follow the installer:
- Keymap — choose your keyboard layout
- Install (UFS or ZFS) — ZFS is overkill for a firewall VM; UFS is fine
- Select disk — pick your VirtIO disk (`vtbd0` or similar)
- Root password — set a strong one
- Reboot and remove the ISO
After reboot, OPNsense drops you into a console menu. Use option 1 to assign interfaces:
- WAN → your passthrough NIC (will show as `em0`, `igb0`, or similar)
- LAN → your VirtIO NIC (shows as `vtnet0`)
Then use option 2 to set the LAN IP address. A typical homelab LAN is 192.168.1.1/24; the same prompt asks whether to enable a DHCP server on that range.
Accessing the Web UI
From a machine on the LAN side, browse to https://192.168.1.1. Default credentials are root / opnsense. The setup wizard walks you through:
- Hostname and DNS
- Time zone
- WAN interface type (DHCP, PPPoE, static)
- LAN IP confirmation
- Root password change
Complete the wizard and you're in.
VLAN Configuration for Network Segmentation
This is where OPNsense on Proxmox shines. You can carve up your network into isolated VLANs without touching physical switch configs beyond enabling trunking.
Create VLANs in OPNsense
Go to Interfaces → Other Types → VLAN. Add a VLAN for each segment:
| VLAN Tag | Description | Subnet |
|---|---|---|
| 10 | Trusted | 10.0.10.0/24 |
| 20 | IoT | 10.0.20.0/24 |
| 30 | DMZ | 10.0.30.0/24 |
For each, set the parent interface to your LAN VirtIO NIC (vtnet0).
Assign and Enable VLAN Interfaces
Under Interfaces → Assignments, add each VLAN as a new interface. Then enable them one by one under Interfaces → [VLAN10], etc. Set a static IP (e.g., 10.0.10.1/24) as the gateway for each segment and enable DHCP under Services → DHCPv4.
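To confirm the trunk is actually delivering tags, you can inspect the bridge from the Proxmox host. A quick check assuming VMID 150, whose first NIC appears on the host as tap150i0:

```bash
# Show which VLAN IDs the VM's tap interface is allowed to carry
bridge vlan show dev tap150i0
```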
Firewall Rules for VLAN Isolation
OPNsense's firewall rules are interface-based (traffic entering an interface). A basic isolation setup:
Allow trusted VLAN to reach anything:
```
Interface:    VLAN10
Source:       VLAN10 net
Destination:  any
Action:       Pass
```
Block IoT from reaching trusted VLAN:
```
Interface:    VLAN20
Source:       VLAN20 net
Destination:  VLAN10 net
Action:       Block
```
Allow IoT to reach internet only:
```
Interface:    VLAN20
Source:       VLAN20 net
Destination:  !RFC1918
Action:       Pass
```
Here !RFC1918 is shorthand for "everything except private address space". OPNsense doesn't ship that alias out of the box: define an RFC1918 alias under Firewall → Aliases covering the private ranges, then check Destination / Invert on the rule. The result allows internet access while blocking lateral movement.
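The alias itself is just the three private ranges; a minimal definition (Firewall → Aliases, type Network(s), name RFC1918 assumed):

```
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
```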
CPU Pinning for Predictable Performance
If OPNsense shares a host with heavy workloads, pin it to dedicated cores to avoid latency spikes. Edit the VM config directly:
# On Proxmox host
nano /etc/pve/qemu-server/<VMID>.conf
Add a scheduling weight, a CPU-time cap, and (on Proxmox 7.3 or later) a hard pin to physical cores 0 and 1 via the affinity option:

```
cpuunits: 2048
cpulimit: 2
affinity: 0,1
numa: 0
```
The affinity option hard-pins the VM's vCPU threads to those host cores; on Proxmox releases older than 7.3, you can approximate it with taskset against the QEMU process after the VM starts. For most homelabs, simply giving OPNsense a higher cpuunits weight is enough.
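To confirm the pin took effect, check the QEMU process's affinity from the host (VMID 150 assumed):

```bash
# taskset -cp prints the list of host cores the process may run on
taskset -cp "$(cat /var/run/qemu-server/150.pid)"
```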
Snapshots: The Killer Feature
This is why you virtualized OPNsense in the first place. Before any major change:
# CLI snapshot
qm snapshot <VMID> pre-upgrade-25.1 --description "Before OPNsense 25.1 upgrade"
Or use the web UI: VM → Snapshots → Take Snapshot.
If the upgrade breaks something, rollback takes 10 seconds:
qm rollback <VMID> pre-upgrade-25.1
Your entire firewall — config, rules, packages, certificates — is restored exactly as it was. No re-importing backups, no re-entering API keys.
Performance Considerations
VirtIO drivers give excellent throughput for the LAN side — typically within 5–10% of bare metal for a homelab's traffic levels. For WAN, NIC passthrough eliminates the virtualization layer entirely.
A few tuning tips:
- Review hardware offloading under Interfaces → Settings: OPNsense disables hardware CRC, TSO, and LRO by default, and on a firewall that's usually the right call, since TSO/LRO can break forwarded traffic; only re-enable them after testing on your VirtIO NICs
- Set the VM's network queue count to match its vCPU count for high-throughput scenarios (see the example after this list)
- Use `host` CPU type to expose AES-NI and the host's other instruction-set extensions to OPNsense (a large speedup for IPsec and VPN crypto generally)
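A sketch of the multiqueue tip above, assuming VMID 150 and a 4-core VM:

```bash
# One VirtIO queue per vCPU spreads NIC interrupt load across cores
qm set 150 --net0 virtio,bridge=vmbr1,queues=4
```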
For a gigabit WAN connection, a 2-core VM easily handles line rate with headroom for IDS. Suricata with full rulesets needs 4 cores and 4 GB RAM.
Backup and Disaster Recovery
Beyond snapshots, set up Proxmox Backup Server to back up the OPNsense VM on a schedule. A nightly backup means your worst-case recovery scenario is restoring from yesterday.
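Scheduled jobs live under Datacenter → Backup; a one-off run from the CLI looks like this (VMID 150 and a PBS storage named `pbs` are assumptions):

```bash
# Snapshot-mode backup: the firewall keeps running while it's taken
vzdump 150 --storage pbs --mode snapshot
```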
Also export OPNsense's own config backup regularly via System → Configuration → Backups. Store the XML config off-site. Restoring OPNsense config to a fresh VM takes under 5 minutes.
Conclusion
OPNsense on Proxmox isn't a compromise — it's an upgrade. You get a fully capable, enterprise-grade firewall with the operational benefits of virtualization: snapshots, backups, live migration, and hardware consolidation. NIC passthrough for the WAN port ensures you're not leaving performance on the table, while VLAN trunking through a VirtIO bridge keeps your LAN segmentation flexible.
The setup takes an afternoon the first time, but the payoff is a firewall you can iterate on fearlessly. Break something? Roll back. Upgrading OPNsense? Snapshot first. Moving to new hardware? Export the VM. That kind of operational confidence is worth far more than whatever you'd gain from a dedicated firewall box sitting in a corner.