Proxmox Open vSwitch Setup for Advanced VM Networking
Set up Open vSwitch on Proxmox VE 9.1 for per-port VLAN access mode, port mirroring, and VXLAN overlay tunnels — step-by-step with real CLI commands.
Open vSwitch (OVS) gives Proxmox three capabilities the default Linux bridge cannot match: per-port VLAN access mode, port mirroring to a dedicated monitoring VM, and VXLAN overlay tunnels for multi-node flat networks. By the end of this guide you will have a working OVS bridge on Proxmox VE 9.1 with at least one VM assigned to a VLAN access port — and a clear picture of exactly when the added complexity pays off versus when to stay with Linux bridges.
Key Takeaways
- OVS advantage: Per-port VLAN assignment, port mirroring, and VXLAN tunnels that Linux bridges do not support natively.
- Version: Open vSwitch 3.3 ships in Proxmox VE 9.1's Debian 13 base — no third-party repo required.
- Top gotcha: Always open a serial console (IPMI/iDRAC/iLO) before editing /etc/network/interfaces on a live node — misconfiguration kills SSH access instantly.
- SDN conflict: Proxmox SDN and manual OVS configuration conflict on the same bridge — pick one approach per node.
- Simpler path: For basic VLAN trunking only, configuring VLANs on Proxmox with Linux bridges is lower-risk and fully sufficient.
When OVS Is Worth the Complexity
Linux bridges handle VLAN trunking and basic isolation well. Switch to OVS when you need at least one of these:
- Access-port VLAN assignment — the virtual switch port drops traffic into a specific VLAN; the guest sees plain untagged Ethernet and needs no in-guest VLAN configuration
- Port mirroring — copy all frames from a production VM's tap interface to an IDS or monitoring VM (Zeek, Suricata inline mode) without touching the production guest
- VXLAN between nodes — L2 overlay tunnels for VMs on separate physical hosts without a full Ceph or shared storage fabric
- QoS policing — rate-limit a specific VM's uplink at the virtual switch layer, not inside the guest
If none of those scenarios apply, stay with Linux bridges. OVS misconfiguration is the fastest way to lock yourself out of a remote node with no graceful recovery path short of a serial console or physical keyboard.
Installing Open vSwitch on Proxmox VE 9.1
OVS 3.3 is in the Debian 13 main repository — no extra sources needed:
apt update
apt install openvswitch-switch -y
Verify the daemon is running:
systemctl status ovs-vswitchd
Check the exact version:
ovs-vsctl --version
# ovs-vsctl (Open vSwitch) 3.3.x
The ovs-vsctl tool is your control plane for all bridge and port configuration. Unlike brctl, it writes to ovsdb-server, a persistent database that survives ovs-vswitchd restarts. Think of ovsdb-server as the single source of truth for your virtual switch topology.
How to Configure the OVS Bridge in /etc/network/interfaces
Open a serial console before you touch anything. IPMI, iDRAC, iLO — whatever your hardware provides. A single typo in /etc/network/interfaces will take down the management IP and leave you with no SSH path back in. This is not a theoretical risk; it is how most OVS-on-Proxmox incidents start.
Here is the minimal working configuration: one physical NIC (enp3s0) uplinked into an OVS bridge (vmbr0), with the Proxmox host management IP on the bridge itself.
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    dns-nameservers 1.1.1.1
    ovs_type OVSBridge
    ovs_ports enp3s0
Apply without rebooting:
ifreload -a
If ifreload is not available on your node:
ifdown vmbr0 enp3s0 && ifup enp3s0 vmbr0
Verify the bridge is up and the physical port is attached:
ovs-vsctl show
Expected output:
Bridge vmbr0
    Port enp3s0
        Interface enp3s0
    Port vmbr0
        Interface vmbr0
            type: internal
The type: internal port is how the Proxmox host IP lives on the bridge — it is an in-kernel virtual port, not a separate tap device. If you see it, your management IP is on the OVS bridge and SSH should work normally.
Assigning VMs to VLAN Access Ports
This is where OVS earns its complexity overhead. Assign a specific VLAN tag to a VM's tap interface and the guest sees completely untagged Ethernet — zero guest-side VLAN configuration needed.
In the Proxmox GUI, create or edit the VM and attach its NIC to bridge vmbr0 with no VLAN tag set. Then from the host shell:
# VM 100, first NIC creates tap interface tap100i0
ovs-vsctl set port tap100i0 tag=20
Confirm the assignment:
ovs-vsctl list port tap100i0 | grep tag
# tag : 20
For a firewall or router VM that needs to receive multiple tagged VLANs — pfSense, OPNsense — configure a trunk port instead:
ovs-vsctl set port tap200i0 trunks=10,20,30
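When several VMs need tags at once, it helps to keep the VMID-to-VLAN mapping in one place. The sketch below is illustrative, not part of OVS or Proxmox: it relies only on the tap naming convention shown above (tap, then the VM ID, then the NIC index), and it prints the ovs-vsctl commands rather than running them, so you can review the output before piping it to sh.

```shell
#!/bin/sh
# Dry-run generator: print one ovs-vsctl command per "VMID VLAN" pair on stdin.
# Review the output, then pipe it to sh to apply for real.
emit_tag_commands() {
    while read -r vmid vlan; do
        # Proxmox names the first NIC of VM <vmid> tap<vmid>i0
        echo "ovs-vsctl set port tap${vmid}i0 tag=${vlan}"
    done
}

emit_tag_commands <<'EOF'
100 20
101 20
102 30
EOF
```

Keeping the mapping in a heredoc (or a small file) makes the VLAN layout reviewable in version control, which a pile of one-off ovs-vsctl invocations is not.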
Persisting VLAN Assignments Across Reboots
The OVS database itself persists across reboots, but Proxmox destroys each VM's tap interface on shutdown and recreates it on start, so per-tap settings such as tag= vanish along with the port record. The simplest way to reapply them in a homelab is up hooks in /etc/network/interfaces:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    ovs_type OVSBridge
    ovs_ports enp3s0
    up ovs-vsctl set port tap100i0 tag=20 || true
    up ovs-vsctl set port tap200i0 trunks=10,20,30 || true
The || true prevents the bridge bring-up from failing when a tap interface does not yet exist at boot (it will not — VMs start after networking). For larger setups with many VMs, a systemd oneshot service ordered after pve-guests.service is more reliable than per-interface hooks.
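As a sketch of that systemd alternative — the unit name ovs-retag.service and the /usr/local/sbin/ovs-retag.sh script path are placeholders, and the script is assumed to contain your ovs-vsctl set port ... lines:

```ini
# /etc/systemd/system/ovs-retag.service (hypothetical name)
[Unit]
Description=Reapply OVS VLAN tags to VM tap ports
After=pve-guests.service
Wants=pve-guests.service

[Service]
Type=oneshot
# Placeholder script holding the ovs-vsctl set port ... commands
ExecStart=/usr/local/sbin/ovs-retag.sh

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable ovs-retag.service. It covers VMs autostarted at boot; a VM started by hand later still needs a hookscript or a manual re-run.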
How to Mirror VM Traffic to a Monitoring VM
Scenario: VM 100 is a production web server, VM 300 runs Zeek for network traffic analysis. You want all of VM 100's traffic mirrored to VM 300's NIC without touching the web server.
ovs-vsctl \
  -- --id=@src get port tap100i0 \
  -- --id=@dst get port tap300i0 \
  -- --id=@mirror create mirror name=web-mirror \
     select-src-port=@src select-dst-port=@src \
     output-port=@dst \
  -- add bridge vmbr0 mirrors @mirror
Verify the mirror is active:
ovs-vsctl list mirror
Inside VM 300, put the NIC in promiscuous mode and point Zeek at it. The mirrored frames arrive unmodified — no VLAN stripping, no encapsulation. Expect roughly 5–8% CPU overhead on the host under sustained traffic due to the frame duplication path.
To remove the mirror when done:
ovs-vsctl clear bridge vmbr0 mirrors
Setting Up a VXLAN Overlay Between Two Proxmox Nodes
VXLAN creates a virtual L2 segment over an existing L3 connection, letting VMs on two separate physical hosts share a broadcast domain. This is the lightweight alternative to a full Ceph fabric when you are building a private cloud at home with Proxmox and want VM-to-VM flat networking without shared storage dependencies.
Configuration:
- Node A management IP: 192.168.1.100
- Node B management IP: 192.168.1.101
- VNI (VXLAN Network Identifier): 100
- Overlay bridge name: vxbr0
On Node A:
ovs-vsctl add-br vxbr0
ovs-vsctl add-port vxbr0 vxlan0 \
-- set interface vxlan0 type=vxlan \
options:remote_ip=192.168.1.101 \
options:key=100 \
options:dst_port=4789
On Node B:
ovs-vsctl add-br vxbr0
ovs-vsctl add-port vxbr0 vxlan0 \
-- set interface vxlan0 type=vxlan \
options:remote_ip=192.168.1.100 \
options:key=100 \
options:dst_port=4789
VMs attached to vxbr0 on either node are now on the same L2 segment. Ping between them to confirm. Expect 5–8% throughput loss versus native L2 on a 10 GbE link due to VXLAN encapsulation — acceptable for almost all services except high-frequency storage traffic.
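The encapsulation is also why MTU deserves attention: VXLAN wraps every inner frame in outer Ethernet (14 B), IPv4 (20 B), UDP (8 B), and VXLAN (8 B) headers, 50 bytes in total. The arithmetic below is just that calculation; the practical fix is to either drop the VM-side MTU to 1450 or raise the underlay MTU to at least 1550.

```shell
#!/bin/sh
# VXLAN per-frame overhead on an IPv4 underlay (no VLAN tag on the outer frame)
OUTER_ETH=14   # outer Ethernet header
OUTER_IP=20    # outer IPv4 header
OUTER_UDP=8    # outer UDP header
VXLAN_HDR=8    # VXLAN header
OVERHEAD=$((OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR))

UNDERLAY_MTU=1500
INNER_MTU=$((UNDERLAY_MTU - OVERHEAD))

echo "overhead=${OVERHEAD}"    # overhead=50
echo "inner_mtu=${INNER_MTU}"  # inner_mtu=1450
```

Mismatched MTUs show up as small packets (ping) working while large transfers hang, which is easy to misdiagnose as an OVS problem.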
Persist the overlay bridge in /etc/network/interfaces on each node:
auto vxbr0
allow-ovs vxbr0
iface vxbr0 inet manual
    ovs_type OVSBridge
    ovs_ports vxlan0

allow-vxbr0 vxlan0
iface vxlan0 inet manual
    ovs_type OVSTunnel
    ovs_bridge vxbr0
    ovs_tunnel_type vxlan
    ovs_tunnel_options options:remote_ip=192.168.1.101 options:key=100 options:dst_port=4789
Shown for Node A; on Node B, set remote_ip=192.168.1.100.
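One sanity check worth scripting if you run several overlays: the VNI (the key= value) is a 24-bit field, so valid values run from 0 through 16777215. A tiny validator — an illustrative helper, not an OVS tool:

```shell
#!/bin/sh
# VXLAN Network Identifier is a 24-bit field: 0 .. 2^24 - 1
valid_vni() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;  # reject empty or non-numeric input
    esac
    [ "$1" -ge 0 ] && [ "$1" -le 16777215 ]
}

if valid_vni 100; then
    echo "VNI 100 ok"             # prints: VNI 100 ok
fi
if ! valid_vni 16777216; then
    echo "VNI 16777216 out of range"
fi
```

A mistyped out-of-range key silently fails to match on the far end, so catching it before ovs-vsctl runs saves a debugging session.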
OVS vs Linux Bridge: Feature Comparison
| Feature | Linux Bridge | Open vSwitch |
|---|---|---|
| VLAN trunking | Yes | Yes |
| Per-port VLAN (access mode) | Requires SDN or manual ip link hacks | Native |
| Port mirroring | No | Yes |
| VXLAN tunnels | Partial (ip link add ... type vxlan) | Native, composable |
| QoS policing | Limited (tc only) | Built-in via OVS queue config |
| Proxmox GUI VM attachment | Full | Full |
| Port-level config | GUI | CLI only |
| Configuration file | /etc/network/interfaces | OVS DB + /etc/network/interfaces |
| Misconfiguration risk | Low | Higher — lockout possible |
The practical takeaway: Proxmox's GUI sees OVS bridges as ordinary bridges for VM attachment. You assign VMs normally in the GUI, then fine-tune port behavior from the CLI. That hybrid workflow becomes comfortable within a week of daily use.
Troubleshooting Common OVS Issues on Proxmox
OVS bridge does not come up after reboot. Verify that openvswitch-switch starts before the networking service and is enabled:
systemctl status openvswitch-switch
systemctl enable openvswitch-switch
If the service starts too late relative to networking.service, add an explicit After=openvswitch-switch.service ordering to a networking drop-in under /etc/systemd/system/networking.service.d/.
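The drop-in itself is a short file (the ovs-order.conf filename is arbitrary; any .conf name in the directory works):

```ini
# /etc/systemd/system/networking.service.d/ovs-order.conf
[Unit]
Requires=openvswitch-switch.service
After=openvswitch-switch.service
```

Run systemctl daemon-reload afterwards so the ordering takes effect on the next boot.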
Tap interfaces missing from OVS after VM restart. Proxmox creates tap devices on VM start and destroys them on VM stop, so your /etc/network/interfaces up hooks fired at boot when no taps existed yet. Use the || true idiom above, or register a QEMU hookscript (a snippet attached with qm set <vmid> --hookscript local:snippets/ovs-hook.sh) to apply the port config each time that VM starts.
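A minimal hookscript sketch in dry-run form. Proxmox invokes a hookscript with the VM ID and a phase argument (pre-start, post-start, pre-stop, post-stop); the VMID-to-VLAN table below is a placeholder for your own mapping, and the script prints the ovs-vsctl command instead of executing it so the logic is easy to verify before going live:

```shell
#!/bin/sh
# Dry-run Proxmox hookscript sketch: prints the tag command on post-start.
# Remove the "echo" inside hook() to apply the tag for real.

tag_for_vm() {
    # Placeholder VMID -> VLAN table; adapt to your environment
    case "$1" in
        100) echo 20 ;;
        200) echo 30 ;;
        *)   echo "" ;;
    esac
}

hook() {
    vmid="$1" phase="$2"
    if [ "$phase" = "post-start" ]; then
        tag="$(tag_for_vm "$vmid")"
        # Only VMs present in the table get a tag applied
        [ -n "$tag" ] && echo "ovs-vsctl set port tap${vmid}i0 tag=${tag}"
    fi
    return 0
}

hook "$@"
```

Store it on a snippets-enabled storage and attach it per VM; the tag is then reapplied on every start of that VM, with no race against boot-time networking.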
Management IP disappeared after switching to OVS. The physical NIC stanza must be inet manual with the IP address only in the bridge stanza. Diagnose from the serial console:
ip addr show enp3s0
ip addr show vmbr0
If the IP is on the NIC instead of the bridge, edit /etc/network/interfaces from the serial console and run ifreload -a.
Proxmox SDN module conflicts with manual OVS. If you have used Proxmox SDN on this node, it may attempt to manage the same bridge names. Use SDN exclusively or disable the SDN controller for this node — mixing manual OVS configuration with SDN on the same bridge produces unpredictable results that are difficult to debug remotely.
Hardening the OVS Configuration
Two settings to verify on a standalone homelab node: STP should stay disabled on the management bridge (OVS defaults to stp_enable=false, but confirm nothing has toggled it), and no external OpenFlow controller or OVSDB manager should be configured:
# Disable STP on the management bridge
ovs-vsctl set bridge vmbr0 stp_enable=false
# Remove any external OpenFlow controller pairing
ovs-vsctl del-controller vmbr0
# Verify OVSDB is not listening on a network socket (empty output = correct)
ovs-vsctl get-manager
If get-manager returns a TCP address, remove it:
ovs-vsctl del-manager
For host-level nftables firewall rules and SSH hardening that complement OVS, see Hardening Proxmox VE: Firewall, fail2ban, and SSH Security — the host firewall rules apply identically whether you use Linux bridges or OVS underneath. For a broader look at hypervisor attack surface including virtual NIC escape vectors, LOLPROX: Protecting Proxmox from Hypervisor Exploits covers the threat model in detail.
Conclusion
Open vSwitch on Proxmox VE 9.1 is the right tool when you need access-port VLAN assignment, port mirroring for an IDS VM, or VXLAN overlays between nodes — and it is overkill for everything else. The install is a single apt install openvswitch-switch, the configuration lives in the same /etc/network/interfaces file you already use, and the per-port CLI workflow becomes routine within an afternoon. The immediate next step: attach a firewall VM to vmbr0 as a trunk port carrying VLANs 10, 20, and 30, then set your production workload VMs to access ports on their respective VLANs — that is where the isolation model fully clicks into place.