Configuring VLANs on Proxmox with Linux Bridges
Step-by-step guide to setting up VLANs on Proxmox VE using Linux bridges. Includes VLAN-aware bridges, trunk ports, and network segmentation for VMs and containers.
Why VLANs Matter
Flat networks are fine until they aren't. Maybe you added a few IoT devices and realized your security cameras are on the same broadcast domain as your NAS. Maybe you're running a Kubernetes cluster and want to isolate pod traffic from management traffic. Or maybe you just got tired of every device on your network being able to ARP-scan everything else.
VLANs give you Layer 2 segmentation without needing separate physical switches and cables for each network. One physical NIC on your Proxmox host can carry dozens of isolated networks simultaneously. VMs on VLAN 20 can't see traffic on VLAN 30 — the switch enforces that separation in hardware.
I've been running VLANs on Proxmox for about three years now, and the initial setup effort pays for itself immediately in terms of security hygiene and network clarity. Here's how to set it all up.
Prerequisites
Before touching Proxmox, you need a managed switch that supports 802.1Q VLAN tagging. Unmanaged switches won't work — they don't understand VLAN tags and will either drop tagged frames or pass them through unpredictably.
What you need:
- A managed switch with 802.1Q support. TP-Link TL-SG108E, Netgear GS308E, or anything from MikroTik will work. Even a $30 "smart managed" switch handles VLANs fine.
- A trunk port configured on your switch for the uplink to your Proxmox host. This port needs to carry all your VLANs as tagged traffic.
- A router or firewall that can do inter-VLAN routing if you want VLANs to talk to each other. OPNsense or pfSense running on Proxmox itself works great for this.
Switch Configuration
Your Proxmox host connects to the switch on a trunk port — a port that carries multiple VLANs as tagged (802.1Q) traffic. The exact configuration depends on your switch, but here's what it looks like on a typical managed switch.
On a MikroTik (RouterOS):
/interface bridge vlan
add bridge=bridge tagged=ether1,ether24 vlan-ids=10
add bridge=bridge tagged=ether1,ether24 vlan-ids=20
add bridge=bridge tagged=ether1,ether24 vlan-ids=30
add bridge=bridge tagged=ether1,ether24 vlan-ids=40
On a TP-Link smart switch web UI, you'd go to 802.1Q VLAN → VLAN Config, create VLANs 10, 20, 30, 40, and set the Proxmox uplink port as "Tagged" for each VLAN.
The key concept: the port connecting to your Proxmox host must be tagged (trunk) for all VLANs you want to use. Ports connecting to regular devices (a PC on VLAN 20, an IoT device on VLAN 30) are typically untagged (access) on their respective VLAN.
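For contrast, here is roughly what an access port on the same MikroTik might look like for a PC on VLAN 20 plugged into ether5 (the port name is an assumption): the port is untagged for VLAN 20 and its PVID is set to 20.

```shell
# Untagged (access) port: the PC on ether5 sends plain frames,
# and the switch tags them as VLAN 20 internally
/interface bridge port set [find interface=ether5] pvid=20
/interface bridge vlan set [find vlan-ids=20] untagged=ether5
```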
VLAN-Aware vs Traditional Bridges
Proxmox supports two approaches to VLANs, and you should use VLAN-aware bridges. Let me explain why.
Traditional Approach (Don't Do This)
The old way involves creating a separate Linux bridge for each VLAN. For four VLANs, you'd have:
auto vmbr0v10
iface vmbr0v10 inet manual
    bridge-ports eno1.10
    bridge-stp off

auto vmbr0v20
iface vmbr0v20 inet manual
    bridge-ports eno1.20
    bridge-stp off

# ...repeat for every VLAN
This works but scales terribly. Each VLAN needs its own bridge, its own VLAN subinterface, and VMs must be assigned to the correct bridge. Adding a new VLAN means editing /etc/network/interfaces, creating new bridges, and restarting networking or rebooting. Painful.
VLAN-Aware Bridge (Do This)
A single VLAN-aware bridge handles all VLANs. VMs specify their VLAN tag in their network config. Adding a new VLAN is just a matter of using it — no bridge changes needed.
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
That bridge-vids 2-4094 line means the bridge accepts any VLAN tag. You can restrict it if you want:
bridge-vids 10 20 30 40
I'd recommend keeping it open (2-4094) unless you have a specific security reason to restrict. The VLAN isolation happens at the switch level anyway.
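If you do decide to restrict bridge-vids, it helps to first audit which tags are already in use so you don't cut off a running guest. Guest configs live under /etc/pve, so a quick sketch:

```shell
# List every VLAN tag currently assigned to a guest NIC on this node
grep -ho 'tag=[0-9]*' /etc/pve/qemu-server/*.conf /etc/pve/lxc/*.conf 2>/dev/null \
  | sort -u
```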
Setting Up the Network Configuration
Here's a complete, realistic /etc/network/interfaces for a Proxmox host with VLAN-aware bridging. This host has its management IP on VLAN 10.
# /etc/network/interfaces
auto lo
iface lo inet loopback

# Physical interface - no IP, just the bridge port
auto eno1
iface eno1 inet manual

# VLAN-aware bridge
auto vmbr0
iface vmbr0 inet static
    address 10.10.10.5/24
    gateway 10.10.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

source /etc/network/interfaces.d/*
Wait — notice that the bridge itself has an IP address. That's the management IP for the Proxmox host, and it's on the native (untagged) VLAN of the trunk port. If you want your management interface on a specific VLAN instead, you need a VLAN subinterface on the bridge:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 10
auto vmbr0.10
iface vmbr0.10 inet static
    address 10.10.10.5/24
    gateway 10.10.10.1
This second approach is cleaner. The bridge itself has no IP — it's just a switching fabric. The host's management traffic goes out tagged as VLAN 10.
Critical warning: If you're doing this remotely over SSH, get it right the first time. A misconfigured network interface will lock you out, and you'll need physical console access or IPMI to fix it. I test network changes by applying them with a revert timer:
# Apply changes with a 60-second automatic revert
cp /etc/network/interfaces /etc/network/interfaces.bak
# Make your changes, then:
( sleep 60 && cp /etc/network/interfaces.bak /etc/network/interfaces && ifreload -a ) &
ifreload -a
If everything works, kill the background revert job. If you lose connectivity, wait 60 seconds and the old config comes back. This has saved me from multiple drives to the datacenter.
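If you prefer cancelling the revert by PID instead of hunting for the job number, a variant of the same trick (same filenames as above):

```shell
cp /etc/network/interfaces /etc/network/interfaces.bak
# Make your changes, then start the timed revert and remember its PID
( sleep 60 && cp /etc/network/interfaces.bak /etc/network/interfaces && ifreload -a ) &
REVERT_PID=$!
ifreload -a
# Connectivity still good? Cancel the pending revert:
kill "$REVERT_PID"
```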
Applying Changes
After editing /etc/network/interfaces, apply without rebooting:
ifreload -a
Proxmox uses ifupdown2, which supports live reconfiguration. Verify the bridge is VLAN-aware:
bridge vlan show
port     vlan-id
eno1     1 PVID Egress Untagged
         10
         20
         30
         40
vmbr0    1 PVID Egress Untagged
         10
         20
         30
         40
If you see your VLAN IDs listed, the bridge is working correctly.
Assigning VLANs to VMs
Now the easy part. When you create or edit a VM's network interface in PVE:
- Go to VM → Hardware → Network Device
- Set Bridge: vmbr0
- Set VLAN Tag: 20 (or whatever VLAN this VM belongs on)
That's it. The VM's traffic will be tagged with VLAN 20 as it exits through the bridge. The switch receives tagged frames and forwards them only to other ports carrying VLAN 20.
For containers (LXC), it's the same: Container → Network → Edit → VLAN Tag.
You can also set it from the CLI:
# VM
qm set 100 --net0 virtio,bridge=vmbr0,tag=20
# Container
pct set 200 --net0 name=eth0,bridge=vmbr0,tag=20,ip=10.20.20.10/24,gw=10.20.20.1
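You can read the config back to confirm the tag stuck (VMID 100 and CTID 200 reused from the examples above):

```shell
# Show the NIC line for the VM and the container;
# both should include bridge=vmbr0 and tag=20
qm config 100 | grep '^net0'
pct config 200 | grep '^net0'
```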
VLAN Trunk to a VM
Sometimes you need a VM to receive traffic from multiple VLANs — a router VM, a monitoring server, or a Docker host running containers on different networks. Instead of assigning a single VLAN tag, you pass a trunk:
In the VM hardware config, leave the VLAN Tag field empty (no tag). Then inside the VM, create VLAN subinterfaces:
# Inside the VM (e.g., an OPNsense or Ubuntu router VM)
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
ip link add link eth0 name eth0.30 type vlan id 30
ip link set eth0.10 up
ip link set eth0.20 up
ip link set eth0.30 up
The VM receives all tagged traffic and handles VLAN separation internally. This is exactly how you'd set up OPNsense or pfSense as your inter-VLAN router inside Proxmox.
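Note that ip link changes are lost on reboot. On a Debian or Ubuntu guest using ifupdown, the equivalent persistent config looks like this sketch (addresses and interface names are assumptions, and the vlan package must be installed):

```shell
# /etc/network/interfaces inside the VM
auto eth0
iface eth0 inet manual

# ifupdown derives "VLAN 10 on eth0" from the eth0.10 name
auto eth0.10
iface eth0.10 inet static
    address 10.10.10.2/24

auto eth0.20
iface eth0.20 inet static
    address 10.20.20.2/24
```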
Example Network Design
Here's the VLAN layout I run in my homelab. Yours will differ, but this should give you a template:
| VLAN ID | Subnet | Purpose | Examples |
|---|---|---|---|
| 10 | 10.10.10.0/24 | Management | Proxmox hosts, switches, IPMI, PBS |
| 20 | 10.20.20.0/24 | Servers / Services | Web servers, databases, Docker hosts |
| 30 | 10.30.30.0/24 | IoT / Untrusted | Cameras, smart home devices, sensors |
| 40 | 10.40.40.0/24 | Guest WiFi | Guest wireless clients |
VLAN 10 — Management
Only infrastructure devices live here. Proxmox nodes, the managed switch management interface, IPMI/iDRAC ports, and PBS. Access to this VLAN is tightly controlled — no other VLAN can initiate connections into VLAN 10 except through specific firewall rules (e.g., allowing VLAN 20 servers to reach PBS on port 8007 for backups).
VLAN 20 — Servers
Production services go here. Web apps, databases, monitoring stacks, reverse proxies. These can reach the internet through the router and can reach specific services on other VLANs as needed (like DNS on VLAN 10).
VLAN 30 — IoT
The untrusted zone. Smart plugs, temperature sensors, security cameras — anything that phones home to some cloud service or has questionable firmware. This VLAN can reach the internet (most IoT devices need it) but cannot initiate connections to VLANs 10 or 20. Period.
VLAN 40 — Guest
Guest WiFi clients land here. Internet access only. No access to any other VLAN. Simple and clean.
Inter-VLAN Routing and Firewall Rules
VLANs without firewall rules between them are just cosmetic. You need a router or firewall doing inter-VLAN routing to control what talks to what.
If you're running OPNsense or pfSense as a VM on Proxmox (a very common setup), give it a trunk port (no VLAN tag) and configure VLAN interfaces inside the firewall. Then set rules like:
# OPNsense firewall rules (conceptual)
# VLAN 20 (Servers) can reach VLAN 10 (Mgmt) only for DNS and NTP
PASS VLAN20 -> 10.10.10.1:53 UDP # DNS
PASS VLAN20 -> 10.10.10.1:123 UDP # NTP
# VLAN 30 (IoT) - Internet only, no other VLANs
PASS VLAN30 -> !RFC1918 ANY # Internet only
BLOCK VLAN30 -> ANY ANY # Block everything else
# VLAN 40 (Guest) - Internet only
PASS VLAN40 -> !RFC1918 ANY
BLOCK VLAN40 -> ANY ANY
The !RFC1918 trick is a common pattern — it means "allow traffic to any destination that's NOT a private IP range," which effectively means "internet only."
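If your inter-VLAN router is a plain Linux VM rather than OPNsense, the same "internet only" policy can be sketched in nftables (the interface names eth0.30 and eth0.40 are assumptions carried over from the trunk setup above):

```shell
# /etc/nftables.conf sketch: VLANs 30 and 40 get internet only
define RFC1918 = { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }

table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;

    # Allow replies to established flows in both directions
    ct state established,related accept

    # IoT and guest VLANs may reach anything except private ranges
    iifname { "eth0.30", "eth0.40" } ip daddr != $RFC1918 accept
  }
}
```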
Testing Connectivity
After setting everything up, verify from inside VMs on different VLANs:
# From a VM on VLAN 20 (10.20.20.15)
ping 10.20.20.1 # Gateway - should work
ping 10.10.10.5 # Proxmox mgmt - should work only if firewall allows
ping 10.30.30.10 # IoT device - should be blocked
Check VLAN tagging is working correctly from the Proxmox host:
tcpdump -i eno1 -e -n vlan
14:23:01.445123 aa:bb:cc:dd:ee:f1 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 64: vlan 20, p 0, ethertype ARP, Request who-has 10.20.20.1 tell 10.20.20.15
14:23:01.445456 aa:bb:cc:dd:ee:f2 > aa:bb:cc:dd:ee:f1, ethertype 802.1Q (0x8100), length 64: vlan 20, p 0, ethertype ARP, Reply 10.20.20.1 is-at aa:bb:cc:dd:ee:f2
You should see vlan XX in the tcpdump output, confirming frames are being tagged correctly.
Common Pitfalls
MTU issues. VLAN tagging adds 4 bytes to the Ethernet frame. If your switch and NIC support jumbo frames, this usually isn't a problem. But if you're running at exactly 1500 MTU, you might see fragmentation or dropped packets for full-size frames. Setting MTU to 1504 on the physical interface and switch trunk port avoids this, though most modern hardware handles it transparently.
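If you do want the headroom, the MTU bump is a couple of lines in the same interfaces file (1504 mirrors the number above; the switch trunk port must be set to match):

```shell
auto eno1
iface eno1 inet manual
    mtu 1504

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-vlan-aware yes
    bridge-vids 2-4094
    mtu 1504
```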
Native VLAN confusion. The "native VLAN" or "PVID" is the VLAN used for untagged traffic. By default, this is VLAN 1. If your management IP is on the bridge without a VLAN subinterface, it's using the native VLAN. Make sure your switch's native VLAN on the trunk port matches what Proxmox expects. Mismatches here cause mysterious connectivity loss.
Forgetting to tag the switch port. I've done this more times than I'd like to admit. You add VLAN 50 to a VM in Proxmox, the VM gets no network — because the switch trunk port isn't carrying VLAN 50. Always configure both ends.
LXC containers and VLAN tags. Some container templates don't handle VLAN-tagged networks out of the box. If a container isn't getting an IP via DHCP on a tagged VLAN, check that the container's network config specifies the correct gateway for that VLAN's subnet.
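When a container on a tagged VLAN misbehaves, checking from the host is quicker than attaching a console (CTID 200 reused from earlier):

```shell
# Did the container get an address and a default route on its VLAN?
pct exec 200 -- ip -4 addr show dev eth0
pct exec 200 -- ip route show default
```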
Where to Go From Here
Once you're comfortable with basic VLANs, look into:
- Proxmox SDN (Software Defined Networking) — available since PVE 7.x, it provides a higher-level abstraction for managing VLANs and VXLANs across multi-node clusters. Still maturing but promising for larger setups.
- LACP bonding with VLANs — aggregate multiple NICs for bandwidth and redundancy, then run VLANs over the bond. The config is a bit more involved but follows the same principles.
- Private VLANs — for isolating VMs from each other within the same VLAN. Useful for multi-tenant setups.
VLANs are foundational. Once you have proper segmentation in place, everything else — firewall rules, monitoring, access control — becomes cleaner and more manageable. The 30 minutes it takes to set up is time very well spent.