Proxmox Network Bonding: Link Aggregation and Failover

Configure network bonding on Proxmox VE for redundant NICs, LACP aggregation, and active-backup failover. Maximize uptime and throughput for your homelab.

If your Proxmox host has a single NIC and that NIC dies, your entire hypervisor goes offline — taking every VM and LXC with it. Network bonding fixes that by combining multiple physical interfaces into one logical interface, giving you either redundancy, increased throughput, or both. It's one of the most impactful network configurations you can make on a production or serious homelab Proxmox node.

This guide covers the two most common bonding modes for Proxmox: active-backup (pure failover, works with any switch) and LACP/802.3ad (load balancing + failover, requires a managed switch). You'll also learn how to attach your Linux bridge to the bond so VMs and containers automatically benefit from the redundancy.

Why Bond NICs on Proxmox?

There are two distinct reasons to bond NICs and it's worth being clear about which one you're solving for.

Redundancy: If one NIC or cable fails, traffic automatically shifts to the surviving interface. No manual intervention, no downtime. This is the most compelling reason for a homelab that you actually rely on.

Throughput: With LACP, traffic from multiple connections can use different physical links simultaneously. A single TCP stream still maxes out one link, but aggregate throughput across many VMs or clients scales with the number of links.

For most homelabs, redundancy is the primary goal. For NAS-style workloads or dense VM hosts, throughput matters too.

Prerequisites

Before you start, confirm the following:

  • Two or more physical NICs installed in your Proxmox host
  • Proxmox VE 7.x or 8.x (these steps apply to both)
  • For LACP: a managed switch that supports 802.3ad (virtually all Cisco, Netgear, TP-Link managed switches do)
  • For active-backup: any switch, including cheap unmanaged ones
  • SSH access to your Proxmox host

Identify your NIC names before you start. Run this on your Proxmox host:

ip link show

You'll see output like enp3s0, enp4s0, or eth0, eth1. Note these — you'll use them in the config.
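If the full output is hard to scan, the brief form of the same command gives a one-line summary per interface (the names, states, and MACs below are purely illustrative):

ip -br link show
# Example output (illustrative):
# lo        UNKNOWN  00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
# enp3s0    UP       aa:bb:cc:00:00:01 <BROADCAST,MULTICAST,UP,LOWER_UP>
# enp4s0    DOWN     aa:bb:cc:00:00:02 <BROADCAST,MULTICAST>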

Understanding Proxmox Network Architecture

Proxmox uses Linux network interfaces in a layered stack:

Physical NICs (enp3s0, enp4s0)
        ↓
Bond Interface (bond0)
        ↓
Linux Bridge (vmbr0)
        ↓
VMs and LXC containers

You create a bond from raw NICs, then bridge that bond instead of bridging NICs directly. VMs and containers attach to the bridge as they always have — they don't need to know anything about the bond underneath.

Method 1: Active-Backup Bonding (Any Switch)

Active-backup is the simplest and most compatible bonding mode. One NIC carries all traffic; the other sits idle waiting to take over if the active NIC fails. No switch configuration needed.

Configuring via the Web UI

Proxmox's web interface lets you configure bonding without touching config files directly.

  1. Go to Node → System → Network
  2. Click Create → Bond
  3. Fill in the fields:
    • Name: bond0
    • Slaves: select both NICs (e.g., enp3s0 enp4s0)
    • Mode: active-backup
    • Bond Primary: select your preferred primary NIC
    • Leave Hash Policy empty (not used in active-backup)
  4. Click Create

Now edit your existing bridge (vmbr0):

  1. Click on vmbr0 → Edit
  2. Change Bridge Ports from your NIC name to bond0
  3. Keep the IP address configuration as-is
  4. Click Apply

Click Apply Configuration at the top to write and activate the changes.

Configuring via /etc/network/interfaces

If you prefer editing config directly or are scripting this:

# Edit the interfaces file
nano /etc/network/interfaces

Replace your existing NIC and bridge config with:

auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet manual

auto enp4s0
iface enp4s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp3s0 enp4s0
    bond-miimon 100
    bond-mode active-backup
    bond-primary enp3s0

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

Apply without rebooting:

ifreload -a
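If ifreload comes back clean, you can optionally confirm that the running state matches the file. This is a quick sanity check using ifupdown2's ifquery (ifupdown2 is the default network stack on current Proxmox releases):

# Compare each interface's running state against /etc/network/interfaces
ifquery -a -c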

Verifying Active-Backup

Check that the bond is up and which slave is active:

cat /proc/net/bonding/bond0

You'll see output like:

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp3s0 (primary_reselect failure)
Currently Active Slave: enp3s0

Slave Interface: enp3s0
MII Status: up
...
Slave Interface: enp4s0
MII Status: up

To test failover, physically unplug the active NIC. Traffic should shift to the backup within 100–200ms (controlled by bond-miimon, which sets the link monitoring interval in milliseconds).
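If pulling a cable isn't practical, you can simulate the same failure from the shell. This sketch assumes bond0 is built from enp3s0 and enp4s0 as configured above:

# Take the active NIC down and confirm the bond fails over
ip link set enp3s0 down
grep "Currently Active Slave" /proc/net/bonding/bond0

# Restore the link when you're done testing
ip link set enp3s0 up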

Method 2: LACP/802.3ad (Managed Switch Required)

LACP (Link Aggregation Control Protocol) gives you both failover and load balancing. Both NICs carry traffic simultaneously. This requires your switch to support 802.3ad and have LACP configured on the ports.

Switch Configuration

The exact steps vary by switch vendor, but the concept is the same — create a port channel or LAG (Link Aggregation Group) and assign both switch ports to it with LACP mode set to active or passive.

Cisco IOS example:

interface GigabitEthernet0/1
 channel-group 1 mode active
!
interface GigabitEthernet0/2
 channel-group 1 mode active
!
interface Port-channel1
 description Proxmox-bond0
 switchport mode access
 switchport access vlan 1

Netgear/TP-Link managed switches: Look for "Link Aggregation" or "LAG" in the web UI. Create a LAG group, add both ports, enable LACP.

Proxmox LACP Configuration

In /etc/network/interfaces:

auto enp3s0
iface enp3s0 inet manual

auto enp4s0
iface enp4s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp3s0 enp4s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-lacp-rate fast

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

Key settings explained:

  • bond-mode 802.3ad: Enables LACP
  • bond-xmit-hash-policy layer2+3: Uses both MAC and IP to distribute traffic across links. Better than layer2 for homelab workloads with multiple VMs
  • bond-lacp-rate fast: Sends LACP PDUs every second instead of every 30 seconds — detects failures faster

Apply the changes:

ifreload -a
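Once the bond is up, you can spot-check that the kernel picked up these settings through sysfs before digging into the full bonding status:

cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/xmit_hash_policy
cat /sys/class/net/bond0/bonding/lacp_rate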

Verifying LACP

cat /proc/net/bonding/bond0

With LACP working correctly, both slaves will show as active:

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp3s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
...
Slave Interface: enp4s0
MII Status: up
Speed: 1000 Mbps
Duplex: full

If a slave shows MII Status: down when you know the cable is connected, the switch isn't negotiating LACP. Double-check the switch LAG configuration.
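A related check is the aggregator ID assigned to each slave in the same /proc output. When LACP has negotiated correctly, both slaves report the same Aggregator ID; if each slave sits in its own aggregator, the switch is treating the ports as independent links:

grep -E "Slave Interface|Aggregator ID" /proc/net/bonding/bond0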

Adding VLAN Support to a Bonded Bridge

If you're running VLANs on top of your bond, enable VLAN awareness on the bridge:

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

With bridge-vlan-aware yes, you can tag VMs and LXCs with specific VLANs in their network device settings — the bridge handles the tagging without needing separate bond sub-interfaces.
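On the CLI, the same tagging looks like this (VM ID 100, container ID 101, and VLAN 20 are placeholders):

# Put a VM's first NIC on VLAN 20 via the VLAN-aware bridge
# (re-running qm set on net0 without specifying a MAC generates a new one)
qm set 100 -net0 virtio,bridge=vmbr0,tag=20

# Equivalent for an LXC container
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,tag=20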

Your switch port channel will also need to be configured as a trunk (carrying multiple VLANs) rather than an access port.

Separating Storage and Management Traffic

A common pattern in Proxmox deployments is to use bonded NICs for different traffic types:

enp3s0 + enp4s0 → bond0 → vmbr0 (VM/guest traffic, VLAN aware)
enp5s0 + enp6s0 → bond1 → vmbr1 (Ceph/storage replication)

This is especially important if you're running Ceph, PBS replication, or live migration — you don't want a VM's traffic saturating the link during a storage sync.

The configuration simply repeats the same bond pattern with different NICs and a new bridge:

auto bond1
iface bond1 inet manual
    bond-slaves enp5s0 enp6s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-lacp-rate fast

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0

Then configure Ceph, Corosync, and PBS to use the vmbr1 network.
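As one concrete example, live migration traffic can be pinned to the storage network in /etc/pve/datacenter.cfg (the subnet matches the vmbr1 address above; adjust it to your own storage network):

# /etc/pve/datacenter.cfg
migration: secure,network=10.10.10.0/24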

Troubleshooting Common Issues

Bond interface comes up but has no connectivity

Check that your bridge is pointing to the bond, not the raw NIC:

bridge link show

If you see a NIC directly in the bridge instead of bond0, the configuration didn't apply correctly. Re-check /etc/network/interfaces and run ifreload -a again.

LACP bond shows only one slave active

This usually means the switch isn't agreeing on LACP. Check:

dmesg | grep bond

You'll see messages about LACP negotiation and link state changes, such as warnings that there is no 802.3ad response from the link partner. The Linux bonding driver initiates LACP by default, so the usual culprit is the switch side: the LAG is configured as a static (non-LACP) group, the ports were assigned to the wrong group, or the group is disabled. Make sure the switch LAG actually runs LACP (active mode is safest) and that both cables land on ports assigned to it.
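You can also check whether the switch is sending LACPDUs at all from the 802.3ad section of the bond's /proc entry; a Partner Mac Address of all zeros means no LACP frames are arriving from the switch (the exact fields vary slightly by kernel version):

grep -A 12 "802.3ad info" /proc/net/bonding/bond0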

ifreload -a breaks connectivity

If you lose SSH access after running ifreload -a, the config has an error. Boot into the Proxmox console (physical or IPMI), check journalctl -u networking for the specific error, and fix /etc/network/interfaces before running ifreload -a again.

If you're applying changes remotely, keep a copy of the known-good config first (for example, cp /etc/network/interfaces /etc/network/interfaces.bak) and schedule a reboot as a safety net before testing:

# Reboot in 5 minutes unless cancelled
shutdown -r +5 "Testing network config"
# After confirming connectivity is still good, cancel the scheduled reboot
shutdown -c

bond-miimon vs bond-arp-interval

The bond-miimon setting detects link failures at the NIC driver level (physical link). bond-arp-interval tests actual IP reachability by sending ARP probes. For most setups, miimon is sufficient and simpler. Use arp-interval only if you need to detect upstream failures (e.g., the switch is up but its uplink is down).
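For reference, an ARP-monitored active-backup bond looks like the sketch below. These are the classic bonding option names; the gateway IP is an example, ARP monitoring cannot be combined with miimon, and it is not supported in 802.3ad mode. Depending on the ifupdown2 version shipped with your Proxmox release, these keys may not be recognized in /etc/network/interfaces, in which case they have to be set through the bond's sysfs attributes instead.

auto bond0
iface bond0 inet manual
    bond-slaves enp3s0 enp4s0
    bond-mode active-backup
    bond-primary enp3s0
    bond-arp-interval 1000
    bond-arp-ip-target 192.168.1.1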

Performance Considerations

A few things to keep in mind about real-world bonding performance:

LACP doesn't double single-stream throughput. A single TCP connection between two hosts will still use one physical link. The aggregate benefit is visible across multiple simultaneous streams — different VMs talking to different clients.

10GbE bonding is different. If you have 10GbE NICs, a single bond may not help much unless your workload involves many simultaneous high-bandwidth connections. Consider whether a single 10GbE NIC is sufficient before buying a second.

CPU overhead is minimal. Bonding is handled in the kernel and adds negligible overhead even at line rate on 1GbE or 10GbE links.
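A quick way to see the single-stream limit in practice is iperf3 (available via apt on both ends). Keep in mind that with the layer2+3 hash policy, all streams between the same two hosts land on the same physical link, so to observe aggregation you need tests running from at least two different clients at once:

# On the Proxmox host
iperf3 -s

# From two different clients simultaneously (192.168.1.10 is the host's bridge IP used above)
iperf3 -c 192.168.1.10 -t 30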

Conclusion

Network bonding is one of the best reliability improvements you can make to a Proxmox host, and it requires no additional cost if you already have spare NICs. Active-backup gives you immediate failover protection with zero switch configuration — just plug in a second NIC, update /etc/network/interfaces, and you've eliminated your network single point of failure.

LACP takes more effort to configure on the switch side, but pays off with real throughput benefits in busy environments. For a homelab running several VMs, the active-backup mode is usually the right call. Save LACP for nodes that need to push serious aggregate throughput.

The pattern of bond → bridge → VMs scales cleanly to any Proxmox configuration, including VLAN-aware setups and multi-network deployments separating guest, storage, and management traffic. Get the bond right once and the rest of your Proxmox networking layers cleanly on top.
