Build a Private Cloud at Home with Proxmox VE
Learn how to combine Proxmox SDN, VLAN segmentation, and cloud-init templates into a self-hosted private cloud that rivals AWS in your own homelab.
Running a few VMs on Proxmox is one thing. Building a private cloud — one that provisions VMs on demand, isolates workloads with proper network segmentation, and behaves like a scaled-down AWS inside your own hardware — is something else entirely. The good news: Proxmox VE gives you every tool you need, and in 2026 the SDN subsystem has matured to the point where this setup is genuinely practical for homelab builders and small shops alike.
This guide walks through combining Proxmox SDN, VLAN-backed network zones, and cloud-init templates to build a self-hosted private cloud that can spin up a new VM in under 60 seconds.
What "Private Cloud" Actually Means Here
When cloud providers talk about a private cloud, they mean isolated, on-demand compute with network segmentation and self-service provisioning. We're building the same thing at home:
- Network isolation — each environment (dev, staging, prod, IoT) lives on its own VLAN and can't talk to the others unless you explicitly allow it
- On-demand provisioning — clone a cloud-init template and a VM is ready with SSH keys, hostname, and IP address already configured
- Centralized routing — a single firewall VM handles inter-VLAN routing and internet access
You don't need a rack full of servers. A single reasonably-specced machine (8+ cores, 32 GB RAM, NVMe storage) is enough to run this entire stack.
Prerequisites
Before diving in, make sure you have:
- Proxmox VE 8.x or 9.x installed and updated
- A managed switch that supports 802.1Q VLAN tagging, or a NIC with multiple ports
- Basic familiarity with Proxmox's web UI and Linux networking
- At least one Debian/Ubuntu cloud image downloaded (we'll use it for cloud-init templates)
If your switch is unmanaged, you can still follow along using Proxmox's software-defined VLANs — you'll just lose physical isolation at the switch level.
Step 1: Plan Your VLAN Layout
Design your network before touching a config file. A simple homelab private cloud might look like this:
| VLAN ID | Name | Subnet | Purpose |
|---|---|---|---|
| 10 | Management | 10.10.10.0/24 | Proxmox hosts, BMC, switches |
| 20 | Dev | 10.10.20.0/24 | Development VMs and containers |
| 30 | Prod | 10.10.30.0/24 | Production workloads |
| 40 | IoT | 10.10.40.0/24 | Smart home devices, isolated |
| 99 | WAN | DHCP | Uplink to your router |
VLAN 10 (management) should be locked down — only accessible from trusted hosts. Your Proxmox web UI, SSH, and any out-of-band management lives here.
Write this down. You'll reference it constantly during setup.
Step 2: Configure the Linux Bridge for VLAN-Aware Mode
Proxmox's default vmbr0 bridge is not VLAN-aware. You need to enable that flag so a single bridge can carry tagged traffic for all your VLANs.
Open System → Network in the Proxmox web UI, click on vmbr0, and enable VLAN aware. Alternatively, edit /etc/network/interfaces directly:
auto vmbr0
iface vmbr0 inet static
address 10.10.10.2/24
gateway 10.10.10.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
Apply the change:
ifreload -a
With VLAN-aware mode enabled, you can assign any VM or LXC to a specific VLAN by setting the VLAN tag on its network interface — no extra bridges needed.
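For example, to pin an existing VM (say ID 101) or an LXC container to the Dev VLAN from the CLI; the VM ID here is an assumption, and the tag values match the plan in Step 1:

# VM: retag its first NIC onto VLAN 20
qm set 101 --net0 virtio,bridge=vmbr0,tag=20
# LXC container: same idea via pct
pct set 101 --net0 name=eth0,bridge=vmbr0,ip=dhcp,tag=20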
Step 3: Enable Proxmox SDN
Proxmox SDN (Software Defined Networking) lives under Datacenter → SDN in the web UI. It's the control plane that manages zones, VNets, and subnets from a single interface.
Install the SDN packages if needed
On PVE 8.x you may need to install the SDN components:
apt update && apt install -y libpve-network-perl ifupdown2
On PVE 9.x, SDN is included by default.
Create a VLAN Zone
Navigate to Datacenter → SDN → Zones and create a new zone:
- Type: VLAN
- ID: localvlans
- Bridge: vmbr0
A VLAN zone tells SDN that this group of networks maps to 802.1Q tags on the specified bridge.
Create VNets for Each VLAN
For each VLAN in your plan, create a VNet under Datacenter → SDN → VNets:
- VNet: vnet-dev
- Zone: localvlans
- Tag: 20
Repeat for each VLAN (prod = tag 30, IoT = tag 40, etc.).
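If you'd rather script this than click through the UI, zones and VNets are also exposed through the API via pvesh. A sketch, assuming the names used above (option spellings can vary between PVE versions, so verify with pvesh usage on yours):

# VLAN zone on vmbr0, then a VNet tagged 20 inside it
pvesh create /cluster/sdn/zones --type vlan --zone localvlans --bridge vmbr0
pvesh create /cluster/sdn/vnets --vnet vnet-dev --zone localvlans --tag 20
# equivalent of the Apply button in the next step
pvesh set /cluster/sdn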
Apply the SDN Config
Click Apply in the SDN panel. This pushes the configuration to /etc/network/interfaces.d/sdn and brings up the virtual interfaces.
You can verify:
ip link show | grep vmbr0
bridge vlan show
You should see VLAN IDs listed against vmbr0.
Step 4: Deploy a Router VM for Inter-VLAN Routing
VMs on different VLANs can't talk to each other by default — that's the point of segmentation. You need a router to control what crosses VLAN boundaries.
The easiest approach: spin up a lightweight VM running pfSense, OPNsense, or even a plain Linux VM with iptables/nftables. OPNsense is a solid choice — it's actively maintained and has a clean UI.
VM Network Configuration
Give your router VM multiple network interfaces, one per VLAN:
- net0 — VirtIO, Bridge: vmbr0, VLAN Tag: 99 (WAN, gets IP from your ISP router)
- net1 — VirtIO, Bridge: vmbr0, VLAN Tag: 10 (Management, e.g., 10.10.10.1)
- net2 — VirtIO, Bridge: vmbr0, VLAN Tag: 20 (Dev, e.g., 10.10.20.1)
- net3 — VirtIO, Bridge: vmbr0, VLAN Tag: 30 (Prod, e.g., 10.10.30.1)
- net4 — VirtIO, Bridge: vmbr0, VLAN Tag: 40 (IoT, e.g., 10.10.40.1)
Each interface gets the gateway IP for that VLAN. VMs on that VLAN point to this IP as their default gateway.
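Assuming the router VM got ID 100, the same NIC layout can be applied from the shell (a sketch; adjust the VM ID to match yours):

qm set 100 --net0 virtio,bridge=vmbr0,tag=99
qm set 100 --net1 virtio,bridge=vmbr0,tag=10
qm set 100 --net2 virtio,bridge=vmbr0,tag=20
qm set 100 --net3 virtio,bridge=vmbr0,tag=30
qm set 100 --net4 virtio,bridge=vmbr0,tag=40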
Firewall Rules
In OPNsense (or pfSense), set up rules like these to enforce isolation:
- IoT → Any: Block, except DNS and NTP to the router
- Dev → Prod: Block by default, allow specific ports if needed
- Prod → Management: Block
- Management → All: Allow (trusted admin network)
- Any → WAN: Allow with NAT masquerade
This gives you real cloud-style network isolation — your IoT devices literally cannot reach your production VMs, even if one gets compromised.
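If you picked the plain Linux VM with nftables instead of OPNsense, a minimal sketch of the same policy looks like this. The interface names are assumptions (eth0 = WAN through eth4 = IoT, mirroring net0 through net4 above), and the IoT DNS/NTP exception is omitted since it targets the router itself (input chain) rather than forwarded traffic:

#!/usr/sbin/nft -f
# Default-deny forwarding between VLANs; management is trusted; NAT out the WAN leg
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "eth1" accept                                     # Management -> all
        iifname { "eth2", "eth3", "eth4" } oifname "eth0" accept  # any VLAN -> WAN
        # Dev -> Prod, IoT -> anything internal: falls through to the drop policy
    }
}
table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "eth0" masquerade                                 # NAT for outbound traffic
    }
}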
Step 5: Build a Cloud-Init Template
Cloud-init is what turns a blank VM into a configured machine. You define SSH keys, hostname, and networking at deploy time — no manual console work needed.
Download a Cloud Image
cd /var/lib/vz/template/iso
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Create the Template VM
# Create a VM shell (ID 9000 is a common convention for templates)
qm create 9000 --name ubuntu-2404-cloud --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0

# Import the cloud image as the disk
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm

# Attach the imported disk
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Add a cloud-init drive
qm set 9000 --ide2 local-lvm:cloudinit

# Set boot order
qm set 9000 --boot c --bootdisk scsi0

# Enable serial console (Ubuntu cloud images direct their console output here)
qm set 9000 --serial0 socket --vga serial0

# Convert to template
qm template 9000
Configure Cloud-Init Defaults
In the web UI, go to the template VM → Cloud-Init tab and set:
- User: your admin username (e.g., admin)
- SSH public key: paste your public key
- IP Config: set to DHCP or a static range depending on your VLAN
- DNS: point to your router's IP for each VLAN
You can also set these via CLI:
qm set 9000 --ciuser admin --sshkeys ~/.ssh/id_ed25519.pub
qm set 9000 --ipconfig0 ip=dhcp
qm set 9000 --nameserver 10.10.20.1
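To verify what Proxmox will actually hand to cloud-init, you can dump the generated user-data:

qm cloudinit dump 9000 user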
Step 6: Provision VMs On Demand
With the template ready, deploying a new VM is a single command:
# Clone template 9000 to new VM 201, full clone
qm clone 9000 201 --name dev-web-01 --full

# Override cloud-init for this specific VM
qm set 201 --ipconfig0 ip=10.10.20.10/24,gw=10.10.20.1
qm set 201 --nameserver 10.10.20.1
qm set 201 --net0 virtio,bridge=vmbr0,tag=20

# Resize disk if needed
qm resize 201 scsi0 +20G

# Start it
qm start 201
Within 30-60 seconds, the VM boots, cloud-init configures it, and you can SSH in:
ssh admin@10.10.20.10
No ISO, no installer, no manual network config. This is the core of what makes a private cloud feel like a cloud.
Automate with a Shell Script
Wrap this in a script to make provisioning even faster:
#!/bin/bash
# provision-vm.sh — quick VM provisioner
set -euo pipefail

VM_ID="$1"
VM_NAME="$2"
VLAN="$3"
IP="$4"
GW="10.10.${VLAN}.1"
TEMPLATE=9000

qm clone "$TEMPLATE" "$VM_ID" --name "$VM_NAME" --full
qm set "$VM_ID" --net0 "virtio,bridge=vmbr0,tag=${VLAN}"
qm set "$VM_ID" --ipconfig0 "ip=${IP}/24,gw=${GW}"
qm set "$VM_ID" --nameserver "${GW}"
qm resize "$VM_ID" scsi0 +18G
qm start "$VM_ID"

echo "VM $VM_NAME ($VM_ID) started on VLAN $VLAN at $IP"
Usage:
bash provision-vm.sh 202 prod-api-01 30 10.10.30.10
Step 7: DHCP and DNS for Your VLANs
Static IPs work but get tedious. Run a DHCP server on your router VM (OPNsense handles this natively) and consider a local DNS resolver like AdGuard Home or Pi-hole — or just use Unbound (built into OPNsense).
For each VLAN, configure:
- DHCP range: e.g., 10.10.20.100–10.10.20.200 (leave low IPs for static assignments)
- DHCP reservations: bind MAC addresses to fixed IPs for servers
- DNS override: add local hostnames so dev-web-01.home.lab resolves without a public DNS entry
With this in place, you can drop the static IP from the provisioner script and let DHCP assign addresses — a DHCP reservation still guarantees each server gets the right IP, and the DNS override keeps its hostname resolvable.
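With DHCP handling addressing, the per-VM override from Step 6 collapses to a single line, for example:

qm set 201 --ipconfig0 ip=dhcp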
Step 8: Secure the Management Plane
A private cloud is only as good as its management security. A few critical hardening steps:
Restrict Proxmox Web UI to Management VLAN
Edit /etc/default/pveproxy to bind only to the management interface:
LISTEN_IP="10.10.10.2"
Restart the proxy:
systemctl restart pveproxy
Now the Proxmox UI is unreachable from Dev, Prod, and IoT VLANs.
Use the Proxmox Firewall
Enable the Proxmox datacenter firewall under Datacenter → Firewall → Options and add rules at the datacenter level:
- Allow TCP 8006 (web UI) from 10.10.10.0/24 only
- Allow TCP 22 (SSH) from 10.10.10.0/24 only
- Drop everything else inbound
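The same datacenter-level rules can be written directly to /etc/pve/firewall/cluster.fw. A sketch (double-check the syntax before enabling; with the firewall on, the default inbound policy is DROP, and a bad rule set can lock you out of the UI):

[OPTIONS]
enable: 1

[RULES]
# Web UI and SSH from the management subnet only; everything else hits the default DROP
IN ACCEPT -source 10.10.10.0/24 -p tcp -dport 8006
IN ACCEPT -source 10.10.10.0/24 -p tcp -dport 22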
Two-Factor Authentication
Enable TOTP under Datacenter → Permissions → Two Factor. Force 2FA for all admin accounts. This is non-negotiable for anything exposed beyond localhost.
Step 9: Monitor Your Private Cloud
Once VMs are running across multiple VLANs, visibility matters. The classic stack is Prometheus + Grafana + node_exporter:
- Deploy a monitoring VM on the management VLAN (VLAN 10)
- Install node_exporter on each VM via cloud-init user-data (see the sketch after this list)
- Configure Prometheus to scrape across VLANs (the router allows monitoring VLAN → all VLANs on port 9100)
- Build Grafana dashboards for CPU, memory, disk I/O, and network per VLAN
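One way to handle the node_exporter step is a custom cloud-init user-data snippet that every clone inherits. A sketch, assuming a snippets-enabled storage and Debian/Ubuntu guests; note that --cicustom user= replaces the entire generated user config, so the username and SSH key must be declared in the snippet too:

# /var/lib/vz/snippets/monitoring.yaml
#cloud-config
users:
  - name: admin
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...   # paste your real public key here
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
packages:
  - prometheus-node-exporter
runcmd:
  - systemctl enable --now prometheus-node-exporter

Attach it to the template so every future clone picks it up:

qm set 9000 --cicustom user=local:snippets/monitoring.yaml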
Alternatively, Proxmox has built-in integration with InfluxDB under Datacenter → Metric Server — useful for a quick start without a full Prometheus stack.
Putting It All Together
Here's what your private cloud looks like once it's running:
- Proxmox host sits on VLAN 10 (management), visible only to trusted admin devices
- Router VM (OPNsense) has a leg on every VLAN, enforces firewall rules between them
- Dev VMs spin up on VLAN 20, isolated from everything else, can reach the internet via NAT
- Prod VMs sit on VLAN 30, internet-accessible only for specific ports via port forwards
- IoT devices are quarantined on VLAN 40, blocked from reaching any VM
- New VMs deploy in under 60 seconds from the cloud-init template
This architecture scales. Add more Proxmox nodes to the cluster and VMs live-migrate between them while keeping their VLAN assignments. Add more templates for different OS flavors. Add Ansible or Terraform for even more automation.
Common Issues and Fixes
Cloud-init doesn't apply settings on first boot
Make sure the VM has an IDE2 drive set to cloudinit. Without it, cloud-init has nowhere to read its configuration from.
VMs on the same VLAN can't communicate
Check that both VMs have the same VLAN tag set on their network interfaces. A typo (tag 2 instead of tag 20) will silently break communication.
SDN changes don't take effect
After any SDN change in the web UI, you must click Apply — changes are staged and don't auto-apply. Check /etc/network/interfaces.d/sdn to confirm the config was written.
VMs lose network access after a Proxmox reboot
Make sure the router VM is set to Start at boot and has a Start/Shutdown order lower than your other VMs. If VMs boot before the router, they'll fail to get DHCP leases.
Conclusion
Building a private cloud with Proxmox isn't about mimicking AWS for its own sake — it's about having repeatable, isolated, on-demand infrastructure that you fully control. With VLAN segmentation enforcing network boundaries, SDN making multi-VLAN management tractable, and cloud-init templates turning provisioning into a one-liner, you end up with a homelab that behaves like a real cloud platform.
The initial setup takes a few hours, but once it's running, the operational overhead is minimal. You'll spend less time manually configuring VMs and more time actually using them — which is exactly what good infrastructure should feel like.
Start with two or three VLANs if the full layout feels overwhelming. The architecture is modular: add VLANs, templates, and automation incrementally as your comfort grows.