Talos Linux on Proxmox: Immutable Kubernetes Nodes
Deploy immutable, API-driven Kubernetes nodes on Proxmox using Talos Linux. Full walkthrough covering VM setup, Cloud-Init, talosctl, and cluster bootstrapping.
If you've ever spent an afternoon SSH-ing into a Kubernetes node to debug a kubelet issue, only to realize you've created a beautiful snowflake that can never be safely reproduced, Talos Linux was built for you. Talos takes a radically different approach: there's no shell, no SSH, no package manager, and no imperative configuration. Every node is defined by a machine config file and managed entirely through an API. Combine that with Proxmox VE's VM management capabilities and you get a Kubernetes cluster you can tear down and rebuild in minutes with full confidence it'll be identical every time.
What Is Talos Linux?
Talos is a minimal, immutable Linux distribution purpose-built for Kubernetes. It strips the OS down to exactly what Kubernetes needs — a kernel, containerd, kubelet, and an API server called apid. That's it.
Why Talos Pairs Well with Proxmox
- Declarative everything: Machine configs are YAML files you version-control alongside your infrastructure code.
- No configuration drift: Nodes can't be imperatively modified, so your cluster stays reproducible.
- Fast provisioning: Talos boots from a disk image in seconds and is ready for talosctl apply-config almost immediately.
- Cloud-Init compatible: Proxmox's Cloud-Init integration makes injecting the initial Talos config straightforward.
- API-driven: talosctl manages nodes the same way kubectl manages pods — no bastions, no jump hosts.
The result is a cluster where every node is cattle, not a pet.
Prerequisites
Before you start, make sure you have:
- Proxmox VE 8.x or 9.x installed and updated
- At least 3 VMs worth of resources (1 control plane, 2 workers minimum)
- talosctl installed on your workstation
- kubectl installed on your workstation
- Network connectivity from your workstation to the Proxmox host and VM network
For a minimal cluster you'll want roughly:
- Control plane: 2 vCPU, 4 GB RAM, 20 GB disk
- Workers: 2 vCPU, 4 GB RAM, 40 GB disk
Step 1: Download the Talos Disk Image
Talos provides ready-to-use disk images for bare-metal and virtualized environments. Head to the Talos releases page and grab the latest stable release. You want metal-amd64.iso for an ISO-based install, or nocloud-amd64.raw.xz for a raw disk image that picks up its configuration via Cloud-Init (nocloud).
For Proxmox, the ISO approach is simpler since you can upload it directly to your storage.
# Download on your workstation
wget https://github.com/siderolabs/talos/releases/download/v1.9.4/metal-amd64.iso
# Or download directly on the Proxmox host
wget -P /var/lib/vz/template/iso/ \
https://github.com/siderolabs/talos/releases/download/v1.9.4/metal-amd64.iso
If you're downloading locally, upload it to Proxmox via the web UI under Storage → ISO Images → Upload, or use scp:
scp metal-amd64.iso root@proxmox-host:/var/lib/vz/template/iso/
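Before uploading, it's worth checking the download's integrity. A small sketch, assuming the release publishes a sha256sum.txt asset alongside the images (check the release page for the exact filename):

```shell
# Verify a downloaded image against a release checksum file.
# Usage: verify_iso ISO_FILE CHECKSUM_FILE  (exits non-zero on mismatch)
verify_iso() {
  local iso="$1" sums="$2"
  # Pull this file's line out of the checksum list and let sha256sum verify it
  grep "$(basename "$iso")" "$sums" | sha256sum --check -
}

# Example, assuming both files are in the current directory:
# verify_iso metal-amd64.iso sha256sum.txt
```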
Step 2: Create Talos VMs in Proxmox
You'll create one VM for the control plane and two for workers. The process is the same for all three — only the machine config differs later.
Create the Control Plane VM
In the Proxmox web UI, click Create VM and configure:
- General: Name it talos-cp-01, note the VM ID (e.g., 200)
- OS: Select the Talos ISO you uploaded; Guest OS type: Linux, kernel 6.x
- System: BIOS: OVMF (UEFI); Machine: q35; add a TPM if you want measured boot
- Disks: VirtIO SCSI, 20 GB minimum, enable discard and SSD emulation
- CPU: 2 cores, type: host (for best performance)
- Memory: 4096 MB
- Network: VirtIO (virtio-net), attached to your VM bridge (e.g., vmbr0)
Repeat this for talos-worker-01 and talos-worker-02 with 40 GB disks and whatever CPU/RAM suits your workload.
Using qm CLI (Faster for Multiple VMs)
# Control plane
qm create 200 \
--name talos-cp-01 \
--memory 4096 \
--cores 2 \
--net0 virtio,bridge=vmbr0 \
--ostype l26 \
--bios ovmf \
--machine q35 \
--efidisk0 local-lvm:0,efitype=4m,pre-enrolled-keys=0 \
--scsi0 local-lvm:20,ssd=1,discard=on \
--ide2 local:iso/metal-amd64.iso,media=cdrom \
--boot order=ide2
# Workers (repeat for 201, 202)
qm create 201 \
--name talos-worker-01 \
--memory 4096 \
--cores 2 \
--net0 virtio,bridge=vmbr0 \
--ostype l26 \
--bios ovmf \
--machine q35 \
--efidisk0 local-lvm:0,efitype=4m,pre-enrolled-keys=0 \
--scsi0 local-lvm:40,ssd=1,discard=on \
--ide2 local:iso/metal-amd64.iso,media=cdrom \
--boot order=ide2
Step 3: Install talosctl
On your workstation (Linux/macOS):
curl -sL https://talos.dev/install | sh
Verify the install:
talosctl version --client
# Client:
#   Tag: v1.9.4
Make sure the version matches your downloaded ISO to avoid API compatibility issues.
Step 4: Boot the VMs and Get IP Addresses
Start all three VMs. Talos will boot from the ISO and enter a maintenance mode — it won't install itself to disk until it receives a machine config.
In the Proxmox console for each VM, you'll see a screen showing the IP address Talos acquired via DHCP. Note these down:
Control plane: 192.168.1.100
Worker 01:     192.168.1.101
Worker 02:     192.168.1.102
Tip: For a production setup, assign static IPs or DHCP reservations before this step. The control plane IP will be embedded in your machine configs and changing it later is painful.
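If you'd rather skip DHCP entirely, Talos can pin the address in the machine config itself. A sketch of the relevant machine.network section; the interface name, addresses, gateway, and nameserver here are assumptions for this example network:

```yaml
machine:
  network:
    interfaces:
      - interface: eth0        # the first virtio NIC typically appears as eth0
        dhcp: false
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0 # default route
            gateway: 192.168.1.1
    nameservers:
      - 192.168.1.1
```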
Step 5: Generate Talos Machine Configs
This is where the magic happens. talosctl gen config produces the YAML files that define your entire cluster.
talosctl gen config talos-proxmox-cluster https://192.168.1.100:6443 \
--output-dir ./talos-config
This creates four files:
talos-config/
├── controlplane.yaml   # Machine config for control plane nodes
├── worker.yaml         # Machine config for worker nodes
├── talosconfig         # Client config for talosctl
└── secrets.yaml        # Cluster secrets (keep this safe!)
Customize the Control Plane Config
Open controlplane.yaml and review the key sections. At minimum, verify the cluster.endpoint matches your control plane IP.
For Proxmox VMs, double-check the install disk device: a disk attached via VirtIO SCSI (scsi0, as created above) typically shows up as /dev/sda, while a virtio-blk disk shows up as /dev/vda. Set it under machine.install, and give the node a hostname:

machine:
  install:
    disk: /dev/sda # use /dev/vda if the disk is attached as virtio-blk
    image: ghcr.io/siderolabs/installer:v1.9.4
    bootloader: true
    wipe: false
  network:
    hostname: talos-cp-01
For workers, edit worker.yaml similarly and set unique hostnames:

machine:
  install:
    disk: /dev/sda # use /dev/vda if the disk is attached as virtio-blk
    image: ghcr.io/siderolabs/installer:v1.9.4
  network:
    hostname: talos-worker-01
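Rather than maintaining a full copy of worker.yaml per node, you can keep one shared worker.yaml and generate a tiny per-node patch. A sketch; the patches/ directory name is just a convention for this example:

```shell
# Write one small hostname patch per worker.
mkdir -p ./talos-config/patches
for host in talos-worker-01 talos-worker-02; do
  cat > "./talos-config/patches/${host}.yaml" <<EOF
machine:
  network:
    hostname: ${host}
EOF
done
```

talosctl apply-config accepts a --config-patch @file argument, so each patch can be merged over the shared worker.yaml at apply time instead of editing per-node copies.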
Enable KubeSpan (Optional)
If you want encrypted pod networking between nodes, Talos has built-in KubeSpan (WireGuard-based). Add this to controlplane.yaml:
machine:
  network:
    kubespan:
      enabled: true
As for the CNI: Talos bundles Flannel by default. If you'd rather run Calico or another CNI, you'll disable the default in the machine config and apply your choice after cluster bootstrap via kubectl.
Step 6: Apply Machine Configs
With the VMs running in maintenance mode and your configs ready, apply them:
# Control plane
talosctl apply-config \
--nodes 192.168.1.100 \
--file ./talos-config/controlplane.yaml \
--insecure
# Workers
talosctl apply-config \
--nodes 192.168.1.101 \
--file ./talos-config/worker.yaml \
--insecure

talosctl apply-config \
--nodes 192.168.1.102 \
--file ./talos-config/worker.yaml \
--insecure
The --insecure flag is needed on first apply because the node's TLS certificates aren't established yet. After initial configuration, all subsequent talosctl commands use mutual TLS via your talosconfig.
Talos will install itself to disk and reboot. Watch the console — the whole process takes about 2 minutes per node.
Step 7: Bootstrap the Cluster
Once the control plane VM has rebooted, bootstrap etcd on it. This is only done once — ever — on one control plane node:
export TALOSCONFIG=./talos-config/talosconfig
talosctl --nodes 192.168.1.100 bootstrap
Watch the bootstrap logs:
talosctl --nodes 192.168.1.100 dmesg --follow
You should see kubelet starting, etcd coming up, and the API server becoming available. This typically takes 3-5 minutes.
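If you're scripting the bootstrap, a small retry loop around talosctl health beats staring at logs. A sketch; the interval and attempt count are arbitrary:

```shell
# Poll cluster health until it passes or we run out of attempts.
# Usage: wait_healthy NODE_IP [TRIES]
wait_healthy() {
  node="$1"; tries="${2:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if talosctl --nodes "$node" health --wait-timeout 10s; then
      return 0
    fi
    sleep 10
    i=$((i + 1))
  done
  return 1
}

# wait_healthy 192.168.1.100
```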
Step 8: Retrieve the kubeconfig
Once the API server is up:
talosctl --nodes 192.168.1.100 kubeconfig ./kubeconfig
export KUBECONFIG=./kubeconfig
kubectl get nodes
Expected output (workers join within a minute or two):
NAME              STATUS   ROLES           AGE   VERSION
talos-cp-01       Ready    control-plane   5m    v1.32.1
talos-worker-01   Ready    <none>          3m    v1.32.1
talos-worker-02   Ready    <none>          3m    v1.32.1
Your cluster is up. No SSH, no apt-get, no configuration drift.
Step 9: Verify or Customize the CNI
Talos bundles Flannel as its default CNI, so basic pod networking works as soon as the cluster converges; no extra install is needed. If you want a different CNI, set cluster.network.cni.name: none in both machine configs before bootstrapping, then install your choice with kubectl.
For Calico with network policy support:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml
After the CNI is running, CoreDNS will start and your pods will be able to communicate across nodes.
Managing Nodes Declaratively
One of the biggest wins with Talos is that node changes go through talosctl apply-config, not imperative commands. Need to change the kubelet log level? Edit the YAML and apply it.
# Apply a config change without reboot (if change doesn't require it)
talosctl apply-config \
--nodes 192.168.1.100 \
--file ./talos-config/controlplane.yaml
# Force a reboot after apply (for kernel-level changes)
talosctl apply-config \
--nodes 192.168.1.100 \
--file ./talos-config/controlplane.yaml \
--mode reboot
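To make the kubelet example concrete: raising the kubelet log verbosity is a small edit in controlplane.yaml. A sketch; the verbosity value is just an illustration:

```yaml
machine:
  kubelet:
    extraArgs:
      v: "4"   # kubelet log verbosity; typically applies without a full node reboot
```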
Upgrading Talos
Upgrading a node is a single command:
talosctl upgrade \
--nodes 192.168.1.101 \
--image ghcr.io/siderolabs/installer:v1.9.5
Talos handles the cordon, drain, upgrade, and uncordon sequence automatically. For control plane nodes, upgrade them one at a time to maintain quorum.
Upgrading Kubernetes
talosctl upgrade-k8s \
--nodes 192.168.1.100 \
--to 1.33.0
This orchestrates the full Kubernetes upgrade across all nodes in your cluster.
Scaling the Cluster
Adding a worker is the same as the initial setup — create a Proxmox VM, boot from the ISO, apply the worker config:
qm create 203 \
--name talos-worker-03 \
--memory 8192 \
--cores 4 \
--net0 virtio,bridge=vmbr0 \
--ostype l26 \
--bios ovmf \
--machine q35 \
--efidisk0 local-lvm:0,efitype=4m,pre-enrolled-keys=0 \
--scsi0 local-lvm:40,ssd=1,discard=on \
--ide2 local:iso/metal-amd64.iso,media=cdrom \
--boot order=ide2
qm start 203
After boot, get the IP from console, then:
talosctl apply-config \
--nodes 192.168.1.103 \
--file ./talos-config/worker.yaml \
--insecure
The new node joins automatically.
Useful talosctl Commands
A quick reference for day-to-day operations:
# Check node health
talosctl --nodes 192.168.1.100 health
# View running services
talosctl --nodes 192.168.1.100 services

# Stream kernel logs
talosctl --nodes 192.168.1.100 dmesg --follow

# Get node info
talosctl --nodes 192.168.1.100 get members

# Read the machine config
talosctl --nodes 192.168.1.100 get mc v1alpha1

# Reset a node (wipes disk — use carefully!)
talosctl --nodes 192.168.1.101 reset
Tips and Gotchas
- Save your
secrets.yaml: This file contains the CA and bootstrap tokens for your cluster. Lose it and you lose the ability to re-issue certificates or re-bootstrap. Store it in a password manager or vault. - Static IPs matter: Talos embeds the control plane IP in PKI. If it changes, you need to regenerate configs. Set DHCP reservations before you start.
- Q35 + OVMF: Talos works best with the q35 machine type and UEFI. Legacy BIOS is technically supported but adds friction.
- Disk detection: The install disk name depends on the bus: VirtIO SCSI disks appear as /dev/sda, virtio-blk disks as /dev/vda, and IDE/SATA as /dev/hdX or /dev/sdX. Run talosctl get disks --insecure against a node in maintenance mode to see what it detects, and set machine.install.disk to match.
- Firewall ports: The Talos API uses port 50000 (TCP). Your workstation needs access to this port on each node. The Kubernetes API is on 6443 (control plane only).
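A quick way to confirm those ports are reachable from your workstation before blaming certificates. A sketch using bash's /dev/tcp; substitute nc -z if you prefer netcat:

```shell
# Probe a TCP port and report open/closed.
# Usage: check_port HOST PORT
check_port() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

# check_port 192.168.1.100 50000   # Talos API
# check_port 192.168.1.100 6443    # Kubernetes API
```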
Conclusion
Talos Linux on Proxmox is one of the most satisfying homelab setups you can build. You get Kubernetes nodes that are genuinely immutable — no SSH access means no configuration drift, no accidental manual changes, and no "I wonder what I did to that node six months ago" moments. Every node is defined by a version-controlled YAML file, every change goes through talosctl, and scaling is as simple as creating a new VM and applying a config.
The learning curve is steeper than kubeadm or k3s, mostly because you have to rewire the instinct to SSH in and fix things. Once you accept that the machine config is the source of truth, everything else clicks into place. For homelab users running multiple Kubernetes clusters on Proxmox, Talos + declarative machine configs is the cleanest path to reproducible infrastructure — and it's a genuine taste of how production-grade teams manage Kubernetes at scale.