Proxmox LXC Templates: Build and Deploy Custom Containers
Learn how to create reusable custom LXC templates in Proxmox VE, from base container prep to rapid deployment across your homelab or cluster.
If you've ever provisioned the same container three times in a week — installing the same packages, tweaking the same configs, and wondering why you're doing this again — custom LXC templates are about to become your best friend. Proxmox VE makes it surprisingly straightforward to snapshot a configured container into a reusable template, and once you have a solid base image, spinning up a new instance takes under a minute.
This guide walks through the entire workflow: preparing a base container, stripping it down for template use, converting it, and deploying clones with optional cloud-init style customization using startup scripts.
Why Custom LXC Templates Beat Starting from Scratch
Proxmox ships with a curated list of official LXC templates from providers like Turnkey Linux, Ubuntu, Debian, and Alpine. These are great starting points, but they're generic. Every time you deploy one, you end up running the same post-install ritual:
- Update packages
- Install your standard tools (htop, curl, vim, ufw, etc.)
- Configure SSH authorized keys
- Set up your preferred shell environment
- Harden sshd_config
- Install language runtimes or daemons specific to your stack
With a custom template, all of that is baked in. You do it once, publish the template to your Proxmox storage, and every subsequent deployment inherits that baseline. This compounds nicely in a cluster — push the template to shared storage and every node can deploy from it instantly.
Prerequisites
Before you start, make sure you have:
- Proxmox VE 7.x or 8.x (steps apply to both)
- At least one storage pool configured for CT templates (check Datacenter → Storage → Content and enable CT templates)
- Root or sudo access to the Proxmox host
- A working LXC container to use as your base
If you haven't downloaded any base templates yet, grab one from the shell:
# List available templates
pveam update
pveam available --section system

# Download Ubuntu 24.04 as a base
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst
Step 1: Create and Configure Your Base Container
Start by creating a new container from a vanilla template. You can do this through the web UI or via CLI. CLI is faster once you know the syntax:
pct create 9000 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
--hostname base-template \
--cores 1 \
--memory 512 \
--rootfs local-lvm:8 \
--net0 name=eth0,bridge=vmbr0,ip=dhcp \
--unprivileged 1 \
--features nesting=1
CTID 9000 is a convention many homelab operators use for base/template containers — high enough to stay out of the way of regular IDs. Use whatever fits your numbering scheme.
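If you script container creation, you can pick the next free CTID in your chosen range programmatically rather than hardcoding it. A minimal sketch, with the list of used IDs simulated for illustration (on a real host you would build it from pct list output):

```shell
# Find the first free CTID at or above a base ID.
# "used" is hardcoded here; on a Proxmox host, something like
#   used=$(pct list | awk 'NR>1 {print $1}')
# would populate it with the real container IDs.
used="9000 9001 9003"
ctid=9000
while printf '%s\n' $used | grep -qx "$ctid"; do
  ctid=$((ctid + 1))
done
echo "next free CTID: $ctid"
```

With the simulated list above, the loop skips 9000 and 9001 and lands on 9002.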
Start the container and enter it:
pct start 9000
pct enter 9000
Install Your Standard Packages
This is where you bake in everything you want on every container by default. Here's a reasonable baseline for a Debian/Ubuntu container:
apt update && apt upgrade -y
apt install -y \
  curl \
  wget \
  git \
  vim \
  htop \
  net-tools \
  unzip \
  ca-certificates \
  gnupg \
  lsb-release \
  ufw \
  fail2ban \
  logrotate \
  sudo
Harden SSH
Edit /etc/ssh/sshd_config to disable password auth and root login:
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
Restart SSH to apply:
systemctl restart ssh
Configure UFW Defaults
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw --force enable
Add Your SSH Public Key
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "your-public-key-here" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
Configure fail2ban
The default fail2ban config is fine for most cases. Just make sure it's enabled:
systemctl enable fail2ban
systemctl start fail2ban
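On Debian-based systems the sshd jail is typically enabled out of the box. If you'd rather pin the basics down explicitly, a minimal /etc/fail2ban/jail.local sketch could look like this (the ban times and retry counts are illustrative values, not recommendations):

```ini
# /etc/fail2ban/jail.local - minimal override; values are illustrative
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```

Local overrides in jail.local survive package upgrades, which is why they're preferred over editing jail.conf directly.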
Step 2: Add a First-Boot Initialization Script
One of the most useful patterns for LXC templates is a first-boot script that handles machine-specific customization — setting a hostname, regenerating SSH host keys, and applying any per-container settings. This bridges the gap between a static template and dynamic cloud-init.
Create the script at /usr/local/bin/first-boot-init.sh:
#!/bin/bash
# First-boot initialization for cloned LXC containers
# Runs once on first boot, then disables itself

LOG=/var/log/first-boot-init.log
echo "[$(date)] First boot initialization starting..." >> $LOG

# Regenerate SSH host keys
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server 2>> $LOG
echo "[$(date)] SSH host keys regenerated" >> $LOG

# Clear root's bash history
truncate -s 0 /root/.bash_history
echo "[$(date)] Bash history cleared" >> $LOG

# Disable this service after first run
systemctl disable first-boot-init.service
echo "[$(date)] First boot initialization complete" >> $LOG
Make it executable:
chmod +x /usr/local/bin/first-boot-init.sh
Create a systemd unit at /etc/systemd/system/first-boot-init.service:
[Unit]
Description=First Boot Initialization
After=network.target
ConditionPathExists=/usr/local/bin/first-boot-init.sh

[Service]
Type=oneshot
ExecStart=/usr/local/bin/first-boot-init.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
Enable it so it runs on the first boot of each cloned container:
systemctl enable first-boot-init.service
Step 3: Clean Up Before Converting to Template
Before you freeze the container into a template, clean up anything that should not be shared across clones:
# Clear package cache
apt clean
apt autoremove -y

# Remove machine-specific IDs
truncate -s 0 /etc/machine-id
rm -f /var/lib/dbus/machine-id
ln -sf /etc/machine-id /var/lib/dbus/machine-id

# Clear logs
find /var/log -type f -exec truncate -s 0 {} \;
journalctl --vacuum-time=1s

# Clear shell history
history -c
truncate -s 0 /root/.bash_history

# Clear temporary files
rm -rf /tmp/* /var/tmp/*
Exit the container:
exit
Stop it:
pct stop 9000
Step 4: Convert the Container to a Template
This is a one-way operation — once you convert a container to a template, you can no longer start it directly. Make sure you're happy with the state of the container before proceeding.
pct template 9000
The web UI will show the container icon change to a template indicator. The underlying disk image is now stored as a read-only template, and the container entry will be grayed out in the resource tree.
Note that the new template won't show up in pveam list local, since pveam only tracks downloaded upstream templates. Your custom template is tracked as a CT in template state: pct config 9000 will now report template: 1, and the container is available for cloning directly from the resource tree or via pct clone.
Step 5: Deploy Clones from Your Template
Via the Web UI
Right-click the template container (9000) in the resource tree and select Clone. You'll be prompted for:
- Target node — pick any node in your cluster
- VM ID — assign a new CTID
- Mode — use Full Clone to get an independent copy (Linked Clone shares the base disk, useful for saving space but ties the clone to the template)
- Hostname — set the new container's hostname
- Storage — where to store the cloned disk
Via CLI
# Full clone to CTID 101
pct clone 9000 101 --full --hostname webserver-01 --storage local-lvm

# Start it immediately
pct start 101
For bulk deployments, wrap this in a loop:
#!/bin/bash
# Deploy 5 containers from template 9000

START_ID=110
COUNT=5
BASE_HOSTNAME="app-node"
STORAGE="local-lvm"

for i in $(seq 1 $COUNT); do
  CTID=$((START_ID + i - 1))
  HOSTNAME="${BASE_HOSTNAME}-$(printf '%02d' $i)"
  echo "Deploying CTID $CTID as $HOSTNAME..."
  pct clone 9000 $CTID --full --hostname $HOSTNAME --storage $STORAGE
  pct start $CTID
  echo "CTID $CTID started"
done

echo "Deployment complete"
Sharing Templates Across a Cluster
If you run a Proxmox cluster, you'll want your custom template accessible from all nodes without manually copying it. The cleanest approach is shared storage — an NFS or Ceph pool with CT templates enabled.
Option A: Export and Import
For non-shared storage, you can back up the template with vzdump and restore it on other nodes:

# On the source node: back up the template to a compressed archive
vzdump 9000 --dumpdir /tmp --compress zstd

# Copy to the target node
scp /tmp/vzdump-lxc-9000-*.tar.zst root@pve-node2:/tmp/

# On the target node: restore it, then convert it back to a template
pct restore 9000 /tmp/vzdump-lxc-9000-*.tar.zst --storage local-lvm
pct template 9000

A plain rootfs tarball dropped into /var/lib/vz/template/cache/ will also appear in the web UI under local → CT Templates and can seed pct create, but for a fully configured template the vzdump/restore round trip is more reliable because it preserves the container configuration as well.
Option B: Shared Storage
Add an NFS share or Ceph pool to all nodes with the CT templates content type enabled. Create your template on one node, store the result on shared storage, and all nodes can clone from it directly. This is the recommended approach for clusters with more than two nodes.
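For reference, the resulting entry in /etc/pve/storage.cfg would look something like the sketch below; the storage ID, server address, and export path are placeholders for your own environment:

```
nfs: shared-templates
    server 192.168.1.50
    export /export/proxmox
    path /mnt/pve/shared-templates
    content vztmpl,backup
```

The content vztmpl line is what makes the storage eligible for CT templates; adding backup as well lets the same share hold vzdump archives.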
Maintaining and Updating Your Templates
Templates go stale — packages get outdated, security patches drop, your tooling preferences evolve. Here's a maintenance workflow that doesn't require rebuilding from scratch:
- Clone the template to a new working container: pct clone 9000 9001 --full
- Start and enter the clone: pct start 9001 && pct enter 9001
- Run updates and make changes inside the container
- Clean up (repeat the cleanup steps from Step 3)
- Stop the container: pct stop 9001
- Convert to template: pct template 9001
- Delete or archive the old template (9000) once you've verified the new one
Keeping version numbers in your CTID scheme (9001, 9002, etc.) makes it easy to track which template is current without losing the previous version until you're confident in the new one.
Practical Tips
- Use descriptive tags — In Proxmox 7.3+, you can tag containers. Tag your templates with template, base, and the distro version (ubuntu-24.04) so they're easy to filter.
- Document what's in each template — Add a /etc/template-manifest.txt file inside the container listing installed packages and configuration choices. Future-you will thank present-you.
- Don't bake in secrets — API keys, passwords, and certificates should be injected post-deployment, not embedded in the template. Use the first-boot script or an external secrets manager.
- Keep templates minimal — The more you put in a template, the heavier each clone becomes and the more maintenance surface you create. Stick to the lowest common denominator across all use cases.
- Test before templating — Boot the container at least once after all your changes, verify SSH access, confirm the first-boot service runs correctly, and check that firewall rules apply as expected.
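The manifest tip above is easy to automate. A minimal sketch, where the path and field names are just conventions rather than anything Proxmox requires (inside the base container you'd write to /etc/template-manifest.txt; /tmp is used here so the sketch runs anywhere):

```shell
# Sketch: record what's in the template before converting it.
# Path and fields are conventions, not Proxmox requirements.
MANIFEST=/tmp/template-manifest.txt   # use /etc/template-manifest.txt in the container
{
  echo "built: $(date -u +%Y-%m-%d)"
  echo "base: ubuntu-24.04"
  echo "packages: curl wget git vim htop ufw fail2ban"
} > "$MANIFEST"
cat "$MANIFEST"
```

On a Debian/Ubuntu base you could generate the package list dynamically with apt-mark showmanual instead of maintaining it by hand.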
Conclusion
Custom LXC templates are one of those Proxmox features that seems like a small optimization until you've used it a few times — then you wonder how you ever managed without them. The upfront investment of an hour configuring a solid base container pays dividends every time you need a new container, especially when you're spinning up several at once or standardizing across a cluster.
The workflow is simple: configure, clean, convert, clone. Add a first-boot init script for machine-specific setup and you have a lightweight alternative to cloud-init that works entirely within the LXC ecosystem. From here, you can layer on more advanced automation — Ansible for post-deployment configuration, Terraform with the Proxmox provider for infrastructure-as-code deployments, or simple shell scripts called over SSH for bulk operations. The template is just the foundation.