6 Must-Have LXC Containers for Your Proxmox Homelab

Six LXC containers that cover the majority of Proxmox homelab infrastructure needs, with exact RAM specs, pct commands, and real-world gotchas for each.

If you're running Proxmox VE 9.1 and wondering what to put inside it, these six LXC containers cover the majority of homelab infrastructure needs: DNS filtering, SSL reverse proxying, password management, uptime monitoring, media serving, and identity management. Each runs in 1 GB of RAM or less and boots in seconds, and all six coexist on a single 8 GB node with plenty of headroom. By the end of this guide you'll have a homelab services stack that punches well above its weight.

Key Takeaways

  • Lightweight: Idle memory per container ranges from 25 MB to 450 MB; all six together idle at around 1 GB combined.
  • Unprivileged by default: All six run as unprivileged LXC containers — Jellyfin needs two cgroup device lines and a /dev/dri mount entry for hardware transcoding, nothing more.
  • Order matters: Deploy Pi-hole first so every subsequent container can use it for local DNS immediately.
  • Docker optional: Pi-hole and Jellyfin run as native systemd services, Vaultwarden and Uptime Kuma need only a single docker run command, and only NPM and Authentik benefit from Docker Compose.
  • Minimum hardware: A node with 8 GB RAM and 60 GB SSD handles the full stack comfortably with room for Proxmox backups.

Why LXC Instead of Full VMs for These Workloads

LXC containers share the host kernel, which means sub-second boot times and near-zero overhead versus a full KVM VM. A Pi-hole VM running Debian idles at around 350 MB RAM just for the OS layer. The same workload in an LXC container uses 70 MB. For services that spend most of their life waiting — DNS resolvers, uptime checkers, password vaults — that gap is the difference between fitting six services on a 4 GB node or needing 16 GB.

The tradeoff: LXC containers share the host kernel, so a kernel-level exploit could theoretically affect other containers on the same host. For most homelab threat models this is acceptable. If you want the full isolation picture, the guide on running Docker inside LXC containers on Proxmox covers exactly where the isolation boundary sits and when a full VM is warranted instead.

How to Create LXC Containers from the CLI

All six containers follow the same creation pattern. Pull the Debian 12 template first:

pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

Then create and start with pct:

pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pihole \
  --memory 256 \
  --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.10/24,gw=192.168.1.1 \
  --storage local-lvm \
  --rootfs local-lvm:4 \
  --unprivileged 1 \
  --start 1

Adjust --memory, --rootfs, the CT ID, and the IP for each service. The specs below are from a production homelab running Proxmox VE 9.1 — not theoretical minimums.
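
For example, the Nginx Proxy Manager container from the sections below would look like this (same flags; only the CT ID, hostname, sizing, and IP change, and the hostname here is an arbitrary choice matching the NPM specs later in this guide):

pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname npm \
  --memory 512 \
  --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.11/24,gw=192.168.1.1 \
  --storage local-lvm \
  --rootfs local-lvm:8 \
  --unprivileged 1 \
  --start 1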

Container 1: Pi-hole — Network-Wide DNS Filtering

CT ID: 200 | RAM: 256 MB | Disk: 4 GB | IP: 192.168.1.10

Pi-hole on Debian 12 in an unprivileged LXC is the foundation of the entire stack. Every other container and LAN client points to it for DNS, so deploy this one first.

pct exec 200 -- bash -c "apt update && apt install -y curl"
pct exec 200 -- bash -c "curl -sSL https://install.pi-hole.net | bash"

After the installer exits, set the web admin password:

pct exec 200 -- pihole -a -p yourpassword

Gotcha: Pi-hole's installer configures eth0 as the listening interface and complains if the container's DNS already resolves to localhost. Before running the installer, check /etc/resolv.conf inside the container and temporarily set an upstream like 1.1.1.1. Switch it back to 127.0.0.1 after Pi-hole is running. The admin UI lands at http://192.168.1.10/admin.
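
The temporary upstream swap can be done from the Proxmox host in two one-liners; this is a minimal sketch that assumes the container's /etc/resolv.conf is a plain file and not managed by a resolvconf service:

pct exec 200 -- bash -c "echo 'nameserver 1.1.1.1' > /etc/resolv.conf"
# ... run the Pi-hole installer ...
pct exec 200 -- bash -c "echo 'nameserver 127.0.0.1' > /etc/resolv.conf"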

Container 2: Nginx Proxy Manager — SSL Reverse Proxy

CT ID: 201 | RAM: 512 MB | Disk: 8 GB | IP: 192.168.1.11 (needs ports 80 and 443)

Nginx Proxy Manager gives you a web GUI for SSL termination, Let's Encrypt auto-renewal, and subdomain routing to internal services. This is the one container in the list where Docker Compose pays off — the NPM image is significantly easier to update than a manual nginx + certbot setup. For a broader look at managing Docker workloads on Proxmox, the guide on managing Docker on Proxmox with Portainer and Dockge covers the tooling that complements NPM.

Inside the container:

apt update && apt install -y docker.io docker-compose-plugin
mkdir -p /opt/npm && cd /opt/npm

Create /opt/npm/docker-compose.yml:

services:
  npm:
    image: jc21/nginx-proxy-manager:2.12.1
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Then bring the stack up:

docker compose up -d

Default credentials: admin@example.com / changeme. Change both immediately on first login at http://192.168.1.11:81.

Gotcha: Pi-hole and NPM must be on different static IPs. They don't share ports, but assigning both to 192.168.1.10 is a common mistake that causes maddening DNS resolution failures. Give NPM its own IP from the start.

Container 3: Vaultwarden — Self-Hosted Password Manager

CT ID: 202 | RAM: 256 MB | Disk: 4 GB | IP: 192.168.1.12

Vaultwarden is a Bitwarden-compatible server written in Rust. It handles the full Bitwarden client API — browser extensions, mobile apps, the desktop client — at under 25 MB resident memory at idle. The official Bitwarden server requires 2 GB RAM minimum; Vaultwarden replaces it entirely for personal or small-team use.

apt update && apt install -y docker.io

Generate an Argon2 admin token before starting the container:

docker run --rm -it vaultwarden/server:1.32.0 /vaultwarden hash --preset owasp

Copy the $argon2id$... output, then start the container:

docker run -d \
  --name vaultwarden \
  --restart unless-stopped \
  -e ADMIN_TOKEN='$argon2id$v=19$m=65540,t=3,p=4$YOURTOKEN' \
  -v /opt/vaultwarden/data:/data \
  -p 8080:80 \
  vaultwarden/server:1.32.0

Gotcha: Bitwarden clients require HTTPS. Vaultwarden over plain HTTP works only on localhost — mobile app syncs fail silently over HTTP on a LAN IP without any useful error message. You must proxy it through Nginx Proxy Manager with a valid Let's Encrypt cert before pointing any clients at it. Add a proxy host in NPM for vault.yourdomain.com → 192.168.1.12:8080 before importing any passwords.
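
A quick sanity check from any LAN machine before importing passwords, just to confirm the NPM proxy host and certificate are in place (vault.yourdomain.com stands in for whatever domain you configured):

curl -sSI https://vault.yourdomain.com | head -n 1
# expect an HTTP 200 response and no certificate error; a TLS failure here
# usually means NPM is still serving its default self-signed certificate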

Timing: From container creation to first successful sync with the Bitwarden browser extension takes under 90 seconds once DNS and SSL are configured.

Container 4: Uptime Kuma — Service Monitoring Dashboard

CT ID: 203 | RAM: 256 MB | Disk: 4 GB | IP: 192.168.1.13

Uptime Kuma monitors HTTP endpoints, TCP ports, DNS records, and ping targets, then alerts you via Telegram, Discord, SMTP, or webhooks when something goes down. It also generates a public status page — useful if you're running services for family members or a small team.

apt update && apt install -y docker.io
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -v /opt/uptime-kuma:/app/data \
  -p 3001:3001 \
  louislam/uptime-kuma:1.23.16

The web UI is at http://192.168.1.13:3001. Add monitors for Pi-hole, NPM, Vaultwarden, and anything else you've deployed — the whole point of this container is to catch failures before your users do.

Gotcha: Uptime Kuma stores everything in SQLite. If the 4 GB rootfs fills up — which happens if you enable verbose logging and forget about it — the database stops writing and you lose monitoring history with no obvious error in the UI. Check disk usage monthly with df -h inside the container and make sure Proxmox has a backup job scheduled for this CT.
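
Both checks can be run from the Proxmox host without logging into the container; this assumes the Docker volume path used in the run command above:

pct exec 203 -- df -h /
pct exec 203 -- du -sh /opt/uptime-kuma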

Container 5: Jellyfin — Self-Hosted Media Server

CT ID: 204 | RAM: 1024 MB | Disk: 8 GB root + media bind mount | IP: 192.168.1.14

Jellyfin is the only container in this list that needs more than 512 MB RAM — library scanning on large collections briefly spikes to 1.2 GB. The rootfs stays lean at 8 GB because the actual media lives on the host or NAS, mounted into the container as a bind mount.

Add the bind mount in /etc/pve/lxc/204.conf before starting the container:

mp0: /mnt/nas/media,mp=/media
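
If you'd rather not edit the config file by hand, pct set writes the same entry for you:

pct set 204 --mp0 /mnt/nas/media,mp=/media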

Then install Jellyfin 10.10.x (current stable as of April 2026):

curl https://repo.jellyfin.org/install-debuntu.sh | bash
systemctl enable --now jellyfin

For Intel Quick Sync hardware transcoding in an unprivileged LXC, add these lines to /etc/pve/lxc/204.conf:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

Then add the jellyfin user to the render and video groups inside the container and restart:

usermod -aG render,video jellyfin
systemctl restart jellyfin
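
To confirm the device passthrough took effect, check from the Proxmox host that the render nodes are visible inside the container (device names may differ slightly on your hardware):

pct exec 204 -- ls -l /dev/dri
# expect card0 and renderD128; an empty directory means the lxc.cgroup2 and
# lxc.mount.entry lines were not applied — re-check 204.conf and restart the CT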

Gotcha: In an unprivileged LXC, the jellyfin user (UID 999 inside the container) maps to UID 100999 on the host. Your bind-mounted media directory needs to be readable by UID 100999. Fix it with chown -R 100999:100999 /mnt/nas/media on the host, or use ACLs if the mount is shared with other services.
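
If the media share is also consumed by other services (an SMB export, a backup job) and a blanket chown is too invasive, an ACL grants read access to the mapped UID without changing ownership. A minimal sketch, assuming the underlying filesystem supports POSIX ACLs:

# run on the Proxmox host; grants read (and traverse on directories) to the mapped jellyfin UID
setfacl -R -m u:100999:rX /mnt/nas/media
# default ACL so newly added media inherits the same access
setfacl -R -d -m u:100999:rX /mnt/nas/media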

Container 6: Authentik — Self-Hosted Identity Provider

CT ID: 205 | RAM: 1024 MB | Disk: 10 GB | IP: 192.168.1.15

Authentik is a self-hosted identity provider that adds SSO, OAuth2, LDAP, and SAML to your homelab. Once running, Nginx Proxy Manager can forward authentication to Authentik before proxying any service — meaning Vaultwarden, Jellyfin, and Uptime Kuma all sit behind a single login page without modifying those applications at all.

Authentik requires PostgreSQL and Redis, making Docker Compose the only sane choice here:

apt update && apt install -y docker.io docker-compose-plugin
mkdir /opt/authentik && cd /opt/authentik

Download the official compose file from the Authentik documentation, then generate secrets:

echo "PG_PASS=$(openssl rand -base64 36 | tr -d '=+/')" >> .env
echo "AUTHENTIK_SECRET_KEY=$(openssl rand -base64 60 | tr -d '=+/')" >> .env
echo "AUTHENTIK_ERROR_REPORTING__ENABLED=false" >> .env
docker compose pull && docker compose up -d

First startup takes 2–3 minutes while Authentik runs database migrations. Complete setup at http://192.168.1.15:9000/if/flow/initial-setup/.

Gotcha: The default Authentik compose file uses latest image tags. Pin to a specific release (e.g., ghcr.io/goauthentik/server:2024.12.3) before your first pull. Authentik occasionally ships breaking API changes between minor versions, and an unattended docker compose pull on the wrong day will break SSO for every proxied service simultaneously.
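
If your compose file follows the upstream convention of reading the image tag from the environment, pinning is a one-line addition to the same .env file (adjust the tag to whatever release you actually deployed):

echo "AUTHENTIK_TAG=2024.12.3" >> .env
docker compose pull && docker compose up -d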

Worth the complexity? Only if you have five or more services to protect. For a two-container setup, HTTP Basic Auth through NPM is sufficient. Authentik pays off when you want audit logs, TOTP enforcement, and a single logout that propagates across all services at once.

Resource Planning: Running All Six on One Node

Container             RAM Allocated   RAM at Idle   vCPUs   Disk
Pi-hole               256 MB          70 MB         1       4 GB
Nginx Proxy Manager   512 MB          180 MB        1       8 GB
Vaultwarden           256 MB          25 MB         1       4 GB
Uptime Kuma           256 MB          90 MB         1       4 GB
Jellyfin              1024 MB         220 MB        2       8 GB + media
Authentik             1024 MB         450 MB        2       10 GB
Total                 3328 MB         ~1035 MB      8       38 GB

An 8 GB node handles the full stack with headroom. On a 4 GB node, drop Authentik — it's the heaviest container and the least essential for a basic homelab. For guidance on node selection, storage layout, and networking to support this kind of infrastructure, the guide on building a private cloud at home with Proxmox VE covers the hardware decisions that set the foundation.
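
To see where your own idle numbers land, a quick loop on the Proxmox host reports used memory per container (CT IDs as assigned above):

for id in 200 201 202 203 204 205; do
  printf 'CT %s: ' "$id"
  pct exec "$id" -- free -m | awk '/^Mem:/ {print $3 " MB used"}'
done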

Security Hardening for the Container Stack

All six containers handle sensitive data. A few non-negotiable steps before you consider this stack production-ready:

  • Restrict admin ports: Lock ports 81 (NPM admin), 3001 (Uptime Kuma), and 9000 (Authentik) to your LAN subnet using Proxmox firewall rules; a minimal rule file is sketched after this list. The Proxmox firewall and SSH hardening guide covers datacenter-level rules that apply at the container level without touching iptables manually.
  • Schedule backups: All six containers store persistent state on disk. Configure a Proxmox backup job for each CT in Datacenter → Backup, targeting PBS or an NFS share. Weekly retention of two backups is the minimum viable safety net.
  • Pin versions: Vaultwarden and Authentik are security-critical. Never run latest tags — pin to a specific version and update deliberately after reading the changelog.
  • Enable TOTP: Pi-hole, NPM, Uptime Kuma, and Authentik all support TOTP. Enable it on each admin account before exposing any service through NPM to the internet.
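
A minimal sketch of a per-container firewall file for the NPM admin port, saved as /etc/pve/firewall/201.fw. It assumes your LAN is 192.168.1.0/24 and that the firewall checkbox is enabled on the container's network device; the same pattern applies to CTs 203 and 205 with their respective ports:

[OPTIONS]
enable: 1
policy_in: ACCEPT

[RULES]
# rules match top-down: allow port 81 from the LAN, drop it from everywhere else
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 81
IN DROP -p tcp -dport 81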

Conclusion

These six LXC containers — Pi-hole, Nginx Proxy Manager, Vaultwarden, Uptime Kuma, Jellyfin, and Authentik — give you a complete homelab services layer that runs comfortably on a single 8 GB Proxmox VE 9.1 node. Deploy them in order: DNS first, reverse proxy second, then everything else behind it. Once Authentik is in place, the natural next step is wiring its forward-auth middleware into each NPM proxy host — a 10-minute configuration that replaces per-service login prompts with a single SSO portal covering your entire homelab.


Written by

Proxmox Pulse

Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.
