Proxmox VE 9.1 OCI LXC: Running Any Container Image

Proxmox VE 9.1 introduces OCI-based LXC containers, letting you pull and run any Docker Hub image as a native LXC—no daemon, no Docker runtime needed.

Proxmox Pulse

Proxmox VE 9.1 shipped with one of the most requested features in the homelab community: native OCI-based LXC containers. You can now pull any image from Docker Hub, GitHub Container Registry, or a private registry and run it as a lightweight LXC container—no Docker daemon, no containerd, no Kubernetes. Just a fast, low-overhead container managed directly by Proxmox.

What Is OCI LXC in Proxmox VE 9.1?

OCI stands for Open Container Initiative, the specification that defines how container images are packaged and distributed. When you pull a Docker image, you're pulling an OCI-compliant image. Until Proxmox VE 9.1, the only way to run those images on Proxmox was inside a Docker installation—usually running inside an LXC container itself (the Docker-in-LXC pattern).

Proxmox VE 9.1 changes that by letting pct (the Proxmox container toolkit) consume OCI images directly as the rootfs source. The image is unpacked into the container's filesystem, and the container runs as a native LXC with all the usual Proxmox management features: web UI visibility, resource limits, snapshots, backups, and the Proxmox firewall.

This is a fundamentally different approach from Docker. There's no image layer caching daemon running on your node—just a container process managed by the Linux kernel's LXC subsystem.

Why This Matters for Homelabs

Running Docker inside LXC has always been a workaround. The typical setup involves creating a privileged LXC container, installing Docker inside it, and then running your containers inside that. This adds overhead, complicates networking, and means you're managing two layers of container tooling.

With OCI LXC, you can skip that entirely for most single-service containers. A PostgreSQL database, an Nginx reverse proxy, or a Vaultwarden instance can each run as a direct LXC container with:

  • Resource limits enforced at the hypervisor level
  • Snapshots and backups via Proxmox Backup Server
  • Network configuration through Proxmox's bridge and VLAN system
  • A proper entry in the Proxmox web UI

Prerequisites

Before you start, make sure you're running Proxmox VE 9.1 or later. Check your version with:

pveversion

You'll need internet access from your Proxmox node to pull images, or a local registry mirror if you're running air-gapped. Your node also needs sufficient storage space—most images unpack to 200MB–2GB depending on the base image.

No additional packages are required. The OCI support is built into the proxmox-ve package as of 9.1.
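If you script node provisioning, the version check above can be wrapped in a small guard. This is a sketch only, assuming the usual `pve-manager/X.Y.Z/<hash>` version-string format that `pveversion` prints:

```shell
#!/bin/sh
# Guard a provisioning script on the node's Proxmox VE version.
# Assumes pveversion prints a string like "pve-manager/9.1.2/<hash>".
pve_is_91_or_newer() {
  printf '%s\n' "$1" | grep -qE 'pve-manager/(9\.[1-9]|[1-9][0-9]+\.)'
}

if pve_is_91_or_newer "$(pveversion 2>/dev/null)"; then
  echo "OCI LXC support available"
else
  echo "Upgrade to Proxmox VE 9.1 or later first" >&2
fi
```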

Creating an OCI LXC Container

Using the Web UI

In the Proxmox web UI, click Create CT as you normally would. In the Template field, instead of selecting a local template, type the OCI image reference directly. The format is:

oci://docker.io/library/nginx:latest

The web UI will pull the manifest and begin downloading the image layers when you click Finish.
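One detail that trips people up is the library/ prefix: official Docker Hub images live under the implicit library/ namespace, while user and organization images don't. A tiny helper function (hypothetical, purely to illustrate the naming rule) expands a short name into the full reference form:

```shell
# Expand a short Docker Hub image name into the oci:// reference form.
# Official images ("nginx") live under the implicit "library/" namespace;
# names containing a slash ("vaultwarden/server") are used as-is.
oci_ref() {
  case "$1" in
    */*) printf 'oci://docker.io/%s:%s\n' "$1" "${2:-latest}" ;;
    *)   printf 'oci://docker.io/library/%s:%s\n' "$1" "${2:-latest}" ;;
  esac
}

oci_ref nginx                    # oci://docker.io/library/nginx:latest
oci_ref vaultwarden/server 1.33  # oci://docker.io/vaultwarden/server:1.33
```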

Using the Command Line

The pct create command accepts OCI references as the storage source:

pct create 200 oci://docker.io/library/nginx:latest \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --hostname nginx-proxy \
  --cores 2 \
  --memory 512 \
  --ostype unmanaged \
  --unprivileged 1

Key flags here:

  • oci://docker.io/library/nginx:latest — the OCI image reference
  • --ostype unmanaged — tells Proxmox not to apply OS-specific guest configurations
  • --unprivileged 1 — run as unprivileged where possible (recommended)
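Once created, a quick sanity check with the standard pct tooling (using VMID 200 from the example above) confirms the container came up:

```shell
# Start the new container and sanity-check it (VMID 200 from above).
pct start 200
pct status 200        # should report: status: running
pct config 200        # review the config Proxmox generated from the image
```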

For images on GitHub Container Registry:

pct create 201 oci://ghcr.io/paperless-ngx/paperless-ngx:latest \
  --rootfs local-lvm:10 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --hostname paperless \
  --cores 4 \
  --memory 1024 \
  --ostype unmanaged

Authenticated Registries

For private registries or images that require authentication, pass credentials via environment before running pct:

# Set credentials for Docker Hub (avoids pull rate limiting)
export OCI_REGISTRY_AUTH='{"docker.io": {"username": "myuser", "password": "mytoken"}}'

pct create 202 oci://docker.io/myorg/myapp:v2.1 \
  --rootfs local-lvm:5 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --hostname myapp \
  --cores 2 \
  --memory 512

Configuring the Container

OCI containers run the single process defined in the image's CMD or ENTRYPOINT by default. Proxmox respects this automatically. After creating the container, inspect the generated config:

cat /etc/pve/lxc/200.conf

To override the default startup command, add to the config file:

lxc.init.cmd: /usr/local/bin/nginx -g "daemon off;"

Environment Variables

Pass environment variables by editing the container config after creation:

cat >> /etc/pve/lxc/201.conf << 'EOF'
lxc.environment: PAPERLESS_URL=https://docs.example.com
lxc.environment: PAPERLESS_SECRET_KEY=your-secret-key-here
lxc.environment: PAPERLESS_DBHOST=192.168.10.20
EOF
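A restart is needed before new lxc.environment lines take effect. You can then confirm them from the host with pct exec, assuming the image ships a standard env binary:

```shell
# Restart so the new lxc.environment entries are applied,
# then list them from inside the container.
pct stop 201
pct start 201
pct exec 201 -- env | grep '^PAPERLESS_'
```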

Persistent Storage with Bind Mounts

OCI containers write to their rootfs by default, which is lost when you recreate the container. For persistence, use bind mounts to host directories:

# Create a data directory on the host
mkdir -p /mnt/data/nginx/conf.d

# Add a bind mount via pct
pct set 200 --mp0 /mnt/data/nginx,mp=/etc/nginx/conf.d

Or add directly to the config file:

mp0: /mnt/data/nginx,mp=/etc/nginx/conf.d
mp1: /mnt/data/nginx/html,mp=/usr/share/nginx/html

This approach separates data from the container rootfs entirely. Upgrades become trivial: stop, destroy, recreate from the updated image, reattach the same mount points, start.
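That upgrade flow can be sketched as a short script. VMID, storage size, and mount point here match the earlier nginx example; adjust them for your setup:

```shell
# Recreate container 200 from the latest image; data survives in the
# bind-mounted host directory, not in the throwaway rootfs.
pct stop 200
pct destroy 200
pct create 200 oci://docker.io/library/nginx:latest \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --hostname nginx-proxy \
  --ostype unmanaged \
  --unprivileged 1
pct set 200 --mp0 /mnt/data/nginx,mp=/etc/nginx/conf.d
pct start 200
```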

Networking: Ports and Service Discovery

OCI containers don't use Docker-style port mapping. The container gets a network interface directly on your Proxmox bridge, so you access services on the container's own IP:

pct set 200 --net0 name=eth0,bridge=vmbr0,ip=192.168.10.50/24,gw=192.168.10.1

Configure the application to listen on its desired port via environment variables or bind-mounted config files. There's no --port 80:80 equivalent—the container sits on the network as a first-class host. That simplifies reverse proxy setups considerably: Caddy or Nginx Proxy Manager points at a real IP address instead of a mapped port.
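For example, with Caddy you just proxy to the container's bridge IP. A minimal sketch, assuming a standard Caddy install on another host (the domain, IP, and Caddyfile path are placeholders matching the example above):

```shell
# Append a minimal Caddyfile site that proxies to the container's IP.
cat >> /etc/caddy/Caddyfile << 'EOF'
app.example.com {
    reverse_proxy 192.168.10.50:80
}
EOF
```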

OCI LXC vs Docker vs Traditional LXC

Feature               | OCI LXC           | Docker-in-LXC    | Traditional LXC
----------------------|-------------------|------------------|------------------
Web UI visibility     | Yes               | Yes (LXC only)   | Yes
Proxmox backups       | Yes               | Yes              | Yes
Snapshots             | Yes               | Yes              | Yes
Layer caching         | No                | Yes              | N/A
Compose support       | No                | Yes              | No
Multi-container apps  | Manual            | docker-compose   | Manual
Resource overhead     | Minimal           | Low              | Minimal
Image ecosystem       | Full OCI registry | Full Docker Hub  | Proxmox templates

The biggest limitation of OCI LXC is the lack of Docker Compose support. If your application is a multi-container stack (app + database + cache), Docker-in-LXC or a dedicated Docker VM is still the better choice. OCI LXC shines for single-service containers where Proxmox-native management is the priority.

Practical Example: Running Vaultwarden

Vaultwarden—the lightweight Bitwarden-compatible password manager—is a perfect OCI LXC candidate. It's a single binary with simple persistence needs:

# Create persistent data directory on the host
mkdir -p /mnt/data/vaultwarden

# Create the container
pct create 210 oci://docker.io/vaultwarden/server:latest \
  --rootfs local-lvm:2 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.10.60/24,gw=192.168.10.1 \
  --hostname vaultwarden \
  --cores 1 \
  --memory 256 \
  --ostype unmanaged \
  --unprivileged 1

# Attach persistent storage
pct set 210 --mp0 /mnt/data/vaultwarden,mp=/data

# Configure via environment variables
cat >> /etc/pve/lxc/210.conf << 'EOF'
lxc.environment: DOMAIN=https://vault.example.com
lxc.environment: WEBSOCKET_ENABLED=true
lxc.environment: SIGNUPS_ALLOWED=false
EOF

# Start it up
pct start 210

Vaultwarden will be listening at http://192.168.10.60:80 immediately after start, with all data persisted to /mnt/data/vaultwarden on the host.
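As hedged follow-ups, you can confirm the service answers and then take a Proxmox-native backup. The vzdump storage name varies per node; "local" is an assumption here:

```shell
# Confirm Vaultwarden responds on the container's IP...
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.10.60/

# ...then back up the whole container with standard Proxmox tooling.
vzdump 210 --mode snapshot --storage local
```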

Troubleshooting Common Issues

Container fails to start with "exec format error": The OCI image was built for a different CPU architecture—for example, an amd64-only image running on an ARM node. Check the image's available platforms on Docker Hub before pulling.

Container exits immediately after start: The image expects required environment variables or volume mounts that aren't configured yet. Use pct console 210 right after start, or check journalctl -u pve-container@210 on the host to capture the process output.

Network not reachable inside the container: For unprivileged containers, manually verify the network config was written correctly in /etc/pve/lxc/200.conf. You should see lxc.net.0.type: veth and lxc.net.0.link: vmbr0.

Image pull throttled by Docker Hub: Docker Hub rate-limits unauthenticated pulls to 100 per 6 hours per IP. Configure registry authentication as shown earlier, or mirror frequently used images to a local Harbor or Gitea registry.

Conclusion

OCI LXC in Proxmox VE 9.1 is a genuine quality-of-life upgrade for homelab administrators who want to run containerized workloads with minimal overhead. By bringing OCI image support directly into the LXC subsystem, Proxmox closes the gap between the convenience of the Docker ecosystem and the management advantages of native Proxmox containers. For single-service workloads—reverse proxies, password managers, monitoring agents, and similar applications—it's now often the simplest and most efficient deployment path available on your Proxmox host.


Written by

Proxmox Pulse

Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.
