Managing Docker on Proxmox with Portainer and Dockge
Learn how to run Portainer and Dockge in Proxmox LXC containers for a full web-based Docker GUI—covering installation, reverse proxy, and multi-host management.
Running Docker containers on Proxmox without any GUI is fine—until you're managing a dozen stacks across two nodes at midnight. Portainer and Dockge solve this in very different ways, and both run beautifully inside lightweight Proxmox LXC containers. This guide walks you through deploying each tool in its own dedicated container, wiring them together behind a reverse proxy, and using Portainer's agent to manage Docker hosts across your entire homelab from a single browser tab.
Why Separate LXC Containers for Each Tool
A common instinct is to cram everything into one "Docker LXC" and call it a day. That works until you want to update Portainer without touching your stacks, or you want Dockge on one node without giving it visibility into another node's containers.
Dedicated LXC containers give you clean isolation: each tool has its own resource limits, its own network configuration, and its own upgrade lifecycle. An LXC container on Proxmox consumes roughly 50–150 MB of RAM at idle, so the overhead is trivial compared to the operational clarity you gain.
The architecture we're building looks like this:
- portainer-lxc — Hosts the Portainer CE server (the central management UI)
- dockge-lxc — Hosts Dockge (compose-stack-focused UI)
- docker-lxc-1, docker-lxc-2, etc. — Your actual Docker workload containers with Portainer Agent installed
- A reverse proxy (Nginx Proxy Manager or Caddy) routes portainer.yourdomain.local and dockge.yourdomain.local to the right containers
Creating the Portainer LXC Container
Start from a recent Debian 12 or Ubuntu 22.04 LXC template. On the Proxmox node shell, download the template if you don't have it:
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
Create the container via the UI or CLI. The key settings:
- Memory: 512 MB minimum, 1 GB recommended
- Disk: 8 GB (Portainer's data volume is small, but give it room)
- CPU: 1–2 cores
- Network: Static IP on your management VLAN, e.g. 192.168.10.20/24
- Unprivileged: Yes — Portainer's server container does not need host-level privileges
Using the CLI:
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
--hostname portainer-lxc \
--memory 1024 \
--cores 2 \
--net0 name=eth0,bridge=vmbr0,ip=192.168.10.20/24,gw=192.168.10.1 \
--storage local-lvm \
--rootfs local-lvm:8 \
--unprivileged 1 \
--start 1
Install Docker in the Portainer LXC
Shell into the container and install Docker:
pct enter 200
apt update && apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update && apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
systemctl enable --now docker
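While you're in here, it's worth capping container log growth so a chatty container can't fill the 8 GB root disk. A minimal /etc/docker/daemon.json sketch using Docker's default json-file log driver (the size and file counts are suggestions, not requirements):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart Docker (systemctl restart docker) after creating the file; the limits apply to newly created containers.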
Deploy Portainer CE
Portainer stores its data in a named volume. Deploy it with a single docker run:
docker volume create portainer_data
docker run -d \
  --name portainer \
  --restart=always \
  -p 8000:8000 \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
Port 9443 is the HTTPS UI. Port 8000 is used by Portainer Agent connections from remote hosts. Access the UI at https://192.168.10.20:9443 and complete the initial admin setup.
Tip: The first-run setup screen times out after a few minutes. If you see a timeout error, restart the container with docker restart portainer and try again.
Creating the Dockge LXC Container
Dockge is a newer, compose-centric alternative built by the same developer as Uptime Kuma. It's excellent for homelab users who think in docker-compose.yml files rather than individual containers. It does one thing well: manage compose stacks through a clean, reactive UI.
Create a second LXC container for Dockge:
pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
--hostname dockge-lxc \
--memory 512 \
--cores 1 \
--net0 name=eth0,bridge=vmbr0,ip=192.168.10.21/24,gw=192.168.10.1 \
--storage local-lvm \
--rootfs local-lvm:8 \
--unprivileged 1 \
--start 1
Install Docker the same way as above (repeat the Docker CE install steps inside this container), then deploy Dockge:
mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
curl https://dockge.kuma.pet/compose.yaml --output compose.yaml
docker compose up -d
Dockge will be available at http://192.168.10.21:5001. The /opt/stacks directory is where Dockge stores each compose stack as a subdirectory—treat it as the source of truth.
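For reference, the downloaded compose.yaml looks approximately like this (verify against the actual file you fetched—the image tag and defaults may have changed upstream):

```yaml
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - 5001:5001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # Dockge's own settings and database
      - ./data:/app/data
      # Must match on both sides so paths agree inside and outside the container
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
```

The /opt/stacks path appearing identically on the host side and container side of the bind mount is deliberate: Dockge shells out to docker compose with those paths, so they must agree.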
Why Dockge Alongside Portainer
Portainer excels at visibility: containers across multiple hosts, image management, network inspection, and exec access. Dockge excels at compose workflow: writing, editing, and redeploying stacks feel more like editing a file than clicking through a GUI. Many homelabbers run both—Portainer for monitoring and remote access, Dockge for day-to-day stack management on the local Docker host.
Installing Portainer Agent on Docker Hosts
For every Docker LXC container you want Portainer to manage remotely, install the Portainer Agent. Shell into each Docker workload container and run:
docker run -d \
--name portainer_agent \
--restart=always \
-p 9001:9001 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
portainer/agent:latest
Back in the Portainer UI:
- Go to Environments → Add environment
- Select Docker Standalone → Agent
- Enter the container's IP and port 9001, e.g. 192.168.10.30:9001
- Name it something descriptive like docker-workloads-01
Repeat for each Docker LXC host. You'll end up with a single Portainer instance giving you a unified view across your entire lab.
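If you have more than a couple of workload hosts, a small loop saves retyping. This is a hypothetical convenience script, not part of either tool: AGENT_HOSTS and SSH-as-root access are assumptions about your environment. It prints the commands by default; set RUN=1 to execute them over SSH.

```shell
# List of workload LXC IPs to deploy the agent onto -- adjust to your lab
AGENT_HOSTS="192.168.10.30 192.168.10.31"

# The same agent deploy command shown above, as one string
AGENT_CMD='docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest'

for host in $AGENT_HOSTS; do
  if [ "${RUN:-0}" = "1" ]; then
    # Assumes key-based SSH access to each workload container
    ssh "root@$host" "$AGENT_CMD"
  else
    echo "would run on $host: $AGENT_CMD"
  fi
done
```

Run it once with the default dry-run to sanity-check the host list, then again with RUN=1.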
Reverse Proxy Setup with Nginx Proxy Manager
Accessing tools by IP and port gets old fast. A reverse proxy with proper hostnames (and optional SSL) makes the workflow much cleaner.
If you're already running Nginx Proxy Manager (NPM) in your lab, adding these two hosts is straightforward. In the NPM UI, create two Proxy Hosts:
Portainer:
- Domain: portainer.home.lab
- Forward hostname: 192.168.10.20
- Forward port: 9443
- Enable Websockets Support
- Enable SSL if you have a wildcard cert
Dockge:
- Domain: dockge.home.lab
- Forward hostname: 192.168.10.21
- Forward port: 5001
- Enable Websockets Support (Dockge uses WebSockets for live terminal output)
Add both hostnames to your local DNS (Pi-hole, AdGuard Home, or your router's DNS override). Both tools will now be accessible at clean URLs with no port numbers.
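On Pi-hole or any dnsmasq-based resolver, the overrides are two lines (using the example IPs from this guide); in Pi-hole you can drop them into a file under /etc/dnsmasq.d/ or use Local DNS Records in the web UI:

```
address=/portainer.home.lab/192.168.10.20
address=/dockge.home.lab/192.168.10.21
```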
Caddy Alternative
If you prefer Caddy, add this to your Caddyfile:
portainer.home.lab {
    reverse_proxy https://192.168.10.20:9443 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}

dockge.home.lab {
    reverse_proxy 192.168.10.21:5001
}
The tls_insecure_skip_verify is needed because Portainer uses a self-signed cert by default. You can eliminate this by providing your own cert to Portainer at startup.
Managing Compose Stacks in Dockge
Dockge's main strength is its compose stack editor. To deploy a new stack:
- Click + (New Stack) in the Dockge UI
- Give the stack a name—this becomes the subdirectory under /opt/stacks/
- Paste or write your docker-compose.yml
- Click Deploy
Dockge stores each stack as a real file on disk:
/opt/stacks/
├── homer/
│   └── compose.yaml
├── vaultwarden/
│   └── compose.yaml
└── uptime-kuma/
    └── compose.yaml
This means you can also manage stacks by editing the files directly via SSH and clicking Sync in the Dockge UI. Git-backing the /opt/stacks/ directory for version control is a popular pattern.
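If you do put /opt/stacks under git, exclude the runtime data that bind mounts create next to each compose.yaml, so the repo tracks only the stack definitions. A sketch of a /opt/stacks/.gitignore—the directory names are examples based on common conventions (like the ./vw-data mount below), adjust to your own stacks:

```
# Keep compose files, ignore container runtime data
*-data/
data/
```

After that, git init, git add ., and git commit inside /opt/stacks work as usual, and every Dockge edit shows up as a clean diff.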
Example: Deploying Vaultwarden via Dockge
In the Dockge stack editor, paste:
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: unless-stopped
volumes:
- ./vw-data:/data
environment:
DOMAIN: "https://vault.home.lab"
SIGNUPS_ALLOWED: "false"
ports:
- "8080:80"
Click Deploy. Dockge shows live log output during the pull and startup. Once running, you'll see the container status, uptime, and a direct link to its logs—all without leaving the UI.
Keeping Containers Updated
Portainer Watchtower Integration
Portainer has a built-in image update check under Images. For automated updates, deploy Watchtower into the same LXC as your workloads:
docker run -d \
--name watchtower \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--schedule "0 4 * * *" \
--cleanup
This runs nightly at 4 AM, updates containers to their latest image tag, and removes the old images.
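Some containers should never be auto-updated—databases, or anything intentionally pinned. Watchtower honors a per-container opt-out label; a sketch of it in a compose service (the postgres service here is just an illustration):

```yaml
services:
  postgres:
    image: postgres:16
    labels:
      # Watchtower will skip this container during scheduled runs
      - com.centurylinklabs.watchtower.enable=false
```

Alternatively, run Watchtower with --label-enable to invert the logic: only containers explicitly labeled enable=true get updated.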
Updating Portainer Itself
docker stop portainer
docker rm portainer
docker pull portainer/portainer-ce:latest
# Re-run the original docker run command
Because Portainer's data lives in the named volume portainer_data, your environments, users, and settings survive the update.
Updating Dockge
cd /opt/dockge
docker compose pull
docker compose up -d
Dockge's compose file handles the image tag, so pulling and restarting is all it takes.
Resource Limits and LXC Tuning
By default, Proxmox LXC containers can consume as much CPU as the host allows. Set sensible limits so one runaway container doesn't starve your others:
# Limit Portainer LXC to 2 cores and 1 GB RAM
pct set 200 --cores 2 --memory 1024 --swap 512
# Limit Dockge LXC to 1 core and 512 MB RAM
pct set 201 --cores 1 --memory 512 --swap 256
For production-like environments, also set CPU units (shares) to prioritize critical containers:
pct set 200 --cpuunits 1024 # Higher priority
pct set 201 --cpuunits 512 # Lower priority
Portainer vs Dockge: When to Use Each
Here's a quick breakdown to help you decide which tool to reach for:
| Use Case | Portainer | Dockge |
|---|---|---|
| Multi-host container overview | ✅ | ❌ |
| Compose stack management | ✅ (decent) | ✅ (excellent) |
| Container exec / terminal | ✅ | ✅ |
| Live log streaming | ✅ | ✅ |
| Image management | ✅ | ❌ |
| Network / volume inspection | ✅ | ❌ |
| Simple, fast UI | ❌ (complex) | ✅ |
| File-based stack storage | ❌ | ✅ |
The short answer: use Portainer when you need centralized visibility across many Docker hosts. Use Dockge when you want a fast, compose-native workflow on a single host. Running both costs about 700 MB RAM total—well worth it for the ergonomics.
Troubleshooting Common Issues
Portainer Agent Connection Refused
If Portainer can't reach a remote agent, check the firewall rules on the workload LXC. The agent listens on port 9001:
# On the workload LXC, verify the agent is listening
ss -tlnp | grep 9001
# On the Portainer LXC, test connectivity
curl -v http://192.168.10.30:9001
Also verify Proxmox firewall isn't blocking inter-LXC traffic on your bridge. In the Proxmox firewall, check that the LXC containers are in the same security group or that the relevant ports are open.
Dockge WebSocket Disconnects Behind Reverse Proxy
Dockge's terminal and log views require WebSocket connections. If you see repeated disconnects through Nginx Proxy Manager, ensure the proxy host has Websockets Support enabled. If using Nginx manually, add:
location / {
    proxy_pass http://192.168.10.21:5001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Docker Socket Permission Errors in Unprivileged LXC
If Docker complains about socket permissions in an unprivileged LXC, add the following to the container's config on the Proxmox host:
# In /etc/pve/lxc/200.conf
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
Restart the container after editing. This relaxes AppArmor restrictions that can interfere with Docker's cgroup management in nested environments.
Conclusion
Deploying Portainer and Dockge in dedicated Proxmox LXC containers gives you a clean, maintainable Docker management layer without the overhead of full VMs. Portainer handles the big picture—multiple hosts, image inspection, environment-wide visibility—while Dockge handles the day-to-day compose workflow with a fast, file-native UI. Together, they cover everything a homelab or small production environment needs.
The setup described here scales well: add more Docker workload LXC containers and register them as Portainer environments without touching anything else. Your management plane stays stable while your workload layer grows. With a reverse proxy in front and proper resource limits set on each LXC, you have a production-grade container management stack running on hardware that would struggle to run a single Windows VM.