Proxmox LXC Bind Mounts: Share Host Paths with Containers
Configure Proxmox LXC bind mounts to share host directories with containers, fix UID/GID mapping in unprivileged containers, and avoid permission pitfalls.
Bind mounts let an LXC container read from and write to a directory that lives on the Proxmox host — same data, no copying, no NFS required. In under ten minutes you can have a container writing logs, media files, or database dumps directly to a host path you control. This guide covers the Proxmox VE 9.1 web UI method, the config-file method, and the UID/GID remapping issue that trips up almost everyone the first time they work with an unprivileged container.
Key Takeaways
- How it works: A bind mount makes a host directory appear at a specific path inside the container — changes made on either side are visible on the other immediately.
- UID/GID shift: Unprivileged containers map container UID 0 → host UID 100000 by default; the host directory must be owned by the shifted UID or writes fail with permission errors.
- Config syntax: Bind mounts appear as `mp0`, `mp1`, etc. in `/etc/pve/lxc/<VMID>.conf` — for example `mp0: /host/path,mp=/container/path`.
- Privileged containers skip the shift: A privileged container avoids UID remapping but trades namespace isolation for convenience.
- Exclude from backups: Add `backup=0` to any large mount point entry to keep media libraries out of `vzdump` archives.
What Are LXC Bind Mounts and When Do You Need Them
LXC containers share the Proxmox host kernel but are isolated by namespaces and cgroups. That makes them fast to start and cheap on RAM, and it also means they can safely access host directories if you configure mount points correctly.
You reach for a bind mount when:
- Multiple containers need access to the same dataset — a media library read by Jellyfin and written by Sonarr simultaneously
- You want application data outside the container rootfs so it survives `pct restore` or `pct destroy`
- You are running Docker inside an LXC container and want the Docker volume data on a host path you can snapshot — exactly the setup described in Running Docker Inside LXC Containers on Proxmox
- You want to snapshot application data independently via ZFS without snapshotting the whole container disk image
A bind mount is not a copy. Write from the container, the host sees the change immediately. Delete a file from the host side, the container loses it. Plan your data layout before you start.
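To see those semantics concretely, here is a quick check you can run from the Proxmox host shell — the VMID 101 and the `mp0: /mnt/data/media,mp=/media` mount are hypothetical placeholders for your own setup:

```shell
# Create a file on the host side of the bind mount
touch /mnt/data/media/visibility-test

# The container sees it immediately — no restart needed
pct exec 101 -- ls -l /media/visibility-test

# Delete it on the host and it vanishes inside the container too
rm /mnt/data/media/visibility-test
pct exec 101 -- ls /media/visibility-test   # fails: No such file or directory
```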
How to Add a Bind Mount from the Proxmox UI
In Proxmox VE 9.1, mount points live in the container's Resources tab.
- Select the container in the left pane, click Resources → Add → Mount Point.
- Set Storage to `Directory` and enter the Host Path — an absolute path to an existing directory on the Proxmox host (e.g., `/mnt/data/media`).
- Set Mount Point to the path where it should appear inside the container (e.g., `/media`).
- Optionally tick Read-only to prevent container writes to the host path.
- Click Add, then restart the container: More → Reboot or run `pct reboot <VMID>` from the shell.
Gotcha: The UI will not create the host directory for you. If the path does not exist on disk, Proxmox will accept the config but the container will fail to start with a vague `lxc-start` error. Create it first:

```shell
mkdir -p /mnt/data/media
```
Configuring Bind Mounts via the LXC Config File
For scripted deployments, editing the config directly is faster and easier to put under version control.
```shell
nano /etc/pve/lxc/101.conf
```
Add a mount point entry at the bottom:
```
mp0: /mnt/data/media,mp=/media
```
Multiple mount points use `mp0`, `mp1`, and so on (Proxmox supports up to `mp255`):
```
mp0: /mnt/data/media,mp=/media
mp1: /mnt/data/config,mp=/config,ro=1
```
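If you prefer not to open an editor at all, `pct set` writes the same entries into the config file and validates the syntax in the process. These commands produce the two lines above:

```shell
# Equivalent to editing /etc/pve/lxc/101.conf by hand
pct set 101 -mp0 /mnt/data/media,mp=/media
pct set 101 -mp1 /mnt/data/config,mp=/config,ro=1
```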
Full set of options supported in Proxmox VE 9.1:
| Option | Example | Effect |
|---|---|---|
| `mp=` | `mp=/data` | Container-side mount path (required) |
| `ro=1` | `ro=1` | Mount read-only inside the container |
| `backup=0` | `backup=0` | Exclude this path from `vzdump` backups |
| `replicate=0` | `replicate=0` | Skip during ZFS or PBS replication |
| `shared=1` | `shared=1` | Mark as cluster-shared storage |
After editing the config, restart the container and verify the mount:
```shell
pct reboot 101
pct exec 101 -- df -h /media
```
The UID/GID Remapping Problem in Unprivileged Containers
This is where almost everyone gets burned the first time. Unprivileged LXC containers use a UID/GID mapping defined in /etc/subuid and /etc/subgid on the host. Proxmox ships with:
```shell
cat /etc/subuid
# root:100000:65536
```
This means container UID 0 (root) maps to host UID 100000, container UID 1000 maps to host UID 101000, and so on. When a container process writes to a bind-mounted host directory, the host kernel sees host UID 101000, not UID 1000. The directory permission check happens against the shifted UID.
Calculating the Mapped Host UID
```shell
# Inside the container, check the running user:
id
# uid=1000(ubuntu) gid=1000(ubuntu)

# On the host, that container UID maps to:
# host_uid = 100000 + container_uid = 101000
```
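The arithmetic above can be wrapped in a tiny helper if you manage several containers — the base 100000 is the second field of the default `root:100000:65536` subuid entry, and would change if you customized that file:

```shell
# Compute the host-side UID for a given container UID under a simple
# linear subuid mapping (base + container_uid)
map_uid() {
  local base=$1 container_uid=$2
  echo $(( base + container_uid ))
}

map_uid 100000 0      # container root  -> 100000
map_uid 100000 1000   # typical app UID -> 101000
map_uid 100000 33     # Debian www-data -> 100033
```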
Option 1: Chown the Host Directory to the Shifted UID
The cleanest fix — change ownership on the host to the mapped UID:
```shell
chown -R 101000:101000 /mnt/data/media
```
Container processes running as UID 1000 can now read and write the directory transparently, with no special config beyond the mount point entry itself.
Option 2: Use POSIX ACLs for Shared Paths
ACLs are more surgical when multiple containers or host users need access to the same path:
```shell
# Install acl if not already present (Proxmox host is Debian-based)
apt install acl

# Grant the container's mapped UID read/write/execute access
setfacl -m u:101000:rwx /mnt/data/media

# Default ACL so new files and subdirectories inherit the rule
setfacl -d -m u:101000:rwx /mnt/data/media
```
Option 3: Map Container Root to Host Root
For containers where the container's root user needs to own host files, add a custom UID map to the container config. This is a targeted override, not a global switch:
```
lxc.idmap: u 0 0 1
lxc.idmap: u 1 100001 65535
lxc.idmap: g 0 0 1
lxc.idmap: g 1 100001 65535
```
This maps container UID 0 to host UID 0 while keeping all other UIDs shifted. The host directory just needs to be owned by root:
```shell
chown root:root /mnt/data/special
chmod 755 /mnt/data/special
```
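One prerequisite that is easy to miss: the host's allowed-mapping files must also permit root to map UID/GID 0. A stock Proxmox install only lists the `100000:65536` range in `/etc/subuid` and `/etc/subgid`, so without the lines below the container refuses to start with an idmap-related error:

```shell
# Allow root to map container UID/GID 0 onto host UID/GID 0
echo "root:0:1" >> /etc/subuid
echo "root:0:1" >> /etc/subgid
```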
Security note: Mapping container root to host root reduces namespace isolation — a compromised container root can affect any root-owned bind-mounted path on the host. This is acceptable for trusted internal workloads. For the broader security picture on LXC and Proxmox isolation, see Hardening Proxmox VE: Firewall, fail2ban, and SSH Security.
Bind Mounts in Privileged Containers
Privileged containers (`unprivileged: 0` in the config) have no UID remapping — container UID 1000 is host UID 1000. Standard Unix permissions apply directly:
```shell
chown -R 1000:1000 /mnt/data/media
```
Privileged containers are the simpler path when you are running Docker inside LXC. Docker's overlay2 storage driver needs real root access, so the LXC container must be privileged anyway. In that setup — like the Portainer and Dockge workflow described in Managing Docker on Proxmox with Portainer and Dockge — bind-mounting Docker volume directories from the host just works without any UID arithmetic.
Real-World Configs That Work
Media Library Shared Across Two Containers
Host ZFS dataset /mnt/tank/media mounted read-only into Jellyfin, read-write into Sonarr, with both excluded from vzdump:
```
# /etc/pve/lxc/200.conf (Jellyfin — consumer)
mp0: /mnt/tank/media,mp=/media,ro=1,backup=0

# /etc/pve/lxc/201.conf (Sonarr — writer)
mp0: /mnt/tank/media,mp=/media,backup=0
```

```shell
# Both containers run as UID 1000 internally
chown -R 101000:101000 /mnt/tank/media
```
Docker Data Directory on a Host ZFS Dataset
Run Docker inside a privileged LXC but keep /var/lib/docker on a ZFS dataset you can snapshot independently:
```
# /etc/pve/lxc/300.conf (privileged container: unprivileged: 0)
mp0: /mnt/ssd/docker-data,mp=/var/lib/docker
```

```shell
chown root:root /mnt/ssd/docker-data
chmod 710 /mnt/ssd/docker-data
```
Expect Docker to initialize its overlay2 storage driver the first time the container starts — about 10 to 15 seconds on a fresh ZFS dataset before the daemon comes up.
Read-Only Config Injection
Manage application configs centrally on the host; containers pick up changes on next restart:
```
mp0: /mnt/configs/nginx,mp=/etc/nginx,ro=1
mp1: /mnt/configs/app,mp=/app/config,ro=1
```
This pattern works well for CI/CD pipelines where Ansible or a deploy script writes the host directory and container restarts pull in the new config.
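A minimal deploy step for this pattern might look like the following — the `./nginx/` source directory and the VMID list are placeholders for your own layout:

```shell
# Push the new config to the host-side directory...
rsync -a --delete ./nginx/ /mnt/configs/nginx/

# ...then restart every container that mounts it read-only
for vmid in 200 201; do
  pct reboot "$vmid"
done
```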
Troubleshooting Bind Mount Failures
Container fails to start, errors in the journal
```shell
journalctl -u pve-container@101.service --no-pager | tail -40
```
The most common cause is the host directory not existing or a typo in the config path. Verify:
```shell
ls -la /mnt/data/media
```
Permission denied inside the container
Check the effective ownership from the host:
```shell
ls -lan /mnt/data/media
# Owner UID should match 100000 + container_uid
```
Fix it:
```shell
chown -R 101000:101000 /mnt/data/media
```
Files created inside the container have large UIDs when viewed from the host
Expected behavior for unprivileged containers. UID 101000 on the host is UID 1000 inside the container. Use chown with the mapped UID when you need to manipulate these files from the host side.
Mount not present after pct restore
pct restore rebuilds the container config from the backup archive. Mount point entries added after the backup was taken are not included. Re-add the mp lines to the config manually, or ensure you capture the config file as part of your backup procedure. For a robust backup strategy that covers both containers and host datasets, Automated Backups with Proxmox Backup Server lays out a production-grade approach.
vzdump backups are enormous
A bind-mounted media library can balloon a container backup from 2 GB to 2 TB. Add `backup=0` to the mount point line:

```
mp0: /mnt/tank/media,mp=/media,ro=1,backup=0
```
Back up the host dataset separately via PBS or ZFS send.
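For the ZFS route, the separate backup can be as simple as the sketch below — the dataset name `tank/media`, the snapshot name, and the `backup-host` target are all assumptions to adapt:

```shell
# Snapshot the media dataset on the Proxmox host
zfs snapshot tank/media@nightly

# Stream the snapshot to another ZFS machine (received but left unmounted)
zfs send tank/media@nightly | ssh backup-host zfs receive -u backuppool/media
```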
Conclusion
Bind mounts in Proxmox LXC are straightforward once you have the UID shift internalized: for unprivileged containers, `chown 101000:101000` on the host directory is the fix for nearly every permission error you will encounter. Add `mp0: /host/path,mp=/container/path` to the container config, restart, and verify with `pct exec`. The pattern scales cleanly to a dozen containers sharing the same ZFS datasets. Your next step: put those host directories on a ZFS dataset with hourly snapshots — five minutes of work that gives you point-in-time recovery for all your container data without touching the containers themselves.