Proxmox LXC Resource Limits: CPU, Memory, and Disk I/O
Set CPU, memory, and disk I/O limits on Proxmox LXC containers using cgroups v2. Real pct commands, hook scripts, and hard-learned pitfalls — most apply live without a restart.
Setting resource limits on Proxmox LXC containers is one of those tasks that pays dividends the first time a bulk backup job, a Nextcloud sync, or a rogue cron script saturates your host. By the end of this guide you'll know exactly which pct options map to which cgroup v2 controls, how to apply most of them live without a container restart, and which edge cases catch people off guard on Proxmox VE 9.1.
Key Takeaways
- CPU limit vs. CPU units: --cpulimit caps absolute CPU time (2.0 = max 2 physical core-equivalents); --cpuunits controls relative scheduling priority between containers under contention.
- Memory is a hard wall: When a container hits its --memory ceiling, the OOM-killer fires — not graceful throttling. Set limits with headroom.
- Live application: --cpulimit, --memory, and --swap take effect immediately with pct set — no container restart needed.
- cgroups v2 paths changed: Proxmox VE 8+ uses the unified cgroups v2 hierarchy. I/O throttling uses io.max, not the v1 blkio.throttle.* paths you'll find in older tutorials.
- Measure before you cap: pct monitor <ctid> shows real-time consumption; tune from data, not guesses.
How Proxmox LXC Resource Controls Work Under the Hood
LXC containers on Proxmox are namespaced processes sharing the host kernel — no hypervisor overhead, but also no hardware isolation. Resource limits are enforced entirely by Linux cgroup v2. When you run pct set 101 --cpulimit 2, Proxmox writes to /sys/fs/cgroup/lxc/101/cpu.max and the kernel scheduler does the rest.
Each container gets its own cgroup slice. On Proxmox VE 9.1 you can inspect the full hierarchy:
# Inspect the cgroup tree for container 101
systemd-cgls /sys/fs/cgroup/lxc/101
Three resource classes matter for most workloads:
- CPU — absolute quota and relative scheduling weight
- Memory — hard ceiling, swap budget, and soft pressure hints
- Block I/O — bandwidth and IOPS caps per device
All three can be configured persistently via pct set. Changes survive reboots because Proxmox writes them back to /etc/pve/lxc/<ctid>.conf. The one exception is I/O throttling — direct cgroup writes don't survive container restarts, which is why hook scripts matter.
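As a point of reference, after a few pct set calls the resource-related lines in /etc/pve/lxc/101.conf might look something like this (the values here are purely illustrative):
# Excerpt from /etc/pve/lxc/101.conf (illustrative values)
cores: 2
cpulimit: 2
cpuunits: 1024
memory: 2048
swap: 512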
How to Set CPU Limits on LXC Containers
The Difference Between Cores, CPU Limit, and CPU Units
These three settings look similar in the Proxmox UI but control completely different scheduler knobs:
| Option | cgroup v2 knob | What it actually does |
|---|---|---|
| --cores N | cpuset.cpus | Container sees N vCPU threads |
| --cpulimit N | cpu.max | Hard cap: N × 100% of one physical core |
| --cpuunits N | cpu.weight | Relative scheduling priority (Proxmox default on cgroup v2: 100) |
--cpulimit is the ceiling that actually prevents CPU saturation. Setting it to 2.0 means the container can consume at most 200% CPU — two full core-equivalents of wall-clock time — regardless of how many cores the host has or how many the container can see via --cores.
# Cap container 101 to 1.5 cores worth of CPU — applies immediately
pct set 101 --cpulimit 1.5
# Lower scheduling priority of a bulk-processing background container
pct set 102 --cpuunits 256
# Pin a latency-sensitive container to 2 threads with a matching hard cap
pct set 103 --cores 2 --cpulimit 2
Gotcha from experience: On a 16-core host with four containers each set to --cores 4, all four can simultaneously peg all their cores if no --cpulimit is configured. I watched a Jellyfin container doing 4K transcodes bring a database container to its knees this way. Always pair --cores with --cpulimit for workloads you don't fully trust.
Verifying the CPU Limit Took Effect
# cpu.max format is: quota period (in microseconds)
# 150000 100000 = 150ms per 100ms window = 1.5 cores
cat /sys/fs/cgroup/lxc/101/cpu.max
# Or read directly from the Proxmox config
pct config 101 | grep cpu
The cpu.stat file shows cumulative throttled time — useful for confirming a container is actually hitting its limit:
cat /sys/fs/cgroup/lxc/101/cpu.stat | grep throttled
# throttled_usec 14230891 ← non-zero means the cap is being enforced
Configuring Memory and Swap in Proxmox LXC
Memory configuration uses two knobs that work together:
# Give container 105 2 GB RAM and 512 MB swap
pct set 105 --memory 2048 --swap 512
The part that trips people up: --swap is additive, not a total budget. The container above gets 2048 MB RAM plus 512 MB of swap space on top — 2560 MB of total virtual memory. If you set --memory 2048 --swap 0, the container has exactly 2048 MB and no swap at all.
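The additive behavior is visible in the cgroup itself: cgroup v2 tracks the swap budget (memory.swap.max) separately from the RAM ceiling (memory.max). A quick check for the container above, assuming it is running:
# RAM ceiling and separate swap budget for container 105
cat /sys/fs/cgroup/lxc/105/memory.max | numfmt --to=iec        # expect ~2.0G
cat /sys/fs/cgroup/lxc/105/memory.swap.max | numfmt --to=iec   # expect ~512M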
For latency-sensitive workloads like databases, disable swap entirely:
# Database container: 4 GB hard limit, no swap, no latency spikes from swapping
pct set 106 --memory 4096 --swap 0
Why the OOM-Killer Fires Instead of Throttling
When a container hits its memory ceiling, the Linux kernel doesn't pause it or throttle allocations — it runs the OOM-killer and terminates the process with the highest oom_score inside that cgroup. For stateless services (nginx, Redis with maxmemory set, Prometheus), this is usually survivable. For PostgreSQL or any workload with a write-ahead log, it can corrupt data mid-write.
The practical rule: set --memory at least 20-30% above your measured working set, and monitor memory.current before tightening:
# Current memory usage for container 106 (includes page cache, not just RSS), human-readable
cat /sys/fs/cgroup/lxc/106/memory.current | numfmt --to=iec
# Example output: 1.8G
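To check whether a limit is already too tight, the oom_kill counter in memory.events records every kill the kernel has performed inside that container's cgroup:
# OOM kills inside container 106 since it started
grep oom_kill /sys/fs/cgroup/lxc/106/memory.events
# oom_kill 0 ← anything non-zero means the ceiling has already bitten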
Soft Memory Pressure with memory.low
Proxmox doesn't expose a soft memory limit in the UI, but cgroup v2's memory.low knob is available directly. Writing to it tells the kernel to evict other containers' pages before touching this container's working set under host memory pressure:
# Protect container 105's working set below 1.5 GB from host reclaim
echo $((1536 * 1024 * 1024)) > /sys/fs/cgroup/lxc/105/memory.low
This is a hint, not a guarantee — but it meaningfully improves behavior when you're running a mix of critical services and background batch jobs on the same host.
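Like the I/O limits below, this is a direct cgroup write, so it disappears on container restart and has to be re-applied — a post-start hook script works for this too. To see what is currently in effect:
# Reclaim-protection threshold currently set for container 105
cat /sys/fs/cgroup/lxc/105/memory.low | numfmt --to=iec
# Example output: 1.5G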
Disk I/O Throttling: The Feature the Proxmox UI Skips
Per-container I/O throttling is absent from the Proxmox VE 9.1 web interface, but cgroup v2's io.max interface is fully functional. Without it, a container running a bulk rsync or a backup agent can saturate your storage bus and cause latency spikes in every other container and VM — the kind of thing that's hard to diagnose after the fact.
# Find the major:minor device number of your storage pool's block device
lsblk -no MAJ:MIN /dev/nvme0n1
# Example output: 259:0
# Cap container 101 to 50 MB/s read, 30 MB/s write, 3000 read IOPS, 1000 write IOPS
echo "259:0 rbps=52428800 wbps=31457280 riops=3000 wiops=1000" \
> /sys/fs/cgroup/lxc/101/io.max
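Reading the file back confirms the kernel accepted the values; each throttled device gets one line:
cat /sys/fs/cgroup/lxc/101/io.max
# 259:0 rbps=52428800 wbps=31457280 riops=3000 wiops=1000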
Direct cgroup writes vanish on container restart. Persist them with a hook script:
# Add this line to /etc/pve/lxc/101.conf
hookscript: local:snippets/iolimit-101.sh
#!/bin/bash
# /var/lib/vz/snippets/iolimit-101.sh
CTID=$1
PHASE=$2
if [ "$PHASE" = "post-start" ]; then
sleep 1 # cgroup needs a moment to initialize
DEVNO=$(lsblk -no MAJ:MIN /dev/pve/vm-${CTID}-disk-0 2>/dev/null || echo "259:0")
echo "${DEVNO} rbps=52428800 wbps=31457280 riops=3000 wiops=1000" \
> /sys/fs/cgroup/lxc/${CTID}/io.max
fi
chmod +x /var/lib/vz/snippets/iolimit-101.sh
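With the hook in place, a stop/start cycle is a quick way to confirm the limit comes back on its own:
# Restart the container, then check that the hook re-applied the cap
pct stop 101 && pct start 101
sleep 2
cat /sys/fs/cgroup/lxc/101/io.max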
Important gotcha: io.max applies to all block I/O the container generates — including reads and writes that go through bind mounts from the host filesystem. If you're running Docker inside an LXC container on Proxmox, all Docker layer pulls and container writes count against the same limit. A 30 MB/s write cap will throttle your Docker image builds too. Size your limits with that in mind.
Older tutorials use the wrong paths: cgroups v1 used blkio.throttle.read_bps_device and blkio.throttle.write_bps_device. Those paths don't exist on Proxmox VE 9.x. If a guide shows those paths, it was written for Proxmox VE 7 or older.
Monitoring Real-Time Resource Usage
Before setting any limits, spend a few minutes watching actual consumption under a real workload. The Proxmox web UI averages over 60-second windows and will miss short bursts entirely.
# Real-time stats for container 101, refreshes every second
pct monitor 101
For a host-wide snapshot of all running containers:
for CTID in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
echo "=== CT ${CTID} ==="
printf " CPU throttled_usec: "
awk '/throttled_usec/ {print $2}' /sys/fs/cgroup/lxc/${CTID}/cpu.stat 2>/dev/null
printf " Memory current: "
cat /sys/fs/cgroup/lxc/${CTID}/memory.current 2>/dev/null | numfmt --to=iec
done
For I/O accounting, io.stat gives cumulative bytes and operations per device:
cat /sys/fs/cgroup/lxc/101/io.stat
# 259:0 rbytes=2048000000 wbytes=512000000 rios=150000 wios=40000 ...
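These counters are cumulative since container start; for a rough live rate, sample twice and divide by the interval (a quick sketch, assuming container 101 and that the first device listed in io.stat is the one you care about):
# Approximate write bandwidth for container 101 over a 10-second window
W1=$(grep -o 'wbytes=[0-9]*' /sys/fs/cgroup/lxc/101/io.stat | head -1 | cut -d= -f2)
sleep 10
W2=$(grep -o 'wbytes=[0-9]*' /sys/fs/cgroup/lxc/101/io.stat | head -1 | cut -d= -f2)
echo "$(( (W2 - W1) / 10 / 1024 / 1024 )) MB/s written"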
A non-zero and growing throttled_usec in cpu.stat confirms a CPU limit is actively being enforced. If you're not seeing throttling but the container still feels slow, the bottleneck is elsewhere — check I/O wait with iostat -x 1 on the host.
Applying Limits in Bulk via the Proxmox API
Managing limits one container at a time with pct set is fine for a handful of containers. For a homelab with a dozen LXCs, or a production cluster with many more, pvesh handles it cleanly:
# Apply a "low-priority background" profile to containers 200 through 209
for CTID in $(seq 200 209); do
pvesh set /nodes/pve/lxc/${CTID}/config \
--cpulimit 0.5 \
--cpuunits 256 \
--memory 512 \
--swap 256
done
pvesh takes the same parameters as pct set and works identically — the difference is that pvesh talks to the Proxmox REST API, so from any cluster node you can target containers on other nodes or embed the calls in automation. If your infrastructure is already managed as code with Ansible playbooks for Proxmox, the community.general.proxmox module accepts cpus, cpuunits, memory, and swap as task parameters — same idempotent workflow, no custom scripting required.
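The read side of the API is just as useful for auditing before you change anything. A small sketch that flags containers on the local node with no CPU cap (the node name pve is an assumption; adjust it for your host):
# Flag containers on this node that have no cpulimit configured
for CTID in $(pct list | awk 'NR>1 {print $1}'); do
  pvesh get /nodes/pve/lxc/${CTID}/config --output-format json \
    | grep -q '"cpulimit"' || echo "CT ${CTID}: no cpulimit set"
done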
Common Pitfalls When Setting LXC Resource Limits
--cpulimit throttles even on an idle host. Once set, the kernel enforces the CPU cap regardless of whether other containers are competing. If you have a weekly report generator that needs to run fast, use --cpuunits to raise its priority instead — that only activates under contention, not when the host is idle.
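For that weekly report generator, the switch from a hard cap to a contention-only priority boost might look like this (container 107 is a hypothetical ID; pct set --delete removes an existing option):
# Remove the hard cap, then raise the scheduling weight instead
pct set 107 --delete cpulimit
pct set 107 --cpuunits 2048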
Swap on ZFS volumes doubles ARC pressure. If your LXC root disk is a zvol on a ZFS pool, container swap I/O goes through ZFS, which creates its own ARC churn. For ZFS-backed containers, --swap 0 is almost always correct — compensate with sufficient --memory instead.
Limits don't cover in-kernel work done on behalf of the container. If your container generates heavy NFS traffic or triggers ZFS prefetch, those kernel threads run outside the container's cgroup. You can see a container's CPU limit enforced at 1.0 while the host shows high %sys from ksoftirqd or nfsd. This is expected kernel behavior with no simple workaround — it's a characteristic of the container model, not a bug in your configuration.
Resource limits are also a security boundary. An unprivileged LXC with no CPU or memory cap can run a trivial fork bomb and degrade every other tenant on the host. Setting conservative defaults — even for containers you trust — is a meaningful layer of defense that works alongside the network and access controls covered in Hardening Proxmox VE: Firewall, fail2ban, and SSH Security. This is worth it any time you run more than two or three containers on a host.
Recommended Baseline Settings by Workload
Start from these values and adjust after a week of pct monitor observation:
| Workload | --cpulimit | --cpuunits | --memory | --swap |
|---|---|---|---|---|
| Web server (nginx/Caddy) | 1.0 | 1024 | 512 MB | 256 MB |
| Database (Postgres/MariaDB) | 2.0 | 2048 | 2048 MB | 0 |
| Monitoring (Prometheus) | 1.0 | 768 | 1024 MB | 512 MB |
| Media server (Jellyfin) | 4.0 | 512 | 2048 MB | 1024 MB |
| Backup agent (restic/borgmatic) | 0.5 | 128 | 256 MB | 256 MB |
| Dev/test (low priority) | 0.5 | 256 | 512 MB | 512 MB |
The --cpuunits values are relative — only their ratios matter. A container at 2048 gets twice the scheduler slices of one at 1024 during CPU contention. On an idle host, both run unrestricted regardless of their --cpuunits value.
The backup agent row deserves special attention: backup containers are the most common culprit for host-wide slowdowns. Capping a restic or borgmatic container to 0.5 cores and slow I/O means your automated Proxmox Backup Server jobs finish a bit later — but your production containers stay responsive throughout the backup window.
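As a worked example, the backup-agent row translates into a single command (container 110 is a placeholder ID); combine it with the io.max hook script from the I/O section if the backup traffic hits shared storage:
# Backup agent baseline: half a core-equivalent, low weight, minimal memory
pct set 110 --cpulimit 0.5 --cpuunits 128 --memory 256 --swap 256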
Conclusion
With CPU limits, memory ceilings, and I/O throttling in place, LXC containers become proper tenants on your Proxmox host rather than free-range processes competing for the same resources. The combination of pct set for persistent CPU and memory configuration and hook scripts for I/O throttling covers everything the web UI doesn't expose — and most of it applies live without touching a running workload. The logical next step is measuring the impact over time: wire up per-container cgroup metrics in your monitoring stack so you can see when a container is chronically hitting its CPU quota and needs its limit raised, or when it's consistently well under and the headroom can be reclaimed.