Self-Host OpenClaw AI Assistant in Proxmox LXC

Learn how to self-host OpenClaw, the open-source local AI assistant, inside a Proxmox VE LXC container. Step-by-step guide for homelab sysadmins.

Proxmox Pulse
11 min read
OpenClaw AI Assistant LXC Containers Proxmox VE Self-Hosting
Glowing LXC container with AI neural network inside, orbited by chat platform icons in a dark server room.

Your laptop is not a server. It sleeps, it travels, and the moment you close the lid your local AI assistant goes dark along with it. Running OpenClaw on a Proxmox VE node changes that entirely — your homelab AI stays online 24/7, never uploads your conversations or files to a third-party cloud, and costs nothing beyond the electricity your server was already burning. If privacy, true ownership of your data, and a persistent assistant that actually remembers your context are things you care about, this guide will get you running.

Why Run OpenClaw in a Proxmox LXC

Proxmox VE gives you two paths for hosting a service like OpenClaw: a full virtual machine or an LXC container. For a gateway-style service, LXC wins on almost every axis.

LXC containers share the host kernel, so you skip the overhead of a full hypervisor, a duplicated OS boot stack, and redundant memory pages. A well-tuned OpenClaw LXC idles at under 200 MB of RAM, compared to 500 MB or more for an equivalent VM. That headroom matters when your Proxmox node is already juggling Home Assistant, a media server, and a handful of other containers.

Snapshot and backup ergonomics are another win. You can freeze the entire container filesystem in seconds with pct snapshot, roll it back instantly if a ClawHub skill install goes sideways, and schedule vzdump backups to fire at 2 AM without ever touching the guest OS. If you later decide to add a local inference engine like Ollama, you can expand the existing LXC or spin up a second one dedicated to model serving — Proxmox even supports GPU passthrough to LXC containers on recent kernels, so that upgrade path is wide open.

Provisioning the LXC Container

Log in to the Proxmox web UI and pull the latest Debian 12 (Bookworm) LXC template from your local template store or a configured repository. Debian 12 is the safest baseline here — it ships with a recent glibc, NodeSource supports it fully, and the package ecosystem is rock-solid.

A sensible baseline for OpenClaw running against a cloud API (Anthropic, OpenAI, or Google):

  • CPU: 2 cores
  • RAM: 4 GB
  • Disk: 20 GB
  • Network: vmbr0 or your LAN bridge, static IP recommended

If you plan to run local inference inside the same container — Ollama with a 7B parameter model, for example — bump RAM to at least 8 GB and disk to 40 GB. Local models are memory-hungry and you will feel the squeeze fast if you underallocate.
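A nice property of LXC is that you do not have to get sizing right the first time. Assuming the container ID 200 used throughout this guide, you can grow the allocation later from the Proxmox host shell:

```shell
# Grow an existing container's resources instead of over-allocating up front.
pct set 200 --memory 8192 --cores 4   # raise RAM and CPU limits
pct resize 200 rootfs 40G             # grow the root disk to 40 GB (shrinking is not supported)
```

Disk grows take effect immediately; memory and CPU changes apply on the next container start at the latest.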

Create the container from the Proxmox host shell:

pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname openclaw \
  --cores 2 \
  --memory 4096 \
  --swap 512 \
  --rootfs local-lvm:20 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --features nesting=1 \
  --start 1

The --unprivileged 1 flag is the most important security decision you make here. It maps the container's root user to an unprivileged UID on the host, so a container escape does not hand an attacker real root access to your Proxmox node. The --features nesting=1 flag is optional but worth enabling now — if you decide to run Docker-based skills via ClawHub later, you will already have it in place.
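You can confirm both flags landed by dumping the container's configuration from the host:

```shell
# Print the container's config as Proxmox stored it.
pct config 200
# Expect to see, among other lines:
#   unprivileged: 1
#   features: nesting=1
```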

Attach to the running container:

pct exec 200 -- bash

Installing OpenClaw

Start with a clean system update inside the container:

apt update && apt upgrade -y
apt install -y curl ca-certificates git

OpenClaw requires at least Node.js 22.14, and the project recommends Node.js 24. The cleanest way to get it on Debian 12 is via the NodeSource setup script:

curl -fsSL https://deb.nodesource.com/setup_24.x | bash -
apt install -y nodejs
node --version

The version output should read v24.x.x. If you prefer managing Node versions with nvm instead:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 24
nvm use 24

With Node.js in place, run the OpenClaw one-liner installer:

curl -fsSL https://openclaw.ai/install.sh | bash
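If you would rather not pipe a remote script straight into bash — a reasonable instinct on a box you intend to harden — the same installer can be downloaded and inspected first:

```shell
# Safer variant of the one-liner: download, read, then run.
curl -fsSL https://openclaw.ai/install.sh -o /tmp/openclaw-install.sh
less /tmp/openclaw-install.sh        # review what it will do before executing
bash /tmp/openclaw-install.sh
```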

If you prefer going through npm directly:

npm i -g openclaw

Verify the installation completed successfully:

openclaw --version

If the command is not found, make sure /usr/local/bin or your npm global bin path is on PATH. A quick source ~/.bashrc or a fresh shell session usually resolves it.
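If a fresh shell does not fix it, you can locate npm's global bin directory explicitly and append it to PATH yourself:

```shell
# Find where npm puts global binaries and add that directory to PATH.
NPM_BIN="$(npm config get prefix)/bin"
echo "export PATH=\"$NPM_BIN:\$PATH\"" >> ~/.bashrc
source ~/.bashrc
command -v openclaw   # should now print the full path to the binary
```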

First-Run Onboarding

OpenClaw's onboarding wizard handles provider selection, API key entry, and daemon registration in a single interactive session. Run:

openclaw onboard --install-daemon

The --install-daemon flag is what makes this a proper homelab AI deployment — it registers OpenClaw as a systemd service so the gateway starts automatically every time the LXC boots, no manual intervention needed.
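The troubleshooting steps later in this guide pull logs with journalctl -u openclaw, so assuming the daemon registers under that systemd unit name, you can verify it directly inside the container:

```shell
# Confirm the daemon registered with systemd and will start at boot.
systemctl status openclaw --no-pager
systemctl is-enabled openclaw   # should print "enabled"
```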

The wizard will ask which AI provider you want to use:

  • Anthropic — Claude models, strong reasoning and instruction-following
  • OpenAI — GPT-4o and variants
  • Google — Gemini models via AI Studio
  • Local (Ollama) — No API key required, inference runs on your own hardware

For a first-time setup, starting with Anthropic or OpenAI is the fastest path to a working assistant. Paste your API key when prompted. You can add additional providers or switch to a local model at any time from the gateway dashboard.

Once the wizard finishes, verify the gateway is healthy:

openclaw gateway status

If the gateway shows as stopped, restart it and check again:

openclaw gateway restart
openclaw gateway status

The openclaw gateway dashboard command opens a local web UI where you can inspect connected chat channels, active sessions, memory state, and installed skills. Bookmark the dashboard URL — you will be back here often.

Connecting Telegram as Your Control Channel

Of all the chat integrations OpenClaw supports — Telegram, Discord, Slack, Signal, iMessage, WhatsApp, Matrix, and Microsoft Teams — Telegram is the fastest to configure and the most practical for a homelab context. No server infrastructure required, just a bot token.

Open Telegram and start a chat with @BotFather. Send /newbot, follow the two-step prompts to name your bot, and copy the token it returns. It will look something like 1234567890:ABCDEFghijklmnop.

Open the OpenClaw gateway dashboard:

openclaw gateway dashboard

Navigate to Channels → Telegram → Add, paste your bot token, and save. The gateway connects immediately — no restart needed.

Test it by opening your new bot in Telegram and sending a message:

Hello, what can you do?

If the gateway is running correctly you will get a response from your configured AI provider within a few seconds. From this point, your Telegram bot is a full-featured interface to your self-hosted AI assistant — it can browse the web, read and write files on the container filesystem, execute shell commands, maintain persistent memory across conversations, and trigger any skill you install from ClawHub.

Discord and Slack are straightforward second channels to add if you already run homelab infrastructure on those platforms. Both require creating an application in their respective developer portals and pasting a token or webhook URL into the gateway dashboard — the process is well-documented at docs.openclaw.ai/getting-started.

Hardening It for Your Homelab

An AI assistant with shell execution capabilities is a powerful tool. It deserves deliberate hardening before you expose it beyond your local network.

Stay Unprivileged

You already set --unprivileged 1 at creation time. Do not change this. The unprivileged LXC boundary is the most impactful security control you have between OpenClaw and your Proxmox host kernel.

Protect Your API Key

Store your API key only in the configuration file that OPENCLAW_CONFIG_PATH points to. Do not hardcode it in shell scripts, do not drop it in .bashrc, and do not let it appear in any file that could be accidentally committed or shared. OpenClaw also respects OPENCLAW_HOME and OPENCLAW_STATE_DIR if you want to relocate its data directory to a dedicated bind mount or separate disk.
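It is also worth locking down filesystem permissions on that config. The path below is a hypothetical example, not a documented default — substitute whatever OPENCLAW_CONFIG_PATH resolves to in your environment, and note this sketch assumes it points at a single file:

```shell
# Restrict the config (and its directory) to the owning user.
export OPENCLAW_CONFIG_PATH="$HOME/.openclaw/config.yaml"   # example path; use your actual value
chmod 600 "$OPENCLAW_CONFIG_PATH"
chmod 700 "$(dirname "$OPENCLAW_CONFIG_PATH")"
```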

Isolate the Container on Its Own VLAN

Put the OpenClaw LXC on a dedicated VLAN rather than your flat homelab LAN. On Proxmox, assign a VLAN tag to the container's network interface:

pct set 200 --net0 name=eth0,bridge=vmbr0,tag=20,ip=192.168.20.10/24,gw=192.168.20.1

This limits the blast radius if the container is ever compromised — it cannot directly reach your NAS, your router management interface, or other sensitive services without traversing your firewall and its explicit allow rules.
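To make those allow rules explicit, Proxmox's per-container firewall lives in a file named after the container ID. The fragment below is a sketch, not a drop-in policy — the subnet and ports are examples you should adapt to your own network, and the dashboard port in particular depends on your gateway configuration:

```ini
# /etc/pve/firewall/200.fw — per-container Proxmox firewall (sketch)
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: DROP

[RULES]
OUT ACCEPT -p udp -dport 53   # DNS
OUT ACCEPT -p tcp -dport 443  # HTTPS to your AI provider and Telegram's API
# Allow your management VLAN to reach the gateway dashboard;
# replace 8080 with the port your dashboard actually listens on.
IN ACCEPT -source 192.168.20.0/24 -p tcp -dport 8080
```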

Use Tailscale or WireGuard for Remote Access

Do not port-forward the OpenClaw gateway dashboard to the public internet. Instead, install Tailscale inside the container:

curl -fsSL https://tailscale.com/install.sh | sh
tailscale up

You can now reach the dashboard from anywhere in the world over an encrypted mesh tunnel with no open router ports. If you already run a WireGuard VPN for your homelab, a split-tunnel configuration works equally well.

Review Shell Access Scope

OpenClaw can execute shell commands inside the container it runs on. Before pointing it at sensitive mount points or external storage, review the permission scope in the gateway dashboard. Start with access limited to a sandboxed working directory and expand deliberately as you build confidence in how the assistant uses those capabilities.

Backups and Snapshots

One of the genuine pleasures of running a homelab AI inside Proxmox LXC is that your backup story is built in from day one.

Snapshot Before Risky Changes

Before installing a new ClawHub skill, upgrading OpenClaw, or making config changes, take a named snapshot:

pct snapshot 200 pre-skill-install --description "Before installing browser skill"

If anything breaks, rollback takes seconds:

pct rollback 200 pre-skill-install

This tight feedback loop makes experimenting with new skills genuinely low-risk.
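Snapshots accumulate, so list and prune them periodically from the host:

```shell
# Show the snapshot tree for the container, then drop ones you no longer need.
pct listsnapshot 200
pct delsnapshot 200 pre-skill-install
```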

Scheduled vzdump Backups

Set up a recurring vzdump job under Datacenter → Backup in the Proxmox web UI. A nightly run at 2 AM with seven daily copies and four weekly copies retained is a solid default for a service like this. The backup captures the entire LXC — OpenClaw's state directory, configuration, skill data, and the persistent memory it has built up across your sessions.

If you are running Proxmox Backup Server, offloading these backups there gives you deduplication and long-term retention without ballooning storage costs. The persistent memory OpenClaw accumulates over weeks and months becomes genuinely valuable context — treat it with the same care you would any other stateful service.
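For ad-hoc backups outside the schedule — say, right before a major OpenClaw upgrade — the same job can be run by hand from the host shell:

```shell
# One-off snapshot-mode backup of the container, zstd-compressed,
# to the storage named "local" (swap in your backup storage ID).
vzdump 200 --mode snapshot --compress zstd --storage local
```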

Troubleshooting Common Issues

Gateway Won't Start

Start with the status command:

openclaw gateway status

If it shows stopped or error, pull recent logs from systemd:

journalctl -u openclaw -n 50 --no-pager

The most common causes are a Node.js version below 22.14, a malformed YAML configuration file, or a port conflict with another service. Run openclaw gateway restart and check status again after about ten seconds.
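To rule out a port conflict specifically, check what is already listening inside the container. The port number here is a placeholder — use whichever port your gateway is configured for:

```shell
# Show any process bound to the gateway's port (replace 8080 with yours).
ss -tlnp | grep ':8080 ' || echo "port 8080 is free"
```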

Node Version Too Old

If you see a SyntaxError: Unexpected token or a warning about unsupported engine versions, your Node.js is too old:

node --version

Anything below v22.14.0 needs to be upgraded. Revisit the NodeSource installation steps and make sure you ran setup_24.x, not an older variant. After reinstalling Node.js, verify openclaw --version resolves correctly before restarting the gateway.

Telegram Bot Is Silent

Work through this checklist in order:

  1. Confirm the gateway is running with openclaw gateway status
  2. Double-check the bot token — a single transposed character breaks authentication silently
  3. Test outbound connectivity from inside the container: curl -s https://api.telegram.org should return a JSON response
  4. If the LXC is on a restricted VLAN, verify your firewall allows outbound port 443 to Telegram's API servers

Most silent bot issues trace back to either a wrong token or a missing firewall rule on a locked-down VLAN.
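To test the token itself rather than general connectivity, Telegram's getMe endpoint validates a bot token directly:

```shell
# Replace <TOKEN> with the token @BotFather gave you.
curl -s "https://api.telegram.org/bot<TOKEN>/getMe"
# A valid token returns {"ok":true,...}; a wrong one returns
# {"ok":false,"error_code":401,"description":"Unauthorized"}
```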

Running Out of RAM With a Local Model

If you added Ollama and a quantized model and the container starts swapping heavily, you have two clean options. The faster fix is to increase the LXC memory allocation live from the Proxmox UI — no container restart is required for most configurations. The cleaner long-term fix is to move Ollama to a dedicated LXC, keeping OpenClaw's gateway lightweight and independently scalable. The two containers communicate over the internal Proxmox network bridge, and the latency is negligible on local hardware.
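If you go the dedicated-container route, Ollama's standard OLLAMA_HOST variable controls its bind address, and its /api/tags endpoint makes a quick reachability check. The IP below is a hypothetical address for the second LXC — substitute your own:

```shell
# On the Ollama container: bind to all interfaces, not just localhost.
# (Add Environment="OLLAMA_HOST=0.0.0.0:11434" via `systemctl edit ollama`.)

# From the OpenClaw container: confirm the Ollama API is reachable
# over the Proxmox bridge (replace the IP with your Ollama LXC's address).
curl -s http://192.168.20.11:11434/api/tags
```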

Conclusion

Running OpenClaw on a Proxmox LXC is one of the most satisfying things you can do with spare homelab capacity. You end up with a private, always-on AI assistant that works across all your chat apps, can browse the web, manage files, and run commands — and remembers everything across sessions without any of that context leaving your hardware.

The LXC model keeps resource overhead minimal, snapshots make skill experimentation genuinely safe, and VLAN isolation keeps your network architecture clean. The path from a fresh Proxmox host to a Telegram-connected, self-hosted AI assistant is surprisingly short: provision the container, install Node.js 24, run the curl installer, and complete the onboarding wizard. Everything else in this guide is about making it robust.

The homelab AI rabbit hole goes as deep as you want to take it — local models, custom skills, multi-channel integrations, automated workflows. OpenClaw gives you a solid, open-source foundation to build on, entirely on your own terms.


Written by

Proxmox Pulse

Sysadmin-driven guides for getting the most out of Proxmox VE in production and homelab environments.
