{
    "version": "https://jsonfeed.org/version/1",
    "title": "Proxmox Pulse",
    "home_page_url": "https://proxmoxpulse.com",
    "description": "In-depth Proxmox VE tutorials, tips, and best practices for homelab enthusiasts and system administrators. Covers installation, VMs, LXC containers, storage, networking, and more.",
    "icon": "https://proxmoxpulse.com/images/og-default.png",
    "author": {
        "name": "Proxmox Pulse",
        "url": "https://proxmoxpulse.com"
    },
    "items": [
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-ldap-active-directory-authentication/",
            "content_html": "\nConnecting Proxmox VE to your existing LDAP directory or Active Directory domain means every admin logs in with their corporate credentials — no separate Proxmox password to juggle, no shared `root@pam` account floating around, and a proper audit trail in `/var/log/auth.log` showing exactly who authenticated and when. By the end of this guide, you'll have a working LDAP realm in Proxmox VE 9.1, users synchronized from your directory, and AD security groups mapped directly to Proxmox roles.\n\n## Key Takeaways\n\n- **Realm types**: Proxmox supports PAM, PVE, LDAP, Active Directory, and OIDC — each with different trust models and sync capabilities.\n- **Sync is optional but useful**: You can authenticate against LDAP without pre-syncing users, but syncing lets you assign roles via the GUI and see usernames in the audit log.\n- **Groups map to roles**: Assign an entire AD group to a Proxmox role once; every member inherits the permission at the specified resource path.\n- **Keep a local escape hatch**: Always maintain a tested local `root@pam` or `admin@pve` account — if LDAP becomes unreachable, you need a way back in.\n- **TLS is non-negotiable**: Use LDAPS (port 636) or STARTTLS; plain LDAP on port 389 sends credentials in cleartext.\n\n## Why Centralized Auth Beats Local Proxmox Users\n\nThe default Proxmox setup gives you `root@pam` backed by Linux PAM and local users backed by Proxmox's own PVE database. Both work fine for a single-node homelab where you're the only admin. Scale to a team of three, or expand into a [multi-node Proxmox private cloud](/articles/build-private-cloud-home-proxmox-ve/), and the cracks show fast:\n\n- Password rotation means touching every node and every user manually\n- You have no idea which \"admin\" shut down a production VM at 2 AM\n- Onboarding and offboarding requires logging into Proxmox specifically, not your IdP\n\nLDAP integration solves all three. 
It also lets you reuse existing group structures — your \"Infrastructure Admins\" AD group becomes a Proxmox Administrator role assignment in about 60 seconds.\n\n**Active Directory vs plain LDAP**: The Proxmox realm type labeled \"Active Directory\" in the GUI is still LDAP under the hood, but it pre-fills sane defaults for Microsoft's schema (`sAMAccountName` attribute, `DC=` base DN format, Kerberos realm field). If you're running OpenLDAP, FreeIPA, or Authentik with an LDAP backend, use the generic \"LDAP\" realm type instead.\n\n## Prerequisites: What You Need Before You Start\n\nBefore touching the Proxmox GUI, gather these details:\n\n- **LDAP server hostname** — a domain controller FQDN for AD, or your OpenLDAP server address. Use the FQDN, not an IP — it matters for TLS certificate validation.\n- **Base DN** — e.g., `DC=corp,DC=example,DC=com` for AD, or `dc=example,dc=org` for OpenLDAP.\n- **Bind account** — a read-only service account in AD (e.g., `svc-proxmox`) with permission to read users and groups. 
Do not use a Domain Admin for this.\n- **Bind account DN or UPN** — e.g., `svc-proxmox@corp.example.com` (UPN format works cleanly for AD).\n- **Bind password** — stored encrypted by Proxmox under `/etc/pve/priv/`, but still: use a long, random password for this service account.\n- **CA certificate** — if you're using LDAPS with a private CA, you need the certificate chain in PEM format.\n\nCreate the bind account on the AD side first:\n\n```powershell\n# Run on a Windows Server domain controller\nNew-ADUser -Name \"svc-proxmox\" `\n  -SamAccountName \"svc-proxmox\" `\n  -UserPrincipalName \"svc-proxmox@corp.example.com\" `\n  -Path \"OU=Service Accounts,DC=corp,DC=example,DC=com\" `\n  -AccountPassword (ConvertTo-SecureString \"YourLongRandomPassword!\" -AsPlainText -Force) `\n  -PasswordNeverExpires $true `\n  -Enabled $true\n```\n\nBy default, Domain Users can read user and group objects in AD, so `svc-proxmox` being a Domain User is enough — no special delegation required.\n\n## How to Add an LDAP Realm in Proxmox VE\n\n### Open the Realm Configuration\n\nIn the Proxmox web UI, navigate to **Datacenter → Permissions → Authentication**. You'll see the two default realms (`pve` and `pam`). 
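The same realms are visible from the shell, which makes for a quick sanity check before adding a new one:\n\n```bash\n# List configured authentication realms (a fresh install shows pam and pve)\npveum realm list\n```\n\n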
Click **Add** and choose **Active Directory Server** or **LDAP Server** depending on your directory type.\n\n### Fill in Server and Bind Details\n\nFor Active Directory, the form fields map like this:\n\n| Field | Example Value | Notes |\n|-------|--------------|-------|\n| Realm | `corp` | Short name used at login: `user@corp` |\n| Base Domain Name | `corp.example.com` | Proxmox derives the base DN automatically |\n| Server | `dc01.corp.example.com` | Use FQDN — required for TLS |\n| Port | `636` | LDAPS; use `389` + STARTTLS if LDAPS is unavailable |\n| User Attribute | `sAMAccountName` | For AD; use `uid` for OpenLDAP or FreeIPA |\n| Domain | `corp.example.com` | Kerberos realm field (AD only) |\n| Bind User | `svc-proxmox@corp.example.com` | UPN format works cleanly for AD |\n| Bind Password | `YourLongRandomPassword!` | Stored encrypted in `/etc/pve/priv/` |\n\nFor OpenLDAP or FreeIPA, the equivalent settings:\n\n```yaml\n# OpenLDAP / FreeIPA realm values\nBase DN:       dc=example,dc=org\nServer:        ldap.example.org\nPort:          636\nUser Attr:     uid\nBind DN:       cn=svc-proxmox,ou=serviceaccounts,dc=example,dc=org\nBind Password: YourLongRandomPassword!\n```\n\n### Import the CA Certificate for TLS\n\nIf your LDAP server uses a certificate signed by a private CA — which is almost always the case in enterprise Active Directory environments — import that CA certificate into Proxmox's trust store before saving the realm:\n\n```bash\n# Copy your CA cert (PEM format) to the system trust store\ncp /path/to/corp-ca.pem /usr/local/share/ca-certificates/corp-ca.crt\nupdate-ca-certificates\n```\n\nAfter importing, Proxmox will verify the LDAPS certificate against the system trust store. Without this step, you're forced to disable certificate verification — acceptable on a homelab, never in production.\n\n**Gotcha**: If your AD uses an intermediate CA, you need the full chain, not just the root. 
Export it from Active Directory Certificate Services:\n\n```powershell\n# On a domain controller — exports the full issuing chain\ncertutil -ca.cert corp-ca-chain.crt\n```\n\nSCP that file to your Proxmox node and run `update-ca-certificates` again.\n\n### Configure Sync Attributes\n\nIn the realm editor, switch to the **Sync** tab and set:\n\n- **Sync Attributes**: `cn,mail,sAMAccountName` for AD (or `cn,mail,uid` for OpenLDAP)\n- **User Classes**: `user` for AD (or `inetOrgPerson` for OpenLDAP)\n- **Group Classes**: `group` for AD (or `groupOfNames` / `posixGroup` for OpenLDAP)\n- **Group DN**: the OU where your infra groups live, e.g., `OU=Infra Groups,DC=corp,DC=example,DC=com`\n\nSave the realm. If Proxmox reports a bind error immediately, double-check the bind account UPN and password — don't proceed until the realm saves cleanly.\n\n## How to Sync Users and Groups from Active Directory\n\n### Running the First Sync\n\nTrigger a manual sync from the CLI rather than the GUI — the CLI output is more informative:\n\n```bash\n# Sync both users and groups from the 'corp' realm\npveum realm sync corp --enable-new true --scope both\n```\n\n`--scope both` pulls users and groups. `--enable-new true` marks newly synced users as enabled in Proxmox. Leaving `--remove-vanished` unset (it replaced the deprecated `--purge` flag) keeps existing Proxmox users untouched if they no longer appear in LDAP — safer for a first run.\n\nExpected output:\n\n```\nsyncing realm 'corp'...\nsynced 47 users\nsynced 12 groups\ndone\n```\n\nVerify in the GUI under **Datacenter → Permissions → Users** — your AD users appear with the `@corp` suffix, and groups appear under **Datacenter → Permissions → Groups**.\n\n### Scheduling Automatic Syncs\n\nDon't rely on manual syncs for production. 
Add a cron job to keep Proxmox in sync with your directory:\n\n```bash\n# /etc/cron.d/proxmox-ldap-sync\n# Sync the corp realm every 4 hours\n0 */4 * * * root /usr/bin/pveum realm sync corp --enable-new true --scope both >> /var/log/proxmox-ldap-sync.log 2>&1\n```\n\nWithout `--remove-vanished`, users removed from AD won't lose Proxmox access until you purge or manually disable them. In environments with strict offboarding requirements, run a weekly purge job as well:\n\n```bash\n# Weekly purge — removes Proxmox users (and their ACLs) no longer in LDAP\n0 5 * * 0 root /usr/bin/pveum realm sync corp --remove-vanished 'entry;acl' --scope users >> /var/log/proxmox-ldap-sync.log 2>&1\n```\n\nPair this with [SSH hardening and fail2ban](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) to make sure stale accounts left behind by a delayed purge cycle can't be exploited.\n\n## Mapping AD Groups to Proxmox Roles\n\nThis is where centralized auth pays off. Instead of assigning roles to individual users, assign them to groups — Proxmox respects LDAP group membership for permission decisions.\n\nFirst, confirm groups appeared after the sync:\n\n```bash\npveum group list\n```\n\nThen assign roles to groups at the appropriate resource path:\n\n```bash\n# 'infra-admins' AD group gets full Administrator access cluster-wide\npveum acl modify / --groups 'infra-admins' --roles Administrator\n\n# 'dev-team' gets VM admin rights only in the dev resource pool\npveum acl modify /pool/dev-pool --groups 'dev-team' --roles PVEVMAdmin\n\n# Read-only auditors can view everything, change nothing\npveum acl modify / --groups 'proxmox-readonly' --roles PVEAuditor\n```\n\nThe built-in roles worth knowing:\n\n| Role | What It Allows |\n|------|---------------|\n| `Administrator` | Full cluster access — treat like root for Proxmox operations |\n| `PVEAdmin` | Manage VMs, storage, networks — no user or realm management |\n| `PVEVMAdmin` | Full VM lifecycle — no node or storage administration |\n| `PVEVMUser` | 
Start, stop, and open console — no config changes |\n| `PVEAuditor` | Read-only view of the entire cluster |\n| `PVEDatastoreAdmin` | Manage backups and storage pools — useful for dedicated backup operators |\n\nIf none of these fit, create a custom role:\n\n```bash\npveum role add StorageOperator \\\n  --privs \"Datastore.Audit,Datastore.AllocateSpace,Datastore.AllocateTemplate\"\n```\n\n## Testing and Verifying Authentication End-to-End\n\nBefore announcing the change to your team, verify the full chain works from the Proxmox host itself:\n\n```bash\n# Test the LDAP bind directly (install if missing: apt-get install -y ldap-utils)\nldapsearch -H ldaps://dc01.corp.example.com \\\n  -D \"svc-proxmox@corp.example.com\" \\\n  -w \"YourLongRandomPassword!\" \\\n  -b \"DC=corp,DC=example,DC=com\" \\\n  \"(sAMAccountName=testuser)\" cn mail\n```\n\nA successful bind returns `cn` and `mail` attributes for the test user. Common errors:\n\n- `ldap_bind: Invalid credentials (49)` — wrong bind DN or password\n- `Can't contact LDAP server` — firewall blocking port 636 or DNS failure; test reachability with:\n\n```bash\nnc -zv dc01.corp.example.com 636\n```\n\n- `TLS: hostname does not match` — the server cert CN or SAN doesn't match the hostname you configured; use the FQDN that matches the certificate\n\nThen test interactively: log out of the Proxmox GUI, select the `corp` realm in the login dropdown, and authenticate as a known AD user. If you're using [Ansible playbooks to manage your Proxmox cluster](/articles/automate-proxmox-ansible-vm-playbooks/), add an LDAP connectivity check task to your pre-upgrade playbook — LDAP failures discovered mid-upgrade are painful.\n\nIf realm config changes aren't taking effect, restart `pvedaemon`:\n\n```bash\nsystemctl restart pvedaemon\n```\n\n## What to Do When LDAP Auth Breaks\n\nThe most common failure scenario: the LDAP server becomes unreachable, or the bind account password expires, and suddenly nobody can log in. 
This is exactly why you keep a local admin account active and tested.\n\n```bash\n# Log in as root@pam via console, iDRAC, iLO, or IPMI\n# Then check configured realms\npveum realm list\n\n# Disable the broken realm while you investigate\n# (existing sessions continue; new logins to this realm fail with a clean error)\npveum realm modify corp --disable true\n\n# Re-enable once the LDAP issue is resolved\npveum realm modify corp --disable false\n```\n\nCommon root causes when LDAP breaks suddenly:\n\n1. **Bind account password expired** — check in AD:\n```powershell\nSearch-ADAccount -PasswordExpired -UsersOnly | Select-Object Name, SamAccountName\n```\n\n2. **Certificate expired** — check the LDAPS cert expiry from the Proxmox host:\n```bash\necho | openssl s_client -connect dc01.corp.example.com:636 2>/dev/null \\\n  | openssl x509 -noout -dates\n```\n\n3. **Domain controller unreachable** — DNS change, firewall rule update, or DC maintenance window\n\n4. **Realm config corrupted after a Proxmox upgrade** — inspect `/etc/pve/domains.cfg` directly for garbled entries; the file is plain text and editable\n\nActive sessions are unaffected when LDAP goes down — Proxmox session tokens aren't re-validated against LDAP on every request. Anyone already logged in continues working until their session expires (default timeout is 2 hours). New logins fail.\n\n## Conclusion\n\nProxmox VE's LDAP and Active Directory integration eliminates the overhead of managing Proxmox-specific passwords for every admin — the setup takes under 30 minutes once you have the bind account and base DN in hand. The discipline that makes it reliable over the long term is three things: a tested local admin account as a fallback, LDAPS with verified certificates in production, and scheduled sync jobs rather than ad-hoc manual runs. Once AD groups map to Proxmox roles, onboarding a new team member is as simple as adding them to the right security group — Proxmox picks it up on the next sync. 
To close down the remaining attack surface after enabling centralized auth, work through [SSH hardening, fail2ban, and the Proxmox firewall](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) as your next step.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-ldap-active-directory-authentication/",
            "title": "Proxmox VE LDAP and Active Directory Authentication",
            "summary": "Configure Proxmox VE to authenticate users via LDAP or Active Directory. Step-by-step realm setup, group sync, role mapping, and troubleshooting tips included.",
            "date_modified": "2026-05-06T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "ldap",
                "active-directory",
                "authentication",
                "access-control",
                "proxmox-ve"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-api-tokens-secure-automation/",
"content_html": "\nIf you are using `root@pam` credentials in your Terraform provider, Ansible inventory, or curl scripts against the Proxmox API, you have a ticking time bomb buried in your config files. Proxmox VE 6.2 introduced scoped API tokens that let you grant exactly the permissions an automation tool needs — and revoke them instantly if a secret leaks. By the end of this guide you will have a dedicated service user, a scoped token with its own ACL assignments, and a tested workflow that keeps root credentials out of every script, pipeline, and `.env` file.\n\n## Key Takeaways\n\n- **Token format**: Token IDs always follow `user@realm!token-name`; the secret is shown once at creation and never again\n- **Three CLI commands**: `pveum user add`, `pveum acl modify`, and `pveum user token add` are all it takes from the Proxmox shell\n- **Privilege separation**: Enable `--privsep 1` so the token's ACLs are scoped independently from the user's — narrower blast radius on leak\n- **PVE realm only**: Use `automation@pve` over `automation@pam` for service accounts — no Linux shadow entry means no pivot path to shell\n- **Revocation is instant**: `pveum user token remove` kills the token immediately with no grace period and no other credentials affected\n\n## Why Root Credentials in Automation Are a Real Risk\n\nMost quick-start tutorials reach for `root@pam` plus a password because it works immediately. Then that credential ends up in a Terraform `tfvars` file, a `.env` committed to a private repo, or an Ansible vault that three team members share. When you need to rotate the root password — or when someone leaves the team — you break every integration simultaneously.\n\nThe safer pattern is a dedicated service account with an API token scoped to exactly what the automation needs. If a token leaks, you run one command to revoke it. 
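For a token named `terraform` on a dedicated service user, that looks like:\n\n```bash\n# Instantly invalidate one leaked token; nothing else is affected\npveum user token remove automation@pve terraform\n```\n\n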
The root account and every other integration are untouched.\n\nWith Proxmox VE 9.1 running on most production and homelab nodes now, the per-token ACL system is mature and well-tested. This pairs naturally with the network and host-level controls covered in the [Proxmox firewall, fail2ban, and SSH hardening guide](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) — API token hygiene is the equivalent layer for the management plane.\n\n## Understanding the Proxmox Permission Model\n\nBefore creating tokens, understand the three-layer structure that controls what any principal can do:\n\n- **Users** — authenticated identities such as `root@pam` or `automation@pve`\n- **Roles** — named permission bundles such as `PVEVMAdmin`, `PVEAuditor`, or `Administrator`\n- **ACLs** — bindings of `(path, principal, role)` that express \"this user has this role at this path\"\n\nPaths are hierarchical. `/` covers the entire cluster, `/vms` covers all VMs and containers, `/vms/100` covers only VM 100, `/nodes/pve` covers only the node named `pve`, and `/storage/local-lvm` covers only that pool.\n\nAPI tokens layer on top of users. By default a token inherits all of its owning user's ACLs. If you enable **Privilege Separation**, the token's effective permissions become the intersection of the user's ACLs and the token's own ACLs — so you can restrict a token below what the user has, but never grant it more.\n\n| Privilege Separation | Token ACL | Effective Permission |\n|---|---|---|\n| Disabled | — | Identical to owning user |\n| Enabled | PVEAuditor on `/` | User's ACLs ∩ PVEAuditor on `/` |\n| Enabled | PVEVMAdmin on `/vms` | User's ACLs ∩ PVEVMAdmin on `/vms` |\n| Enabled | No ACL assigned | Zero permissions |\n\nThe intersection model means you must assign ACLs to both the user and the token when privilege separation is on.\n\n## How to Create a Dedicated Service User\n\nCreate the account from the Proxmox shell. 
Using the `pve` realm keeps it internal to Proxmox — no PAM entry, no shadow file, no SSH login path:\n\n```bash\n# Create a local PVE user — no password needed, token-only auth\npveum user add automation@pve --comment \"Terraform/Ansible service account\"\n```\n\nFor multi-system setups, create one user per automation system: `terraform@pve`, `ansible@pve`, `monitoring@pve`. That way revoking the Terraform token does not affect Ansible, and you can audit API activity per-system in the Proxmox task log.\n\n## Assign Roles with Least Privilege\n\nChoose roles that match what the automation actually does — not the widest role that makes it work:\n\n| Role | Grants |\n|---|---|\n| `PVEVMAdmin` | Full VM/container lifecycle: create, configure, start, stop, delete |\n| `PVEVMUser` | Start, stop, and console only — no create or delete |\n| `PVEDatastoreAdmin` | Manage storage, backups, snapshots, upload ISOs |\n| `PVEDatastoreUser` | Allocate disk space for VM disks only |\n| `PVEAuditor` | Read-only across all paths |\n| `PVESysAdmin` | Node-level operations: certs, services, network config |\n\nFor a Terraform workflow that provisions and destroys VMs:\n\n```bash\n# VM lifecycle access across all VMs\npveum acl modify /vms --users automation@pve --roles PVEVMAdmin\n\n# Disk allocation on the target storage pool\npveum acl modify /storage/local-lvm --users automation@pve --roles PVEDatastoreUser\n\n# ISO and template upload if Terraform manages cloud-init images\npveum acl modify /storage/local --users automation@pve --roles PVEDatastoreAdmin\n```\n\nFor an Ansible read-only audit pass before [automating full VM provisioning playbooks with Ansible](/articles/automate-proxmox-ansible-vm-playbooks/):\n\n```bash\npveum acl modify / --users automation@pve --roles PVEAuditor\n```\n\nFor a [K3s cluster on Proxmox VMs](/articles/k3s-kubernetes-cluster-proxmox-vms/) where Terraform manages both nodes and their storage:\n\n```bash\npveum acl modify /vms --users automation@pve 
--roles PVEVMAdmin\npveum acl modify /storage/local-lvm --users automation@pve --roles PVEDatastoreAdmin\n```\n\n## Create the API Token\n\nCreate the token with privilege separation enabled:\n\n```bash\npveum user token add automation@pve terraform --privsep 1 --comment \"Terraform provider 2026-05\"\n```\n\nProxmox prints a table with the token ID and secret:\n\n```\n┌──────────────────┬──────────────────────────────────────┐\n│ key              │ value                                │\n╞══════════════════╪══════════════════════════════════════╡\n│ full-tokenid     │ automation@pve!terraform             │\n│ info             │ {\"privsep\":\"1\"}                      │\n│ value            │ a1b2c3d4-e5f6-7890-abcd-ef1234567890 │\n└──────────────────┴──────────────────────────────────────┘\n```\n\nThe `value` field is the token secret. It is displayed **exactly once**. Store it in your secrets manager — HashiCorp Vault, Bitwarden Secrets, a GitHub Actions secret, or a locally-encrypted `.env` file — before closing the terminal. If you lose it, delete the token and create a new one.\n\nNow assign ACLs directly to the token. With privilege separation enabled, the token has zero permissions until you do this:\n\n```bash\n# Grant token access to VM management\npveum acl modify /vms \\\n  --users automation@pve \\\n  --tokens automation@pve!terraform \\\n  --roles PVEVMAdmin\n\n# Grant token access to the storage pool\npveum acl modify /storage/local-lvm \\\n  --users automation@pve \\\n  --tokens automation@pve!terraform \\\n  --roles PVEDatastoreUser\n```\n\nYou can scope the token more narrowly than the user. 
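For example, pinning a token to a single guest (VM 100 here) while leaving the owning user's broader ACLs alone:\n\n```bash\n# Token-only ACL on one VM; the user's own ACLs are unchanged\npveum acl modify /vms/100 \\\n  --tokens 'automation@pve!terraform' \\\n  --roles PVEVMAdmin\n```\n\n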
The user might have `/vms` admin access, but a specific token can be restricted to `/vms/100` through individual VM ACL assignments — useful for a deployment token that should only ever touch its own VMs.\n\n## Test the Token Before Wiring It Into Anything\n\nVerify the token works with a raw API call before touching Terraform or Ansible:\n\n```bash\nTOKEN_ID=\"automation@pve!terraform\"\nTOKEN_SECRET=\"a1b2c3d4-e5f6-7890-abcd-ef1234567890\"\nPROXMOX_HOST=\"192.168.1.10\"\n\ncurl -s -k \\\n  -H \"Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}\" \\\n  \"https://${PROXMOX_HOST}:8006/api2/json/nodes\" \\\n  | python3 -m json.tool\n```\n\nA successful response returns a JSON array of cluster nodes. A 401 means the token ID or secret is wrong. A 403 means the token authenticated but lacks permissions at that path — recheck your `pveum acl modify` commands and confirm the ACL was applied to the token ID, not just the user.\n\nTo list all current ACLs for debugging:\n\n```bash\npveum acl list\n```\n\n## Using the Token in Terraform\n\nThe `bpg/proxmox` Terraform provider accepts API tokens natively. 
Configure your provider block:\n\n```hcl\nterraform {\n  required_providers {\n    proxmox = {\n      source  = \"bpg/proxmox\"\n      version = \"~> 0.66\"\n    }\n  }\n}\n\nprovider \"proxmox\" {\n  endpoint  = \"https://192.168.1.10:8006/\"\n  api_token = var.proxmox_api_token\n  insecure  = false\n}\n```\n\nIn `terraform.tfvars` — add this file to `.gitignore` before your first commit:\n\n```hcl\nproxmox_api_token = \"automation@pve!terraform=a1b2c3d4-e5f6-7890-abcd-ef1234567890\"\n```\n\nOr export it as an environment variable so it never touches the filesystem at all:\n\n```bash\nexport TF_VAR_proxmox_api_token=\"automation@pve!terraform=a1b2c3d4-e5f6-7890-abcd-ef1234567890\"\n```\n\n## Using the Token in Ansible\n\nThe `community.general.proxmox_kvm` module accepts token credentials directly:\n\n```yaml\n- name: Provision a VM\n  community.general.proxmox_kvm:\n    api_host: 192.168.1.10\n    api_user: automation@pve\n    api_token_id: terraform\n    api_token_secret: \"{{ proxmox_token_secret }}\"\n    node: pve\n    name: my-vm\n    cores: 2\n    memory: 2048\n    state: present\n```\n\nStore `proxmox_token_secret` in an Ansible Vault file, not in a plaintext `group_vars` file. Use `ansible-vault encrypt_string` to inline-encrypt the value if you prefer a single-file setup over a separate vault.\n\n## Rotating and Revoking Tokens\n\nList all tokens for a user:\n\n```bash\npveum user token list automation@pve\n```\n\nRevoke a token immediately — effective in seconds, no grace period:\n\n```bash\npveum user token remove automation@pve terraform\n```\n\nFor a zero-downtime rotation, create the replacement first, then remove the old one:\n\n```bash\n# 1. Create replacement token\npveum user token add automation@pve terraform-v2 --privsep 1 --comment \"Rotated 2026-05-05\"\n\n# 2. 
Assign ACLs to the new token\npveum acl modify /vms \\\n  --users automation@pve \\\n  --tokens automation@pve!terraform-v2 \\\n  --roles PVEVMAdmin\npveum acl modify /storage/local-lvm \\\n  --users automation@pve \\\n  --tokens automation@pve!terraform-v2 \\\n  --roles PVEDatastoreUser\n\n# 3. Update secrets manager, test new token, update all consumers\n# 4. Remove old token\npveum user token remove automation@pve terraform\n```\n\nThe gotcha here: token ACLs are attached to the token ID, not the user. When you delete and recreate a token — even with the identical name — all ACLs are gone and must be reassigned from scratch. Keep a documented list of which ACLs each token needs, or script the reassignment as part of your rotation runbook so you do not rediscover this at 2 AM.\n\n## Why the PVE Realm Is Safer for Service Accounts\n\nAn `automation@pve` user has no Linux system account. There is no `/etc/passwd` entry, no shadow file entry, and no ability to SSH into the Proxmox host using those credentials directly. If the API token is compromised, the attacker has API access scoped to the token's ACLs — they cannot pivot to an interactive shell on the hypervisor.\n\nAn `automation@pam` user maps to a real Linux account that does have a shadow entry. Depending on your SSH configuration, that user may be able to authenticate to the host directly. The [LOLPROX analysis of Proxmox hypervisor exploit paths](/articles/lolprox-protecting-proxmox-from-hypervisor-exploits/) makes this point clearly: reducing the identity attack surface at the API layer is one of the highest-leverage hardening steps available.\n\nUse the PVE realm for every service account. Reserve PAM accounts for humans who need both shell access and web UI access.\n\n## Conclusion\n\nReplacing root credentials with scoped API tokens is a 20-minute change that pays back every time you rotate a secret, offboard a team member, or recover from a credential leak without touching your root account. 
Create `automation@pve`, assign narrowly-scoped ACLs, enable privilege separation on the token, store the secret in a proper secrets manager, and document the ACL assignments for your rotation runbook. The natural next step is putting this token to work in a structured Ansible workflow — the [Ansible VM automation guide](/articles/automate-proxmox-ansible-vm-playbooks/) walks through a production-ready inventory approach that pairs directly with what you set up here.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-api-tokens-secure-automation/",
            "title": "Proxmox API Tokens for Secure Automation Without Root",
            "summary": "Stop putting root credentials in Terraform and Ansible. Learn to create scoped Proxmox API tokens with least-privilege ACLs you can revoke in seconds.",
            "date_modified": "2026-05-05T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "api-tokens",
                "security",
                "automation",
                "proxmox-api",
                "least-privilege"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-lxc-resource-limits/",
"content_html": "\nSetting resource limits on Proxmox LXC containers is one of those tasks that pays dividends the first time a bulk backup job, a Nextcloud sync, or a rogue cron script saturates your host. By the end of this guide you'll know exactly which `pct` options map to which cgroup v2 controls, how to apply most of them live without a container restart, and which edge cases catch people off guard on Proxmox VE 9.1.\n\n## Key Takeaways\n\n- **CPU limit vs. CPU units**: `--cpulimit` caps absolute CPU time (2.0 = max 2 physical core-equivalents); `--cpuunits` controls relative scheduling priority between containers under contention.\n- **Memory is a hard wall**: When a container hits its `--memory` ceiling, the OOM-killer fires — not graceful throttling. Set limits with headroom.\n- **Live application**: `--cpulimit`, `--memory`, and `--swap` take effect immediately with `pct set` — no container restart needed.\n- **cgroups v2 paths changed**: Proxmox VE 8+ uses the unified cgroups v2 hierarchy. I/O throttling uses `io.max`, not the v1 `blkio.throttle.*` paths you'll find in older tutorials.\n- **Measure before you cap**: `pct status <ctid> --verbose` shows current consumption; tune from data, not guesses.\n\n## How Proxmox LXC Resource Controls Work Under the Hood\n\nLXC containers on Proxmox are namespaced processes sharing the host kernel — no hypervisor overhead, but also no hardware isolation. Resource limits are enforced entirely by Linux cgroup v2. When you run `pct set 101 --cpulimit 2`, Proxmox writes to `/sys/fs/cgroup/lxc/101/cpu.max` and the kernel scheduler does the rest.\n\nEach container gets its own cgroup slice. On Proxmox VE 9.1 you can inspect the full hierarchy:\n\n```bash\n# Inspect the cgroup tree for container 101\nsystemd-cgls /sys/fs/cgroup/lxc/101\n```\n\nThree resource classes matter for most workloads:\n\n1. **CPU** — absolute quota and relative scheduling weight\n2. 
**Memory** — hard ceiling, swap budget, and soft pressure hints\n3. **Block I/O** — bandwidth and IOPS caps per device\n\nAll three can be configured persistently via `pct set`. Changes survive reboots because Proxmox writes them back to `/etc/pve/lxc/<ctid>.conf`. The one exception is I/O throttling — direct cgroup writes don't survive container restarts, which is why hook scripts matter.\n\n## How to Set CPU Limits on LXC Containers\n\n### The Difference Between Cores, CPU Limit, and CPU Units\n\nThese three settings look similar in the Proxmox UI but control completely different scheduler knobs:\n\n| Option | cgroup v2 knob | What it actually does |\n|---|---|---|\n| `--cores N` | `cpuset.cpus` | Container sees N vCPU threads |\n| `--cpulimit N` | `cpu.max` | Hard cap: N × 100% of one physical core |\n| `--cpuunits N` | `cpu.weight` | Relative scheduling priority (default: 100 on cgroup v2) |\n\n`--cpulimit` is the ceiling that actually prevents CPU saturation. Setting it to `2.0` means the container can consume at most 200% CPU — two full core-equivalents of wall-clock time — regardless of how many cores the host has or how many the container can see via `--cores`.\n\n```bash\n# Cap container 101 to 1.5 cores worth of CPU — applies immediately\npct set 101 --cpulimit 1.5\n\n# Lower scheduling priority of a bulk-processing background container\npct set 102 --cpuunits 50\n\n# Pin a latency-sensitive container to 2 threads with a matching hard cap\npct set 103 --cores 2 --cpulimit 2\n```\n\n**Gotcha from experience**: On a 16-core host with four containers each set to `--cores 4`, all four can simultaneously peg all their cores if no `--cpulimit` is configured. I watched a Jellyfin container doing 4K transcodes bring a database container to its knees this way. 
Always pair `--cores` with `--cpulimit` for workloads you don't fully trust.\n\n### Verifying the CPU Limit Took Effect\n\n```bash\n# cpu.max format is: quota period (in microseconds)\n# 150000 100000 = 150ms per 100ms window = 1.5 cores\ncat /sys/fs/cgroup/lxc/101/cpu.max\n\n# Or read directly from the Proxmox config\npct config 101 | grep cpu\n```\n\nThe `cpu.stat` file shows cumulative throttled time — useful for confirming a container is actually hitting its limit:\n\n```bash\ncat /sys/fs/cgroup/lxc/101/cpu.stat | grep throttled\n# throttled_usec 14230891  ← non-zero means the cap is being enforced\n```\n\n## Configuring Memory and Swap in Proxmox LXC\n\nMemory configuration uses two knobs that work together:\n\n```bash\n# Give container 105 2 GB RAM and 512 MB swap\npct set 105 --memory 2048 --swap 512\n```\n\nThe part that trips people up: `--swap` is **additive**, not a total budget. The container above gets 2048 MB RAM **plus** 512 MB of swap space on top — 2560 MB of total virtual memory. If you set `--memory 2048 --swap 0`, the container has exactly 2048 MB and no swap at all.\n\nFor latency-sensitive workloads like databases, disable swap entirely:\n\n```bash\n# Database container: 4 GB hard limit, no swap, no latency spikes from swapping\npct set 106 --memory 4096 --swap 0\n```\n\n### Why the OOM-Killer Fires Instead of Throttling\n\nWhen a container hits its memory ceiling, the Linux kernel doesn't pause it or throttle allocations — it runs the OOM-killer and terminates the process with the highest `oom_score` inside that cgroup. For stateless services (nginx, Redis with `maxmemory` set, Prometheus), this is usually survivable. 
For PostgreSQL or any workload with a write-ahead log, it can corrupt data mid-write.\n\nThe practical rule: set `--memory` at least 20-30% above your measured working set, and monitor `memory.current` before tightening:\n\n```bash\n# Current RSS for container 106, human-readable\ncat /sys/fs/cgroup/lxc/106/memory.current | numfmt --to=iec\n# Example output: 1.8G\n```\n\n### Soft Memory Pressure with memory.low\n\nProxmox doesn't expose a soft memory limit in the UI, but cgroup v2's `memory.low` knob is available directly. Writing to it tells the kernel to evict other containers' pages before touching this container's working set under host memory pressure:\n\n```bash\n# Protect container 105's working set below 1.5 GB from host reclaim\necho $((1536 * 1024 * 1024)) > /sys/fs/cgroup/lxc/105/memory.low\n```\n\nThis is a hint, not a guarantee — but it meaningfully improves behavior when you're running a mix of critical services and background batch jobs on the same host.\n\n## Disk I/O Throttling: The Feature the Proxmox UI Skips\n\nPer-container I/O throttling is absent from the Proxmox VE 9.1 web interface, but cgroup v2's `io.max` interface is fully functional. Without it, a container running a bulk `rsync` or a backup agent can saturate your storage bus and cause latency spikes in every other container and VM — the kind of thing that's hard to diagnose after the fact.\n\n```bash\n# Find the major:minor device number of your storage pool's block device\nlsblk -no MAJ:MIN /dev/nvme0n1\n# Example output: 259:0\n\n# Cap container 101 to 50 MB/s read, 30 MB/s write, 3000 read IOPS, 1000 write IOPS\necho \"259:0 rbps=52428800 wbps=31457280 riops=3000 wiops=1000\" \\\n    > /sys/fs/cgroup/lxc/101/io.max\n```\n\nDirect cgroup writes vanish on container restart. 
Persist them with a hook script:\n\n```bash\n# Add this line to /etc/pve/lxc/101.conf\nhookscript: local:snippets/iolimit-101.sh\n```\n\n```bash\n#!/bin/bash\n# /var/lib/vz/snippets/iolimit-101.sh\nCTID=$1\nPHASE=$2\n\nif [ \"$PHASE\" = \"post-start\" ]; then\n    sleep 1  # cgroup needs a moment to initialize\n    DEVNO=$(lsblk -no MAJ:MIN /dev/pve/vm-${CTID}-disk-0 2>/dev/null || echo \"259:0\")\n    echo \"${DEVNO} rbps=52428800 wbps=31457280 riops=3000 wiops=1000\" \\\n        > /sys/fs/cgroup/lxc/${CTID}/io.max\nfi\n```\n\n```bash\nchmod +x /var/lib/vz/snippets/iolimit-101.sh\n```\n\n**Important gotcha**: `io.max` applies to all block I/O the container generates — including reads and writes that go through bind mounts from the host filesystem. If you're running [Docker inside an LXC container on Proxmox](/articles/docker-inside-lxc-containers-proxmox/), all Docker layer pulls and container writes count against the same limit. A 30 MB/s write cap will throttle your Docker image builds too. Size your limits with that in mind.\n\n**Older tutorials use the wrong paths**: cgroups v1 used `blkio.throttle.read_bps_device` and `blkio.throttle.write_bps_device`. Those paths don't exist on Proxmox VE 9.x. If a guide shows those paths, it was written for Proxmox VE 7 or older.\n\n## Monitoring Real-Time Resource Usage\n\nBefore setting any limits, spend a few minutes under real workload watching actual consumption. 
The Proxmox web UI averages over 60-second windows and will miss short bursts entirely.\n\n```bash\n# Real-time stats for container 101, refreshes every second\npct monitor 101\n```\n\nFor a host-wide snapshot of all running containers:\n\n```bash\nfor CTID in $(pct list | awk 'NR>1 && $2==\"running\" {print $1}'); do\n    echo \"=== CT ${CTID} ===\"\n    printf \"  CPU throttled_usec: \"\n    awk '/throttled_usec/ {print $2}' /sys/fs/cgroup/lxc/${CTID}/cpu.stat 2>/dev/null\n    printf \"  Memory current: \"\n    cat /sys/fs/cgroup/lxc/${CTID}/memory.current 2>/dev/null | numfmt --to=iec\ndone\n```\n\nFor I/O accounting, `io.stat` gives cumulative bytes and operations per device:\n\n```bash\ncat /sys/fs/cgroup/lxc/101/io.stat\n# 259:0 rbytes=2048000000 wbytes=512000000 rios=150000 wios=40000 ...\n```\n\nA non-zero and growing `throttled_usec` in `cpu.stat` confirms a CPU limit is actively being enforced. If you're not seeing throttling but the container still feels slow, the bottleneck is elsewhere — check I/O wait with `iostat -x 1` on the host.\n\n## Applying Limits in Bulk via the Proxmox API\n\nManaging limits one container at a time with `pct set` is fine for a handful of containers. For a homelab with a dozen LXCs, or a production cluster with many more, `pvesh` handles it cleanly:\n\n```bash\n# Apply a \"low-priority background\" profile to containers 200 through 209\nfor CTID in $(seq 200 209); do\n    pvesh set /nodes/pve/lxc/${CTID}/config \\\n        --cpulimit 0.5 \\\n        --cpuunits 256 \\\n        --memory 512 \\\n        --swap 256\ndone\n```\n\n`pvesh` takes the same parameters as `pct set` and works identically — the difference is that `pvesh` talks to the Proxmox REST API directly, so you can run it remotely or embed it in CI pipelines. 
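\n\nThe same profile can also be applied declaratively. This is a sketch, not a drop-in playbook — it assumes the `community.general.proxmox` module with API token variables (`api_host`, `api_token_id`, `api_token_secret`) defined in your inventory, and `update: true` asks the module to modify existing containers rather than create new ones:\n\n```yaml\n# Sketch: the same low-priority profile, applied via Ansible\n- name: Apply background-container resource profile\n  community.general.proxmox:\n    api_host: \"{{ api_host }}\"\n    api_token_id: \"{{ api_token_id }}\"\n    api_token_secret: \"{{ api_token_secret }}\"\n    node: pve\n    vmid: \"{{ item }}\"\n    cpuunits: 256\n    memory: 512\n    swap: 256\n    update: true\n  loop: \"{{ range(200, 210) | list }}\"\n```\n\n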
If your infrastructure is already managed as code with [Ansible playbooks for Proxmox](/articles/automate-proxmox-ansible-vm-playbooks/), the `community.general.proxmox` module accepts `cpus`, `cpuunits`, `memory`, and `swap` as task parameters — same idempotent workflow, no custom scripting required.\n\n## Common Pitfalls When Setting LXC Resource Limits\n\n**`--cpulimit` throttles even on an idle host.** Once set, the kernel enforces the CPU cap regardless of whether other containers are competing. If you have a weekly report generator that needs to run fast, use `--cpuunits` to raise its priority instead — that only activates under contention, not when the host is idle.\n\n**Swap on ZFS volumes doubles ARC pressure.** If your LXC root disk is a zvol on a ZFS pool, container swap I/O goes through ZFS, which creates its own ARC churn. For ZFS-backed containers, `--swap 0` is almost always correct — compensate with sufficient `--memory` instead.\n\n**Limits don't cover in-kernel work done on behalf of the container.** If your container generates heavy NFS traffic or triggers ZFS prefetch, those kernel threads run outside the container's cgroup. You can see a container's CPU limit enforced at 1.0 while the host shows high `%sys` from ksoftirqd or nfsd. This is expected kernel behavior with no simple workaround — it's a characteristic of the container model, not a bug in your configuration.\n\n**Resource limits are also a security boundary.** An unprivileged LXC with no CPU or memory cap can run a trivial fork bomb and degrade every other tenant on the host. Setting conservative defaults — even for containers you trust — is a meaningful layer of defense that works alongside the network and access controls covered in [Hardening Proxmox VE: Firewall, fail2ban, and SSH Security](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/). 
This is worth it any time you run more than two or three containers on a host.\n\n## Recommended Baseline Settings by Workload\n\nStart from these values and adjust after a week of `pct monitor` observation:\n\n| Workload | `--cpulimit` | `--cpuunits` | `--memory` | `--swap` |\n|---|---|---|---|---|\n| Web server (nginx/Caddy) | 1.0 | 1024 | 512 MB | 256 MB |\n| Database (Postgres/MariaDB) | 2.0 | 2048 | 2048 MB | 0 |\n| Monitoring (Prometheus) | 1.0 | 768 | 1024 MB | 512 MB |\n| Media server (Jellyfin) | 4.0 | 512 | 2048 MB | 1024 MB |\n| Backup agent (restic/borgmatic) | 0.5 | 128 | 256 MB | 256 MB |\n| Dev/test (low priority) | 0.5 | 256 | 512 MB | 512 MB |\n\nThe `--cpuunits` values are relative — only their ratios matter. A container at 2048 gets twice the scheduler slices of one at 1024 during CPU contention. On an idle host, both run unrestricted regardless of their `--cpuunits` value.\n\nThe backup agent row deserves special attention: backup containers are the most common culprit for host-wide slowdowns. Capping a restic or borgmatic container to 0.5 cores and slow I/O means [your automated Proxmox Backup Server jobs](/articles/automated-backups-proxmox-backup-server/) finish a bit later — but your production containers stay responsive throughout the backup window.\n\n## Conclusion\n\nWith CPU limits, memory ceilings, and I/O throttling in place, LXC containers become proper tenants on your Proxmox host rather than free-range processes competing for the same resources. The combination of `pct set` for persistent CPU and memory configuration and hook scripts for I/O throttling covers everything the web UI doesn't expose — and most of it applies live without touching a running workload. 
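\n\nA one-shot dump of those per-container counters is a reasonable starting point before committing to a full monitoring stack. This is a sketch — the metric names are invented for illustration, and `CGROUP_BASE` exists only so you can point the script at a test directory on a non-Proxmox machine:\n\n```bash\n#!/bin/bash\n# Sketch: emit per-container cgroup metrics in Prometheus text format.\n# Metric names below are illustrative, not a standard exporter's.\ndump_lxc_metrics() {\n    local base=\"${CGROUP_BASE:-/sys/fs/cgroup/lxc}\"\n    local cg ctid throttled memory\n    for cg in \"$base\"/[0-9]*; do\n        [ -f \"$cg/cpu.stat\" ] || continue\n        ctid=$(basename \"$cg\")\n        throttled=$(awk '/^throttled_usec/ {print $2}' \"$cg/cpu.stat\")\n        memory=$(cat \"$cg/memory.current\" 2>/dev/null)\n        printf 'lxc_cpu_throttled_usec{ctid=\"%s\"} %s\\n' \"$ctid\" \"${throttled:-0}\"\n        printf 'lxc_memory_current_bytes{ctid=\"%s\"} %s\\n' \"$ctid\" \"${memory:-0}\"\n    done\n}\n\ndump_lxc_metrics\n```\n\nRun from cron or a systemd timer with the output redirected into a node_exporter textfile-collector directory, this gives you per-container throttling and memory history without installing agents inside the containers.\n\n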
The logical next step is measuring the impact over time: wire up per-container cgroup metrics in your monitoring stack so you can see when a container is chronically hitting its CPU quota and needs its limit raised, or when it's consistently well under and the headroom can be reclaimed.\n\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-lxc-resource-limits/",
            "title": "Proxmox LXC Resource Limits: CPU, Memory, and Disk I/O",
            "summary": "Set CPU, memory, and disk I/O limits on Proxmox LXC containers using cgroups v2. Real pct commands, hook scripts, and hard-learned pitfalls — most apply live without a restart.",
            "date_modified": "2026-05-04T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "lxc",
                "cgroups",
                "resource-limits",
                "proxmox",
                "performance"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/automate-proxmox-ansible-vm-playbooks/",
            "content_html": "\nAnsible turns a Proxmox node — or a full three-node cluster — into reproducible, version-controlled infrastructure. By the end of this guide you'll have working playbooks that provision VMs and LXC containers on Proxmox VE 9.x using the `community.general` Ansible collection, a node-level configuration play you can run on every new host, and a structure you can commit to git and replay after a disaster.\n\n## Key Takeaways\n\n- **Collection to use**: `community.general` 8.x ships `proxmox_kvm` and `proxmox` modules covering full VM and LXC lifecycle\n- **Auth method**: API tokens (not root password) are the correct approach — scoped, revocable, and logged in the Proxmox audit trail\n- **Idempotency**: Both modules are idempotent; re-running a playbook will not clone duplicate VMs\n- **Cloud-Init VMs**: Combine `proxmox_kvm` with a Cloud-Init template for zero-touch VM deployment in under 90 seconds on NVMe storage\n- **Node config**: A separate OS-level play handles repo configuration, user accounts, and sysctl tuning on every new host automatically\n\n## Why Ansible Beats Clicking Through the Proxmox UI\n\nThe Proxmox web UI is excellent for one-off tasks — but the moment you're standing up a third K3s node or rebuilding a host after drive failure, manual UI work becomes a liability. You miss a CPU setting, forget to enable the QEMU guest agent, or pick the wrong storage pool. Ansible makes that impossible by turning your infrastructure into a YAML file you commit to git.\n\nThe practical payoff: I rebuilt an entire three-node homelab from scratch in about 20 minutes after a botched ZFS experiment, running a single `ansible-playbook site.yml`. Every VM came up with the right CPU topology, the right network bridge, and Cloud-Init pre-populated with my SSH keys.\n\nThat said, Ansible for Proxmox is not Terraform. 
It doesn't maintain remote state, so if you delete a VM manually and re-run the playbook, Ansible will try to create it again. For homelab scale this is fine. For production, combining Ansible with Terraform's Proxmox provider gives you both declarative provisioning and state tracking.\n\n## Prerequisites: Modules, API Tokens, and Inventory Setup\n\n### Installing the community.general Collection\n\n```bash\nansible-galaxy collection install community.general\n```\n\nYou need community.general 8.0.0 or later for the `proxmox_kvm` and `proxmox` (LXC) modules. Check your installed version:\n\n```bash\nansible-galaxy collection list | grep community.general\n```\n\nInstall the Python dependency on the machine running Ansible — not the Proxmox host itself:\n\n```bash\npip install proxmoxer requests\n```\n\n### Creating a Proxmox API Token\n\nNever store your root password in a playbook. Create a dedicated API token instead:\n\n```bash\npveum user add ansible@pve --comment \"Ansible automation\"\npveum role add AnsibleRole --privs \"Datastore.AllocateSpace,Datastore.Audit,Pool.Allocate,Sys.Audit,Sys.Console,Sys.Modify,VM.Allocate,VM.Audit,VM.Clone,VM.Config.CDROM,VM.Config.CPU,VM.Config.Cloudinit,VM.Config.Disk,VM.Config.HWType,VM.Config.Memory,VM.Config.Network,VM.Config.Options,VM.Migrate,VM.Monitor,VM.PowerMgmt,SDN.Use\"\npveum aclmod / -user ansible@pve -role AnsibleRole\npveum user token add ansible@pve automation --privsep 0\n```\n\nThat last command outputs the token secret — copy it now, you won't see it again. 
Store it in Ansible Vault immediately:\n\n```bash\nansible-vault create group_vars/all/vault.yml\n```\n\n```yaml\n# vault.yml (encrypted at rest)\nvault_proxmox_token_id: \"ansible@pve!automation\"\nvault_proxmox_token_secret: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n```\n\n### Inventory Structure\n\nA minimal inventory for a single-node setup:\n\n```ini\n[proxmox]\npve01 ansible_host=192.168.1.10\n\n[proxmox:vars]\nansible_user=root\nansible_ssh_private_key_file=~/.ssh/id_ed25519\n```\n\nFor the API-based Proxmox modules, `ansible_host` is used as `api_host`. Ansible calls the Proxmox REST API from your workstation — it does not SSH into the node to run `proxmox_kvm` tasks. SSH is only used for OS-level configuration plays.\n\n## How to Provision VMs with the proxmox_kvm Module\n\nThis playbook creates a Ubuntu 24.04 VM from a Cloud-Init template. It assumes you have a Cloud-Init-ready template at VMID 9000 — [setting up that base template is part of building a full private cloud on Proxmox](/articles/build-private-cloud-home-proxmox-ve/).\n\n```yaml\n# playbooks/create_vm.yml\n---\n- name: Provision Ubuntu VM on Proxmox\n  hosts: localhost\n  gather_facts: false\n  vars:\n    api_host: \"192.168.1.10\"\n    api_token_id: \"{{ vault_proxmox_token_id }}\"\n    api_token_secret: \"{{ vault_proxmox_token_secret }}\"\n    node: \"pve01\"\n    template_vmid: 9000\n    new_vmid: 101\n    vm_name: \"ubuntu-worker-01\"\n    vm_memory: 4096\n    vm_cores: 2\n    storage: \"local-lvm\"\n    ipconfig: \"ip=192.168.1.101/24,gw=192.168.1.1\"\n    ssh_keys: \"ssh-ed25519 AAAAC3Nz your@key\"\n\n  tasks:\n    - name: Clone VM from Cloud-Init template\n      community.general.proxmox_kvm:\n        api_host: \"{{ api_host }}\"\n        api_token_id: \"{{ api_token_id }}\"\n        api_token_secret: \"{{ api_token_secret }}\"\n        node: \"{{ node }}\"\n        name: \"{{ vm_name }}\"\n        vmid: \"{{ new_vmid }}\"\n        clone: \"{{ template_vmid }}\"\n        full: true\n 
       storage: \"{{ storage }}\"\n        timeout: 300\n        state: present\n\n    - name: Configure VM hardware and Cloud-Init\n      community.general.proxmox_kvm:\n        api_host: \"{{ api_host }}\"\n        api_token_id: \"{{ api_token_id }}\"\n        api_token_secret: \"{{ api_token_secret }}\"\n        node: \"{{ node }}\"\n        vmid: \"{{ new_vmid }}\"\n        memory: \"{{ vm_memory }}\"\n        cores: \"{{ vm_cores }}\"\n        ipconfig:\n          ipconfig0: \"{{ ipconfig }}\"\n        sshkeys: \"{{ ssh_keys }}\"\n        ciuser: \"ubuntu\"\n        update: true\n\n    - name: Start VM\n      community.general.proxmox_kvm:\n        api_host: \"{{ api_host }}\"\n        api_token_id: \"{{ api_token_id }}\"\n        api_token_secret: \"{{ api_token_secret }}\"\n        node: \"{{ node }}\"\n        vmid: \"{{ new_vmid }}\"\n        state: started\n```\n\nRun it:\n\n```bash\nansible-playbook playbooks/create_vm.yml --vault-password-file ~/.vault_pass\n```\n\nOn NVMe-to-NVMe (local-lvm to local-lvm on the same node), a full 32 GB clone completes in under 90 seconds. 
On HDD-backed storage, plan for 3-5 minutes and set `timeout` accordingly.\n\n### Deploying Multiple VMs with a Loop\n\nThe real payoff comes when you define your fleet in a variables file and loop over it:\n\n```yaml\n# vars/vms.yml\nvms:\n  - name: k3s-master-01\n    vmid: 101\n    ip: \"192.168.1.101\"\n    memory: 4096\n    cores: 2\n  - name: k3s-worker-01\n    vmid: 102\n    ip: \"192.168.1.102\"\n    memory: 8192\n    cores: 4\n  - name: k3s-worker-02\n    vmid: 103\n    ip: \"192.168.1.103\"\n    memory: 8192\n    cores: 4\n```\n\n```yaml\n- name: Clone and configure all VMs\n  community.general.proxmox_kvm:\n    api_host: \"{{ api_host }}\"\n    api_token_id: \"{{ api_token_id }}\"\n    api_token_secret: \"{{ api_token_secret }}\"\n    node: \"{{ node }}\"\n    name: \"{{ item.name }}\"\n    vmid: \"{{ item.vmid }}\"\n    clone: \"{{ template_vmid }}\"\n    full: true\n    storage: \"{{ storage }}\"\n    memory: \"{{ item.memory }}\"\n    cores: \"{{ item.cores }}\"\n    ipconfig:\n      ipconfig0: \"ip={{ item.ip }}/24,gw=192.168.1.1\"\n    sshkeys: \"{{ ssh_keys }}\"\n    ciuser: ubuntu\n    state: present\n    timeout: 300\n  loop: \"{{ vms }}\"\n```\n\nThree K3s nodes provisioned with one loop. Once they're up, [the K3s Kubernetes cluster setup guide on Proxmox](/articles/k3s-kubernetes-cluster-proxmox-vms/) picks up exactly where this leaves off.\n\n## Provisioning LXC Containers with the proxmox Module\n\nThe `community.general.proxmox` module handles LXC lifecycle. 
The interface differs from `proxmox_kvm` — you specify a template from a storage pool rather than cloning a VMID.\n\nFirst, download the template on the Proxmox node:\n\n```bash\npveam update\npveam available | grep debian-12\npveam download local debian-12-standard_12.7-1_amd64.tar.zst\n```\n\nThen the Ansible tasks:\n\n```yaml\n- name: Create Debian 12 LXC container\n  community.general.proxmox:\n    api_host: \"{{ api_host }}\"\n    api_token_id: \"{{ api_token_id }}\"\n    api_token_secret: \"{{ api_token_secret }}\"\n    node: \"{{ node }}\"\n    vmid: 200\n    hostname: \"monitoring-01\"\n    ostemplate: \"local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst\"\n    storage: local-lvm\n    disk: 8\n    memory: 1024\n    swap: 512\n    cores: 2\n    netif:\n      net0: \"name=eth0,bridge=vmbr0,ip=192.168.1.200/24,gw=192.168.1.1\"\n    password: \"{{ vault_lxc_root_password }}\"\n    pubkey: \"{{ ssh_keys }}\"\n    unprivileged: true\n    features:\n      - nesting=1\n    state: present\n\n- name: Start LXC container\n  community.general.proxmox:\n    api_host: \"{{ api_host }}\"\n    api_token_id: \"{{ api_token_id }}\"\n    api_token_secret: \"{{ api_token_secret }}\"\n    node: \"{{ node }}\"\n    vmid: 200\n    state: started\n```\n\nSetting `unprivileged: true` and `nesting=1` is the right default for containers that will run Docker — [running Docker inside LXC containers on Proxmox](/articles/docker-inside-lxc-containers-proxmox/) covers the additional `lxc.apparmor.profile` and keyctl settings you'll apply after the container first starts.\n\n**Gotcha**: If your Proxmox node uses a self-signed certificate, `proxmoxer` will refuse the API connection with a verification error. Install a proper TLS certificate or pass `validate_certs: false` in each module call for automation running entirely on a trusted internal network.\n\n## Automating Node-Level Configuration Over SSH\n\nAnsible shines at the OS layer too. 
This play handles the common Proxmox post-install tasks every node needs:\n\n```yaml\n# playbooks/configure_node.yml\n---\n- name: Configure Proxmox node base settings\n  hosts: proxmox\n  become: false\n\n  tasks:\n    - name: Disable enterprise repo\n      ansible.builtin.copy:\n        dest: /etc/apt/sources.list.d/pve-enterprise.list\n        content: |\n          # deb https://enterprise.proxmox.com/debian/pve trixie pve-enterprise\n\n    - name: Add no-subscription repo\n      ansible.builtin.apt_repository:\n        repo: \"deb http://download.proxmox.com/debian/pve trixie pve-no-subscription\"\n        state: present\n        filename: pve-no-subscription\n\n    - name: Update all packages\n      ansible.builtin.apt:\n        update_cache: true\n        upgrade: dist\n\n    - name: Set swappiness for VM host\n      ansible.posix.sysctl:\n        name: vm.swappiness\n        value: \"10\"\n        sysctl_file: /etc/sysctl.d/99-proxmox.conf\n        reload: true\n\n    - name: Install QEMU guest agent\n      ansible.builtin.apt:\n        name: qemu-guest-agent\n        state: present\n```\n\nFor SSH hardening, fail2ban, and Proxmox firewall rules, keep a separate `harden_node.yml`. Running it via Ansible means every new node gets an identical security baseline automatically — exactly the defense-in-depth approach that [hardening Proxmox with firewall, fail2ban, and SSH config](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) describes.\n\n## Structuring a site.yml That Ties Everything Together\n\nOnce you have individual playbooks, a top-level `site.yml` composes them in the correct order:\n\n```yaml\n# site.yml\n---\n- import_playbook: playbooks/configure_node.yml\n- import_playbook: playbooks/harden_node.yml\n- import_playbook: playbooks/create_vms.yml\n- import_playbook: playbooks/create_lxcs.yml\n```\n\nRun order matters. Configure and harden the node before creating workloads. 
If the node play reconfigures network bridges, a VM created before that step completes will start with no network interface.\n\n```bash\n# Dry run with diff output first\nansible-playbook site.yml --check --diff --vault-password-file ~/.vault_pass\n\n# Apply\nansible-playbook site.yml --vault-password-file ~/.vault_pass\n```\n\n## Common Pitfalls to Avoid\n\n**VMID conflicts**: If a playbook targets a VMID already in use, `proxmox_kvm` fails with a confusing API error rather than a clear message. Check `pvesh get /nodes/pve01/qemu` before assigning VMIDs in automation, or reserve a dedicated range above 200 exclusively for Ansible-managed workloads.\n\n**Clone timeout on slow storage**: The default `timeout` for `proxmox_kvm` is 30 seconds. A full clone to HDD-backed storage will time out and leave a partial VM. Set `timeout: 300` as a minimum — even NVMe-to-NVMe can push past 90 seconds for a 100 GB disk.\n\n**SSH host key collisions**: When a VM is rebuilt with the same IP, Ansible will refuse SSH because the host key changed. Add this to `ansible.cfg`:\n\n```ini\n[defaults]\nhost_key_checking = False\n```\n\nOr use the `known_hosts` module to explicitly clear stale entries before connecting.\n\n**API privilege scope**: The role above is broad — intentionally so for a homelab. For production, the minimum set for VM and LXC management is: `VM.Allocate`, `VM.Config.CPU`, `VM.Config.Memory`, `VM.Config.Disk`, `VM.Config.Network`, `VM.Config.Cloudinit`, `VM.PowerMgmt`, `Datastore.AllocateSpace`, `Datastore.Audit`, `SDN.Use`.\n\n**Proxmox VE 9.1 and community.general 9.0**: The `proxmox_kvm` module gained `scsi_discard` support and improved Cloud-Init disk handling in community.general 9.0. 
If you're on Proxmox VE 9.1 and see unexpected disk configuration behavior, upgrade the collection before debugging your playbook.\n\n## Conclusion\n\nWith these playbooks committed to git, standing up a new VM or LXC container on Proxmox takes one command and under two minutes — no UI clicks, no config drift, no forgotten settings. The logical next step is adding VLAN and bridge configuration to your node-level play (Proxmox's `ifupdown2` config in `/etc/network/interfaces` maps cleanly to Ansible's `template` module), then tagging your VMs with Proxmox pools so you can filter by environment in the dashboard. From there, your infrastructure is a pull request.\n",
            "url": "https://proxmoxpulse.com/articles/automate-proxmox-ansible-vm-playbooks/",
            "title": "Automate Proxmox VE with Ansible: Full VM Playbooks",
            "summary": "Provision Proxmox VMs and LXC containers with Ansible using community.general and API tokens. Get repeatable, zero-touch VM deployments in under 90 seconds.",
            "date_modified": "2026-05-03T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "ansible",
                "automation",
                "infrastructure-as-code",
                "vm-management",
                "proxmox"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-backup-server-s3-storage-backend/",
            "content_html": "\nProxmox Backup Server 4.2 adds native S3-compatible object storage as a remote sync target, eliminating the need for a second PBS instance just to get backups off-site. Configure a remote once, point it at Backblaze B2, Wasabi, Cloudflare R2, or your own MinIO, and PBS handles chunk-level sync with full deduplication awareness. By the end of this guide you will have your backups replicating to object storage on a schedule with retention enforced at the S3 end, no extra hardware required.\n\n## Key Takeaways\n\n- **New remote type**: PBS 4.2 introduces a native `S3` backend under Remotes — no relay server needed\n- **Chunk-aware sync**: Only new 4 MB chunks transfer after the first run; unchanged data never re-uploads\n- **Any S3-compatible endpoint**: Backblaze B2, Wasabi, Cloudflare R2, MinIO, and AWS S3 all work with the same config\n- **Storage overhead**: Budget 15-25% more S3 usage than your local datastore due to manifests and index metadata\n- **Cost**: 1 TB of offsite retention runs $6-7/month on B2 or Wasabi; R2 charges zero egress fees\n\n## S3 Sync vs Running a Second PBS Instance\n\nBefore PBS 4.2, the standard offsite approach was pulling backups to a second PBS node via the built-in replication protocol. That works well — but it means a second machine, potentially a second enterprise subscription, and another piece of infrastructure to patch and monitor.\n\nS3 sync trades hardware cost for a monthly per-GB fee. 
Whether that is the right tradeoff depends on your datastore size:\n\n| | Second PBS Node | S3 Sync (PBS 4.2) |\n|---|---|---|\n| Hardware cost | $150–500+ one-time | $0 |\n| Monthly operating cost | Electricity (~$10–15) | $6–7/TB |\n| Restore speed | Full PBS API, fast | Pull chunks from S3 first |\n| Deduplication awareness | Full (native) | Chunk-level (native in 4.2) |\n| Disaster recovery | Needs second machine up | S3 is always available |\n| Break-even point | ~2–3 TB stored | Under 2 TB stored |\n\nFor homelab setups under 2 TB — and any small-business scenario where the second PBS machine sits idle most of the time — S3 sync is cheaper and simpler. Above 2 TB, the monthly S3 cost starts approaching the electricity cost of a dedicated machine.\n\nIf you are setting up PBS for the first time, the [Automated Backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) guide covers the datastore and backup job fundamentals before you layer on S3 sync.\n\n## Prerequisites\n\nYou will need:\n\n- **PBS 4.2 or later** — run `proxmox-backup-manager version` to check; if on 4.1 or earlier, upgrade first:\n\n```bash\napt update && apt full-upgrade\n```\n\n- An account with Backblaze B2, Wasabi, Cloudflare R2, MinIO, or AWS S3\n- A bucket created in your chosen provider (covered below)\n- Network access from your PBS host to the S3 endpoint — no NAT hairpins, no intercepting proxies without `HTTP_PROXY` configured\n\nPBS 4.2 requires Debian 13 Trixie as the base OS. If your PBS runs on Bookworm, the upgrade path requires a full OS upgrade before you can reach PBS 4.2.\n\n## Creating a Bucket and Access Keys\n\n### Backblaze B2\n\nB2 is the most common homelab choice at $6/TB/month with no minimum storage term and free egress up to 3x your stored data per day.\n\n1. Log into your B2 account and go to **Buckets → Create a Bucket**\n2. Name the bucket (e.g., `pbs-offsite-2026`) — this string appears in your endpoint URL\n3. 
Set **Files in Bucket** to **Private**\n4. Go to **App Keys → Add a New Application Key**\n5. Scope it to your bucket, enable **Read and Write**, and save the `keyID` and `applicationKey`\n\nB2's S3-compatible endpoint format — find your exact region on the bucket detail page:\n\n```\nhttps://s3.us-west-004.backblazeb2.com\n```\n\n### Cloudflare R2\n\nR2 charges zero egress fees, making it the right pick if you do frequent restores from S3. The first 10 GB of storage per month is free.\n\n1. In the Cloudflare dashboard go to **R2 → Create bucket**\n2. Choose a location hint near your PBS host\n3. Go to **R2 → Manage R2 API Tokens → Create API Token**, grant Object Read and Write scoped to your bucket\n4. Note your **Account ID** from the R2 overview page\n\nR2 endpoint format — substitute your 32-character account ID:\n\n```\nhttps://<ACCOUNT_ID>.r2.cloudflarestorage.com\n```\n\n### Wasabi\n\nWasabi matches B2 at $6.99/TB/month but enforces a **90-day minimum storage policy**. Deleting objects stored less than 90 days still bills for the full 90 days. Do not use Wasabi with prune schedules shorter than 90 days or you will pay for data you no longer hold.\n\nWasabi regional endpoint format:\n\n```\nhttps://s3.us-east-1.wasabisys.com\n```\n\n## How to Add an S3 Remote in PBS 4.2\n\n### Via the Web UI\n\n1. Open the PBS web UI at `https://<pbs-ip>:8007`\n2. Navigate to **Configuration → Remotes**\n3. Click **Add** and select **Type: S3**\n4. Fill in the fields:\n   - **ID**: a short name (e.g., `b2-offsite`)\n   - **Endpoint**: your full S3 endpoint URL\n   - **Bucket**: your bucket name\n   - **Region**: the bucket's region string\n   - **Access Key**: your `keyID`\n   - **Secret Key**: your `applicationKey`\n5. 
Click **Test Connection** — PBS runs a list operation against the bucket and surfaces auth errors immediately before you save\n\n### Via CLI\n\n```bash\nproxmox-backup-manager remote create b2-offsite \\\n  --type s3 \\\n  --endpoint \"https://s3.us-west-004.backblazeb2.com\" \\\n  --bucket \"pbs-offsite-2026\" \\\n  --region \"us-west-004\" \\\n  --access-key \"your-keyID-here\" \\\n  --secret-key \"your-applicationKey-here\"\n```\n\nVerify the remote saved correctly:\n\n```bash\nproxmox-backup-manager remote list\n```\n\nExpected output:\n\n```\n┌─────────────┬──────┬───────────────────────────────────────────────┐\n│ name        │ type │ endpoint                                      │\n╞═════════════╪══════╪═══════════════════════════════════════════════╡\n│ b2-offsite  │ s3   │ https://s3.us-west-004.backblazeb2.com        │\n└─────────────┴──────┴───────────────────────────────────────────────┘\n```\n\n## Setting Up Sync Jobs and Retention\n\n### Creating the Sync Job\n\nVia the web UI:\n\n1. Go to **Datastore → \\<your-datastore\\> → Sync Jobs**\n2. Click **Add Sync Job**\n3. Set **Remote** to your S3 remote\n4. **Remote Store**: the namespace path to use in the S3 bucket — use your datastore name (e.g., `backups`)\n5. **Schedule**: `daily` or a systemd calendar event like `02:00` for 2 AM daily — PBS schedules use calendar-event syntax, not cron expressions\n6. **Remove Vanished**: enable this — it deletes S3 objects for snapshots that have been pruned locally\n\nVia CLI:\n\n```bash\nproxmox-backup-manager sync-job create daily-s3-sync \\\n  --store backups \\\n  --remote b2-offsite \\\n  --remote-store backups \\\n  --schedule \"02:00\" \\\n  --remove-vanished true\n```\n\nTrigger the first sync manually before relying on the schedule:\n\n```bash\nproxmox-backup-manager sync-job run daily-s3-sync\n```\n\nWatch progress under **Administration → Task History**. For a 500 GB datastore over a 100 Mbit uplink, expect the initial upload to take roughly 11–12 hours — 100 Mbit/s moves about 45 GB per hour — so plan the first sync to run overnight. 
Incremental syncs after that transfer only new chunks — typically 2–8 GB/day for a homelab with 3–4 VMs running daily backups.\n\n### How Retention Propagates to S3\n\nPBS does not enforce retention directly on S3. Pruning happens locally first, then the `remove-vanished` flag propagates deletions on the next sync cycle. The sequence:\n\n1. Local prune job runs and removes old snapshots from the local datastore\n2. Next sync job runs with `remove-vanished: true`\n3. PBS compares the S3 object list against local state and deletes orphaned chunk files\n\nYour S3 bucket will lag local retention by up to one sync cycle — 24 hours at a daily schedule. On Wasabi specifically: a snapshot pruned at day 89 still triggers the full 90-day minimum charge because Wasabi sees an early deletion. Set your Wasabi prune schedule to keep at least 90 days minimum.\n\n## What PBS Actually Uploads to S3\n\nThis is where the most common misconception lives. PBS stores backups as content-addressed 4 MB chunk files. When syncing to S3, it uploads:\n\n- **Chunk files** (`.blob`) — deduplicated backup data, already compressed\n- **Snapshot manifests** (`.manifest`) — links each snapshot to its chunk list\n- **Index files** (`.fidx`, `.didx`) — mapping tables for file-level and block-device backups\n\nIt does **not** sync the task log or the searchable catalog. This means you can restore data from S3 on a fresh PBS instance by registering the S3 remote as a source datastore, but you will need to rebuild the catalog afterward:\n\n```bash\nproxmox-backup-client catalog rebuild --repository <user>@<pbs-host>:backups\n```\n\nThe 15–25% storage overhead estimate comes from these metadata files. A local datastore holding 800 GB of deduplicated backup data will occupy roughly 920 GB to 1 TB on S3.\n\n## Provider Cost Comparison\n\n| Provider | Storage/TB/month | Egress | Min. 
term | Best for |\n|---|---|---|---|---|\n| Backblaze B2 | $6.00 | Free (3x stored/day) | None | Low-restore-frequency homelabs |\n| Wasabi | $6.99 | Free | 90 days | Long-retention cold storage |\n| Cloudflare R2 | $15.00 | Free | None | Frequent restores, zero surprise bills |\n| MinIO (self-hosted) | Hardware only | Free | None | Air-gapped or LAN backup copies |\n| AWS S3 Standard | $23.00 | $0.09/GB | None | Avoid for homelab-scale volumes |\n\nFor a homelab with 2 TB stored, daily incrementals, and 30-day retention: B2 costs ~$12/month, Wasabi ~$14/month, R2 ~$30/month. R2's zero-egress advantage only makes financial sense if you are pulling hundreds of GB in restores per month. Most homelabs run few enough restores that B2 wins on total cost.\n\n## Securing Credentials and Encrypting the Datastore\n\nPBS stores S3 credentials in `/etc/proxmox-backup/remotes.cfg`, readable only by root. For most setups that is sufficient. If your PBS runs as a VM on a shared Proxmox host with other administrators, enabling datastore-level encryption means data pushed to S3 is client-side encrypted before it leaves your network — your S3 provider stores only ciphertext.\n\nEnable encryption in PBS: **Datastore → \\<name\\> → Encryption → Generate Key**. Store the key file somewhere other than the PBS host itself — a password manager or a dedicated secrets store. Lose the key and your S3 backups become permanently unrecoverable, so treat it with the same discipline as your SSH private keys.\n\nThe [Build a Private Cloud at Home with Proxmox VE](/articles/build-private-cloud-home-proxmox-ve/) guide covers the broader architecture behind layered backup strategies on Proxmox, including where PBS fits alongside Proxmox's native VM snapshot tooling. 
For the host-level hardening that protects PBS credentials in the first place, the [Hardening Proxmox VE: Firewall, fail2ban, and SSH Security](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) guide is the companion read.\n\n## Troubleshooting Common Sync Errors\n\n**`AuthorizationHeaderMalformed`**\nThe region string does not match what the provider expects. B2 regions look like `us-west-004`, not `us-west-1`. Copy the exact region string directly from the bucket detail page in the B2 console.\n\n**`403 Forbidden` on test connection**\nYour API key lacks list permission on the bucket. For B2, verify the application key includes `listBuckets`, `readFiles`, and `writeFiles` capabilities. For R2, confirm the API token is scoped to the correct bucket.\n\n**Sync shows 0 bytes transferred**\nNormal if no new backups have been created since the last sync. PBS only transfers chunks absent from the remote. Confirm backups are running:\n\n```bash\nproxmox-backup-client snapshots --repository <user>@<pbs-host>:backups\n```\n\n**Initial sync is stuck at very low throughput**\nCheck for a rate limit under **Configuration → Bandwidth Limits** in the PBS web UI. Also check whether B2's free API tier cap (2,500 Class B operations per day) has been hit — the client applies exponential backoff when rate-limited. For large datastores, upgrade to a paid B2 API tier or spread the initial sync over several days using bandwidth throttling.\n\n## Conclusion\n\nPBS 4.2's S3 backend turns any S3-compatible bucket into a fault-tolerant off-site backup copy for $6–7/month per TB with no additional hardware. Set up the remote, create a daily sync job with `remove-vanished` enabled, and your local prune policy propagates to S3 automatically. The step most people skip: do a test restore from S3 before you need it — register your S3 bucket as a source on a temporary PBS instance or spare VM, browse the snapshots, and pull one back. 
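\n\nA minimal version of that drill with `proxmox-backup-client`, assuming the S3-backed datastore is registered as `backups` on the temporary instance (the snapshot path and archive name below are placeholders; copy real ones from the `snapshots` output):\n\n```bash\n# List what the offsite copy actually contains\nproxmox-backup-client snapshots --repository root@pam@<pbs-ip>:backups\n\n# Pull one archive from a snapshot back to local disk\nproxmox-backup-client restore \\\n  \"vm/100/2026-05-01T02:00:00Z\" drive-scsi0.img /tmp/restore-test.img \\\n  --repository root@pam@<pbs-ip>:backups\n```\n\n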
That is the only confirmation that your offsite copy actually works.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-backup-server-s3-storage-backend/",
            "title": "Proxmox Backup Server 4.2 S3 Storage Backend Setup",
            "summary": "Configure PBS 4.2's S3 storage backend to sync backups to Backblaze B2, Wasabi, or Cloudflare R2. Covers sync jobs, retention propagation, encryption, and cost per TB.",
            "date_modified": "2026-05-02T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "proxmox-backup-server",
                "s3",
                "backup",
                "object-storage",
                "offsite-backup"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-ve-small-business-vmware-alternative/",
            "content_html": "\nIf your small business runs on VMware vSphere and you're watching the renewal invoice climb past what the infrastructure actually costs to run, Proxmox VE 9.1 is a credible path out. This guide maps the VMware features your ops team depends on to their Proxmox equivalents, then walks through the four configuration areas that matter most in a production environment: role-based access control, high availability, backups, and network segmentation. By the end, you'll know exactly what Proxmox delivers — and the two real gaps you need to plan around.\n\n## Key Takeaways\n\n- **Zero licensing cost**: Proxmox VE is free (AGPL-3.0); the optional enterprise subscription adds stable repos and Bugzilla support, not features.\n- **HA is built in**: Proxmox HA uses Corosync + fencing on commodity hardware — three nodes is the minimum for a stable quorum.\n- **Feature parity**: vMotion maps to live migration, vCenter maps to the PVE web UI, vSAN maps to Ceph, and vDS port groups map to VLAN-aware bridges or SDN VNets.\n- **Backup is first-party**: Proxmox Backup Server handles incremental, deduplicated backups without a third-party tool.\n- **Real gap**: There is no equivalent to VMware's Distributed Resource Scheduler (DRS) — load balancing is manual.\n\n## Why Small Businesses Are Leaving VMware\n\nBroadcom's February 2024 licensing overhaul eliminated perpetual vSphere licenses and moved everything to subscription bundles. The smallest tier — vSphere Foundation — starts at approximately $250 per core per year, with a 16-core minimum per CPU. Two sockets on a single server means 32 licensed cores: that's $8,000 per host per year before support. A five-server cluster that ran on a one-time $15,000 perpetual purchase now costs $40,000+ annually.\n\nProxmox VE's pricing is the inverse. The software is free. 
The enterprise repository subscription — which gives access to the stable `pve-enterprise` apt repo and Proxmox's bug tracker — costs €134 per socket per year. Most small businesses either pay this for their production nodes, or run the `pve-no-subscription` repo and accept a slightly less conservative update cadence. Either way, the licensing argument is decisive.\n\n## How Proxmox VE Compares to VMware vSphere Feature by Feature\n\nThis is the honest map, not the marketing version.\n\n| VMware Feature | Proxmox Equivalent | Notes |\n|---|---|---|\n| ESXi hypervisor | KVM/QEMU (Proxmox VE 9.1) | Full parity for Linux and Windows guests |\n| vCenter Server | Proxmox web UI + pvesh REST API | No Windows dependency |\n| vMotion (live migration) | `qm migrate --online` | Works without shared storage via NBD |\n| HA / FT | Proxmox HA Manager + Corosync | No Fault Tolerance (zero-downtime mirroring) |\n| vDS / NSX-T | Linux bridges + SDN VNets | SDN needs Open vSwitch for advanced routing |\n| vSAN | Ceph (built-in since PVE 5) | Requires 3+ nodes, 3+ OSDs per node |\n| VMFS / NFS datastores | LVM-thin, ZFS, NFS, iSCSI, Ceph RBD | All first-class in the storage panel |\n| vROps / DRS | No equivalent | Workload balancing is manual |\n| RBAC | pveum + realm-based permissions | AD/LDAP integration included |\n| VMware Tools | QEMU Guest Agent | Must be installed manually per guest |\n\nThe absence of DRS is the most meaningful gap for larger clusters. For 3-5 hosts with predictable workloads, manual migration is fine. For 15+ hosts with spiky load profiles, you will feel it.\n\n## What You Need Before You Start\n\nHardware minimums for a production cluster:\n\n- **Three physical servers** — Corosync quorum requires an odd number of votes; two-node clusters need an external quorum device and are fragile under any failure scenario.\n- **Dedicated cluster network** — A separate 1 GbE NIC for Corosync heartbeats keeps cluster traffic off your VM network. 
Use 10 GbE if you're running Ceph.\n- **Shared or replicated storage** — Ceph for high-availability storage across nodes, or ZFS replication paired with PBS for nodes with local NVMe.\n- **IPMI or iDRAC access** — out-of-band consoles are how you recover a node that has fenced itself or lost networking; Proxmox HA fences through the node's own watchdog rather than by remote power-cycling.\n\nIf you're starting from scratch rather than migrating, [installing Proxmox VE on any hardware](/articles/install-proxmox-ve-on-any-hardware/) covers ISO prep, BIOS/UEFI settings, and whether to use ext4 or ZFS for the root disk.\n\n## How to Set Up Role-Based Access Control\n\nProxmox ships with 15 built-in roles. For a small business, these four cover most scenarios:\n\n| Role | What It Can Do |\n|---|---|\n| Administrator | Full cluster access |\n| PVEVMAdmin | Create, configure, and delete VMs — no host management |\n| PVEVMUser | Start, stop, and access VM consoles only |\n| PVEAuditor | Read-only view of everything |\n\nCreate a group for VM operators, assign it a role scoped to `/vms`, and add users:\n\n```bash\n# Create the group\npveum group add vmops --comment \"VM Operators\"\n\n# Grant PVEVMAdmin on all VMs\npveum acl modify /vms --groups vmops --roles PVEVMAdmin\n\n# Create a local user and add it to the group\npveum user add jsmith@pve --comment \"Jane Smith\"\npveum user modify jsmith@pve --groups vmops\n```\n\nIf your company has Active Directory, connect it as an authentication realm:\n\n```bash\npveum realm add corp-ad \\\n  --type ad \\\n  --domain corp.local \\\n  --server1 192.168.1.10 \\\n  --default 0 \\\n  --comment \"Corporate Active Directory\"\n```\n\nAD users log in as `username@corp-ad`. 
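\n\nFor example, to give an AD-backed account the same operator rights (the user entry must exist in Proxmox first, created here manually since realm sync is optional):\n\n```bash\n# Register the AD-backed user and attach it to the local vmops group\npveum user add jsmith@corp-ad --comment \"Jane Smith (AD)\"\npveum user modify jsmith@corp-ad --groups vmops\n```\n\n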
Assign them to the same groups with the same `pveum` commands — no separate role system to learn.\n\n## How to Configure Proxmox High Availability\n\n### Build the Cluster First\n\nCreate the cluster on your first node:\n\n```bash\npvecm create corp-cluster\n```\n\nJoin the remaining nodes (run on each additional server):\n\n```bash\npvecm add 10.0.1.101\n```\n\nVerify cluster health before doing anything else:\n\n```bash\npvecm status\n```\n\nYou need to see `Quorate: Yes`. A non-quorate cluster will not execute HA operations — adding VMs to HA on a broken cluster creates confusion, not safety.\n\n### Configure Fencing\n\nFencing is non-negotiable for HA. Without it, Proxmox will not restart a VM from a node it can't reach — correctly, because that VM might still be running and a second start would corrupt shared storage. Proxmox implements this as watchdog-based self-fencing: every node runs `watchdog-mux`, and a node that loses quorum stops feeding its watchdog and hard-resets itself before the cluster recovers its services elsewhere. The kernel `softdog` module is the default and needs no configuration. If your servers expose a hardware watchdog (IPMI and iDRAC boards usually do), point the HA stack at it in `/etc/default/pve-ha-manager`:\n\n```ini\n# Use the IPMI hardware watchdog instead of the softdog default\nWATCHDOG_MODULE=ipmi_watchdog\n```\n\nNo per-node fence-device configuration is needed. IPMI access still matters, but for manually power-cycling a wedged node, not for HA itself.\n\n### Add VMs to HA\n\nCreate an HA group defining which nodes can run the workload:\n\n```bash\nha-manager groupadd prod-vms --nodes pve1,pve2,pve3 --restricted 0\n```\n\nAdd a VM to HA management:\n\n```bash\nha-manager add vm:100 \\\n  --group prod-vms \\\n  --max_restart 3 \\\n  --max_relocate 1 \\\n  --state started\n```\n\nWith `max_restart 3` and `max_relocate 1`, Proxmox attempts three in-place restarts, then one migration to another node, then marks the service failed. Expect a 2-3 minute total fence-and-restart cycle for a 50 GB VM on shared NFS. Ceph with NVMe OSDs cuts this to under 90 seconds.\n\n## Backup Strategy with Proxmox Backup Server\n\nVMware's native backup story has always required third-party tools — Veeam, Nakivo, or Commvault. 
Proxmox Backup Server is a first-party solution with client-server deduplication that achieves 3:1 to 5:1 ratios on typical mixed-workload VMs. Run it on a dedicated machine or a separate VM on your cluster.\n\nOn the PBS machine, create a datastore:\n\n```bash\nproxmox-backup-manager datastore create corp-backups /mnt/backup-disk\n```\n\nCreate a service account for Proxmox to authenticate against:\n\n```bash\nproxmox-backup-manager user create pvebackup@pbs --password 'StrongBackupPass'\nproxmox-backup-manager acl update /datastore/corp-backups \\\n  --auth-id pvebackup@pbs \\\n  --role DatastoreBackup\n```\n\nGet the PBS server's TLS fingerprint — you'll need it when adding the storage in the Proxmox web UI:\n\n```bash\nproxmox-backup-manager cert info | grep Fingerprint\n```\n\nIn the Proxmox web UI, go to Datacenter → Storage → Add → Proxmox Backup Server. Provide the PBS IP, fingerprint, datastore name, and the `pvebackup@pbs` credentials.\n\nSchedule nightly backups at 02:00 with 14-day retention:\n\n```bash\npvesh create /cluster/backup \\\n  --vmid 100,101,102,103 \\\n  --storage pbs-corp \\\n  --schedule \"02:00\" \\\n  --prune-backups keep-daily=14 \\\n  --compress zstd \\\n  --mode snapshot\n```\n\nFor the full setup including verification jobs and retention policies, [automated backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) covers everything from initial PBS install through verifying backup integrity on a schedule.\n\n## Network Segmentation for Department Isolation\n\nSmall business networks typically need at minimum four segments: management, production, backup traffic, and DMZ. 
The cleanest way to handle this in Proxmox is a VLAN-aware bridge on each host.\n\nEdit `/etc/network/interfaces` on each node:\n\n```bash\nauto vmbr0\niface vmbr0 inet static\n    address 10.0.1.101/24\n    gateway 10.0.1.1\n    bridge-ports eno1\n    bridge-stp off\n    bridge-fd 0\n    bridge-vlan-aware yes\n    bridge-vids 2-4094\n```\n\nApply without rebooting:\n\n```bash\nifreload -a\n```\n\nAssign VMs to their department VLANs:\n\n```bash\n# Finance on VLAN 20\nqm set 101 --net0 virtio,bridge=vmbr0,tag=20\n\n# Production app server on VLAN 30\nqm set 102 --net0 virtio,bridge=vmbr0,tag=30\n\n# DMZ on VLAN 40\nqm set 103 --net0 virtio,bridge=vmbr0,tag=40\n```\n\nYour upstream switch must present the Proxmox-facing port as a trunk carrying all relevant VLANs. The complete bridge configuration — including trunk port setup and routing between segments via a firewall VM — is in [configuring VLANs on Proxmox with Linux bridges](/articles/configure-vlans-proxmox-linux-bridges/).\n\n## Common Gotchas Before You Go Live\n\n**QEMU Guest Agent is not installed automatically.** Without it, VM shutdowns from the UI rely on ACPI signals alone — expect 30-60 seconds of waiting, and snapshot quiescing will not work on running VMs.\n\n```bash\n# Debian / Ubuntu guests\napt install qemu-guest-agent\nsystemctl enable --now qemu-guest-agent\n```\n\nFor Windows guests, install from the VirtIO ISO and select the Guest Agent component during setup.\n\n**Windows 11 needs explicit EFI and TPM config.** VMware handles Secure Boot silently through vCenter policies. 
In Proxmox, you add the EFI disk and virtual TPM 2.0 manually:\n\n```bash\nqm set 110 \\\n  --bios ovmf \\\n  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \\\n  --tpmstate0 local-lvm:1,version=v2.0\n```\n\nSkip this and you'll hit a `Windows requires a TPM 2.0` block mid-installation.\n\n**Two-node clusters need an external quorum device.** If you only have two servers at launch, add a quorum device on a third machine — a Raspberry Pi 4 works fine:\n\n```bash\npvecm qdevice setup 10.0.1.250\n```\n\nWithout it, losing one node takes down quorum and the surviving node fences itself.\n\n**The default install leaves security gaps.** The Proxmox installer enables root login and exposes the web UI on port 8006 with no rate limiting. Before connecting to production networks, work through the [Proxmox firewall, fail2ban, and SSH hardening guide](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) to lock down admin access, configure two-factor authentication, and restrict API tokens to specific paths.\n\n## When Proxmox Is Not the Right Answer\n\nBe honest about these cases before committing:\n\n- **You need VMware Fault Tolerance** — zero-RPO, sub-second mirrored failover. Proxmox HA has a 2-3 minute restart window. There is no FT equivalent.\n- **You have VMware-certified enterprise apps** — some Oracle and SAP configurations have support contracts that specify VMware. Running on KVM may void those agreements.\n- **Your team is VMware-certified and retraining costs are real** — the Proxmox CLI and permission model take about a week of hands-on time to internalize. 
For very small teams, that cost can flip the math.\n\nOutside these specific constraints, Proxmox VE handles general-purpose production workloads cleanly and without ongoing licensing overhead.\n\n## Conclusion\n\nProxmox VE 9.1 gives small businesses HA, RBAC, first-party backups, and VLAN segmentation at zero licensing cost — the migration effort is real, but the operational model is straightforward once you know the `pveum`, `pvecm`, and `ha-manager` tools. Plan a week of parallel testing before cutting over production workloads. If you're building the cluster fresh, start with [installing Proxmox VE on any hardware](/articles/install-proxmox-ve-on-any-hardware/), then return here to stand up the business-critical configuration.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-ve-small-business-vmware-alternative/",
            "title": "Proxmox VE for Small Business: A Free VMware Alternative",
            "summary": "Proxmox VE 9.1 replaces VMware vSphere for small business at zero cost. Compare features, configure HA, RBAC, and backups with step-by-step commands.",
            "date_modified": "2026-05-01T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "proxmox",
                "vmware",
                "high-availability",
                "rbac",
                "proxmox-backup-server"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-open-vswitch-vm-networking/",
            "content_html": "\nOpen vSwitch (OVS) gives Proxmox three capabilities the default Linux bridge cannot match: per-port VLAN access mode, port mirroring to a dedicated monitoring VM, and VXLAN overlay tunnels for multi-node flat networks. By the end of this guide you will have a working OVS bridge on Proxmox VE 9.1 with at least one VM assigned to a VLAN access port — and a clear picture of exactly when the added complexity pays off versus when to stay with Linux bridges.\n\n## Key Takeaways\n\n- **OVS advantage**: Per-port VLAN assignment, port mirroring, and VXLAN tunnels that Linux bridges do not support natively.\n- **Version**: Open vSwitch 3.3 ships in Proxmox VE 9.1's Debian 13 base — no third-party repo required.\n- **Top gotcha**: Always open a serial console (IPMI/iDRAC/iLO) before editing `/etc/network/interfaces` on a live node — misconfiguration kills SSH access instantly.\n- **SDN conflict**: Proxmox SDN and manual OVS configuration conflict on the same bridge — pick one approach per node.\n- **Simpler path**: For basic VLAN trunking only, [configuring VLANs on Proxmox with Linux bridges](/articles/configure-vlans-proxmox-linux-bridges/) is lower-risk and fully sufficient.\n\n## When OVS Is Worth the Complexity\n\nLinux bridges handle VLAN trunking and basic isolation well. 
Switch to OVS when you need at least one of these:\n\n- **Access-port VLAN assignment** — the virtual switch port drops traffic into a specific VLAN; the guest sees plain untagged Ethernet and needs no in-guest VLAN configuration\n- **Port mirroring** — copy all frames from a production VM's tap interface to an IDS or monitoring VM (Zeek, Suricata inline mode) without touching the production guest\n- **VXLAN between nodes** — L2 overlay tunnels for VMs on separate physical hosts without a full Ceph or shared storage fabric\n- **QoS policing** — rate-limit a specific VM's uplink at the virtual switch layer, not inside the guest\n\nIf none of those scenarios apply, stay with Linux bridges. OVS misconfiguration is the fastest way to lock yourself out of a remote node with no graceful recovery path short of a serial console or physical keyboard.\n\n## Installing Open vSwitch on Proxmox VE 9.1\n\nOVS 3.3 is in the Debian 13 main repository — no extra sources needed:\n\n```bash\napt update\napt install openvswitch-switch -y\n```\n\nVerify the daemon is running:\n\n```bash\nsystemctl status ovs-vswitchd\n```\n\nCheck the exact version:\n\n```bash\novs-vsctl --version\n# ovs-vsctl (Open vSwitch) 3.3.x\n```\n\nThe `ovs-vsctl` tool is your control plane for all bridge and port configuration. Unlike `brctl`, it writes to `ovsdb-server`, a persistent database that survives `ovs-vswitchd` restarts. Think of `ovsdb-server` as the single source of truth for your virtual switch topology.\n\n## How to Configure the OVS Bridge in /etc/network/interfaces\n\n**Open a serial console before you touch anything.** IPMI, iDRAC, iLO — whatever your hardware provides. A single typo in `/etc/network/interfaces` will take down the management IP and leave you with no SSH path back in. 
This is not a theoretical risk; it is how most OVS-on-Proxmox incidents start.\n\nHere is the minimal working configuration: one physical NIC (`enp3s0`) uplinked into an OVS bridge (`vmbr0`), with the Proxmox host management IP on the bridge itself.\n\n```ini\nauto lo\niface lo inet loopback\n\nauto enp3s0\niface enp3s0 inet manual\n    ovs_type OVSPort\n    ovs_bridge vmbr0\n\nauto vmbr0\niface vmbr0 inet static\n    address 192.168.1.100/24\n    gateway 192.168.1.1\n    dns-nameservers 1.1.1.1\n    ovs_type OVSBridge\n    ovs_ports enp3s0\n```\n\nApply without rebooting:\n\n```bash\nifreload -a\n```\n\nIf `ifreload` is not available on your node:\n\n```bash\nifdown vmbr0 enp3s0 && ifup enp3s0 vmbr0\n```\n\nVerify the bridge is up and the physical port is attached:\n\n```bash\novs-vsctl show\n```\n\nExpected output:\n\n```\nBridge vmbr0\n    Port enp3s0\n        Interface enp3s0\n    Port vmbr0\n        Interface vmbr0\n            type: internal\n```\n\nThe `type: internal` port is how the Proxmox host IP lives on the bridge — it is an in-kernel virtual port, not a separate tap device. If you see it, your management IP is on the OVS bridge and SSH should work normally.\n\n## Assigning VMs to VLAN Access Ports\n\nThis is where OVS earns its complexity overhead. Assign a specific VLAN tag to a VM's tap interface and the guest sees completely untagged Ethernet — zero guest-side VLAN configuration needed.\n\nIn the Proxmox GUI, create or edit the VM and attach its NIC to bridge `vmbr0` with no VLAN tag set. 
Then from the host shell:\n\n```bash\n# VM 100, first NIC creates tap interface tap100i0\novs-vsctl set port tap100i0 tag=20\n```\n\nConfirm the assignment:\n\n```bash\novs-vsctl list port tap100i0 | grep tag\n# tag                 : 20\n```\n\nFor a firewall or router VM that needs to receive multiple tagged VLANs — pfSense, OPNsense — configure a trunk port instead:\n\n```bash\novs-vsctl set port tap200i0 trunks=10,20,30\n```\n\n### Persisting VLAN Assignments Across Reboots\n\nThe OVS database itself persists across reboots, but that does not save you here: Proxmox destroys tap interfaces when a VM stops and recreates them on start, so per-tap settings like `tag` disappear with the port. The simplest approach for a homelab is `up` hooks in `/etc/network/interfaces`:\n\n```ini\nauto vmbr0\niface vmbr0 inet static\n    address 192.168.1.100/24\n    gateway 192.168.1.1\n    ovs_type OVSBridge\n    ovs_ports enp3s0\n    up ovs-vsctl set port tap100i0 tag=20 || true\n    up ovs-vsctl set port tap200i0 trunks=10,20,30 || true\n```\n\nThe `|| true` prevents the bridge bring-up from failing when a tap interface does not yet exist at boot (it will not — VMs start after networking). For larger setups with many VMs, a systemd oneshot service running after `pve-guests.target` is more reliable than per-interface hooks.\n\n## How to Mirror VM Traffic to a Monitoring VM\n\nScenario: VM 100 is a production web server, VM 300 runs Zeek for network traffic analysis. You want all of VM 100's traffic mirrored to VM 300's NIC without touching the web server.\n\n```bash\novs-vsctl \\\n  -- --id=@src get port tap100i0 \\\n  -- --id=@dst get port tap300i0 \\\n  -- --id=@mirror create mirror name=web-mirror \\\n       select-src-port=@src select-dst-port=@src \\\n       output-port=@dst \\\n  -- add bridge vmbr0 mirrors @mirror\n```\n\nVerify the mirror is active:\n\n```bash\novs-vsctl list mirror\n```\n\nInside VM 300, put the NIC in promiscuous mode and point Zeek at it. The mirrored frames arrive unmodified — no VLAN stripping, no encapsulation. 
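\n\nInside the monitoring guest the setup is two commands, assuming its capture NIC appears as `eth0` (adjust for your guest's interface naming):\n\n```bash\n# Accept frames not addressed to this NIC\nip link set dev eth0 promisc on\n\n# Sanity check: mirrored frames should appear here before you involve Zeek\ntcpdump -i eth0 -nn -c 20\n```\n\n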
Expect roughly 5–8% CPU overhead on the host under sustained traffic due to the frame duplication path.\n\nTo remove the mirror when done:\n\n```bash\novs-vsctl clear bridge vmbr0 mirrors\n```\n\n## Setting Up a VXLAN Overlay Between Two Proxmox Nodes\n\nVXLAN creates a virtual L2 segment over an existing L3 connection, letting VMs on two separate physical hosts share a broadcast domain. This is the lightweight alternative to a full Ceph fabric when you are [building a private cloud at home with Proxmox](/articles/build-private-cloud-home-proxmox-ve/) and want VM-to-VM flat networking without shared storage dependencies.\n\n**Configuration:**\n- Node A management IP: `192.168.1.100`\n- Node B management IP: `192.168.1.101`\n- VNI (VXLAN Network Identifier): `100`\n- Overlay bridge name: `vxbr0`\n\n**On Node A:**\n\n```bash\novs-vsctl add-br vxbr0\novs-vsctl add-port vxbr0 vxlan0 \\\n  -- set interface vxlan0 type=vxlan \\\n     options:remote_ip=192.168.1.101 \\\n     options:key=100 \\\n     options:dst_port=4789\n```\n\n**On Node B:**\n\n```bash\novs-vsctl add-br vxbr0\novs-vsctl add-port vxbr0 vxlan0 \\\n  -- set interface vxlan0 type=vxlan \\\n     options:remote_ip=192.168.1.100 \\\n     options:key=100 \\\n     options:dst_port=4789\n```\n\nVMs attached to `vxbr0` on either node are now on the same L2 segment. Ping between them to confirm. 
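\n\nOne check worth doing immediately: VXLAN adds 50 bytes of encapsulation, so guests on the overlay need an MTU of 1450 when the underlay runs the standard 1500, or large packets will be silently fragmented or dropped. Verify from a guest with do-not-fragment pings (the overlay addresses here are hypothetical):\n\n```bash\n# 1422 ICMP payload + 8 ICMP header + 20 IP header = 1450 on the wire: must pass\nping -M do -s 1422 -c 3 10.99.0.2\n\n# 1472 would require a full 1500-byte overlay MTU: expect failures over VXLAN\nping -M do -s 1472 -c 3 10.99.0.2\n```\n\n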
Expect 5–8% throughput loss versus native L2 on a 10 GbE link due to VXLAN encapsulation — acceptable for almost all services except high-frequency storage traffic.\n\nPersist the overlay bridge in `/etc/network/interfaces` on each node:\n\n```ini\nauto vxbr0\nallow-ovs vxbr0\niface vxbr0 inet manual\n    ovs_type OVSBridge\n\nallow-vxbr0 vxlan0\niface vxlan0 inet manual\n    ovs_type OVSTunnel\n    ovs_bridge vxbr0\n    ovs_tunnel_type vxlan\n    ovs_tunnel_options options:remote_ip=192.168.1.101 options:key=100 options:dst_port=4789\n```\n\n## OVS vs Linux Bridge: Feature Comparison\n\n| Feature | Linux Bridge | Open vSwitch |\n|---|---|---|\n| VLAN trunking | Yes | Yes |\n| Per-port VLAN (access mode) | Requires SDN or manual `ip link` hacks | Native |\n| Port mirroring | No | Yes |\n| VXLAN tunnels | Partial (`ip link add ... type vxlan`) | Native, composable |\n| QoS policing | Limited (`tc` only) | Built-in via OVS queue config |\n| Proxmox GUI VM attachment | Full | Full |\n| Port-level config | GUI | CLI only |\n| Configuration file | `/etc/network/interfaces` | OVS DB + `/etc/network/interfaces` |\n| Misconfiguration risk | Low | Higher — lockout possible |\n\nThe practical takeaway: Proxmox's GUI sees OVS bridges as ordinary bridges for VM attachment. You assign VMs normally in the GUI, then fine-tune port behavior from the CLI. 
That hybrid workflow is comfortable within a week of daily use.\n\n## Troubleshooting Common OVS Issues on Proxmox\n\n**OVS bridge does not come up after reboot.** Verify that `openvswitch-switch` starts before the `networking` service and is enabled:\n\n```bash\nsystemctl status openvswitch-switch\nsystemctl enable openvswitch-switch\n```\n\nIf the service starts too late relative to `networking.service`, add an explicit `After=openvswitch-switch.service` ordering to a networking drop-in under `/etc/systemd/system/networking.service.d/`.\n\n**Tap interfaces missing from OVS after VM restart.** Proxmox creates tap devices on VM start and destroys them on VM stop. Your `/etc/network/interfaces` `up` hooks fired at boot when no taps existed yet. Use the `|| true` idiom above, or register a per-VM hookscript (`qm set 100 --hookscript local:snippets/ovs-tags.sh`, stored on a storage with the snippets content type) to apply port config each time that VM starts.\n\n**Management IP disappeared after switching to OVS.** The physical NIC stanza must be `inet manual` with the IP address only in the bridge stanza. Diagnose from the serial console:\n\n```bash\nip addr show enp3s0\nip addr show vmbr0\n```\n\nIf the IP is on the NIC instead of the bridge, edit `/etc/network/interfaces` from the serial console and run `ifreload -a`.\n\n**Proxmox SDN module conflicts with manual OVS.** If you have used Proxmox SDN on this node, it may attempt to manage the same bridge names. Use SDN exclusively or disable the SDN controller for this node — mixing manual OVS configuration with SDN on the same bridge produces unpredictable results that are difficult to debug remotely.\n\n## Hardening the OVS Configuration\n\nOut of the box, OVS runs each bridge in standalone mode with STP disabled and no external OpenFlow controller configured, but both are one stray command away from changing, so verify them explicitly. 
On a standalone homelab node you want neither:\n\n```bash\n# Disable STP on the management bridge\novs-vsctl set bridge vmbr0 stp_enable=false\n\n# Remove any external OpenFlow controller pairing\novs-vsctl del-controller vmbr0\n\n# Verify OVSDB is not listening on a network socket (empty output = correct)\novs-vsctl get-manager\n```\n\nIf `get-manager` returns a TCP address, remove it:\n\n```bash\novs-vsctl del-manager\n```\n\nFor host-level nftables firewall rules and SSH hardening that complement OVS, see [Hardening Proxmox VE: Firewall, fail2ban, and SSH Security](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) — the host firewall rules apply identically whether you use Linux bridges or OVS underneath. For a broader look at hypervisor attack surface including virtual NIC escape vectors, [LOLPROX: Protecting Proxmox from Hypervisor Exploits](/articles/lolprox-protecting-proxmox-from-hypervisor-exploits/) covers the threat model in detail.\n\n## Conclusion\n\nOpen vSwitch on Proxmox VE 9.1 is the right tool when you need access-port VLAN assignment, port mirroring for an IDS VM, or VXLAN overlays between nodes — and it is overkill for everything else. The install is a single `apt install openvswitch-switch`, the configuration lives in the same `/etc/network/interfaces` file you already use, and the per-port CLI workflow becomes routine within an afternoon. The immediate next step: attach a firewall VM to `vmbr0` as a trunk port carrying VLANs 10, 20, and 30, then set your production workload VMs to access ports on their respective VLANs — that is where the isolation model fully clicks into place.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-open-vswitch-vm-networking/",
            "title": "Proxmox Open vSwitch Setup for Advanced VM Networking",
            "summary": "Set up Open vSwitch on Proxmox VE 9.1 for per-port VLAN access mode, port mirroring, and VXLAN overlay tunnels — step-by-step with real CLI commands.",
            "date_modified": "2026-04-30T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "open-vswitch",
                "networking",
                "vlans",
                "vxlan",
                "proxmox"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-homelab-essential-lxc-containers/",
            "content_html": "\nIf you're running Proxmox VE 9.1 and wondering what to put inside it, these six LXC containers cover the majority of homelab infrastructure needs: DNS filtering, SSL reverse proxying, password management, uptime monitoring, media serving, and identity management. Each runs on under 1 GB RAM, boots in seconds, and coexists on a single 8 GB node with plenty of headroom. By the end of this guide you'll have a homelab services stack that punches well above its weight.\n\n## Key Takeaways\n\n- **Lightweight**: Each container uses 128–512 MB RAM; all six together idle at around 1 GB combined.\n- **Unprivileged by default**: All six run as unprivileged LXC containers — Jellyfin needs two extra cgroup lines for hardware transcoding, nothing more.\n- **Order matters**: Deploy Pi-hole first so every subsequent container can use it for local DNS immediately.\n- **Docker optional**: Four of the six run as native systemd services; only NPM and Authentik benefit from Docker Compose.\n- **Minimum hardware**: A node with 8 GB RAM and 60 GB SSD handles the full stack comfortably with room for Proxmox backups.\n\n## Why LXC Instead of Full VMs for These Workloads\n\nLXC containers share the host kernel, which means sub-second boot times and near-zero overhead versus a full KVM VM. A Pi-hole VM running Debian idles at around 350 MB RAM just for the OS layer. The same workload in an LXC container uses 70 MB. For services that spend most of their life waiting — DNS resolvers, uptime checkers, password vaults — that gap is the difference between fitting six services on a 4 GB node or needing 16 GB.\n\nThe tradeoff: LXC containers share the host kernel, so a kernel-level exploit could theoretically affect other containers on the same host. For most homelab threat models this is acceptable. 
If you want the full isolation picture, the guide on [running Docker inside LXC containers on Proxmox](/articles/docker-inside-lxc-containers-proxmox/) covers exactly where the isolation boundary sits and when a full VM is warranted instead.\n\n## How to Create LXC Containers from the CLI\n\nAll six containers follow the same creation pattern. Pull the Debian 12 template first:\n\n```bash\npveam update\npveam download local debian-12-standard_12.7-1_amd64.tar.zst\n```\n\nThen create and start with `pct`:\n\n```bash\npct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \\\n  --hostname pihole \\\n  --memory 256 \\\n  --cores 1 \\\n  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.10/24,gw=192.168.1.1 \\\n  --storage local-lvm \\\n  --rootfs local-lvm:4 \\\n  --unprivileged 1 \\\n  --start 1\n```\n\nAdjust `--memory`, `--rootfs`, the CT ID, and the IP for each service. The specs below are from a production homelab running Proxmox VE 9.1 — not theoretical minimums.\n\n## Container 1: Pi-hole — Network-Wide DNS Filtering\n\n**CT ID**: 200 | **RAM**: 256 MB | **Disk**: 4 GB | **IP**: 192.168.1.10\n\nPi-hole on Debian 12 in an unprivileged LXC is the foundation of the entire stack. Every other container and LAN client points to it for DNS, so deploy this one first.\n\n```bash\npct exec 200 -- bash -c \"apt update && apt install -y curl\"\npct exec 200 -- bash -c \"curl -sSL https://install.pi-hole.net | bash\"\n```\n\nAfter the installer exits, set the web admin password (Pi-hole v6 replaced the old `pihole -a -p` with `setpassword`):\n\n```bash\npct exec 200 -- pihole setpassword yourpassword\n```\n\n**Gotcha**: Pi-hole's installer configures `eth0` as the listening interface and complains if the container's DNS already resolves to localhost. Before running the installer, check `/etc/resolv.conf` inside the container and temporarily set an upstream like `1.1.1.1`. Switch it back to `127.0.0.1` after Pi-hole is running. 
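\n\nThat resolv.conf swap is easy to fumble by hand. A minimal sketch of the sequence, assuming the CT ID 200 used throughout this section:\n\n```bash\n# Temporary public upstream so the installer can resolve hostnames\npct exec 200 -- bash -c \"echo 'nameserver 1.1.1.1' > /etc/resolv.conf\"\n\n# ...run the Pi-hole installer as shown above...\n\n# Point the container back at its own resolver once Pi-hole is answering\npct exec 200 -- bash -c \"echo 'nameserver 127.0.0.1' > /etc/resolv.conf\"\n```\n\nConfirm resolution from inside the container (for example `pct exec 200 -- getent hosts debian.org`) before pointing other clients at it.\n\n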
The admin UI lands at `http://192.168.1.10/admin`.\n\n## Container 2: Nginx Proxy Manager — SSL Reverse Proxy\n\n**CT ID**: 201 | **RAM**: 512 MB | **Disk**: 8 GB | **IP**: 192.168.1.11 (needs ports 80 and 443)\n\nNginx Proxy Manager gives you a web GUI for SSL termination, Let's Encrypt auto-renewal, and subdomain routing to internal services. This is the one container in the list where Docker Compose pays off — the NPM image is significantly easier to update than a manual nginx + certbot setup. For a broader look at managing Docker workloads on Proxmox, the guide on [managing Docker on Proxmox with Portainer and Dockge](/articles/managing-docker-on-proxmox-with-portainer-and-dockge/) covers the tooling that complements NPM.\n\nInside the container:\n\n```bash\napt update && apt install -y docker.io docker-compose-plugin\nmkdir -p /opt/npm && cd /opt/npm\n```\n\nCreate `/opt/npm/docker-compose.yml`:\n\n```yaml\nservices:\n  npm:\n    image: jc21/nginx-proxy-manager:2.12.1\n    restart: unless-stopped\n    ports:\n      - \"80:80\"\n      - \"443:443\"\n      - \"81:81\"\n    volumes:\n      - ./data:/data\n      - ./letsencrypt:/etc/letsencrypt\n```\n\n```bash\ndocker compose up -d\n```\n\nDefault credentials: `admin@example.com` / `changeme`. Change both immediately on first login at `http://192.168.1.11:81`.\n\n**Gotcha**: Pi-hole and NPM must be on different static IPs. They don't share ports, but assigning both to `192.168.1.10` is a common mistake that causes maddening DNS resolution failures. Give NPM its own IP from the start.\n\n## Container 3: Vaultwarden — Self-Hosted Password Manager\n\n**CT ID**: 202 | **RAM**: 256 MB | **Disk**: 4 GB | **IP**: 192.168.1.12\n\nVaultwarden is a Bitwarden-compatible server written in Rust. It handles the full Bitwarden client API — browser extensions, mobile apps, the desktop client — at under 25 MB resident memory at idle. 
The official Bitwarden server requires 2 GB RAM minimum; Vaultwarden replaces it entirely for personal or small-team use.\n\n```bash\napt update && apt install -y docker.io\n```\n\nGenerate an Argon2 admin token before starting the container:\n\n```bash\ndocker run --rm -it vaultwarden/server:1.32.0 /vaultwarden hash --preset owasp\n```\n\nCopy the `$argon2id$...` output, then start the container:\n\n```bash\ndocker run -d \\\n  --name vaultwarden \\\n  --restart unless-stopped \\\n  -e ADMIN_TOKEN='$argon2id$v=19$m=65540,t=3,p=4$YOURTOKEN' \\\n  -v /opt/vaultwarden/data:/data \\\n  -p 8080:80 \\\n  vaultwarden/server:1.32.0\n```\n\n**Gotcha**: Bitwarden clients require HTTPS. Vaultwarden over plain HTTP works only on localhost — mobile app syncs fail silently over HTTP on a LAN IP without any useful error message. You must proxy it through Nginx Proxy Manager with a valid Let's Encrypt cert before pointing any clients at it. Add a proxy host in NPM for `vault.yourdomain.com` → `192.168.1.12:8080` before importing any passwords.\n\n**Timing**: From container creation to first successful sync with the Bitwarden browser extension takes under 90 seconds once DNS and SSL are configured.\n\n## Container 4: Uptime Kuma — Service Monitoring Dashboard\n\n**CT ID**: 203 | **RAM**: 256 MB | **Disk**: 4 GB | **IP**: 192.168.1.13\n\nUptime Kuma monitors HTTP endpoints, TCP ports, DNS records, and ping targets, then alerts you via Telegram, Discord, SMTP, or webhooks when something goes down. It also generates a public status page — useful if you're running services for family members or a small team.\n\n```bash\napt update && apt install -y docker.io\ndocker run -d \\\n  --name uptime-kuma \\\n  --restart unless-stopped \\\n  -v /opt/uptime-kuma:/app/data \\\n  -p 3001:3001 \\\n  louislam/uptime-kuma:1.23.16\n```\n\nThe web UI is at `http://192.168.1.13:3001`. 
Add monitors for Pi-hole, NPM, Vaultwarden, and anything else you've deployed — the whole point of this container is to catch failures before your users do.\n\n**Gotcha**: Uptime Kuma stores everything in SQLite. If the 4 GB rootfs fills up — which happens if you enable verbose logging and forget about it — the database stops writing and you lose monitoring history with no obvious error in the UI. Check disk usage monthly with `df -h` inside the container and make sure Proxmox has a backup job scheduled for this CT.\n\n## Container 5: Jellyfin — Self-Hosted Media Server\n\n**CT ID**: 204 | **RAM**: 1024 MB | **Disk**: 8 GB root + media bind mount | **IP**: 192.168.1.14\n\nJellyfin is one of only two containers in this list (alongside Authentik) that need more than 512 MB RAM — library scanning on large collections will briefly push memory use to the full 1 GB allocation. The rootfs stays lean at 8 GB because the actual media lives on the host or NAS, mounted into the container as a bind mount.\n\nAdd the bind mount in `/etc/pve/lxc/204.conf` before starting the container:\n\n```ini\nmp0: /mnt/nas/media,mp=/media\n```\n\nThen install Jellyfin 10.10.x (current stable as of April 2026):\n\n```bash\ncurl https://repo.jellyfin.org/install-debuntu.sh | bash\nsystemctl enable --now jellyfin\n```\n\nFor **Intel Quick Sync hardware transcoding** in an unprivileged LXC, add these lines to `/etc/pve/lxc/204.conf`:\n\n```ini\nlxc.cgroup2.devices.allow: c 226:0 rwm\nlxc.cgroup2.devices.allow: c 226:128 rwm\nlxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir\n```\n\nThen add the `jellyfin` user to the `render` and `video` groups inside the container and restart:\n\n```bash\nusermod -aG render,video jellyfin\nsystemctl restart jellyfin\n```\n\n**Gotcha**: In an unprivileged LXC, the `jellyfin` user (UID 999 inside the container) maps to UID 100999 on the host. Your bind-mounted media directory needs to be readable by UID 100999. 
Fix it with `chown -R 100999:100999 /mnt/nas/media` on the host, or use ACLs if the mount is shared with other services.\n\n## Container 6: Authentik — Self-Hosted Identity Provider\n\n**CT ID**: 205 | **RAM**: 1024 MB | **Disk**: 10 GB | **IP**: 192.168.1.15\n\nAuthentik is a self-hosted identity provider that adds SSO, OAuth2, LDAP, and SAML to your homelab. Once running, Nginx Proxy Manager can forward authentication to Authentik before proxying any service — meaning Vaultwarden, Jellyfin, and Uptime Kuma all sit behind a single login page without modifying those applications at all.\n\nAuthentik requires PostgreSQL and Redis, making Docker Compose the only sane choice here:\n\n```bash\napt update && apt install -y docker.io docker-compose-plugin\nmkdir /opt/authentik && cd /opt/authentik\n```\n\nDownload the official compose file from the Authentik documentation, then generate secrets:\n\n```bash\necho \"PG_PASS=$(openssl rand -base64 36 | tr -d '=+/')\" >> .env\necho \"AUTHENTIK_SECRET_KEY=$(openssl rand -base64 60 | tr -d '=+/')\" >> .env\necho \"AUTHENTIK_ERROR_REPORTING__ENABLED=false\" >> .env\n```\n\n```bash\ndocker compose pull && docker compose up -d\n```\n\nFirst startup takes 2–3 minutes while Authentik runs database migrations. Complete setup at `http://192.168.1.15:9000/if/flow/initial-setup/`.\n\n**Gotcha**: The default Authentik compose file uses `latest` image tags. Pin to a specific release (e.g., `ghcr.io/goauthentik/server:2024.12.3`) before your first pull. Authentik occasionally ships breaking API changes between minor versions, and an unattended `docker compose pull` on the wrong day will break SSO for every proxied service simultaneously.\n\n**Worth the complexity?** Only if you have five or more services to protect. For a two-container setup, HTTP Basic Auth through NPM is sufficient. 
Authentik pays off when you want audit logs, TOTP enforcement, and a single logout that propagates across all services at once.\n\n## Resource Planning: Running All Six on One Node\n\n| Container | RAM Allocated | RAM at Idle | vCPUs | Disk |\n|---|---|---|---|---|\n| Pi-hole | 256 MB | 70 MB | 1 | 4 GB |\n| Nginx Proxy Manager | 512 MB | 180 MB | 1 | 8 GB |\n| Vaultwarden | 256 MB | 25 MB | 1 | 4 GB |\n| Uptime Kuma | 256 MB | 90 MB | 1 | 4 GB |\n| Jellyfin | 1024 MB | 220 MB | 2 | 8 GB + media |\n| Authentik | 1024 MB | 450 MB | 2 | 10 GB |\n| **Total** | **3328 MB** | **~1035 MB** | **8** | **38 GB** |\n\nAn 8 GB node handles the full stack with headroom. On a 4 GB node, drop Authentik — it's the heaviest container and the least essential for a basic homelab. For guidance on node selection, storage layout, and networking to support this kind of infrastructure, the guide on [building a private cloud at home with Proxmox VE](/articles/build-private-cloud-home-proxmox-ve/) covers the hardware decisions that set the foundation.\n\n## Security Hardening for the Container Stack\n\nAll six containers handle sensitive data. A few non-negotiable steps before you consider this stack production-ready:\n\n- **Restrict admin ports**: Lock ports 81 (NPM admin), 3001 (Uptime Kuma), and 9000 (Authentik) to your LAN subnet using Proxmox firewall rules. The [Proxmox firewall and SSH hardening guide](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) covers datacenter-level rules that apply at the container level without touching iptables manually.\n- **Schedule backups**: All six containers store persistent state on disk. Configure a Proxmox backup job for each CT in Datacenter → Backup, targeting PBS or an NFS share. Weekly retention of two backups is the minimum viable safety net.\n- **Pin versions**: Vaultwarden and Authentik are security-critical. 
Never run `latest` tags — pin to a specific version and update deliberately after reading the changelog.\n- **Enable TOTP**: Pi-hole, NPM, Uptime Kuma, and Authentik all support TOTP. Enable it on each admin account before exposing any service through NPM to the internet.\n\n## Conclusion\n\nThese six LXC containers — Pi-hole, Nginx Proxy Manager, Vaultwarden, Uptime Kuma, Jellyfin, and Authentik — give you a complete homelab services layer that runs comfortably on a single 8 GB Proxmox VE 9.1 node. Deploy them in order: DNS first, reverse proxy second, then everything else behind it. Once Authentik is in place, the natural next step is wiring its forward-auth middleware into each NPM proxy host — a 10-minute configuration that replaces per-service login prompts with a single SSO portal covering your entire homelab.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-homelab-essential-lxc-containers/",
            "title": "6 Must-Have LXC Containers for Your Proxmox Homelab",
            "summary": "Six LXC containers that cover the majority of Proxmox homelab infrastructure needs, with exact RAM specs, pct commands, and real-world gotchas for each.",
            "date_modified": "2026-04-29T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "lxc",
                "homelab",
                "pihole",
                "jellyfin",
                "nginx-proxy-manager"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/migrate-bare-metal-truenas-proxmox-zfs/",
            "content_html": "\nThe safest way to move a running bare-metal TrueNAS machine to a Proxmox VM is to pass your storage controller directly to the guest — either an HBA via PCIe passthrough or individual drives via disk passthrough. Done right, TrueNAS imports its existing ZFS pools on first boot inside the VM, your data stays intact, and Proxmox never touches the pool metadata.\n\n## Key Takeaways\n\n- **HBA passthrough**: The cleanest path — pass the entire controller to the VM so Proxmox never sees the pool drives.\n- **Disk passthrough**: Works when you can't pass the full HBA; always use `/dev/disk/by-id/` paths, never `/dev/sdX`.\n- **Export pools first**: If Proxmox has auto-imported your ZFS pools on the host, export them before attaching drives to the VM.\n- **TrueNAS SCALE 24.10**: Runs as a Proxmox VM with minor config tweaks; TrueNAS CORE works identically.\n- **Risk window**: Data loss is most likely during the brief moment when drives are attached to both host and VM — don't let that happen.\n\n## Why Virtualize TrueNAS Instead of Running It Bare Metal\n\nRunning TrueNAS on bare metal is fine until you want to share the server. A dedicated NAS machine locks up hardware that could also run VMs, containers, and backup jobs. Virtualizing on Proxmox gives you VM-level snapshots before TrueNAS updates, flexible resource allocation without touching hardware, and one unified management UI for your NAS and your [homelab VMs](/articles/build-private-cloud-home-proxmox-ve/).\n\nThe tradeoff: the storage controller must be either passed through to the VM or replaced with virtual block devices. If your pools live on a dedicated PCIe HBA, passthrough is straightforward. 
If they're on motherboard SATA ports, you'll need individual disk passthrough — which introduces IOMMU group constraints covered below.\n\n## What You Need Before You Start\n\n- CPU with VT-d (Intel) or AMD-Vi enabled in UEFI\n- At least 16 GB RAM (TrueNAS SCALE 24.10 enforces this minimum)\n- A separate boot drive for Proxmox — not one of your pool drives\n- TrueNAS SCALE 24.10.2 installer ISO\n\nConfirm IOMMU is active on the Proxmox host:\n\n```bash\ndmesg | grep -e DMAR -e IOMMU | head -20\n```\n\nIf the output is empty, enable it in GRUB:\n\n```bash\n# /etc/default/grub\nGRUB_CMDLINE_LINUX_DEFAULT=\"quiet intel_iommu=on iommu=pt\"\n# AMD: replace intel_iommu=on with amd_iommu=on\n```\n\n```bash\nupdate-grub && reboot\n```\n\nThe `iommu=pt` flag reduces overhead for devices the host isn't passing through — relevant when Proxmox itself is using NVMe while pool drives go straight to the TrueNAS VM.\n\n## How to Stop Proxmox from Importing Your ZFS Pools\n\nThis is the most common gotcha. Proxmox boots with pool drives attached, ZFS auto-imports every pool it finds — including TrueNAS pools. If the host holds a pool while the VM tries to import it, you get a failed import or silent metadata corruption.\n\nCheck what's already imported:\n\n```bash\nzpool status\n```\n\nIf your TrueNAS pool names appear here, clear them from the cache file and export them:\n\n```bash\nzpool set cachefile=none tank\nzpool export tank\n```\n\nRun this for every pool being migrated. `cachefile=none` removes the pool from `/etc/zfs/zpool.cache` so it won't auto-import on the next reboot. Pass those drives to the TrueNAS VM immediately — if you reboot Proxmox first, ZFS will import them again.\n\n## Option 1: Pass Through an HBA Controller (Recommended)\n\nFor drives connected to a dedicated PCIe HBA — any IT-mode card, LSI 9207-8i, LSI 9300-8i — this is the cleanest approach. 
The host never sees the drives.\n\nFind the HBA's PCI address:\n\n```bash\nlspci -nn | grep -Ei \"lsi|megaraid|sas|storage controller\"\n# Example: 02:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS9207-8i [1000:0097]\n```\n\nVerify it's alone in its IOMMU group:\n\n```bash\nfind /sys/kernel/iommu_groups/ -name \"0000:02:00.0\" 2>/dev/null\n# Output: /sys/kernel/iommu_groups/14/devices/0000:02:00.0\nls /sys/kernel/iommu_groups/14/devices/\n```\n\nIf the group contains only the HBA (a co-located PCIe root port is fine), add it to the VM after creation:\n\n```bash\nqm set 110 --hostpci0 02:00.0,pcie=1,rombar=0\n```\n\n`pcie=1` is required for modern HBAs. `rombar=0` prevents a boot hang seen with some LSI cards inside QEMU.\n\n## Option 2: Pass Through Individual Disks\n\nWhen the SATA controller is part of the motherboard chipset and shared with the Proxmox boot drive, pass individual drives instead. Never use `/dev/sdX` — those letters reassign at boot. Use stable by-id paths:\n\n```bash\nls -la /dev/disk/by-id/ | grep -v part | grep ata\n# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567 -> ../../sdb\n# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7654321 -> ../../sdc\n```\n\nAfter exporting the pool, attach each drive:\n\n```bash\nqm set 110 --scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567\nqm set 110 --scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7654321\n```\n\nRepeat for every drive. A 6-drive RAIDZ2 means six `qm set` commands. This is worth it only if you can't add a dedicated HBA. 
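\n\nTyping six near-identical commands invites typos; a small loop can generate them instead. A sketch using the example serials above (VM 110; substitute your own `by-id` paths):\n\n```bash\nVMID=110\nIDX=1   # scsi0 stays free for the TrueNAS boot disk\nfor ID in \\\n  ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567 \\\n  ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7654321; do\n  echo \"qm set ${VMID} --scsi${IDX} /dev/disk/by-id/${ID}\"\n  IDX=$((IDX + 1))\ndone\n```\n\nReview the echoed commands, then pipe the output to `bash` once it looks right.\n\n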
For long-term use, a used IT-mode card at $20–$40 on eBay is the cleaner investment.\n\n## Create the TrueNAS VM\n\n```bash\nqm create 110 \\\n  --name truenas-scale \\\n  --memory 32768 \\\n  --balloon 0 \\\n  --cores 4 \\\n  --sockets 1 \\\n  --cpu host \\\n  --machine q35 \\\n  --bios ovmf \\\n  --net0 virtio,bridge=vmbr0 \\\n  --ostype l26\n```\n\n`--balloon 0` prevents Proxmox from reclaiming RAM that TrueNAS is actively using as ZFS ARC cache.\n\nAdd the supporting disks and installer:\n\n```bash\n# EFI disk for OVMF boot\nqm set 110 --efidisk0 local-lvm:1,efitype=4m,pre-enroll-keys=0\n\n# OS boot disk — TrueNAS uses a mirrored boot pool, 32 GB minimum\nqm set 110 --scsi0 local-lvm:32,format=raw,iothread=1,ssd=1\n\n# Installer ISO\nqm set 110 --ide2 local:iso/TrueNAS-SCALE-24.10.2.iso,media=cdrom\n\n# Boot order: ISO first, then OS disk\nqm set 110 --boot order=\"ide2;scsi0\"\n```\n\nThe IOMMU mechanics are identical to [GPU passthrough on Proxmox](/articles/gpu-passthrough-proxmox-complete-guide/). If you've done GPU passthrough before, the `hostpci` setup will feel familiar — you're just passing storage instead of video. Now attach the HBA or individual disks from the previous sections.\n\n## Install TrueNAS and Import Your Existing Pools\n\nBoot the VM. The TrueNAS SCALE 24.10.2 installer completes in under five minutes on NVMe. Select the OS boot disk (`scsi0`) as the installation target — not the pool drives.\n\nAfter TrueNAS reboots into the dashboard, go to **Storage → Import Pool**. TrueNAS scans the attached drives and surfaces your existing ZFS pool. Select it and click **Import**. For a 20 TB pool this finishes in under 30 seconds — no data moves, only pool metadata is recognized.\n\nVerify from the TrueNAS shell or SSH:\n\n```bash\nzpool status\nzfs list -r\nzpool history tank | tail -20\n```\n\nAll vdevs should show `ONLINE`. 
If your most recent scrub event appears in the history output, the pool imported from the correct state.\n\n## Reconfigure Shares and Verify Before Decommissioning\n\nThe IP and interface name change because TrueNAS now has a `virtio` NIC. In the TrueNAS UI, go to **Network → Interfaces**, set a static IP, then update any SMB or NFS bindings that reference the old interface. Re-enable periodic scrub schedules under **Data Protection** — they don't carry over with a pool import.\n\nIf you have [Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) pointing at a TrueNAS share, update the storage definition with the new IP. The datastore contents are unchanged.\n\nBefore wiping the old bare-metal install, run a full scrub inside the VM:\n\n```bash\nzpool scrub tank\nwatch zpool status\n```\n\nExpect roughly 1 hour per 10 TB on spinning disks. Also check SMART data on all drives to confirm nothing new appeared during the migration:\n\n```bash\nfor disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do\n  echo \"=== $disk ===\"\n  smartctl -a \"$disk\" | grep -E \"Reallocated|Pending|Uncorrectable\"\ndone\n```\n\nKeep the bare-metal server powered off but intact for at least a week. If a missing share or misconfigured permission surfaces, you'll want that fallback.\n\n## Conclusion\n\nWith an HBA or pool drives passed through directly, TrueNAS imports existing ZFS 2.2.x pools intact and your data never leaves the disks. The critical steps are exporting pools from the Proxmox host before attaching them to the VM, using `/dev/disk/by-id/` for individual disk passthrough, and running a post-import scrub to confirm clean pool state. Next, consider [hardening the Proxmox host with firewall rules and fail2ban](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) — the NAS is only as secure as the hypervisor it runs on.\n",
            "url": "https://proxmoxpulse.com/articles/migrate-bare-metal-truenas-proxmox-zfs/",
            "title": "Migrate Bare-Metal TrueNAS to Proxmox Without Data Loss",
            "summary": "Move your TrueNAS bare-metal installation to a Proxmox VM without touching your ZFS pools. HBA passthrough, disk passthrough, and pool import — step by step.",
            "date_modified": "2026-04-28T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "truenas",
                "zfs",
                "disk-passthrough",
                "hba-passthrough",
                "storage-migration"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-cpu-pinning-low-latency-vms/",
            "content_html": "\nCPU pinning on Proxmox VE 9 assigns each vCPU thread in a VM to a specific physical CPU core, bypassing the Linux scheduler's normal load-balancing across all available cores. The result is consistent, predictable latency — the kind that matters for a Windows gaming VM with GPU passthrough, a real-time audio workstation, or an API server where p99 latency is a hard SLA. By the end of this guide your VM will have dedicated cores, the host scheduler will leave them alone, and you'll have three commands to verify the pinning is actually working.\n\n## Key Takeaways\n\n- **cpuaffinity**: Run `qm set <vmid> --cpuaffinity 4-7` to bind a QEMU process and all its threads to specific physical cores\n- **isolcpus matters**: Without kernel-level core isolation, other processes can still preempt your pinned VM threads and cause jitter\n- **NUMA alignment**: On EPYC or multi-socket systems, pin memory and CPUs to the same NUMA node or every memory access crosses the interconnect\n- **Live migration caveat**: A VM with `cpu: host` set will not migrate to nodes with a different CPU microarchitecture — design for this upfront\n- **Not always worth it**: General-purpose VMs and idle containers get no benefit; reserve pinning for sustained latency-sensitive workloads\n\n## What CPU Pinning Actually Does Under the Hood\n\nWithout pinning, the Linux Completely Fair Scheduler (CFS) migrates QEMU threads across physical cores continuously. This is efficient for throughput, but every thread migration causes L1 and L2 cache evictions on the core the thread just left and cold-cache misses on the core it lands on. For a 4K gaming session or a Postgres database handling sub-100 ms queries, those cache evictions translate to visible stutters or latency spikes.\n\n`cpuaffinity` in Proxmox works by setting the cpuset cgroup for the entire QEMU process at VM start. 
Every thread QEMU spawns — vCPU threads, the I/O thread, device emulation threads — is constrained to the cores you specify. This is a process-level constraint rather than per-vCPU-thread pinning, but for the overwhelming majority of workloads it produces the same result with far less configuration overhead.\n\nThe kernel parameter `isolcpus` goes one step further: it removes specified cores from the scheduler's pool entirely. Nothing lands on an isolated core unless it explicitly requests affinity for that core. This is the difference between \"I want my VM on cores 4-7\" and \"nothing else touches cores 4-7, ever.\"\n\n## How to Check Your NUMA Topology First\n\nUnderstanding your hardware layout before picking cores is non-negotiable. Pinning to sibling hyperthreads on a saturated physical core, or crossing NUMA nodes, will make latency *worse* than no pinning.\n\n```bash\nlscpu --extended\n```\n\nLook at the `CPU`, `CORE`, `SOCKET`, and `NODE` columns. On an Intel Core i9-13900K, the 8 P-cores appear as adjacent hyperthread pairs: logical CPUs 0 and 1 share physical core 0, CPUs 2 and 3 share physical core 1, and so on, with the single-threaded E-cores numbered after them. Other CPU generations offset siblings by half the thread count instead, so trust the `CORE` column rather than an assumed numbering. Always pin both hyperthreads of the same physical core together.\n\nFor NUMA topology on EPYC, Threadripper, or any dual-socket system:\n\n```bash\nnumactl --hardware\n```\n\nOn a single-socket consumer CPU this shows one node. On an EPYC 7302P, memory latency between the two NUMA nodes differs by roughly 30 ns — enough to add measurable jitter to database workloads.\n\nFor the most detailed view, install `hwloc` and render the full topology tree:\n\n```bash\napt install hwloc -y\nlstopo --no-io\n```\n\nThis shows L3 cache domains. On AMD CPUs each CCX shares its own L3, so keep all pinned cores within a single CCX. 
On a Ryzen 9 7950X with two 8-core CCDs, keeping the VM to one CCD means all 8 cores share the same 32 MB L3 — a meaningful cache advantage for workloads with hot working sets.\n\n## How to Isolate Cores from the Host Kernel\n\nThis step is optional for lightly loaded hosts but strongly recommended for gaming or real-time workloads. Open the GRUB config:\n\n```bash\nnano /etc/default/grub\n```\n\nAppend to `GRUB_CMDLINE_LINUX_DEFAULT`:\n\n```ini\nGRUB_CMDLINE_LINUX_DEFAULT=\"quiet isolcpus=4-7,12-15 nohz_full=4-7,12-15 rcu_nocbs=4-7,12-15\"\n```\n\n`nohz_full` stops the kernel from sending periodic timer interrupts to isolated cores, reducing jitter from hundreds of microseconds down to under 5 µs. `rcu_nocbs` moves RCU callbacks off those same cores. All three options must reference the same core list.\n\nApply and reboot:\n\n```bash\nupdate-grub && reboot\n```\n\nAfter reboot, confirm isolation is active:\n\n```bash\ncat /sys/devices/system/cpu/isolated\n# Expected output: 4-7,12-15\n```\n\nLeave at least 2 physical cores (plus their hyperthreads) for the Proxmox host OS. On a 16-core i9-12900K I keep cores 0-3 for Proxmox — the web UI, storage daemons, PBS sync jobs, and any lightweight LXC containers all run there without touching the VM's dedicated P-cores.\n\n## How to Configure CPU Affinity on a Proxmox VM\n\nThe `affinity` option accepts a comma-separated list of CPU IDs or ranges matching the logical CPU numbers from `lscpu`.\n\n```bash\nqm set 100 --affinity 4-7\n```\n\nOr edit the config file directly:\n\n```bash\nnano /etc/pve/qemu-server/100.conf\n```\n\n```ini\naffinity: 4-7\n```\n\nThe number of vCPUs in the VM does not need to exactly match the pinned range. If you pin to `4-7` (4 logical CPUs) and the VM has `cores: 4,sockets: 1`, each vCPU thread lands on one core. 
QEMU's I/O thread and device emulation threads are also constrained to that range, but they are lightweight and do not meaningfully compete with the vCPU threads.\n\nFor a gaming VM with 8 vCPUs across 4 physical P-cores with hyperthreading:\n\n```bash\nqm set 100 --affinity 4-7,12-15\nqm set 100 --cores 8 --sockets 1\nqm set 100 --cpu host\n```\n\nThe `cpu: host` flag passes all physical CPU features directly to the guest, including AVX-512 and TSC invariant mode. Pair this with [the full GPU passthrough configuration](/articles/gpu-passthrough-proxmox-complete-guide/) to get consistent frame times from a dedicated gaming or AI VM.\n\nThe tradeoff: `cpu: host` makes the VM non-migratable to hosts with a different CPU microarchitecture. If you're running [a K3s Kubernetes cluster on Proxmox](/articles/k3s-kubernetes-cluster-proxmox-vms/) and need live migration across heterogeneous nodes, stick with `cpu: x86-64-v3` and skip `affinity` on those VMs — portability matters more than the marginal latency gain.\n\n## NUMA Alignment: Bind Memory to the Right Node\n\nOn any multi-NUMA system, memory accesses that cross NUMA nodes add 20-40 ns per access. If your pinned cores are on NUMA node 0, bind the VM's RAM to node 0 as well.\n\nConfirm which NUMA node owns your target cores:\n\n```bash\nnumactl --hardware\n# Look for \"node 0 cpus:\" and \"node 1 cpus:\" lines\n```\n\nThen add NUMA binding to the VM config:\n\n```bash\nnano /etc/pve/qemu-server/100.conf\n```\n\n```ini\naffinity: 4-7\nnuma: 1\nnuma0: cpus=0-3,hostnodes=0,memory=16384,policy=bind\n```\n\nNote that the `cpus` list in `numa0` refers to guest vCPU indices (0-3 for a 4-vCPU VM), not host cores; host placement comes from the `affinity` line. `policy=bind` hard-allocates guest RAM from NUMA node 0 only. If node 0 runs out of free memory the VM will fail to start rather than silently falling back to slower remote memory. `policy=preferred` is softer but can mask a memory budget mistake. 
Use `bind` so you know immediately if something is wrong.\n\nOn a properly aligned Proxmox EPYC system, STREAM TRIAD benchmark scores inside the guest improve 20-35% compared to unaligned placement. On real-world workloads like Postgres or ffmpeg encoding, expect 8-15% throughput improvement with p99 latency 30-50% lower.\n\n## Verifying the Pinning Is Actually Working\n\nStart the VM and locate the QEMU process ID:\n\n```bash\nqm start 100\npgrep -a qemu-system-x86 | grep \" 100 \"\n```\n\nCheck process-level affinity:\n\n```bash\ntaskset -cp <pid>\n# pid 12345's current affinity list: 4-7,12-15\n```\n\nInspect individual thread placement:\n\n```bash\nps -eLo pid,tid,psr,comm | grep qemu | head -20\n```\n\nThe `PSR` column shows which physical CPU each thread is currently scheduled on. Every thread should show a value within your pinned range. If any land outside it, the `isolcpus` parameter was not applied — run `cat /proc/cmdline` to confirm it appears in the active boot parameters.\n\nFor a latency measurement inside a Linux guest, install and run `cyclictest` from the `rt-tests` package:\n\n```bash\napt install rt-tests -y\ncyclictest --mlockall --smp --priority=80 --interval=200 --distance=0 --loops=10000\n```\n\nOn a pinned, isolated 4-core setup with `nohz_full` on a 13th-gen Intel host, worst-case latency consistently lands under 80 µs. Without pinning on the same hardware under moderate host load, worst-case values hit 3-8 ms. 
That is a 40x difference at the p99.99 — the exact improvement that eliminates audio dropouts in a DAW or micro-stutters in a GPU passthrough gaming VM.\n\n## When CPU Pinning Is Worth the Effort (and When It Isn't)\n\nWorth pinning:\n\n- **GPU passthrough gaming VMs**: frame time consistency depends on stable vCPU scheduling; even 2 ms jitter shows up as stutters at 144 Hz\n- **Audio production workstations**: real-time audio at 48 kHz with a 128-sample buffer needs scheduling latency well under 3 ms\n- **Low-latency databases**: Postgres, Redis, or ClickHouse under sustained query load benefit measurably from cache-warm cores\n- **[Home Assistant with heavy integrations](/articles/home-assistant-os-on-proxmox-2026-setup-guide/)**: fast polling loops on a loaded host miss intervals without dedicated cores\n\nNot worth pinning:\n\n- **General-purpose web servers**: the stock scheduler (EEVDF, which replaced CFS in kernel 6.6) handles bursty traffic better than static affinity; pinning forfeits burst headroom\n- **LXC containers**: use `cpulimit` cgroup throttling instead — the overhead-to-benefit ratio for LXC pinning is rarely justified\n- **Lightly loaded single-VM hosts**: the scheduler already gives a near-idle VM the run of all available cores\n- **Batch compute VMs**: a VM that runs a 30-second ffmpeg transcode every few hours will not notice pinning\n\nThe gotcha I ran into: pinning a VM to cores 0-3 on a system where the Proxmox host was using core 0 for NVMe interrupt handling caused *more* latency variance than no pinning at all. Always check `cat /proc/irq/*/smp_affinity_list` after setting up pinning, and move storage and network IRQs off your isolated cores with `echo <cpu-mask> > /proc/irq/<n>/smp_affinity`.\n\n## Conclusion\n\nCPU pinning on Proxmox VE 9 is a 20-minute configuration change with immediate, measurable results for latency-sensitive workloads: `isolcpus` in GRUB, `cpuaffinity` in the VM config, and a `taskset` check to confirm. 
The logical next step is interrupt affinity — disable `irqbalance` and manually assign NVMe and PCIe interrupts to your reserved host cores, or run `irqbalance` with a `--banirq` list for your storage and network devices. That is where the last few microseconds of jitter typically hide on a well-tuned Proxmox host.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-cpu-pinning-low-latency-vms/",
            "title": "CPU Pinning on Proxmox for Low-Latency VM Workloads",
            "summary": "Pin VM cores on Proxmox VE 9 to eliminate scheduler jitter and cut latency. Covers isolcpus, cpuaffinity, NUMA alignment, and when pinning actually helps.",
            "date_modified": "2026-04-27T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "cpu-pinning",
                "numa",
                "kvm",
                "vm-performance",
                "proxmox"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-lxc-bind-mounts-host-storage/",
            "content_html": "\nBind mounts let an LXC container read from and write to a directory that lives on the Proxmox host — same data, no copying, no NFS required. In under ten minutes you can have a container writing logs, media files, or database dumps directly to a host path you control. This guide covers the Proxmox VE 9.1 web UI method, the config-file method, and the UID/GID remapping issue that trips up almost everyone the first time they work with an unprivileged container.\n\n## Key Takeaways\n\n- **How it works**: A bind mount makes a host directory appear at a specific path inside the container — changes in either location are immediate and atomic.\n- **UID/GID shift**: Unprivileged containers map container UID 0 → host UID 100000 by default; the host directory must be owned by the shifted UID or writes fail with permission errors.\n- **Config syntax**: Bind mounts appear as `mp0`, `mp1`, etc. in `/etc/pve/lxc/<VMID>.conf` — for example `mp0: /host/path,mp=/container/path`.\n- **Privileged containers skip the shift**: A privileged container avoids UID remapping but trades namespace isolation for convenience.\n- **Exclude from backups**: Add `backup=0` to any large mount point entry to keep media libraries out of `vzdump` archives.\n\n## What Are LXC Bind Mounts and When Do You Need Them\n\nLXC containers share the Proxmox host kernel but are isolated by namespaces and cgroups. 
That makes them fast to start and cheap on RAM, and it also means they can safely access host directories if you configure mount points correctly.\n\nYou reach for a bind mount when:\n\n- Multiple containers need access to the same dataset — a media library read by Jellyfin and written by Sonarr simultaneously\n- You want application data outside the container rootfs so it survives `pct restore` or `pct destroy`\n- You are running Docker inside an LXC container and want the Docker volume data on a host path you can snapshot — exactly the setup described in [Running Docker Inside LXC Containers on Proxmox](/articles/docker-inside-lxc-containers-proxmox/)\n- You want to snapshot application data independently via ZFS without snapshotting the whole container disk image\n\nA bind mount is **not** a copy. Write from the container, the host sees the change immediately. Delete a file from the host side, the container loses it. Plan your data layout before you start.\n\n## How to Add a Bind Mount from the Proxmox UI\n\nIn Proxmox VE 9.1, mount points live in the container's **Resources** tab.\n\n1. Select the container in the left pane, click **Resources** → **Add** → **Mount Point**.\n2. Set **Storage** to `Directory` and enter the **Host Path** — an absolute path to an existing directory on the Proxmox host (e.g., `/mnt/data/media`).\n3. Set **Mount Point** to the path where it should appear inside the container (e.g., `/media`).\n4. Optionally tick **Read-only** to prevent container writes to the host path.\n5. Click **Add**, then restart the container: **More** → **Reboot** or run `pct reboot <VMID>` from the shell.\n\n> **Gotcha**: The UI will not create the host directory for you. If the path does not exist on disk, Proxmox will accept the config but the container will fail to start with a vague `lxc-start` error. 
Create it first:\n\n```bash\nmkdir -p /mnt/data/media\n```\n\n## Configuring Bind Mounts via the LXC Config File\n\nFor scripted deployments, editing the config directly is faster and easier to put under version control.\n\n```bash\nnano /etc/pve/lxc/101.conf\n```\n\nAdd a mount point entry at the bottom:\n\n```ini\nmp0: /mnt/data/media,mp=/media\n```\n\nMultiple mount points use `mp0` through `mp9`:\n\n```ini\nmp0: /mnt/data/media,mp=/media\nmp1: /mnt/data/config,mp=/config,ro=1\n```\n\nFull set of options supported in Proxmox VE 9.1:\n\n| Option | Example | Effect |\n|--------|---------|--------|\n| `mp=` | `mp=/data` | Container-side mount path (required) |\n| `ro=1` | `ro=1` | Mount read-only inside the container |\n| `backup=0` | `backup=0` | Exclude this path from `vzdump` backups |\n| `replicate=0` | `replicate=0` | Skip during ZFS or PBS replication |\n| `shared=1` | `shared=1` | Mark as cluster-shared storage |\n\nAfter editing the config, restart the container and verify the mount:\n\n```bash\npct reboot 101\npct exec 101 -- df -h /media\n```\n\n## The UID/GID Remapping Problem in Unprivileged Containers\n\nThis is where almost everyone gets burned the first time. Unprivileged LXC containers use a UID/GID mapping defined in `/etc/subuid` and `/etc/subgid` on the host. Proxmox ships with:\n\n```bash\ncat /etc/subuid\n# root:100000:65536\n```\n\nThis means container UID 0 (root) maps to host UID 100000, container UID 1000 maps to host UID 101000, and so on. When a container process writes to a bind-mounted host directory, the host kernel sees host UID 101000, not UID 1000. 
The directory permission check happens against the shifted UID.\n\n### Calculating the Mapped Host UID\n\n```bash\n# Inside the container, check the running user:\nid\n# uid=1000(ubuntu) gid=1000(ubuntu)\n\n# On the host, that container UID maps to:\n# host_uid = 100000 + container_uid = 101000\n```\n\n### Option 1: Chown the Host Directory to the Shifted UID\n\nThe cleanest fix — change ownership on the host to the mapped UID:\n\n```bash\nchown -R 101000:101000 /mnt/data/media\n```\n\nContainer processes running as UID 1000 can now read and write the directory transparently, with no special config beyond the mount point entry itself.\n\n### Option 2: Use POSIX ACLs for Shared Paths\n\nACLs are more surgical when multiple containers or host users need access to the same path:\n\n```bash\n# Install acl if not already present (Proxmox host is Debian-based)\napt install acl\n\n# Grant the container's mapped UID read/write/execute access\nsetfacl -m u:101000:rwx /mnt/data/media\n\n# Default ACL so new files and subdirectories inherit the rule\nsetfacl -d -m u:101000:rwx /mnt/data/media\n```\n\n### Option 3: Map Container Root to Host Root\n\nFor containers where the container's root user needs to own host files, add a custom UID map to the container config. This is a targeted override, not a global switch:\n\n```ini\nlxc.idmap: u 0 0 1\nlxc.idmap: u 1 100001 65535\nlxc.idmap: g 0 0 1\nlxc.idmap: g 1 100001 65535\n```\n\nThis maps container UID 0 to host UID 0 while keeping all other UIDs shifted. The host directory just needs to be owned by root:\n\n```bash\nchown root:root /mnt/data/special\nchmod 755 /mnt/data/special\n```\n\n> **Security note**: Mapping container root to host root reduces namespace isolation — a compromised container root can affect any root-owned bind-mounted path on the host. This is acceptable for trusted internal workloads. 
For the broader security picture on LXC and Proxmox isolation, see [Hardening Proxmox VE: Firewall, fail2ban, and SSH Security](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/).\n\n## Bind Mounts in Privileged Containers\n\nPrivileged containers (`unprivileged: 0` in the config) have no UID remapping — container UID 1000 is host UID 1000. Standard Unix permissions apply directly:\n\n```bash\nchown -R 1000:1000 /mnt/data/media\n```\n\nPrivileged containers are the simpler path when you are running Docker inside LXC. Docker's overlay2 storage driver needs real root access, so the LXC container must be privileged anyway. In that setup — like the Portainer and Dockge workflow described in [Managing Docker on Proxmox with Portainer and Dockge](/articles/managing-docker-on-proxmox-with-portainer-and-dockge/) — bind-mounting Docker volume directories from the host just works without any UID arithmetic.\n\n## Real-World Configs That Work\n\n### Media Library Shared Across Two Containers\n\nHost ZFS dataset `/mnt/tank/media` mounted read-only into Jellyfin, read-write into Sonarr, with both excluded from `vzdump`:\n\n```ini\n# /etc/pve/lxc/200.conf  (Jellyfin — consumer)\nmp0: /mnt/tank/media,mp=/media,ro=1,backup=0\n\n# /etc/pve/lxc/201.conf  (Sonarr — writer)\nmp0: /mnt/tank/media,mp=/media,backup=0\n```\n\n```bash\n# Both containers run as UID 1000 internally\nchown -R 101000:101000 /mnt/tank/media\n```\n\n### Docker Data Directory on a Host ZFS Dataset\n\nRun Docker inside a privileged LXC but keep `/var/lib/docker` on a ZFS dataset you can snapshot independently:\n\n```ini\n# /etc/pve/lxc/300.conf  (privileged container: unprivileged: 0)\nmp0: /mnt/ssd/docker-data,mp=/var/lib/docker\n```\n\n```bash\nchown root:root /mnt/ssd/docker-data\nchmod 710 /mnt/ssd/docker-data\n```\n\nExpect Docker to initialize its overlay2 storage driver the first time the container starts — about 10 to 15 seconds on a fresh ZFS dataset before the daemon comes up.\n\n### Read-Only 
Config Injection\n\nManage application configs centrally on the host; containers pick up changes on next restart:\n\n```ini\nmp0: /mnt/configs/nginx,mp=/etc/nginx,ro=1\nmp1: /mnt/configs/app,mp=/app/config,ro=1\n```\n\nThis pattern works well for CI/CD pipelines where Ansible or a deploy script writes the host directory and container restarts pull in the new config.\n\n## Troubleshooting Bind Mount Failures\n\n**Container fails to start, errors in the journal**\n\n```bash\njournalctl -u pve-container@101.service --no-pager | tail -40\n```\n\nThe most common cause is the host directory not existing or a typo in the config path. Verify:\n\n```bash\nls -la /mnt/data/media\n```\n\n**Permission denied inside the container**\n\nCheck the effective ownership from the host:\n\n```bash\nls -lan /mnt/data/media\n# Owner UID should match 100000 + container_uid\n```\n\nFix it:\n\n```bash\nchown -R 101000:101000 /mnt/data/media\n```\n\n**Files created inside the container have large UIDs when viewed from the host**\n\nExpected behavior for unprivileged containers. UID 101000 on the host is UID 1000 inside the container. Use `chown` with the mapped UID when you need to manipulate these files from the host side.\n\n**Mount not present after `pct restore`**\n\n`pct restore` rebuilds the container config from the backup archive. Mount point entries added after the backup was taken are not included. Re-add the `mp` lines to the config manually, or ensure you capture the config file as part of your backup procedure. For a robust backup strategy that covers both containers and host datasets, [Automated Backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) lays out a production-grade approach.\n\n**`vzdump` backups are enormous**\n\nA bind-mounted media library can balloon a container backup from 2 GB to 2 TB. 
Add `backup=0` to the mount point line:\n\n```ini\nmp0: /mnt/tank/media,mp=/media,ro=1,backup=0\n```\n\nBack up the host dataset separately via PBS or ZFS send.\n\n## Conclusion\n\nBind mounts in Proxmox LXC are straightforward once you have the UID shift internalized: for unprivileged containers, `chown 101000:101000` on the host directory is the fix for nearly every permission error you will encounter. Add `mp0: /host/path,mp=/container/path` to the container config, restart, and verify with `pct exec`. The pattern scales cleanly to a dozen containers sharing the same ZFS datasets. Your next step: put those host directories on a ZFS dataset with hourly snapshots — five minutes of work that gives you point-in-time recovery for all your container data without touching the containers themselves.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-lxc-bind-mounts-host-storage/",
            "title": "Proxmox LXC Bind Mounts: Share Host Paths with Containers",
            "summary": "Configure Proxmox LXC bind mounts to share host directories with containers, fix UID/GID mapping in unprivileged containers, and avoid permission pitfalls.",
            "date_modified": "2026-04-26T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "lxc",
                "proxmox",
                "bind-mounts",
                "storage",
                "containers"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-notifications-email-webhooks/",
            "content_html": "\nProxmox VE 8.1 introduced a proper centralized notification framework — SMTP endpoints, webhook targets, and matcher-based routing all managed from one config file or the web UI. This guide walks you through setting up email alerts and HTTP webhooks so backup failures, HA events, and storage errors reach you the moment they happen, whether that's your inbox, Slack, or an ntfy topic on your phone.\n\n## Key Takeaways\n\n- **Requires PVE 8.1+**: The notification framework was introduced in Proxmox VE 8.1; earlier versions only support per-job email fields.\n- **Two endpoint types**: SMTP for email and webhook for any HTTP POST target (ntfy, Gotify, Slack, Telegram).\n- **Matchers control routing**: Filter by event type and severity — silence info noise and only page for actual errors.\n- **Cluster-aware**: The config in `/etc/pve/notifications.cfg` replicates to all nodes automatically.\n- **Always test first**: Fire a test notification before trusting backup alerts to actually land.\n\n## How the Proxmox VE 8.1 Notification Framework Works\n\nBefore 8.1, notification setup meant copying an email address into every backup job and hoping sendmail worked. The new framework flips this: you define named **endpoints** (where to send) and **matchers** (what triggers a send), then the system routes events automatically.\n\nEverything lives in `/etc/pve/notifications.cfg` on the Proxmox cluster filesystem. In a cluster, it replicates to all nodes the same way VM configs do. A skeleton config looks like this:\n\n```ini\nsmtp: my-smtp\n    server mail.example.com\n    port 587\n    mode starttls\n    username alerts@example.com\n    from-address alerts@example.com\n    mailto admin@yourdomain.com\n\nmatcher: default-matcher\n    target my-smtp\n```\n\nThe framework ships with a built-in `mail-to-root` endpoint that uses the local `sendmail` binary. 
In most homelabs, Postfix isn't configured as an SMTP relay on the Proxmox host, so that target silently fails. The fix is replacing the default matcher's target with an explicit SMTP endpoint you control.\n\n## Setting Up an SMTP Endpoint\n\nConfigure SMTP either in the web UI at **Datacenter → Notifications → Add → SMTP Endpoint**, or via `pvesh`:\n\n```bash\npvesh create /cluster/notifications/endpoints/smtp \\\n  --name my-smtp \\\n  --server smtp.fastmail.com \\\n  --port 465 \\\n  --mode tls \\\n  --username me@fastmail.com \\\n  --password 'yourpassword' \\\n  --from-address proxmox@fastmail.com \\\n  --mailto admin@yourdomain.com\n```\n\nFor Gmail, use `smtp.gmail.com`, port 587, `starttls` mode, and an **App Password** — Google has blocked regular credentials for SMTP since 2022, so your account password will not work here.\n\n### TLS Mode Reference\n\n| Mode | Port | Use When |\n|------|------|----------|\n| `starttls` | 587 | Standard submission; negotiates TLS after connect |\n| `tls` | 465 | SMTPS; TLS from the first byte |\n| `insecure` | 25 | Local relay only — never send credentials this way |\n\nThe `--mailto-user` flag accepts a Proxmox user ID such as `root@pam` and resolves the email address from that user's profile under **Datacenter → Permissions → Users**. The `--mailto` flag takes a raw email address directly. To notify multiple recipients, repeat the flag:\n\n```bash\npvesh create /cluster/notifications/endpoints/smtp \\\n  --name team-smtp \\\n  --server smtp.example.com \\\n  --port 587 \\\n  --mode starttls \\\n  --username alerts@example.com \\\n  --password 'yourpassword' \\\n  --from-address proxmox@example.com \\\n  --mailto ops@company.com \\\n  --mailto oncall@company.com\n```\n\nIf you run Proxmox Backup Server alongside Proxmox VE, PBS has its own notification framework configured separately in the PBS web UI. 
The concepts are identical but the config is a separate service — see [Automated Backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) for the full PBS workflow including retention and offsite replication.\n\n## Testing the SMTP Endpoint\n\nFire a test message before trusting this for production backup alerts:\n\n```bash\npvesh create /cluster/notifications/targets/my-smtp/test\n```\n\nExpect the email within 60 seconds. If nothing arrives, note that the SMTP endpoint type sends directly from the Proxmox daemon rather than through the local Postfix relay, so Postfix logs stay empty for these messages; check the system journal instead:\n\n```bash\njournalctl --since \"5 minutes ago\"\n```\n\nThe most common failure causes:\n\n- **Port 25 blocked outbound** by your ISP or upstream router — use 587 or 465 instead\n- **Wrong app password** — Gmail and Fastmail require a dedicated SMTP app password, not your account password\n- **TLS mode mismatch** — flip between `starttls` and `tls` if you see TLS handshake errors in the log\n- **Firewall blocking outbound SMTP** — if you run strict egress rules on the Proxmox host, you'll need to allow TCP 465 or 587 outbound; the [Hardening Proxmox VE: Firewall, fail2ban, and SSH Security](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) guide covers how those egress rules are structured\n\n## Adding a Webhook Endpoint\n\nWebhook endpoints send an HTTP POST to any URL — ntfy, Gotify, Slack, Telegram bots, Mattermost, or a custom receiver. 
The webhook endpoint type arrived later than the rest of the framework, in Proxmox VE 8.3.\n\nFor **ntfy** (a lightweight push notification server that runs cleanly in an LXC container):\n\n```bash\npvesh create /cluster/notifications/endpoints/webhook \\\n  --name ntfy-homelab \\\n  --url 'https://ntfy.example.com/proxmox-alerts' \\\n  --method POST \\\n  --header 'Authorization:Bearer your-ntfy-token' \\\n  --body '{\"title\":\"{{title}}\",\"message\":\"{{message}}\"}'\n```\n\nFor **Slack** via an incoming webhook URL:\n\n```bash\npvesh create /cluster/notifications/endpoints/webhook \\\n  --name slack-ops \\\n  --url 'https://hooks.slack.com/services/T00000000/B00000000/XXXX' \\\n  --method POST \\\n  --body '{\"text\":\"*{{title}}*\\n{{message}}\"}'\n```\n\nThe body field supports Handlebars-style template variables:\n\n- `{{title}}` — short event subject\n- `{{message}}` — full event body\n- `{{severity}}` — one of `info`, `notice`, `warning`, or `error`\n- `{{timestamp}}` — Unix epoch seconds\n\n## Configuring Matchers to Route Alerts\n\nA matcher without match rules is a catch-all — it sends every event to its target:\n\n```bash\npvesh create /cluster/notifications/matchers \\\n  --name catch-all \\\n  --target my-smtp\n```\n\nAdd `--match-severity` to cut down on noise. This matcher sends only errors and warnings to the webhook:\n\n```bash\npvesh create /cluster/notifications/matchers \\\n  --name critical-to-ntfy \\\n  --target ntfy-homelab \\\n  --match-severity error \\\n  --match-severity warning\n```\n\nMatch on the event type field with `--match-field` to catch only backup job results:\n\n```bash\npvesh create /cluster/notifications/matchers \\\n  --name backup-alerts \\\n  --target my-smtp \\\n  --match-field 'exact:type=vzdump' \\\n  --match-severity error \\\n  --match-severity warning\n```\n\nMultiple matchers are independent — an error event can match both a `match-severity error` webhook matcher and a catch-all email matcher, and both will fire. 
There is no stop-processing rule.\n\n## What Events Actually Generate Notifications\n\nThis catches people off guard: the notification framework only fires for system-initiated events, not manual UI actions.\n\n| Event Type | What Triggers It | Severity |\n|------------|-----------------|----------|\n| `vzdump` | Backup job finishes (success, fail, or warning) | `info` / `error` / `warning` |\n| `replication` | Storage replication job completes or fails | `info` or `error` |\n| `package-updates` | Daily update check finds available packages | `notice` |\n| `fencing` | HA manager fences a node | `error` |\n| `cluster` | Node join/leave or quorum change | `warning` or `error` |\n\nManual operations — cloning a VM, live migration, snapshot creation, moving a disk between pools — do **not** generate notifications. Expecting a ping when you hand-migrate a VM and getting silence is the most common \"is this broken?\" moment for new users. It's not broken.\n\n## A Complete Working Config\n\nHere is the exact `/etc/pve/notifications.cfg` running on a three-node Proxmox VE 9.1 cluster. Email gets everything; ntfy only gets errors, so my phone buzzes for actual problems and not for 50 successful nightly backup completions:\n\n```ini\nsmtp: fastmail\n    server smtp.fastmail.com\n    port 465\n    mode tls\n    username me@fastmail.com\n    from-address proxmox@fastmail.com\n    mailto me@fastmail.com\n    comment Email catch-all\n\nwebhook: ntfy\n    url https://ntfy.sh/my-private-topic-xxxxxx\n    method POST\n    header Authorization:Bearer ntfy-token-here\n    body {\"title\":\"{{title}}\",\"message\":\"{{message}}\"}\n    comment Phone push for errors only\n\nmatcher: errors-to-phone\n    target ntfy\n    match-severity error\n\nmatcher: everything-to-email\n    target fastmail\n    comment Catch-all\n```\n\nEdit this file directly for bulk changes. No service restart is needed — Proxmox reads it on demand. 
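\n\nAfter a bulk edit, it is worth reading the configuration back through the API to confirm everything still parses; these two calls return exactly what the notification system sees:\n\n```bash\n# List configured targets (endpoints) and matchers as parsed by Proxmox\npvesh get /cluster/notifications/targets\npvesh get /cluster/notifications/matchers\n```\n\nA syntax error in the file surfaces here immediately, rather than at 3 AM when an alert silently fails to route.\n\n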
The file is plain text, and version-controlling it alongside your other infrastructure configs is straightforward.\n\n### What About the Default mail-to-root Target?\n\nProxmox ships with a built-in `mail-to-root` sendmail endpoint and a `default-matcher` that points to it. If `sendmail` isn't configured on your Proxmox host — which is true for most homelab installations — this silently fails. Go to **Datacenter → Notifications**, edit the `default-matcher`, and point it to your named SMTP endpoint instead.\n\n## Migrating from Per-Job Email to Matchers\n\nIf you set email addresses directly on vzdump jobs before Proxmox VE 8.1, those inline fields still work for backward compatibility. But to get matcher routing and webhooks, you need to clear them. Check what you have first:\n\n```bash\ngrep -r \"mailto\" /etc/pve/jobs.cfg\n```\n\nFor each job with an inline `mailto`, clear the field under **Datacenter → Backup → Edit**. For a large number of jobs, edit `/etc/pve/jobs.cfg` directly to strip `mailto` lines, then verify none remain:\n\n```bash\npvesh get /cluster/backup\n```\n\nConfirm no returned jobs carry a `mailto` field before relying on matchers alone.\n\n## Conclusion\n\nWith a named SMTP endpoint and a couple of matchers in `/etc/pve/notifications.cfg`, Proxmox VE 9.1 reliably delivers backup failures, HA fencing events, and replication errors to your inbox or phone — no per-job configuration needed. The twenty-minute setup pays off the first time a 3 AM backup failure lands in your inbox before your users notice. The logical refinement from here: add `match-severity error` to your webhook matcher so only genuine failures wake you up, and let the daily info-level noise stay in email. If you haven't scheduled automated backup jobs yet, [Automated Backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) covers everything from retention policies to off-site replication.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-notifications-email-webhooks/",
            "title": "Configure Proxmox Notifications for Email and Webhooks",
            "summary": "Configure Proxmox VE 8.1 notifications to route backup failures, HA events, and storage alerts to your inbox or webhooks like ntfy and Slack in 20 minutes.",
            "date_modified": "2026-04-25T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "proxmox-notifications",
                "smtp",
                "webhooks",
                "ntfy",
                "alerting"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-nfs-storage-vm-disks-backups/",
            "content_html": "\nAdding NFS storage to Proxmox VE takes about five minutes from the web UI, and once attached, every node in your cluster can use the same share for VM disk images, ISO libraries, container templates, and backup archives — no per-node configuration needed. This guide walks through the complete process: exporting a share from TrueNAS SCALE or a Debian NFS server, attaching it in Proxmox VE 9.1, choosing mount options that actually improve performance, and avoiding the permission pitfalls that trip up almost everyone on first attempt.\n\n## Key Takeaways\n\n- **Cluster-wide access**: NFS storage added under Datacenter → Storage is mounted on all cluster nodes automatically — add it once, use it everywhere.\n- **Best fit**: ISO libraries, container templates, and VZDump backup archives — not the primary disk for write-heavy database VMs.\n- **root_squash is the biggest gotcha**: Most NAS devices enable root_squash by default, which blocks Proxmox from writing disk images as root. Disable it on the export.\n- **Performance tip**: Adding `nconnect=4` to mount options delivers a 30–50% throughput increase on 10 GbE without any other changes.\n- **Content types**: A single NFS share can simultaneously serve disk images, ISO files, LXC templates, backups, and cloud-init snippets.\n\n## When NFS Makes Sense for Proxmox Storage\n\nNFS is not the right answer for every storage use case. 
Being clear about this upfront saves you a painful storage migration later.\n\n**Use NFS when:**\n- You already have a TrueNAS, Synology, or QNAP NAS with spare capacity and a dedicated storage network or 10 GbE link\n- You want a centralized ISO and template library shared across multiple Proxmox nodes without copying files manually to each\n- You need a cost-effective backup target for VZDump archives\n- Your workloads are sequential or low-IOPS — Home Assistant, lightweight web services, media libraries\n\n**Avoid NFS for:**\n- Databases (PostgreSQL, MySQL) or any workload with heavy random 4K I/O — network round-trip latency kills IOPS; use local NVMe or Ceph RBD instead\n- Storage migrations over slow links: moving a 50 GB disk image onto or off an NFS share over 1 GbE can take fifteen minutes, versus under two minutes for a local NVMe-to-NVMe move. Live migration of a VM whose disk already sits on shared NFS is fast, since only RAM crosses the wire\n- Workloads that depend on ZFS-native send/recv replication — once data is on NFS, those features belong to the NAS, not Proxmox\n\nIf you are designing a full homelab storage layout, [Build a Private Cloud at Home with Proxmox VE](/articles/build-private-cloud-home-proxmox-ve/) covers how to layer NFS alongside local ZFS pools for a balanced setup.\n\n## Setting Up the NFS Export\n\n### TrueNAS SCALE (24.10 Electric Eel or later)\n\nIn the TrueNAS web UI:\n\n1. Go to **Shares → NFS → Add**\n2. Set the **Path** to your dataset (e.g., `/mnt/tank/proxmox-nfs`)\n3. Under **Advanced Options**, add a network entry for your Proxmox subnet (e.g., `192.168.10.0/24`)\n4. Uncheck **Enable Root Squash** — Proxmox must write as root for disk image operations\n5. 
Save and confirm the NFS service is running under **Services → NFS**\n\nVerify the export is visible from a Proxmox node:\n\n```bash\nshowmount -e 192.168.10.50\n```\n\nExpected output:\n\n```\nExport list for 192.168.10.50:\n/mnt/tank/proxmox-nfs 192.168.10.0/24\n```\n\n### Debian 12 or Ubuntu NFS Server\n\nIf you are running a DIY NFS server:\n\n```bash\napt install nfs-kernel-server\n```\n\nEdit `/etc/exports`:\n\n```bash\n/srv/proxmox-nfs 192.168.10.0/24(rw,sync,no_root_squash,no_subtree_check)\n```\n\nApply the changes:\n\n```bash\nexportfs -rav\nsystemctl restart nfs-kernel-server\n```\n\nAlways include `no_subtree_check` — it eliminates a performance-killing consistency check that fires on every file access when subtree checking is enabled.\n\nFor production environments, put NFS traffic on a dedicated storage VLAN to isolate backup and ISO transfer load from your management network. [Configuring VLANs on Proxmox with Linux Bridges](/articles/configure-vlans-proxmox-linux-bridges/) covers that setup in full if you have not done it yet.\n\n## How to Add NFS Storage in the Proxmox Web UI\n\n1. Log into the Proxmox web UI at `https://<node-ip>:8006`\n2. Navigate to **Datacenter → Storage → Add → NFS**\n3. Fill in the form:\n   - **ID**: A short identifier with no spaces (e.g., `nas-proxmox`)\n   - **Server**: The NAS IP or hostname (e.g., `192.168.10.50`)\n   - **Export**: Click the dropdown — Proxmox runs `showmount` against the server and lists available exports automatically\n   - **Content**: Check all types this share will serve: `Disk image`, `ISO image`, `Container template`, `VZDump backup file`, `Snippets`\n   - **Max Backups**: Set a per-VM retention limit if using this as a VZDump target\n4. Click **Add**\n\nThe share mounts on all cluster nodes within seconds. 
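\n\nFrom any node's shell you can also confirm the kernel mounted the export with the expected protocol version and options (the storage ID becomes a directory under `/mnt/pve/`; substitute your own ID):\n\n```bash\n# The OPTIONS column shows the negotiated vers=, rsize/wsize, and proto\nfindmnt /mnt/pve/nas-proxmox\n```\n\n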
Check the **Tasks** pane at the bottom of the UI to confirm there are no mount errors before creating any VMs against the new storage.\n\n### What the Content Types Actually Do\n\n| Content Type | File Format | Typical Use |\n|---|---|---|\n| Disk image | `.raw`, `.qcow2` | VM disk images |\n| ISO image | `.iso` | OS install media |\n| Container template | `.tar.zst` | LXC base images |\n| VZDump backup file | `.vma.zst` (VMs), `.tar.zst` (containers) | VM and container backups |\n| Snippets | YAML/JSON | Cloud-init user-data configs |\n\n## How to Add NFS Storage via the CLI\n\nThe `pvesm` tool is the right approach when scripting Proxmox node setup or managing storage from Ansible:\n\n```bash\npvesm add nfs nas-proxmox \\\n  --server 192.168.10.50 \\\n  --export /mnt/tank/proxmox-nfs \\\n  --content images,iso,vztmpl,backup,snippets \\\n  --options vers=4.1\n```\n\nVerify the storage was added and check capacity:\n\n```bash\npvesm status\n```\n\nTo list the contents of the storage:\n\n```bash\npvesm list nas-proxmox\n```\n\nProxmox writes the storage definition to `/etc/pve/storage.cfg`, which `pmxcfs` replicates to all cluster nodes:\n\n```ini\nnfs: nas-proxmox\n\tpath /mnt/pve/nas-proxmox\n\tserver 192.168.10.50\n\texport /mnt/tank/proxmox-nfs\n\tcontent images,iso,vztmpl,backup,snippets\n\toptions vers=4.1\n```\n\n## NFS Mount Options That Actually Improve Performance\n\nProxmox passes mount options directly through the `options` field. 
These are the ones worth setting:\n\n```bash\npvesm set nas-proxmox --options vers=4.1,hard,timeo=600,retrans=2,nconnect=4\n```\n\n| Option | Effect | Recommendation |\n|---|---|---|\n| `vers=4.1` | Forces NFSv4.1 with session trunking | Always prefer v4.1 over v3 for cluster use |\n| `hard` | Retries indefinitely if the server becomes unreachable | Required for VM disks — `soft` risks silent data corruption on timeouts |\n| `timeo=600` | 60-second timeout before retry (units: 0.1 second) | Increase on networks with occasional latency spikes |\n| `retrans=2` | Retries before reporting an error | 2 is fine on stable LANs; default is 3 |\n| `nconnect=4` | Opens 4 parallel TCP connections to the NFS server | 30–50% throughput increase on 10 GbE (kernel 5.3+) |\n| `noatime` | Skips access-time write on reads | Minor write reduction on ISO and backup shares |\n\n`nconnect=4` is the highest-impact single option if you are on 10 GbE. Benchmark before and after with:\n\n```bash\ndd if=/dev/zero of=/mnt/pve/nas-proxmox/test.img bs=1M count=1024 oflag=direct\nrm /mnt/pve/nas-proxmox/test.img\n```\n\nKeep in mind that `/dev/zero` compresses extremely well: on a ZFS dataset with compression enabled, treat the absolute numbers as optimistic and compare the before/after delta instead.\n\nOn a TrueNAS SCALE system with an NVMe-backed pool and a direct 10 GbE connection, `nconnect=4` typically moves sequential write throughput from around 350 MB/s to 700–900 MB/s. On 1 GbE, the difference is negligible — the link is already saturated long before the connection count matters.\n\n## Using NFS as a Proxmox Backup Target\n\nOnce NFS storage is added with the `backup` content type enabled, pointing a backup job at it is straightforward:\n\n1. Go to **Datacenter → Backup → Add**\n2. Set **Storage** to `nas-proxmox`\n3. Choose your **Schedule** (daily, weekly, or a cron expression)\n4. Configure **Retention** (keep last N backups per VM)\n5. 
Save — Proxmox handles the rest, writing `.vma.zst` archives directly to the NFS share\n\nFor deduplication, encryption, and server-side integrity verification, [Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) is the better tool — it can also use an NFS share as its datastore backing, though local NVMe or a ZFS dataset gives better PBS performance for dedup index operations.\n\nA practical combination that works well: VZDump to NFS for rapid daily snapshots, and PBS on a separate host for deduplicated, encrypted long-term retention with offsite replication.\n\n## Common NFS Gotchas on Proxmox\n\n### root_squash Blocks Disk Image Creation\n\nThis is the most common first-time issue. If you get a `Permission denied` error when creating a VM disk on NFS storage, the export is almost certainly using `root_squash`. Proxmox writes disk images as root; `root_squash` maps root to `nobody`, which has no write access.\n\nFix on TrueNAS: uncheck **Enable Root Squash** in the NFS share settings.\nFix on Linux: change `root_squash` to `no_root_squash` in `/etc/exports` and run:\n\n```bash\nexportfs -ra\n```\n\n### NFSv3 File Locking in a Cluster\n\nNFSv3 uses a separate statd/lockd protocol for file locking. Under high concurrency — two Proxmox nodes creating disk images simultaneously — stale lock files can accumulate and cause `flock` failures. NFSv4.1 (`vers=4.1`) handles locking natively and eliminates this class of problem entirely.\n\n### NFS Storage Shows as Unavailable After a NAS Reboot\n\nProxmox marks NFS storage unavailable if the mount times out during node boot. 
After the NAS comes back online, `pvestatd` normally retries the mount on its own within a minute or two. To force it immediately, mount with the full source path; the storage is not in `/etc/fstab`, so a bare `mount /mnt/pve/nas-proxmox` has nothing to look up:\n\n```bash\nmount -t nfs 192.168.10.50:/mnt/tank/proxmox-nfs /mnt/pve/nas-proxmox\n```\n\nAlternatively, disable and re-enable the storage to trigger a fresh mount attempt: **Datacenter → Storage → nas-proxmox → Edit**, uncheck and re-check **Enable**, or run `pvesm set nas-proxmox --disable 1` followed by `pvesm set nas-proxmox --disable 0`.\n\nIf your NAS routinely boots slower than your Proxmox nodes, expect the storage to show as unavailable until the next retry succeeds; no manual intervention is needed once the export is reachable again.\n\n### QCOW2 and LXC Container Root Filesystems\n\nQCOW2 disk images on NFS work fine for KVM VMs, but LXC containers cannot use QCOW2 for their root filesystems — LXC needs raw block devices or directory-backed storage. If you want LXC container data on NFS, use the `dir` storage plugin pointing at the NFS mountpoint rather than the native `nfs` plugin. This distinction is worth knowing before you try to migrate a container and get an unexpected error.\n\n### ISO Upload Permissions\n\nWhen uploading an ISO through the web UI, Proxmox writes it as `www-data` (UID 33). If you also mount the same NFS share from another client and try to delete or modify those files directly, you will hit permission errors. The clean rule: manage ISO files exclusively through the Proxmox UI or `pvesm` — do not mix access methods on the same export.\n\n## Conclusion\n\nNFS is the lowest-friction way to give every Proxmox cluster node shared access to ISO libraries, container templates, and VZDump archives without deploying Ceph. Add the storage once at the Datacenter level, set `no_root_squash` on the export, and include `vers=4.1,hard,nconnect=4` in your mount options for solid performance on modern hardware. The natural next step is setting up a scheduled VZDump job targeting this storage, then layering [Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) on top for deduplication and encryption.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-nfs-storage-vm-disks-backups/",
            "title": "Add NFS Storage to Proxmox for VM Disks and Backups",
            "summary": "Add NFS storage to Proxmox VE for VM disk images, ISO libraries, and VZDump backups. Covers TrueNAS export config, mount options, and the root_squash permission fix.",
            "date_modified": "2026-04-24T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "nfs",
                "storage",
                "truenas",
                "backup",
                "cluster-storage"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/migrate-hyper-v-vms-proxmox-ve/",
            "content_html": "\nMigrating from Hyper-V to Proxmox doesn't require starting from scratch. Export your VMs from Hyper-V, convert the VHDX disk images with `qemu-img`, import them using `qm importdisk`, and boot. Most Linux guests come up on the first try; Windows VMs need VirtIO drivers and occasionally a licensing touchup. By the end of this guide you'll have your first Hyper-V workload running on Proxmox VE 9 with no data loss and no OS reinstall.\n\n## Key Takeaways\n\n- **Export format**: Hyper-V uses VHDX; Proxmox works with raw or qcow2 — `qemu-img convert` handles the translation.\n- **Linux guests**: Boot cleanly with no driver changes; switching to virtio-net is optional but improves throughput.\n- **Windows guests**: Require VirtIO drivers post-import; use the Fedora VirtIO ISO to install them inside the running guest.\n- **Snapshots**: Merge all Hyper-V checkpoints before export — the avhd/avhdx chain will break the VHDX if left intact.\n- **Generation 2 VMs**: Use UEFI; match the firmware type when creating the Proxmox VM shell or the bootloader won't find the disk.\n\n## Why Hyper-V Admins Are Moving to Proxmox\n\nThe calculus has shifted. Running Hyper-V on bare metal still requires a Windows Server host OS, which means CALs, activation, and Windows Update on your hypervisor. Proxmox VE runs Debian under the hood, has no per-VM licensing, and gives you KVM virtualization plus LXC containers from a single web UI — no Microsoft dependency anywhere in the stack.\n\nFor mixed workloads, the integration model is tighter too. Running containers directly alongside VMs without nested virtualization is a significant operational improvement. 
[Running Docker Inside LXC Containers on Proxmox](/articles/docker-inside-lxc-containers-proxmox/) shows what that looks like in practice, and it's one of the first things you'll want to set up after your VMs are migrated.\n\n## What You Need Before You Start\n\nBefore touching anything in production, confirm you have:\n\n- **Proxmox VE 9** installed and reachable. If you're starting fresh, [How to Install Proxmox VE on Any Hardware](/articles/install-proxmox-ve-on-any-hardware/) covers the full install from ISO to first login.\n- **Disk space for exports**: Hyper-V allocates full VHDX capacity on export — a 500 GB dynamic disk can export as up to 500 GB even if only 80 GB is used. Plan accordingly.\n- **`qemu-img`** on a Linux machine (or WSL2 on Windows). On Proxmox itself, it's pre-installed. On Debian/Ubuntu: `sudo apt install qemu-utils`.\n- **A file transfer path** from your Hyper-V host to the Proxmox node — SSH/scp, a shared NFS mount, or an external USB drive all work.\n- **The VirtIO ISO** if you're migrating Windows guests. Download details are below.\n\n## How to Export VMs from Hyper-V\n\n### Merge Snapshots First\n\nThis is the step most people skip and then regret. If your VM has checkpoints, they live as a chain of `.avhd` or `.avhdx` differencing disks alongside the base `.vhdx`. Exporting without merging produces an incomplete or corrupted image.\n\nIn Hyper-V Manager: right-click the VM → **Checkpoints** → delete all of them. Hyper-V merges the chain automatically when you delete — give it a few minutes per checkpoint on large disks. Once the checkpoint tree is empty, proceed with export.\n\n### Export via Hyper-V Manager\n\nRight-click the VM → **Export** → choose a destination folder. 
Hyper-V writes a folder structure with:\n\n- `Virtual Machines/` — the VM config XML\n- `Virtual Hard Disks/` — the `.vhdx` disk files\n- `Snapshots/` — should be empty after the merge step\n\n### Export via PowerShell\n\n```powershell\nExport-VM -Name \"web-server-01\" -Path \"D:\\HyperV-Exports\\web-server-01\"\n```\n\nFor bulk exports across all VMs on the host:\n\n```powershell\nGet-VM | Export-VM -Path \"D:\\HyperV-Exports\"\n```\n\nThe export pauses disk I/O briefly for consistency. I recommend a clean shutdown for planned migrations rather than a live export — dirty exports are for disaster recovery, not deliberate moves.\n\n## Converting VHDX Disks to qcow2\n\nProxmox natively imports raw and qcow2 disk formats. qcow2 is the better choice: it supports thin provisioning (empty sectors don't waste space), snapshots, and live Proxmox Backup Server backups.\n\n### Convert on the Proxmox Node Directly\n\nCopy the VHDX file to the Proxmox node first:\n\n```bash\nscp \"user@hyperv-host:D:/HyperV-Exports/web-server-01/Virtual Hard Disks/web-server-01.vhdx\" \\\n  root@proxmox:/tmp/\n```\n\nThen convert it on the node:\n\n```bash\nqemu-img convert -f vhdx -O qcow2 -p /tmp/web-server-01.vhdx /tmp/web-server-01.qcow2\n```\n\nThe `-p` flag shows a progress bar. A 100 GB VHDX with 60 GB of actual data takes around 3–5 minutes on a SATA SSD. NVMe-to-NVMe cuts that to under two minutes. 
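\n\nFor larger batches, a loop saves retyping. This is a sketch, not a turnkey script: `/tmp/exports` is a hypothetical staging directory where the VHDX files were copied, and the small helper only derives the output filename:\n\n```bash\n# Map /tmp/exports/foo.vhdx -> <outdir>/foo.qcow2 (pure string logic)\nqcow2_target() {\n    local vhdx=\"$1\" outdir=\"$2\"\n    echo \"${outdir}/$(basename \"${vhdx%.vhdx}\").qcow2\"\n}\n\nfor vhdx in /tmp/exports/*.vhdx; do\n    [ -e \"$vhdx\" ] || continue   # glob matched nothing\n    qemu-img convert -f vhdx -O qcow2 -p \"$vhdx\" \"$(qcow2_target \"$vhdx\" /tmp)\"\ndone\n```\n\nConversions are I/O-bound, so running them sequentially on the same disk is usually as fast as running them in parallel.\n\n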
Verify the result:\n\n```bash\nqemu-img info /tmp/web-server-01.qcow2\n```\n\nYou'll see `virtual size` (declared disk size) and `disk size` (actual space on disk) — the latter should match your used data, not the full VHDX allocation.\n\n### Convert on Windows via WSL2\n\nIf moving the VHDX to Linux first isn't practical, WSL2 runs `qemu-img` directly:\n\n```bash\n# Inside WSL2 (Ubuntu 24.04)\nsudo apt update && sudo apt install -y qemu-utils\n\nqemu-img convert -f vhdx -O qcow2 -p \\\n  \"/mnt/d/HyperV-Exports/web-server-01/Virtual Hard Disks/web-server-01.vhdx\" \\\n  /tmp/web-server-01.qcow2\n```\n\nWSL2's I/O translation layer adds roughly 30–40% more time compared to native Linux. Fine for a one-time migration, slow for ten VMs.\n\n## How to Import the VM into Proxmox\n\n### Step 1: Create the VM Shell\n\nIn the Proxmox web UI, create a new VM and **uncheck \"Add disk\"** during the wizard — you're importing a disk, not creating one. Set these fields based on the Hyper-V generation:\n\n| Setting | Linux Guest | Windows Gen 1 | Windows Gen 2 |\n|---|---|---|---|\n| OS Type | Linux 6.x | Windows 11/2022 | Windows 11/2022 |\n| Machine type | q35 | q35 | q35 |\n| BIOS | SeaBIOS | SeaBIOS | OVMF (UEFI) |\n| SCSI controller | VirtIO SCSI | VirtIO SCSI | VirtIO SCSI |\n| EFI disk | No | No | Yes |\n\nFor UEFI (Gen 2) Windows VMs, also check **Add EFI disk** — Proxmox needs this to store NVRAM variables including Secure Boot state. Note the VM ID the wizard assigns; we'll call it `101`.\n\n### Step 2: Import the Disk\n\n```bash\nqm importdisk 101 /tmp/web-server-01.qcow2 local-lvm --format qcow2\n```\n\nReplace `local-lvm` with your actual storage pool name. Check what's available:\n\n```bash\npvesm status\n```\n\nOn a ZFS-based setup, use `local-zfs`. 
After the import completes you'll see output like:\n\n```\nSuccessfully imported disk as 'unused0:local-lvm:vm-101-disk-0'\n```\n\n### Step 3: Attach the Disk and Set Boot Order\n\nIn the web UI: **VM 101 → Hardware → unused0** → click **Edit** → set the bus:\n\n- **VirtIO Block** for Linux guests\n- **SCSI** (with the VirtIO SCSI controller already configured) for Windows guests\n\nThen configure boot order: **Options → Boot Order** → enable the new disk and move it to first position.\n\n### Step 4: Add a Network Adapter\n\nHyper-V virtual NICs don't carry over. Add one in **Hardware → Add → Network Device**:\n\n- **VirtIO (paravirtualized)** for Linux guests\n- **E1000** for Windows guests *initially* — switch to VirtIO after installing drivers inside the guest\n\nAssign it to the correct Proxmox bridge (`vmbr0` for your LAN, or whichever bridge serves that network).\n\n## Fixing Windows Guests After Import\n\n### The BSOD Problem and How to Avoid It\n\nIf you import with a VirtIO disk and boot without drivers, Windows will bluescreen immediately with `INACCESSIBLE_BOOT_DEVICE`. Two ways around it:\n\n**Safe path**: Import the disk as SCSI with the `lsi` controller type. Windows has inbox drivers for it, so the VM boots. Install VirtIO drivers from inside the running guest, then switch the disk and NIC to VirtIO.\n\n**Fast path**: Mount the VirtIO ISO as a second CD-ROM before the first boot. When Windows hits the BSOD and reboots into WinRE, use the recovery console to load the VirtIO storage driver from the ISO, then boot normally.\n\nDownload the VirtIO ISO directly to Proxmox:\n\n```bash\nwget -P /var/lib/vz/template/iso/ \\\n  https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso\n```\n\n### Installing VirtIO Drivers Inside the Guest\n\nOnce Windows is running, open **Device Manager** and point it at the mounted ISO. 
Install from these subdirectories (adjust for your Windows version):\n\n```\nvioscsi\\w11\\amd64\\    → VirtIO SCSI storage driver (Windows 11)\nNetKVM\\w11\\amd64\\     → VirtIO network adapter\nBalloon\\w11\\amd64\\    → Memory balloon driver\nqxl\\w11\\amd64\\        → QXL display (optional, improves VNC performance)\n```\n\nFor Windows Server 2022, use `2k22\\amd64` in place of `w11\\amd64`. After installing the storage and network drivers, shut down the VM, change the disk bus to **VirtIO Block** in Proxmox hardware, and change the NIC to **VirtIO**. It will come up on VirtIO from that point forward.\n\n### Windows Activation After Migration\n\nExpect deactivation. The virtual BIOS fingerprint changed when you moved from Hyper-V to KVM, and Windows ties its activation state to that fingerprint. For volume or MAK licenses:\n\n```powershell\nslmgr /ato\n```\n\nFor UEFI OEM keys embedded in physical hardware, the key is tied to that machine's firmware — it won't transfer to a VM. Plan for this before migration day: either apply a volume key, set up a KMS server, or purchase a new license for the migrated instance.\n\n## Networking and Storage Equivalents\n\nHyper-V virtual switches don't map directly to Proxmox, but the translation is straightforward:\n\n| Hyper-V | Proxmox Equivalent |\n|---|---|\n| External virtual switch | Linux bridge (`vmbr0`) with physical NIC uplink |\n| Internal virtual switch | Linux bridge without uplink |\n| Private virtual switch | Isolated Linux bridge, no uplink |\n| VLAN tagging on NIC | VLAN tag field on the VM network device |\n| Storage Spaces mirror | ZFS mirror pool or LVM-thin |\n| Hyper-V Replica | Proxmox Backup Server replication |\n\nIf your Hyper-V VMs ran on VLANs, you'll need to recreate the VLAN config on Proxmox bridges before the VMs come online — otherwise the guests will boot into a black-hole network segment. 
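\n\nThe per-VM side of that is a single setting on the network device. As a sketch: VM ID `101` follows this guide's example, and VLAN `20` is a hypothetical tag you would read off the old Hyper-V NIC configuration:\n\n```bash\nVMID=101\nVLAN=20\nNETOPTS=\"virtio,bridge=vmbr0,tag=${VLAN}\"\n\n# Print the command; drop the echo to apply it on a real Proxmox node\necho qm set \"$VMID\" --net0 \"$NETOPTS\"\n```\n\nThe `tag=` field handles per-VM tagging; making the bridge itself VLAN-aware is node-level configuration.\n\n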
[Configuring VLANs on Proxmox with Linux Bridges](/articles/configure-vlans-proxmox-linux-bridges/) has the full bridge and VLAN tag configuration walkthrough.\n\n## Common Pitfalls\n\n**Secure Boot violations**: Gen 2 Hyper-V VMs use Secure Boot. Proxmox's OVMF supports it, and whether it starts enabled depends on the **Pre-Enroll keys** option chosen when the EFI disk was created. If a Windows VM won't boot and shows a Secure Boot error in the VNC console, press **Esc** at the OVMF splash screen, navigate to **Device Manager → Secure Boot Configuration**, and either disable Secure Boot or enroll the Microsoft certificate database.\n\n**Dynamic Memory disappears**: Hyper-V's Dynamic Memory doesn't automatically translate to KVM's balloon driver. After installing VirtIO drivers, enable ballooning in Proxmox (check **Ballooning Device** under **Hardware → Memory**). Without it, whatever RAM you set at VM creation is fixed — the guest can't give memory back.\n\n**Time sync drift**: Hyper-V uses its own enlightenment-based time sync, which disappears after migration. Windows guests fall back to the Windows Time Service, and the virtual RTC they read at boot comes from the host. Make sure your Proxmox node is configured to sync from a reliable NTP source so guests inherit accurate time.\n\n**Generation 1 vs Generation 2 mismatch**: Gen 1 is MBR + legacy BIOS. Gen 2 is GPT + UEFI. If you create the Proxmox VM shell with SeaBIOS but the original was Gen 2, the bootloader won't find the disk. 
Check the Hyper-V VM's **Firmware** settings before you export — it's listed right there.\n\n## Validating Before Cutover\n\nDon't update DNS or decommission the Hyper-V VM until you've confirmed:\n\n- The VM boots to login without errors in the VNC console\n- Network works: ping the gateway, ping `1.1.1.1`, test internal DNS\n- Application checks pass: database responds, web service returns HTTP 200, scheduled tasks run\n- Backup is configured and tested — [Automated Backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) shows how to schedule and verify backups before you call the migration done\n\nKeep the Hyper-V export on disk for at least one week after cutover. Disk space is cheap; an emergency rollback that takes 10 minutes beats one that takes 10 hours.\n\n## Conclusion\n\nThe migration from Hyper-V to Proxmox comes down to three commands — `Export-VM`, `qemu-img convert`, and `qm importdisk` — with most of the elapsed time spent on disk I/O rather than configuration. Linux VMs typically just boot; Windows VMs need an extra 30 minutes for VirtIO drivers and a licensing check. Once the first workload is running on Proxmox, set up Proxmox Backup Server for the migrated VMs before moving the next one — that's the right order of operations, not an afterthought.\n",
            "url": "https://proxmoxpulse.com/articles/migrate-hyper-v-vms-proxmox-ve/",
            "title": "Migrate Hyper-V VMs to Proxmox VE Step by Step",
            "summary": "Export Hyper-V VMs, convert VHDX to qcow2, and import into Proxmox VE 9 without reinstalling. Covers VirtIO drivers, Gen 2 UEFI VMs, and Windows activation.",
            "date_modified": "2026-04-23T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "hyper-v",
                "vm-migration",
                "vhdx",
                "qemu-img",
                "virtio"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-lets-encrypt-acme-certificate/",
            "content_html": "\nThe browser \"Your connection is not private\" warning on your Proxmox web UI is more than visual friction — every modern browser suppresses password autofill on untrusted origins, and if you're accessing your node through Tailscale or a reverse proxy, broken certificate validation creates real operational headaches. Proxmox VE 9.x ships a full ACME client built into both the web UI and CLI. In under 15 minutes, you can have a free, auto-renewing Let's Encrypt certificate on any Proxmox node — including homelab hosts with no public IP — using the DNS-01 challenge. This guide walks through the setup with Cloudflare as the DNS provider, but the same steps apply to AWS Route 53, Hetzner, DigitalOcean, and the 30+ other providers Proxmox ships plugins for.\n\n## Key Takeaways\n\n- **Built-in ACME client**: No certbot or acme.sh needed — Proxmox VE has included its own ACME client since version 6.2.\n- **DNS-01 for homelabs**: If your Proxmox host isn't publicly reachable on port 80, DNS-01 proves domain ownership through your DNS provider's API instead.\n- **Scoped API token**: Create a Cloudflare token with `Zone:DNS:Edit` permission only — do not use the global API key.\n- **Auto-renewal**: The `pve-daily-update.timer` systemd unit renews certificates automatically when fewer than 30 days remain.\n- **Per-node in clusters**: Each cluster node needs its own ACME configuration — there is no cluster-wide certificate push.\n\n## Why the Default Self-Signed Certificate Is a Real Problem\n\nProxmox generates a self-signed certificate at install time using a local CA. 
Every browser flags it as untrusted, which means:\n\n- Chrome and Firefox suppress password autofill on `https://` pages with cert errors\n- Browser extensions and API clients refuse connections that fail certificate validation\n- You train yourself to click through security warnings — exactly the reflex that [Proxmox firewall and SSH hardening](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) is designed to eliminate\n\nA valid certificate fixes all of this and costs nothing beyond owning a domain.\n\n## Prerequisites\n\nBefore starting, confirm three things:\n\n1. You own a domain managed through a supported DNS provider (list available at `Datacenter > ACME > Challenge Plugins`)\n2. Your Proxmox node has outbound internet access to `acme-v02.api.letsencrypt.org` on port 443\n3. Your node's hostname is a fully qualified domain name\n\nCheck the current hostname:\n\n```bash\nhostname --fqdn\n```\n\nIf the result is just `pve` with no domain suffix, fix it before proceeding. The [Proxmox installation guide](/articles/install-proxmox-ve-on-any-hardware/) covers hostname configuration as part of its initial setup checklist, but the quick fix is:\n\n```bash\nhostnamectl set-hostname pve.yourdomain.com\n```\n\nThen update `/etc/hosts` so the node's IP maps to the FQDN:\n\n```ini\n192.168.1.10  pve.yourdomain.com pve\n```\n\n## How to Register an ACME Account\n\nProxmox's ACME client needs a Let's Encrypt account to issue certificates. Register once per node.\n\n**Via the Web UI**:\n\n1. Navigate to `Datacenter > ACME`\n2. Under \"Accounts\", click **Add**\n3. Enter your email address and accept the Terms of Service\n4. 
Select the **Staging** directory first — it has no rate limits and lets you validate the full flow without burning production quota\n\n**Via the CLI**:\n\n```bash\n# Staging — use this first\npvenode acme account register staging your@email.com \\\n  --directory https://acme-staging-v02.api.letsencrypt.org/directory\n\n# Production — switch to this after staging succeeds\npvenode acme account register default your@email.com \\\n  --directory https://acme-v02.api.letsencrypt.org/directory\n```\n\nList registered accounts at any point:\n\n```bash\npvenode acme account list\n```\n\n## HTTP-01 vs DNS-01: Which Challenge Type to Use\n\n| Challenge | Requirement | Best for |\n|-----------|-------------|----------|\n| HTTP-01 | Port 80 publicly reachable on your domain's IP | Public-facing hosts |\n| DNS-01 | API access to your DNS provider | Homelabs, private IPs, wildcards |\n| TLS-ALPN-01 | Port 443 publicly reachable | Rarely needed with Proxmox |\n\nFor a typical homelab Proxmox node, DNS-01 is the right call. Your host stays completely internal — only the Proxmox node needs outbound HTTPS access to Let's Encrypt's servers and your DNS provider's API. No inbound port forwarding required.\n\n## Setting Up the Cloudflare DNS Plugin\n\nFirst, create a scoped API token in Cloudflare:\n\n1. Log into Cloudflare > **My Profile > API Tokens > Create Token**\n2. Select the **Edit zone DNS** template\n3. Scope it to your specific zone (domain) — not \"All zones\"\n4. 
Copy the token immediately — it won't be shown again\n\nAdd the plugin in Proxmox:\n\n**Web UI**: `Datacenter > ACME > Challenge Plugins > Add`\n- Plugin ID: `cloudflare`\n- Plugin type: `Cloudflare Managed DNS`\n- API Token: paste your token\n\n**CLI**:\n\n```bash\npvenode acme plugin add dns cloudflare \\\n  --api cf \\\n  --data \"CF_Token=your_cloudflare_api_token_here\"\n```\n\nVerify the plugin was saved:\n\n```bash\npvenode acme plugin list\n```\n\n## Configuring the Domain on Your Node\n\nAttach a domain and the DNS plugin to your node's certificate configuration.\n\n**Web UI**: `Node > Certificates > ACME > Add`\n- Domain: `pve.yourdomain.com`\n- Challenge type: `DNS`\n- Plugin: `cloudflare`\n\n**CLI**:\n\n```bash\n# Set the ACME account for this node\npvenode config set --acme \"account=default\"\n\n# Set the domain with the DNS plugin\npvenode config set --acmedomain0 \"pve.yourdomain.com,plugin=cloudflare\"\n```\n\nVerify the config was written correctly:\n\n```bash\ngrep -E \"^acme\" /etc/pve/nodes/$(hostname)/config\n```\n\nExpected output:\n\n```ini\nacme: account=default\nacmedomain0: pve.yourdomain.com,plugin=cloudflare\n```\n\n## Ordering the Certificate\n\nWith the account and domain configured, request the certificate:\n\n**Web UI**: `Node > Certificates > ACME` > **Order Certificates Now**\n\nA task log shows the challenge flow in real time. DNS-01 validation against Cloudflare typically takes 30-60 seconds — Cloudflare's API propagation is fast enough that the challenge succeeds on the first validation attempt.\n\n**CLI**:\n\n```bash\npvenode acme cert order\n```\n\nIf you're using the staging account, the certificate issuer will be \"Fake LE Intermediate X1\" — that's correct. Once staging succeeds, switch to the production account and re-order:\n\n```bash\npvenode config set --acme \"account=default\"\npvenode acme cert order --force\n```\n\nAfter a successful order, the Proxmox web UI immediately starts serving the new certificate. 
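\n\nYou can also confirm what the node is serving without a browser. A quick sketch, where `pve.yourdomain.com` is this guide's placeholder FQDN:\n\n```bash\nHOST=pve.yourdomain.com\n\n# Grab the served certificate; 'echo |' closes the session right after the handshake\nif cert=$(echo | timeout 5 openssl s_client -connect \"${HOST}:8006\" -servername \"$HOST\" 2>/dev/null); then\n    echo \"$cert\" | openssl x509 -noout -issuer -enddate\nelse\n    echo \"could not reach ${HOST}:8006\"\nfi\n```\n\nA staging certificate shows its fake issuer here too, which makes this a quick way to confirm the switch to production took effect.\n\n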
Reload your browser — the padlock should now show a valid issuer.\n\n## How Auto-Renewal Works\n\nProxmox does not use a separate cron job for certificate renewal. The `pve-daily-update.service` systemd unit runs once per day and checks whether any node certificates expire within 30 days. If they do, it renews automatically via the same ACME config.\n\nCheck the timer status:\n\n```bash\nsystemctl status pve-daily-update.timer\n```\n\nTrigger a manual renewal check:\n\n```bash\npvenode acme cert renew\n```\n\nIf a renewal fails silently, check the update log:\n\n```bash\ngrep -i acme /var/log/pveupdate.log | tail -20\n```\n\nExpect to see lines confirming the renewal check ran and either skipped (cert still valid) or completed successfully.\n\n## Gotchas and Pitfalls From Real Use\n\n**Rate limits hit fast during testing**: Let's Encrypt's production CA allows 5 failed certificate orders per domain per hour. If your plugin config is wrong and you retry quickly, you'll burn the limit. Always test with the staging CA first — it has no rate limits.\n\n**DNS propagation timing**: The Cloudflare plugin inserts the TXT record and then waits before signaling ACME to validate. Cloudflare is typically 5-15 seconds. Slower DNS providers can take 2-5 minutes, and the Proxmox ACME client's built-in propagation wait may time out before slower providers finish. If validation fails consistently with a non-Cloudflare provider, look for a `sleep` or `propagation_seconds` setting in its plugin script under `/usr/share/proxmox-acme/dnsapi/`.\n\n**Wildcard certificates require DNS-01**: Let's Encrypt issues wildcard certs only via DNS-01. To get `*.yourdomain.com`, set:\n\n```bash\npvenode config set --acmedomain0 \"*.yourdomain.com,plugin=cloudflare\"\n```\n\nNote that `*.yourdomain.com` covers `pve.yourdomain.com` but not the apex `yourdomain.com`. 
Add a second domain entry (`--acmedomain1`) for the apex if you need it.\n\n**Cluster nodes each need separate certs**: In a three-node cluster, configure ACME independently on `pve1`, `pve2`, and `pve3`. Each node's config lives at `/etc/pve/nodes/<nodename>/config`. There is no mechanism to push a certificate from the cluster view. Once you've invested the time setting up a [full Proxmox private cloud](/articles/build-private-cloud-home-proxmox-ve/), budget an extra 10 minutes per additional node for certificate setup.\n\n**Port 8006 vs port 80**: HTTP-01 challenge needs port 80 forwarded to the Proxmox host — not port 8006. Forwarding 80 → 8006 via NAT will fail the challenge because the ACME token is served on port 80, not the web UI port.\n\n## Verifying the Certificate\n\nAfter ordering, confirm the cert details:\n\n**Web UI**: `Node > Certificates` — you'll see the Let's Encrypt cert listed alongside the original self-signed CA cert.\n\n**CLI**:\n\n```bash\nopenssl x509 \\\n  -in /etc/pve/local/pveproxy-ssl.pem \\\n  -text -noout \\\n  | grep -E \"(Subject:|Issuer:|Not After)\"\n```\n\nExpected output:\n\n```\nSubject: CN=pve.yourdomain.com\nIssuer: C=US, O=Let's Encrypt, CN=R10\nNot After : Jul 23 12:00:00 2026 GMT\n```\n\nThe certificate is valid for 90 days. Proxmox renews it when fewer than 30 days remain, so in steady state you should never see expiry-related downtime.\n\n## Using a Different DNS Provider\n\nProxmox ships ACME DNS plugins for 30+ providers. List all available APIs:\n\n```bash\nls /usr/share/proxmox-acme/dnsapi/\n```\n\nThe naming convention: `dns_cf.sh` → pass `--api cf` to `pvenode acme plugin add dns`. 
For AWS Route 53:\n\n```bash\npvenode acme plugin add dns myroute53 \\\n  --api route53 \\\n  --data \"AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\"\n```\n\nIf your DNS provider isn't in the list, the `acme-dns` delegation approach works universally — you create a CNAME from `_acme-challenge.yourdomain.com` to a subdomain on a separate acme-dns server you control. More setup, but compatible with any registrar.\n\n## Conclusion\n\nSetting up Let's Encrypt on Proxmox takes about 15 minutes and permanently eliminates the certificate warning on your management interface. Register a staging account, add your DNS plugin, configure the domain, confirm the staging cert orders cleanly, then switch to production — the systemd timer handles every renewal from that point forward. This pairs directly with the steps in the [Proxmox firewall, fail2ban, and SSH hardening guide](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) to give your management interface a properly secured baseline from day one.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-lets-encrypt-acme-certificate/",
            "title": "Proxmox Let's Encrypt ACME Certificate Setup Guide",
            "summary": "Set up free Let's Encrypt SSL certificates on Proxmox VE using the built-in ACME client. Works for homelab hosts using DNS challenge — no public IP needed.",
            "date_modified": "2026-04-22T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "lets-encrypt",
                "acme",
                "ssl",
                "certificates",
                "dns-challenge"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-high-availability-vm-failover/",
            "content_html": "\nProxmox High Availability Manager restarts your VMs automatically on a surviving node within about 60-90 seconds of detecting a node failure — no manual intervention, no SSH session at 3am. By the end of this guide, you'll have a working HA cluster with properly configured fencing, HA groups, and a tested failover. I'm running this on a three-node Proxmox VE 9.1 cluster with Ceph shared storage, but the procedure is identical for iSCSI or NFS-backed clusters.\n\n## Key Takeaways\n\n- **3 nodes minimum**: Two-node clusters can't maintain quorum after a single failure — HA needs a majority vote to proceed.\n- **Fencing is mandatory**: Without a working watchdog or IPMI fence agent, Proxmox HA will refuse to restart VMs to avoid split-brain data corruption.\n- **Shared storage required**: VMs must live on storage accessible from all nodes — Ceph, iSCSI, NFS, or shared ZFS over FC.\n- **Recovery takes 60-90 seconds**: The delay is deliberate — Proxmox waits for fencing confirmation before restarting anything.\n- **Test with a hard power-off**: A graceful shutdown doesn't replicate a real failure scenario.\n\n## How Proxmox HA Actually Works\n\nProxmox HA runs on two daemons: `pve-ha-lrm` (Local Resource Manager, one per node) and `pve-ha-crm` (Cluster Resource Manager, one elected leader per cluster). The CRM watches resource states; the LRM executes commands on its local node.\n\nWhen a node goes down, the sequence is:\n\n1. Corosync marks the node unreachable after missed heartbeats.\n2. The CRM waits for **fencing confirmation** — either a watchdog reset or an IPMI power-cycle that proves the failed node is genuinely off.\n3. Once fenced, the CRM issues relocate or restart commands for all HA-managed VMs.\n4. The LRM on a surviving node starts each VM from the shared storage pool.\n\nStep 2 is where most misconfigured HA setups stall. 
Without fencing, the CRM correctly refuses to restart VMs — the original node might still be running and holding disk locks, and starting a second instance would corrupt the VM's filesystem.\n\n### Why the Three-Node Minimum Matters\n\nCorosync requires a majority (quorum) to operate. With two nodes, losing one leaves you at exactly 50% — no majority, cluster services halt. With three nodes, losing one leaves you at 66% — quorum maintained, HA proceeds normally.\n\nYou can work around a two-node cluster with a lightweight `qdevice` (a tie-breaker service running on something like a Raspberry Pi), but three nodes is the cleaner path. If you're starting from scratch, the guide on [building a private Proxmox cloud at home](/articles/build-private-cloud-home-proxmox-ve/) walks through the full multi-node cluster setup prerequisites.\n\n## What You Need Before Enabling HA\n\nCheck all of these before touching the HA configuration panel. Missing any one of them produces an HA setup that *looks* active but silently fails when you actually need it.\n\n**Cluster:**\n- Three or more PVE 9.1 nodes in the same cluster\n- Corosync heartbeat latency under 5ms — use a dedicated cluster NIC if you can\n- Synchronized time on all nodes: run `chronyc tracking` and confirm offset under 100ms\n\n**Storage:**\n- Target VMs must use shared storage: Ceph RBD, iSCSI, NFS, or Fibre Channel\n- Local storage (`local-lvm`, `local-zfs`) silently disqualifies a VM from HA eligibility\n\n**Fencing:**\n- A hardware watchdog device at `/dev/watchdog` or `/dev/watchdog0`\n- Or IPMI/iDRAC/iLO configured as a fence agent with tested, working credentials\n\nVerify your watchdog device is present:\n\n```bash\nls /dev/watchdog*\n```\n\nIf nothing appears, load the software fallback as a stopgap (acceptable for testing, not for production):\n\n```bash\nmodprobe softdog\necho \"softdog\" >> /etc/modules\n```\n\n## Configure the Hardware Watchdog with watchdog-mux\n\nProxmox ships `watchdog-mux`, a daemon 
that multiplexes the watchdog device so multiple HA processes can share it safely. It must be running on every cluster node.\n\nCheck and enable it:\n\n```bash\nsystemctl status watchdog-mux\nsystemctl enable --now watchdog-mux\n```\n\nVerify the LRM connected to it:\n\n```bash\njournalctl -u pve-ha-lrm --since \"5 minutes ago\" | grep -i watchdog\n```\n\nYou should see a line confirming the LRM opened `/run/watchdog-mux.sock`. Errors here mean fencing is broken and recovery will hang indefinitely.\n\nThe watchdog timeout is configurable:\n\n```ini\n# /etc/default/pve-ha-manager\nHA_WATCHDOG_TIMEOUT=60\n```\n\nThe 60-second default is appropriate for most setups. Shorter values increase sensitivity to transient network blips; longer values delay recovery.\n\n### Setting Up IPMI Fencing for Bare-Metal Nodes\n\nFor bare-metal servers with IPMI — which covers most enterprise hardware and many homelab boards — IPMI fencing is more reliable than a software watchdog alone. It gives you hard power control even when the OS is completely unresponsive.\n\nInstall the fence agents package on all nodes:\n\n```bash\napt install fence-agents\n```\n\nTest your BMC credentials before configuring anything:\n\n```bash\nfence_ipmilan -a 192.168.1.52 -l admin -p yourpassword -o status\n```\n\nExpected output: `Status: ON`. If this fails, fix IPMI access first — there is no point configuring HA fencing around a broken BMC connection. 
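With three nodes, it's worth checking every BMC in one pass rather than one at a time — a sketch with hypothetical addresses and credentials; substitute your own:\n\n```bash\n# Verify out-of-band power control against each node's BMC\nfor bmc in 192.168.1.51 192.168.1.52 192.168.1.53; do\n  printf '%s: ' \"$bmc\"\n  fence_ipmilan -a \"$bmc\" -l admin -p yourpassword -o status || echo \"UNREACHABLE\"\ndone\n```\n\n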
While you're securing IPMI access, make sure it's restricted to your management VLAN; the [Proxmox hardening guide](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) has practical firewall rules for exactly this scenario.\n\nOne caveat worth stating plainly: Proxmox HA's automatic fencing is watchdog-based — there is no supported per-node setting that registers an IPMI fence plugin for the CRM to call on its own. Treat `fence_ipmilan` as your out-of-band tool for verifying BMC access and for manually power-cycling a node the watchdog can't recover.\n\n## How to Create HA Groups and Enroll VMs\n\n### Create an HA Group\n\nHA groups control which nodes are eligible to run a set of VMs and in what priority order. Navigate to **Datacenter → HA → Groups → Add**, or use the API:\n\n```bash\npvesh create /cluster/ha/groups \\\n  --group critical-vms \\\n  --nodes \"pve1:3,pve2:2,pve3:1\"\n```\n\nThe trailing number is priority — higher wins. Equal priority means Proxmox picks the surviving node arbitrarily.\n\n| Option | Effect |\n|--------|--------|\n| `restricted` | VMs only ever run on nodes listed in this group |\n| `nofailback` | VMs don't migrate back when the preferred node recovers |\n| Node priority | Determines which surviving node receives the VM first |\n\nStart with one group containing all nodes at equal priority. Tune after watching real failovers.\n\n### Add VMs and Containers to the HA Group\n\nIn the web UI: select a VM, click **More → Manage HA**. Or with the CLI:\n\n```bash\npvesh create /cluster/ha/resources \\\n  --sid vm:101 \\\n  --group critical-vms \\\n  --state started \\\n  --max_restart 3 \\\n  --max_relocate 3\n```\n\n- `--state started`: the desired state HA will actively maintain\n- `--max_restart`: restart attempts on the current node before escalating to relocation\n- `--max_relocate`: relocation attempts across nodes before marking the resource failed\n\nLXC containers use `--sid ct:102`. 
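If you have more than a handful of guests to protect, a short loop beats clicking through the UI — a sketch assuming hypothetical IDs `vm:101`, `vm:105`, and `ct:102`:\n\n```bash\n# Enroll several guests with an identical HA policy in one pass\nfor sid in vm:101 vm:105 ct:102; do\n  pvesh create /cluster/ha/resources \\\n    --sid \"$sid\" \\\n    --group critical-vms \\\n    --state started \\\n    --max_restart 3 \\\n    --max_relocate 3\ndone\n```\n\n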
Confirm all enrolled resources:\n\n```bash\npvesh get /cluster/ha/resources\n```\n\nBefore adding a VM, always verify its disk is on shared storage:\n\n```bash\nqm config 101 | grep -E \"^(scsi|virtio|ide|sata)\"\n# You want output like:\n# scsi0: ceph-pool:vm-101-disk-0,size=32G\n# Not:\n# scsi0: local-lvm:vm-101-disk-0,size=32G\n```\n\nA VM on `local-lvm` appears enrolled and healthy in the HA panel, then silently fails to recover when you need it most. There is no warning at enrollment time.\n\n## How to Test HA Failover the Right Way\n\nDo not use `systemctl poweroff` to test failover. A clean shutdown lets the node announce its departure to the cluster, which changes how the CRM handles the transition — it's not a realistic crash simulation.\n\nUse a hard power-off instead. From a machine with IPMI access:\n\n```bash\nipmitool -H 192.168.1.51 -U admin -P yourpassword chassis power off\n```\n\nAlternatively, on a dedicated test node, force a kernel panic:\n\n```bash\n# WARNING: This immediately crashes the system. Test nodes only.\necho c > /proc/sysrq-trigger\n```\n\nWatch recovery in real time from a surviving node:\n\n```bash\nwatch -n2 \"pvesh get /cluster/ha/status/current\"\n```\n\nExpected timeline:\n- **0-30s**: Corosync detects the absent node, CRM initiates fencing\n- **30-60s**: Watchdog resets the failed node, or IPMI confirms power-off\n- **60-90s**: CRM issues relocation commands; LRM brings VMs online on the surviving node\n\nIf the status stays in `recovery` past 90 seconds, the CRM is waiting on a fencing confirmation that never arrived:\n\n```bash\njournalctl -u pve-ha-crm -f\n```\n\nThe log will tell you exactly which fence operation stalled. It's almost always either `watchdog-mux` not running on every node after a reboot, or stale IPMI credentials.\n\n## Common HA Mistakes to Avoid\n\n**VM on local storage.** Enrolled in HA, appears healthy, fails silently on recovery. 
Verify storage before adding any resource.\n\n**Skipping the IPMI fence test.** `fence_ipmilan ... -o status` takes 10 seconds to run. Skipping it takes hours to debug when HA stalls at 3am.\n\n**Two nodes without a qdevice.** One failure, no quorum, HA freezes. Either add a third node or deploy `corosync-qnetd` on a lightweight device before relying on HA for anything real.\n\n**NTP drift.** Corosync is sensitive to clock skew. Offset over a few hundred milliseconds triggers spurious node-unreachable events. Run `timedatectl status` on each node and confirm NTP is active and synced.\n\n**max_restart set to 1.** A VM that needs 45 seconds to complete its startup health check will relocate unnecessarily on the first failed check. Set `max_restart` to at least 3 for non-trivial workloads.\n\n**No N-1 capacity planning.** HA restarts VMs, but if surviving nodes are already at 90% RAM utilization, the VMs fail to start anyway. For a three-node cluster with 128 GB per node, plan as though any single node may be absent — cap total allocated RAM at 256 GB.\n\n## Conclusion\n\nWith `watchdog-mux` confirmed running, shared storage in place, and VMs enrolled in HA groups, Proxmox automatically recovers critical workloads within 90 seconds of a node failure. Fencing isn't bureaucratic overhead — it's the safety mechanism that makes corruption-free restarts possible. Run the hard power-off test before you declare success.\n\nOnce HA is protecting your VMs at the infrastructure level, add point-in-time recovery at the data level: schedule regular backups via [Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) so that even a storage failure has a fallback beyond the last snapshot.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-high-availability-vm-failover/",
            "title": "Proxmox High Availability Setup for Automatic VM Failover",
            "summary": "Set up Proxmox HA Manager to automatically restart VMs after a node failure. Covers fencing requirements, HA group config, and live failover testing on PVE 9.1.",
            "date_modified": "2026-04-21T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "proxmox-ha",
                "high-availability",
                "vm-failover",
                "fencing",
                "cluster"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-lvm-thin-pools-vm-snapshots/",
            "content_html": "\nLVM-thin provisioning gives you copy-on-write snapshots on virtually any block device — spinning rust, SATA SSD, or NVMe — without the ECC RAM requirement or memory overhead that ZFS demands. If you're running a homelab node with 16–32 GB of system RAM and need live snapshots for VMs and containers, LVM-thin is the right answer. By the end of this guide you'll have an LVM-thin pool configured as a Proxmox storage backend, know how to snapshot and roll back VMs in seconds, and understand exactly where this approach beats ZFS and where it falls short.\n\n## Key Takeaways\n\n- **No ECC tax**: LVM-thin snapshots work on any block device with no special RAM requirements.\n- **Copy-on-write**: Snapshots consume space only for changed blocks, not a full clone of the disk.\n- **Proxmox-native**: LVM-thin is a first-class storage type in Proxmox VE 9.1 — no plugins or patches required.\n- **Snapshot chains**: Performance degrades noticeably past 3–4 chained snapshots per volume; keep chains short.\n- **Best fit**: Ideal for single-node homelabs, dedicated SSDs, and dev/test environments where ZFS overhead isn't justified.\n\n## Why Choose LVM-Thin Over ZFS?\n\nZFS is excellent for the right workload. But ZFS is memory-hungry by design — the ARC cache eats RAM aggressively, and on a node with 16 or 32 GB that's a real constraint when you also want to run ten or more VMs. ZFS also strongly prefers ECC RAM for its data integrity guarantees, and ECC-capable consumer motherboards cost meaningfully more.\n\nLVM-thin sits at the other end of the spectrum. 
It's a Linux kernel feature (dm-thin), runs on any block device, uses almost no RAM overhead, and gives you the one ZFS feature most admins actually need day-to-day: copy-on-write snapshots.\n\nHere's how the main Proxmox storage backends compare for VM workloads:\n\n| Storage Type | Snapshots | Thin-Provisioned | RAM Overhead | Hardware Requirement |\n|---|---|---|---|---|\n| LVM (thick) | No | No | Minimal | Any block device |\n| LVM-Thin | Yes (CoW) | Yes | Minimal | Any block device |\n| ZFS | Yes (CoW) | Yes | High (ARC) | ECC RAM preferred |\n| Directory (qcow2) | Yes (file) | Yes | Minimal | Any filesystem |\n| Ceph (RBD) | Yes (CoW) | Yes | Moderate | 3+ nodes |\n\nDirectory storage with qcow2 also supports snapshots, but qcow2 performance degrades under concurrent I/O because the format serializes writes internally. LVM-thin avoids that — snapshots are tracked by the kernel block layer, and raw disk images maintain full sequential write speed.\n\n## Prerequisites and Disk Selection\n\nYou need an unformatted block device: a whole disk, a partition, or space on an existing PV. A dedicated SSD or NVMe is the right choice. Don't carve LVM-thin out of the same disk as your Proxmox OS root — contention from OS writes will hurt VM I/O latency under load.\n\nFor this guide I'll use `/dev/sdb`, a 500 GB SATA SSD added to an existing Proxmox VE 9.1 node. Adjust device paths to match your hardware. 
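Device letters like `sdb` can shuffle across reboots, so pin down exactly which drive you're about to hand over before wiping anything — a read-only inspection, safe to run on any Linux box:\n\n```bash\n# Match the kernel name to the physical drive by size and model\nlsblk -dno NAME,SIZE,MODEL,SERIAL\n\n# The by-id symlinks are stable across reboots\nls -l /dev/disk/by-id/ | grep -w sdb\n```\n\n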
If you're still selecting hardware, [How to Install Proxmox VE on Any Hardware](/articles/install-proxmox-ve-on-any-hardware/) covers what to look for in drives and whether consumer SSDs hold up in always-on roles.\n\nCheck the disk is clean before touching it:\n\n```bash\nlsblk -f /dev/sdb\nwipefs -a /dev/sdb   # Wipe leftover filesystem signatures if present\n```\n\n## Step 1: Create the Physical Volume and Volume Group\n\n```bash\npvcreate /dev/sdb\nvgcreate vg-thin /dev/sdb\n```\n\nVerify:\n\n```bash\npvs\nvgs\n```\n\nExpected output from `vgs`:\n\n```\n  VG       #PV #LV #SN Attr   VSize    VFree\n  pve        1  17   0 wz--n- <476.94g  <96.00g\n  vg-thin    1   0   0 wz--n- <465.76g <465.76g\n```\n\nThe `pve` VG is your existing Proxmox install. `vg-thin` is the new one, ready for the pool.\n\n## Step 2: Create the Thin Pool Logical Volume\n\nAllocate 95% of the VG to the pool and leave 5% unallocated for LVM metadata expansion. Thin pools need metadata headroom — running the VG to 100% causes hard I/O failures across every volume in the pool simultaneously.\n\n```bash\nlvcreate \\\n  --type thin-pool \\\n  --name pool0 \\\n  --extents 95%VG \\\n  vg-thin\n```\n\nVerify the result:\n\n```bash\nlvs -a vg-thin\n```\n\nYou'll see the pool LV and its hidden metadata sibling (`[pool0_tmeta]`). That's expected — LVM manages metadata allocation internally and the brackets indicate a hidden helper volume.\n\n## Step 3: Register the Thin Pool in Proxmox Storage\n\nYou can do this via the web UI or directly with `pvesm`.\n\n### Web UI Method\n\n1. Open **Datacenter → Storage → Add → LVM-Thin**\n2. Set **ID**: `ssd-thin`\n3. Set **Volume Group**: `vg-thin`\n4. Set **Thin Pool**: `pool0`\n5. Set **Content**: `Disk image, Container` (add `Snippets` if needed)\n6. 
Click **Add**\n\n### CLI Method\n\n```bash\npvesm add lvmthin ssd-thin \\\n  --vgname vg-thin \\\n  --thinpool pool0 \\\n  --content images,rootdir\n```\n\nVerify the storage is active:\n\n```bash\npvesm status\n```\n\nYou should see `ssd-thin` listed with `active` status and the available capacity reported correctly.\n\n## Step 4: Create VMs and Containers on LVM-Thin\n\nWhen creating a VM in the web UI, select `ssd-thin` from the storage dropdown for the disk. Via CLI:\n\n```bash\nqm create 200 \\\n  --name test-vm \\\n  --memory 2048 \\\n  --cores 2 \\\n  --net0 virtio,bridge=vmbr0\n\nqm set 200 \\\n  --scsi0 ssd-thin:32 \\\n  --ide2 ssd-thin:cloudinit \\\n  --boot order=scsi0\n```\n\nThe `ssd-thin:32` syntax allocates a 32 GB thin-provisioned volume. The pool doesn't pre-allocate 32 GB — it consumes actual disk space only as data is written. For LXC containers, the `rootdir` content type enables the same thin allocation for container root filesystems.\n\n## How to Take and Roll Back LVM-Thin Snapshots\n\nA snapshot on LVM-thin is a new thin volume that shares blocks with the origin. When either volume writes to a block, the dm-thin kernel driver copies the original block before overwriting. No data is duplicated at snapshot time — only divergences accumulate going forward.\n\nTake a snapshot of VM 200:\n\n```bash\nqm snapshot 200 pre-upgrade \\\n  --description \"Before kernel 6.12 upgrade\" \\\n  --vmstate 0\n```\n\nThe `--vmstate 0` flag skips saving RAM state, making the snapshot near-instant and much smaller. For an upgrade-and-rollback workflow, a disk-only snapshot is almost always sufficient — run `sync` inside the guest first to flush pending writes to disk.\n\nList snapshots:\n\n```bash\nqm listsnapshot 200\n```\n\nRollback:\n\n```bash\nqm rollback 200 pre-upgrade\n```\n\nRollback is instant regardless of how much data changed between snapshot and rollback. 
The thin pool reassigns block mappings without moving any data.\n\n## Monitoring Pool Usage Before It Causes Problems\n\nA full thin pool is a hard failure — all volumes go read-only at once. Monitor usage proactively:\n\n```bash\nlvs -o +data_percent,metadata_percent vg-thin/pool0\n```\n\nSample output:\n\n```\n  LV    VG       Attr       LSize   Pool  Origin Data%  Meta%\n  pool0 vg-thin  twi-aotz-- 440.00g             23.47  1.82\n```\n\nConfigure LVM autoextend in `/etc/lvm/lvm.conf` as a safety net:\n\n```ini\nactivation {\n  thin_pool_autoextend_threshold = 80\n  thin_pool_autoextend_percent = 20\n}\n```\n\nThis grows the pool by 20% when it hits 80% full — provided unallocated space exists in the VG. That's exactly why we left the 5% reserve during pool creation.\n\n**Gotcha from the field**: If you snapshot frequently and then delete the parent volumes without removing the snapshots first, the metadata volume grows faster than the data volume. Watch `Meta%` separately; the metadata pool is much smaller and will surprise you at an inconvenient time.\n\n## Using LVM-Thin With Proxmox Backup Server\n\nLVM-thin and Proxmox Backup Server integrate cleanly. PBS uses its own change-block-tracking (dirty-bitmap) mechanism for incremental backups, independent of LVM snapshots — it doesn't consume or require LVM snapshots internally. You can chain them yourself: take an LVM-thin snapshot before a PBS backup run to guarantee a consistent source while the VM continues running.\n\nFor backup scheduling and retention policy configuration, [Automated Backups with Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) walks through the full PBS setup — everything there applies equally to LVM-thin-backed VMs and containers.\n\n## When LVM-Thin Is Not the Right Choice\n\nLVM-thin is not a silver bullet. Here's where I'd choose something else:\n\n- **Silent corruption protection**: ZFS checksums catch bitrot during scrubs; LVM-thin does not checksum data. 
For NAS workloads or long-lived archival data, ZFS wins.\n- **Multi-node shared storage**: LVM-thin is strictly local to one node. For clusters requiring live migration with shared disk, Ceph RBD is the correct backend.\n- **Deep snapshot chains**: LVM-thin degrades past 3–4 chained snapshots per volume. ZFS handles deep chains more gracefully, and qcow2 can technically go deeper too.\n- **High-RAM ECC servers**: If you have 64+ GB ECC RAM and production workloads, ZFS overhead amortizes well and you gain checksumming plus native compression.\n\nFor homelab nodes where RAM is limited and snapshot capability matters more than byte-level integrity, LVM-thin is the correct default.\n\n## How to Migrate Existing VMs to LVM-Thin\n\nIf you have VMs on directory storage or thick LVM and want to move them to the thin pool, Proxmox handles it live without stopping the VM:\n\n```bash\nqm move-disk 100 scsi0 ssd-thin --delete 1\n```\n\nThe `--delete 1` flag removes the source disk after the move completes successfully. Expect the move to finish in under two minutes for a 50 GB disk on NVMe-to-NVMe; SATA-to-SATA will be closer to five minutes for the same size. Proxmox uses an internal mirroring approach — the VM stays online throughout.\n\nIf you're building out a broader homelab architecture with multiple storage tiers, [Build a Private Cloud at Home with Proxmox VE](/articles/build-private-cloud-home-proxmox-ve/) covers how LVM-thin fits alongside ZFS and Ceph on multi-role nodes.\n\n## Conclusion\n\nLVM-thin is the practical middle ground for Proxmox storage: copy-on-write snapshots, thin allocation, and solid I/O performance with no RAM overhead and no ECC requirement. Set it up on a dedicated SSD, register it in Proxmox as a storage backend, and you have a snapshot-capable layer that works on commodity hardware. Next step: configure Proxmox Backup Server to target this pool and add scheduled retention-based backups so your LVM-thin VMs are protected automatically.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-lvm-thin-pools-vm-snapshots/",
            "title": "LVM-Thin Pools on Proxmox for VM Snapshots Without ZFS",
            "summary": "Set up LVM-thin pools on Proxmox VE 9.1 for copy-on-write snapshots without ZFS memory overhead. Works on any block device with no ECC RAM required.",
            "date_modified": "2026-04-20T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "lvm",
                "storage",
                "snapshots",
                "thin-provisioning",
                "proxmox-ve"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/k3s-kubernetes-cluster-proxmox-vms/",
            "content_html": "\nDeploying K3s on Proxmox VMs gives you a production-ready Kubernetes cluster that boots from cloud-init templates in under 30 minutes. The end result: a three-node cluster — one control plane, two workers — with a working `kubeconfig` ready for `kubectl` and Longhorn handling persistent storage across both workers. This guide uses Proxmox VE 9.1 and K3s v1.32, the current stable release as of April 2026. If you've got Proxmox running and want Kubernetes without upstream complexity, K3s is the most direct path there.\n\n## Key Takeaways\n\n- **Template first**: Build one Ubuntu 24.04 cloud-init template, clone it for every node — identical base, zero config drift.\n- **VM sizing**: Control plane needs 2 vCPU and 4 GB RAM minimum; workers at 2 vCPU / 4 GB handle most homelab workloads comfortably.\n- **Networking**: K3s uses Flannel VXLAN by default — a single Proxmox Linux bridge handles it without SDN or VLAN config.\n- **Storage**: Longhorn needs a dedicated virtio disk per worker, not the OS disk — add it before deploying Longhorn.\n- **HA tradeoff**: Single control plane is fine for homelabs; true HA requires three control-plane nodes with embedded etcd, worth it only for production.\n\n## Why K3s Instead of Full Kubernetes on Proxmox\n\nK3s is a CNCF-certified Kubernetes distribution maintained by SUSE. It packages the entire control plane as a single ~70 MB binary, replaces etcd with SQLite by default (or embedded etcd for HA), and drops cloud-provider integrations that don't apply to Proxmox anyway.\n\nFor homelab and small production setups, the advantages over kubeadm-managed Kubernetes are concrete:\n\n- **Single binary install**: No `kubeadm init`, no separate etcd cluster, no kubelet config juggling.\n- **Lower RAM floor**: K3s control plane idles around 500 MB vs. 
1.5–2 GB for a full Kubernetes control plane.\n- **Auto-upgrade support**: The `system-upgrade-controller` lets you roll cluster upgrades without SSH-ing into each node.\n- **Built-in ingress**: Traefik ships as the default ingress controller — functional out of the box, swappable if you prefer ingress-nginx.\n\nIf you're already [running Docker inside LXC containers on Proxmox](/articles/docker-inside-lxc-containers-proxmox/) and want to move toward orchestration, K3s is the lowest-friction upgrade path. Docker-in-LXC works well for a handful of services, but once you hit five or more containers that need health checks, scheduling, and rolling deployments, Kubernetes scheduling pays for itself immediately.\n\n## Hardware and VM Requirements\n\nYou don't need a dedicated machine. A single Proxmox host with 32 GB RAM and an NVMe drive can run this entire setup with room to spare for other VMs.\n\n| Node | vCPU | RAM | OS Disk | Extra Disk | Role |\n|------|------|-----|---------|------------|------|\n| k3s-control | 2 | 4 GB | 32 GB | — | Control plane |\n| k3s-worker-1 | 2 | 4 GB | 32 GB | 50 GB (Longhorn) | Worker |\n| k3s-worker-2 | 2 | 4 GB | 32 GB | 50 GB (Longhorn) | Worker |\n\nThe Longhorn disks are thin-provisioned virtio disks — Proxmox allocates storage lazily, so a 50 GB thin disk only consumes what Longhorn actually writes. On NVMe-to-NVMe, expect a 50 GB Longhorn volume to provision in under 10 seconds.\n\nAll three VMs on the same bridge (`vmbr0`) is sufficient for a homelab cluster. If you want to isolate cluster traffic from your LAN — and for anything exposed to the internet you should — see [configuring VLANs on Proxmox with Linux bridges](/articles/configure-vlans-proxmox-linux-bridges/) for a clean segmentation approach before you start.\n\n## Building the Base Ubuntu Cloud-Init Template\n\nEvery node in this cluster starts as a clone of the same base template. 
Get the template right and the rest is cloning plus one install command per node.\n\n### Download the Ubuntu 24.04 Cloud Image\n\nSSH into your Proxmox host and pull the cloud image:\n\n```bash\nwget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img \\\n  -O /tmp/noble-server-cloudimg-amd64.img\n```\n\nUbuntu cloud images ship with `cloud-init` pre-installed and the `ubuntu` user pre-configured for SSH key injection. No guest OS bootstrapping required.\n\n### Create and Configure the Template VM\n\n```bash\n# VM ID 9000 is a common convention for templates\nqm create 9000 \\\n  --name ubuntu-2404-cloud \\\n  --memory 4096 \\\n  --cores 2 \\\n  --net0 virtio,bridge=vmbr0 \\\n  --ostype l26 \\\n  --agent enabled=1\n\n# Import the cloud image as the primary disk\nqm importdisk 9000 /tmp/noble-server-cloudimg-amd64.img local-lvm\n\n# Attach it as a scsi disk and set boot order\nqm set 9000 \\\n  --scsihw virtio-scsi-pci \\\n  --scsi0 local-lvm:vm-9000-disk-0,discard=on,ssd=1 \\\n  --boot c \\\n  --bootdisk scsi0\n\n# Add the cloud-init drive\nqm set 9000 --ide2 local-lvm:cloudinit\n\n# Serial console is required for cloud-init display\nqm set 9000 --serial0 socket --vga serial0\n\n# Inject your SSH public key and set DHCP as the default\nqm set 9000 \\\n  --ciuser ubuntu \\\n  --sshkeys ~/.ssh/authorized_keys \\\n  --ipconfig0 ip=dhcp\n```\n\n### Resize the OS Disk and Convert to Template\n\nThe cloud image ships as a 2.2 GB raw disk. Resize it before converting — you cannot resize a template disk after conversion.\n\n```bash\nqm resize 9000 scsi0 32G\nqm template 9000\n```\n\nThat's the template. Every K3s node will be a full clone of VM 9000, booting with a fresh hostname and a DHCP address on first start. 
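Before cloning anything, a ten-second sanity check that the template came out right is cheap insurance (adjust `local-lvm` if you imported to a different storage):\n\n```bash\nqm config 9000 | grep -E '^(template|scsi0|ide2|boot):'\n# Expect template: 1, the scsi0 disk on your storage, and the ide2 cloudinit drive\n```\n\n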
If the disk shows as `unused0` after import instead of being attached, run `qm set 9000 --scsi0 local-lvm:vm-9000-disk-0` to re-attach it — this happens when the storage name in the import path doesn't exactly match the storage ID in Proxmox.\n\n## How to Deploy the K3s Control Plane Node\n\n### Clone the Template and Assign a Static IP\n\nStatic IPs are not optional here. If the control plane IP changes after cluster initialization, the TLS certificates and Flannel overlay both break.\n\n```bash\n# Full clone so the new VM is independent of the template (not a linked clone)\nqm clone 9000 101 --name k3s-control --full\n\n# Static IP for the control plane\nqm set 101 --ipconfig0 ip=192.168.1.10/24,gw=192.168.1.1\n\nqm start 101\n```\n\nWait about 30 seconds for cloud-init to finish its first-boot run, then SSH in:\n\n```bash\nssh ubuntu@192.168.1.10\n```\n\n### Install K3s on the Control Plane\n\n```bash\ncurl -sfL https://get.k3s.io | sh -s - server \\\n  --cluster-init \\\n  --tls-san 192.168.1.10 \\\n  --disable traefik \\\n  --node-name k3s-control\n```\n\nFlags worth explaining:\n\n- `--cluster-init` initializes embedded etcd, enabling HA expansion later if you add more control-plane nodes.\n- `--tls-san 192.168.1.10` adds the control plane's IP to the TLS certificate SANs — required for `kubectl` connections from outside the VM.\n- `--disable traefik` is personal preference; remove this flag if you want Traefik as your ingress controller out of the box.\n\nK3s installs, starts, and enables the `k3s` systemd service in about 45 seconds on a modern CPU. 
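If you're scripting the install, don't race the service — poll until it reports active before grabbing the join token (a sketch assuming the default `k3s` service name and a ~2-minute ceiling):\n\n```bash\n# Wait up to 120 seconds for the k3s systemd service to come up\nfor _ in $(seq 1 24); do\n  if systemctl is-active --quiet k3s; then\n    echo \"k3s is active\"\n    break\n  fi\n  sleep 5\ndone\n```\n\n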
Grab the node token for worker joins:\n\n```bash\nsudo cat /var/lib/rancher/k3s/server/node-token\n```\n\nCopy the kubeconfig to your local machine and fix the server address:\n\n```bash\n# Run from your local machine\nscp ubuntu@192.168.1.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config\nsed -i 's/127.0.0.1/192.168.1.10/g' ~/.kube/config\nchmod 600 ~/.kube/config\n```\n\nVerify:\n\n```bash\nkubectl get nodes\n```\n\n`k3s-control` should appear in `Ready` state within 60 seconds of the install completing.\n\n## Joining Worker Nodes to the Cluster\n\nClone the template twice more:\n\n```bash\nqm clone 9000 102 --name k3s-worker-1 --full\nqm set 102 --ipconfig0 ip=192.168.1.11/24,gw=192.168.1.1\n\nqm clone 9000 103 --name k3s-worker-2 --full\nqm set 103 --ipconfig0 ip=192.168.1.12/24,gw=192.168.1.1\n\nqm start 102 && qm start 103\n```\n\nSSH into each worker and run the K3s agent installer. Replace `<your-node-token>` with the token from the previous step:\n\n```bash\ncurl -sfL https://get.k3s.io | \\\n  K3S_URL=https://192.168.1.10:6443 \\\n  K3S_TOKEN=<your-node-token> \\\n  sh -s - agent \\\n  --node-name k3s-worker-1\n```\n\nRepeat on `k3s-worker-2` with `--node-name k3s-worker-2`. Each agent join completes in under 30 seconds. From your local machine:\n\n```bash\nkubectl get nodes -o wide\n```\n\nExpected output:\n\n```\nNAME           STATUS   ROLES                       AGE   VERSION\nk3s-control    Ready    control-plane,etcd,master   5m    v1.32.3+k3s1\nk3s-worker-1   Ready    <none>                      2m    v1.32.3+k3s1\nk3s-worker-2   Ready    <none>                      1m    v1.32.3+k3s1\n```\n\n## Adding Persistent Storage with Longhorn\n\nK3s includes a `local-path` provisioner that creates node-local volumes — fine for stateless workloads, useless for anything that needs to survive pod rescheduling to a different node. 
Longhorn replicates block storage across your worker nodes and fixes this.\n\n### Add a Dedicated Disk to Each Worker\n\nFrom the Proxmox host (VMs stay running — virtio hotplug works on Linux 5.x+ guests):\n\n```bash\nqm set 102 --virtio1 local-lvm:50,discard=on\nqm set 103 --virtio1 local-lvm:50,discard=on\n```\n\nThe disks appear immediately as `/dev/vdb` inside the VMs. Do not partition or format them — Longhorn manages the raw block device directly.\n\n### Install Longhorn Prerequisites on Each Worker\n\n```bash\nsudo apt-get install -y open-iscsi nfs-common\nsudo systemctl enable --now iscsid\n```\n\nSkipping `open-iscsi` is the single most common reason Longhorn volumes get stuck in `Attaching`. The iSCSI initiator failure is silent — the pod just hangs.\n\n### Deploy Longhorn v1.7.1\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/deploy/longhorn.yaml\n```\n\nWatch the rollout (takes 3–4 minutes on first deploy):\n\n```bash\nkubectl -n longhorn-system get pods --watch\n```\n\nOnce all pods are running, set Longhorn as the default storage class:\n\n```bash\nkubectl patch storageclass local-path \\\n  -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}'\n\nkubectl patch storageclass longhorn \\\n  -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'\n```\n\nNow any PVC without an explicit storage class gets a Longhorn volume. With only two workers, also lower the default replica count from 3 to 2 (**Settings → General → Default Replica Count** in the Longhorn UI); otherwise new volumes sit in a `Degraded` state waiting for a third node that doesn't exist.\n\n## Common Gotchas and How to Fix Them\n\n**Nodes stuck in `NotReady` after join**: Almost always a firewall issue. K3s needs ports 6443 (API), 8472/UDP (Flannel VXLAN), and 10250 (kubelet) open between nodes. If `ufw` is active on your VMs, it will silently drop VXLAN traffic. 
Either disable it or open those ports explicitly:\n\n```bash\nsudo ufw allow 6443/tcp\nsudo ufw allow 8472/udp\nsudo ufw allow 10250/tcp\n```\n\n**Node IPs change after a DHCP lease renewal**: This is why the static IP step matters. If you skipped it and used DHCP, the Flannel overlay breaks when the IP changes. Fix by setting static IPs via `qm set` cloud-init config and reprovisioning the affected node.\n\n**`kubectl get nodes` shows the node as `NotReady` after a Proxmox host reboot**: Check that the `k3s` and `k3s-agent` services started correctly. Cloud-init sometimes races with systemd on first boot after a snapshot restore.\n\n```bash\nsudo systemctl status k3s          # control plane\nsudo journalctl -u k3s-agent -f    # workers\n```\n\n**Template disk shows as `unused0` after import**: Re-attach it manually:\n\n```bash\nqm set 9000 --scsi0 local-lvm:vm-9000-disk-0\n```\n\nThis happens when the storage name used in `qm importdisk` doesn't exactly match the storage pool ID shown in the Proxmox UI.\n\n## Securing the Cluster\n\nK3s writes its kubeconfig to `/etc/rancher/k3s/k3s.yaml` with root-only permissions by default, but installers and guides often loosen it with `--write-kubeconfig-mode 644`. If yours is world-readable, fix that immediately:\n\n```bash\nsudo chmod 600 /etc/rancher/k3s/k3s.yaml\n```\n\nThe node token in `/var/lib/rancher/k3s/server/node-token` grants full cluster join rights. Treat it like a root password and rotate it after initial setup.\n\nFor the Proxmox host layer, [hardening Proxmox VE with firewall, fail2ban, and SSH security](/articles/hardening-proxmox-firewall-fail2ban-ssh-security/) covers host-level lockdown you should do in parallel with cluster setup.\n\nFor disaster recovery: [Proxmox Backup Server](/articles/automated-backups-proxmox-backup-server/) can snapshot all three K3s VMs at the hypervisor level, giving you a clean restore point before cluster upgrades or Kubernetes version bumps. 
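\n\nIf you script that pre-upgrade pass, a small helper over the VM IDs used in this guide keeps it repeatable (a sketch; run it from the Proxmox host shell):

```shell
# Snapshot all three K3s VMs before a cluster upgrade.
# VM IDs 101-103 match the clones created earlier in this guide.
snapshot_all() {
  name="pre-upgrade-$(date +%Y%m%d)"
  for vmid in 101 102 103; do
    qm snapshot "$vmid" "$name" --description "Before K3s upgrade"
  done
}
# From the Proxmox host shell:
# snapshot_all
```

Rolling any node back is then a single `qm rollback <vmid> <snapshot-name>`.\n\n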
Pair hypervisor snapshots with Longhorn's built-in snapshot support for application-level recovery.\n\n## Conclusion\n\nYou now have a three-node K3s v1.32 cluster on Proxmox VE 9.1: control plane with embedded etcd, two workers with Longhorn persistent storage, and a local `kubeconfig` ready for `kubectl`. The cloud-init template approach means adding a fourth node is a `qm clone` and a 30-second agent join — no manual OS setup. The logical next step is deploying an ingress controller (ingress-nginx installs with a single `kubectl apply` of its deploy manifest) and exposing your first service outside the cluster.\n",
            "url": "https://proxmoxpulse.com/articles/k3s-kubernetes-cluster-proxmox-vms/",
            "title": "K3s Kubernetes Cluster on Proxmox VMs Setup Guide",
            "summary": "Deploy a three-node K3s Kubernetes cluster on Proxmox VMs using cloud-init templates. From bare VMs to a working kubeconfig in 30 minutes, with Longhorn persistent storage.",
            "date_modified": "2026-04-19T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "kubernetes",
                "k3s",
                "proxmox",
                "cloud-init",
                "longhorn"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/self-host-openclaw-ai-assistant-proxmox-lxc/",
            "content_html": "\nYour laptop is not a server. It sleeps, it travels, and the moment you close the lid your local AI assistant goes dark along with it. Running OpenClaw on a Proxmox VE node changes that entirely — your homelab AI stays online 24/7, never uploads your conversations or files to a third-party cloud, and costs nothing beyond the electricity your server was already burning. If privacy, true ownership of your data, and a persistent assistant that actually remembers your context are things you care about, this guide will get you running.\n\n## Why Run OpenClaw in a Proxmox LXC\n\nProxmox VE gives you two paths for hosting a service like OpenClaw: a full virtual machine or an LXC container. For a gateway-style service, LXC wins on almost every axis.\n\nLXC containers share the host kernel, so you skip the overhead of a full hypervisor, a duplicated OS boot stack, and redundant memory pages. A well-tuned OpenClaw LXC idles at under 200 MB of RAM, compared to 500 MB or more for an equivalent VM. That headroom matters when your Proxmox node is already juggling Home Assistant, a media server, and a handful of other containers.\n\nSnapshot and backup ergonomics are another win. You can freeze the entire container filesystem in seconds with `pct snapshot`, roll it back instantly if a ClawHub skill install goes sideways, and schedule vzdump backups to fire at 2 AM without ever touching the guest OS. If you later decide to add a local inference engine like Ollama, you can expand the existing LXC or spin up a second one dedicated to model serving — Proxmox even supports GPU passthrough to LXC containers on recent kernels, so that upgrade path is wide open.\n\n## Provisioning the LXC Container\n\nLog in to the Proxmox web UI and pull the latest Debian 12 (Bookworm) LXC template from your local template store or a configured repository. 
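\n\nFrom the host shell, the same template pull can be scripted with `pveam` (the exact template filename shifts with point releases, so list what's available first):

```shell
# Refresh the template index, find the current Debian 12 build, download it
pveam update
pveam available --section system | grep debian-12
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
```

The downloaded archive lands in the `local` storage's template directory, ready for `pct create`.\n\n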
Debian 12 is the safest baseline here — it ships with a recent glibc, NodeSource supports it fully, and the package ecosystem is rock-solid.\n\nA sensible baseline for OpenClaw running against a cloud API (Anthropic, OpenAI, or Google):\n\n- **CPU**: 2 cores\n- **RAM**: 4 GB\n- **Disk**: 20 GB\n- **Network**: vmbr0 or your LAN bridge, static IP recommended\n\nIf you plan to run local inference inside the same container — Ollama with a 7B parameter model, for example — bump RAM to at least 8 GB and disk to 40 GB. Local models are memory-hungry and you will feel the squeeze fast if you underallocate.\n\nCreate the container from the Proxmox host shell:\n\n```bash\npct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \\\n  --hostname openclaw \\\n  --cores 2 \\\n  --memory 4096 \\\n  --swap 512 \\\n  --rootfs local-lvm:20 \\\n  --net0 name=eth0,bridge=vmbr0,ip=dhcp \\\n  --unprivileged 1 \\\n  --features nesting=1 \\\n  --start 1\n```\n\n\nThe `--unprivileged 1` flag is the most important security decision you make here. It maps the container's root user to an unprivileged UID on the host, so a container escape does not hand an attacker real root access to your Proxmox node. The `--features nesting=1` flag is optional but worth enabling now — if you decide to run Docker-based skills via ClawHub later, you will already have it in place.\n\nAttach to the running container:\n\n```bash\npct exec 200 -- bash\n```\n\n\n## Installing OpenClaw\n\nStart with a clean system update inside the container:\n\n```bash\napt update && apt upgrade -y\napt install -y curl ca-certificates git\n```\n\n\nOpenClaw requires Node.js 22.14 as a minimum, but the project recommends Node.js 24. The cleanest way to get it on Debian 12 is via the NodeSource setup script:\n\n```bash\ncurl -fsSL https://deb.nodesource.com/setup_24.x | bash -\napt install -y nodejs\nnode --version\n```\n\n\nThe version output should read `v24.x.x`. 
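\n\nIf you bake this into a provisioning script, a small guard catches an under-version Node.js before the OpenClaw install fails halfway through (a sketch; the hard-coded version string stands in for a live `node --version` call):

```shell
# Parse the major version from a `node --version` string and gate on it
version="v24.1.0"          # in a real script: version=$(node --version)
major=${version#v}         # strip the leading "v"
major=${major%%.*}         # keep only the major component
if [ "$major" -ge 22 ]; then
  echo "Node.js OK ($version)"
else
  echo "Node.js $version is too old for OpenClaw (need >= 22.14)" >&2
fi
```

The same parameter-expansion trick works in any POSIX shell, so the guard is safe inside cloud-init or a plain `sh` provisioner.\n\n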
If you prefer managing Node versions with nvm instead:\n\n```bash\ncurl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash\nsource ~/.bashrc\nnvm install 24\nnvm use 24\n```\n\nWith Node.js in place, run the OpenClaw one-liner installer:\n\n```bash\ncurl -fsSL https://openclaw.ai/install.sh | bash\n```\n\nIf you prefer going through npm directly:\n\n```bash\nnpm i -g openclaw\n```\n\nVerify the installation completed successfully:\n\n```bash\nopenclaw --version\n```\n\nIf the command is not found, make sure `/usr/local/bin` or your npm global bin path is on `PATH`. A quick `source ~/.bashrc` or a fresh shell session usually resolves it.\n\n## First-Run Onboarding\n\nOpenClaw's onboarding wizard handles provider selection, API key entry, and daemon registration in a single interactive session. Run:\n\n```bash\nopenclaw onboard --install-daemon\n```\n\nThe `--install-daemon` flag is what makes this a proper homelab AI deployment — it registers OpenClaw as a systemd service so the gateway starts automatically every time the LXC boots, no manual intervention needed.\n\nThe wizard will ask which AI provider you want to use:\n\n- **Anthropic** — Claude models, strong reasoning and instruction-following\n- **OpenAI** — GPT-4o and variants\n- **Google** — Gemini models via AI Studio\n- **Local (Ollama)** — No API key required, inference runs on your own hardware\n\nFor a first-time setup, starting with Anthropic or OpenAI is the fastest path to a working assistant. Paste your API key when prompted. 
You can add additional providers or switch to a local model at any time from the gateway dashboard.\n\nOnce the wizard finishes, verify the gateway is healthy:\n\n```bash\nopenclaw gateway status\n```\n\nIf the gateway shows as stopped, restart it and check again:\n\n```bash\nopenclaw gateway restart\nopenclaw gateway status\n```\n\nThe `openclaw gateway dashboard` command opens a local web UI where you can inspect connected chat channels, active sessions, memory state, and installed skills. Bookmark the dashboard URL — you will be back here often.\n\n## Connecting Telegram as Your Control Channel\n\nOf all the chat integrations OpenClaw supports — Telegram, Discord, Slack, Signal, iMessage, WhatsApp, Matrix, and Microsoft Teams — Telegram is the fastest to configure and the most practical for a homelab context. No server infrastructure required, just a bot token.\n\nOpen Telegram and start a chat with @BotFather. Send `/newbot`, follow the two-step prompts to name your bot, and copy the token it returns. It will look something like `1234567890:ABCDEFghijklmnop`.\n\nOpen the OpenClaw gateway dashboard:\n\n```bash\nopenclaw gateway dashboard\n```\n\nNavigate to **Channels → Telegram → Add**, paste your bot token, and save. The gateway connects immediately — no restart needed.\n\nTest it by opening your new bot in Telegram and sending a message:\n\n> Hello, what can you do?\n\nIf the gateway is running correctly you will get a response from your configured AI provider within a few seconds. From this point, your Telegram bot is a full-featured interface to your self-hosted AI assistant — it can browse the web, read and write files on the container filesystem, execute shell commands, maintain persistent memory across conversations, and trigger any skill you install from ClawHub.\n\nDiscord and Slack are straightforward second channels to add if you already run homelab infrastructure on those platforms. 
Both require creating an application in their respective developer portals and pasting a token or webhook URL into the gateway dashboard — the process is well-documented at docs.openclaw.ai/getting-started.\n\n## Hardening It for Your Homelab\n\nAn AI assistant with shell execution capabilities is a powerful tool. It deserves deliberate hardening before you expose it beyond your local network.\n\n### Stay Unprivileged\n\nYou already set `--unprivileged 1` at creation time. Do not change this. The unprivileged LXC boundary is the most impactful security control you have between OpenClaw and your Proxmox host kernel.\n\n### Protect Your API Key\n\nStore your API key only through the path that `OPENCLAW_CONFIG_PATH` points to. Do not hardcode it in shell scripts, do not drop it in `.bashrc`, and do not let it appear in any file that could be accidentally committed or shared. OpenClaw also respects `OPENCLAW_HOME` and `OPENCLAW_STATE_DIR` if you want to relocate its data directory to a dedicated bind mount or separate disk.\n\n### Isolate the Container on Its Own VLAN\n\nPut the OpenClaw LXC on a dedicated VLAN rather than your flat homelab LAN. On Proxmox, assign a VLAN tag to the container's network interface:\n\n```bash\npct set 200 --net0 name=eth0,bridge=vmbr0,tag=20,ip=192.168.20.10/24,gw=192.168.20.1\n```\n\n\nThis limits the blast radius if the container is ever compromised — it cannot directly reach your NAS, your router management interface, or other sensitive services without traversing your firewall and its explicit allow rules.\n\n### Use Tailscale or WireGuard for Remote Access\n\nDo not port-forward the OpenClaw gateway dashboard to the public internet. Instead, install Tailscale inside the container:\n\n```bash\ncurl -fsSL https://tailscale.com/install.sh | sh\ntailscale up\n```\n\n\nYou can now reach the dashboard from anywhere in the world over an encrypted mesh tunnel with no open router ports. 
If you already run a WireGuard VPN for your homelab, a split-tunnel configuration works equally well.\n\n### Review Shell Access Scope\n\nOpenClaw can execute shell commands inside the container it runs on. Before pointing it at sensitive mount points or external storage, review the permission scope in the gateway dashboard. Start with access limited to a sandboxed working directory and expand deliberately as you build confidence in how the assistant uses those capabilities.\n\n## Backups and Snapshots\n\nOne of the genuine pleasures of running a homelab AI inside Proxmox LXC is that your backup story is built in from day one.\n\n### Snapshot Before Risky Changes\n\nBefore installing a new ClawHub skill, upgrading OpenClaw, or making config changes, take a named snapshot:\n\n```bash\npct snapshot 200 pre-skill-install --description \"Before installing browser skill\"\n```\n\n\nIf anything breaks, rollback takes seconds:\n\n```bash\npct rollback 200 pre-skill-install\n```\n\n\nThis tight feedback loop makes experimenting with new skills genuinely low-risk.\n\n### Scheduled vzdump Backups\n\nSet up a recurring vzdump job under **Datacenter → Backup** in the Proxmox web UI. A nightly run at 2 AM with seven daily copies and four weekly copies retained is a solid default for a service like this. The backup captures the entire LXC — OpenClaw's state directory, configuration, skill data, and the persistent memory it has built up across your sessions.\n\nIf you are running Proxmox Backup Server, offloading these backups there gives you deduplication and long-term retention without ballooning storage costs. 
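\n\nFor reference, the same backup as a one-off CLI run from the Proxmox host (the storage name `pbs` is an assumption; substitute your own PBS datastore ID):

```shell
# Snapshot-mode backup of the OpenClaw container to a PBS datastore
vzdump 200 --storage pbs --mode snapshot --compress zstd
```

`--mode snapshot` backs up the running container without stopping it, so the gateway stays online during the run.\n\n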
The persistent memory OpenClaw accumulates over weeks and months becomes genuinely valuable context — treat it with the same care you would any other stateful service.\n\n## Troubleshooting Common Issues\n\n### Gateway Won't Start\n\nStart with the status command:\n\n```bash\nopenclaw gateway status\n```\n\n\nIf it shows stopped or error, pull recent logs from systemd:\n\n```bash\njournalctl -u openclaw -n 50 --no-pager\n```\n\n\nThe most common causes are a Node.js version below 22.14, a malformed YAML configuration file, or a port conflict with another service. Run `openclaw gateway restart` and check status again after about ten seconds.\n\n### Node Version Too Old\n\nIf you see a `SyntaxError: Unexpected token` or a warning about unsupported engine versions, your Node.js is too old:\n\n```bash\nnode --version\n```\n\n\nAnything below `v22.14.0` needs to be upgraded. Revisit the NodeSource installation steps and make sure you ran `setup_24.x`, not an older variant. After reinstalling Node.js, verify `openclaw --version` resolves correctly before restarting the gateway.\n\n### Telegram Bot Is Silent\n\nWork through this checklist in order:\n\n1. Confirm the gateway is running with `openclaw gateway status`\n2. Double-check the bot token — a single transposed character breaks authentication silently\n3. Test outbound connectivity from inside the container: `curl -s https://api.telegram.org` should return a JSON response\n4. If the LXC is on a restricted VLAN, verify your firewall allows outbound port 443 to Telegram's API servers\n\nMost silent bot issues trace back to either a wrong token or a missing firewall rule on a locked-down VLAN.\n\n### Running Out of RAM With a Local Model\n\nIf you added Ollama and a quantized model and the container starts swapping heavily, you have two clean options. The faster fix is to increase the LXC memory allocation live from the Proxmox UI — no container restart is required for most configurations. 
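\n\nThe same change from the host shell, using the VMID created earlier:

```shell
# Raise the container's memory ceiling from 4 GB to 8 GB
pct set 200 --memory 8192
```

The new limit applies to the running container in most setups; verify with `free -m` inside the LXC.\n\n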
The cleaner long-term fix is to move Ollama to a dedicated LXC, keeping OpenClaw's gateway lightweight and independently scalable. The two containers communicate over the internal Proxmox network bridge, and the latency is negligible on local hardware.\n\n## Conclusion\n\nRunning OpenClaw on a Proxmox LXC is one of the most satisfying things you can do with spare homelab capacity. You end up with a private, always-on AI assistant that works across all your chat apps, can browse the web, manage files, and run commands — and remembers everything across sessions without any of that context leaving your hardware.\n\nThe LXC model keeps resource overhead minimal, snapshots make skill experimentation genuinely safe, and VLAN isolation keeps your network architecture clean. The path from a fresh Proxmox host to a Telegram-connected, self-hosted AI assistant is surprisingly short: provision the container, install Node.js 24, run the curl installer, and complete the onboarding wizard. Everything else in this guide is about making it robust.\n\nThe homelab AI rabbit hole goes as deep as you want to take it — local models, custom skills, multi-channel integrations, automated workflows. OpenClaw gives you a solid, open-source foundation to build on, entirely on your own terms.\n",
            "url": "https://proxmoxpulse.com/articles/self-host-openclaw-ai-assistant-proxmox-lxc/",
            "title": "Self-Host OpenClaw AI Assistant in Proxmox LXC",
            "summary": "Learn how to self-host OpenClaw, the open-source local AI assistant, inside a Proxmox VE LXC container. Step-by-step guide for homelab sysadmins.",
            "date_modified": "2026-04-18T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "OpenClaw",
                "AI Assistant",
                "LXC Containers",
                "Proxmox VE",
                "Self-Hosting"
            ]
        },
        {
            "id": "https://proxmoxpulse.com/articles/proxmox-two-factor-authentication-totp-webauthn/",
            "content_html": "\nLocking down your Proxmox VE dashboard with a strong password is a good start — but passwords alone aren't enough in 2026. A single phished credential, brute-forced API token, or reused password from another breach can hand an attacker full control of every VM on your node. Two-factor authentication (2FA) is the single highest-impact change you can make to your Proxmox security posture after the initial install.\n\nProxmox VE ships with built-in support for both TOTP (Time-based One-Time Passwords) and WebAuthn (hardware security keys and passkeys). Neither requires third-party software on the server — everything runs natively through the web UI. This guide walks you through enabling both methods, enforcing 2FA for all users, and recovering access if you ever lose your authenticator.\n\n## Why 2FA Is Non-Negotiable for Proxmox\n\nYour Proxmox web interface runs on port 8006 and accepts credentials directly over HTTPS. If that port is reachable from the internet — or even through a VPN or reverse proxy — it's a target.\n\nThe risks are concrete:\n\n- **Credential stuffing** — automated bots test billions of leaked username/password pairs daily\n- **Brute force** — the `root@pam` account is a known target; without 2FA, fail2ban is your only line of defense\n- **Session hijacking** — tokens compromised from other services get tried against Proxmox\n- **Insider threats** — a second factor limits damage from a guessed or shared password\n\nEven if your Proxmox node never touches the public internet, 2FA is worth enabling. 
A single compromised device on your LAN is all it takes.\n\n## Proxmox 2FA Options: TOTP vs WebAuthn\n\nProxmox VE 7+ supports two second-factor methods natively:\n\n**TOTP (Time-based One-Time Passwords)**\n- Works with any standard authenticator app — Aegis, Google Authenticator, Authy, Bitwarden\n- Generates a 6-digit code that rotates every 30 seconds\n- No hardware required; a smartphone is sufficient\n- Recovery codes generated at enrollment time\n\n**WebAuthn**\n- Works with hardware security keys (YubiKey, Nitrokey) and platform authenticators (Windows Hello, Touch ID, passkeys)\n- Phishing-resistant by design — the credential is bound to the exact origin URL\n- Requires browser WebAuthn support (all modern browsers qualify)\n- Best for high-security environments or users who travel with a hardware key\n\nFor most homelabs, TOTP is the right starting point. WebAuthn is worth adding if you have a YubiKey or want true phishing resistance beyond what TOTP provides.\n\n## Prerequisites\n\nBefore starting, confirm you have:\n\n- Proxmox VE 7.0 or later (VE 8/9 recommended — the 2FA UI is more polished)\n- Access to the web UI at `https://your-proxmox-ip:8006`\n- A login with `root@pam` or a user holding `Sys.Modify` on `/`\n- A TOTP app installed on your phone (Aegis on Android is excellent; any RFC 6238-compliant app works)\n\nFor WebAuthn you'll additionally need a WebAuthn-compatible hardware key or a platform authenticator, plus a working HTTPS connection with a consistent hostname.\n\n## Setting Up TOTP Authentication\n\nTOTP is the easiest 2FA method to enable and works on any device with an authenticator app.\n\n### Step 1: Open the Two-Factor Panel\n\n1. Log into the Proxmox web UI\n2. Click **Datacenter** in the left panel\n3. Navigate to **Permissions → Two Factor**\n\nThis panel lets you configure global WebAuthn settings and see which users have factors enrolled.\n\n### Step 2: Enroll TOTP for Your Account\n\n1. 
Click your username in the top-right corner, then select **My Settings**\n2. Under **Two Factor Authentication**, click **Add**\n3. Select **TOTP** from the method dropdown\n4. A QR code appears — scan it with your authenticator app\n5. Enter the 6-digit code your app shows to verify the enrollment\n6. **Copy your recovery keys and store them offline** — you will need these if you lose your phone\n\nFrom this point on, every login for that account will prompt for a TOTP code after the password step.\n\n### Step 3: Test Before Moving On\n\nLog out completely, then log back in. After entering your password you should see a second prompt for a one-time password. Enter the code from your app.\n\nIf the code is rejected, check that your phone's clock is synchronized — TOTP codes fail if the clock drifts more than 30–90 seconds. On the Proxmox host you can confirm NTP sync with:\n\n```bash\ntimedatectl status\n```\n\n\nLook for `NTP service: active` and a synchronized status. Clock drift on the server side causes the same problem in reverse.\n\n## Setting Up WebAuthn\n\nWebAuthn credentials are origin-bound, meaning a phishing site can't steal your key even if you're tricked into visiting it. That makes it meaningfully stronger than TOTP for anyone handling sensitive infrastructure.\n\n### Step 1: Configure WebAuthn at the Datacenter Level\n\nBefore enrolling any keys you must set the relying party parameters:\n\n1. Go to **Datacenter → Permissions → Two Factor**\n2. Scroll to the **WebAuthn** section\n3. Fill in:\n   - **Relying Party Name**: A human-readable label, e.g. `Proxmox Homelab`\n   - **ID**: The hostname used to reach Proxmox, e.g. `proxmox.local`\n   - **Origin**: The full HTTPS URL including port, e.g. `https://proxmox.local:8006`\n4. Click **Apply**\n\n```yaml\n# Example values\nrpname: \"Proxmox Homelab\"\nrpid: \"proxmox.local\"\norigin: \"https://proxmox.local:8006\"\n```\n\n\nThe `rpid` and `origin` must exactly match the URL you use to access Proxmox. 
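\n\nTo double-check the hostname your browser actually sees (and therefore what belongs in `rpid`), read it off the certificate Proxmox serves; substitute your own host:

```shell
# Print the subject of the certificate served on port 8006
echo | openssl s_client -connect proxmox.local:8006 2>/dev/null \
  | openssl x509 -noout -subject
```

The CN (or SAN) in that output is the name your WebAuthn credential will be bound to.\n\n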
If you change the hostname later, existing WebAuthn credentials will stop working.\n\n### Step 2: Enroll a Security Key\n\n1. Open **My Settings** from the top-right menu\n2. Under **Two Factor Authentication**, click **Add**\n3. Select **Security Key (WebAuthn)**\n4. Give the key a descriptive name, e.g. `YubiKey 5 NFC`\n5. Click **Register** — your browser prompts you to interact with the key (touch it, use Touch ID, etc.)\n6. Once registered, the key appears in your 2FA list\n\nEnroll at least two keys if you have them — your primary and one backup. A lost hardware key with no backup means falling back to recovery options.\n\n### Step 3: Test the WebAuthn Login\n\nLog out and back in. After the password step, your browser will prompt you to activate your security key. Touch it when the browser requests interaction.\n\nFor platform authenticators like Windows Hello you'll be prompted for your PIN or biometrics instead of a physical tap.\n\n## Enforcing 2FA Across All Users\n\nEnabling 2FA for your own account is good. Requiring it for everyone who can touch your hypervisor is better.\n\nProxmox VE 8+ introduced a **Two-Factor Policy** setting at the datacenter level that blocks dashboard access for any user without a factor enrolled.\n\n### Setting the Datacenter Policy\n\n1. Go to **Datacenter → Options**\n2. Find the **Two-Factor Authentication** field\n3. Set it to **Required**\n4. Click **OK**\n\nUsers without 2FA enrolled will be prompted to set it up on their next login and cannot proceed until they do.\n\n> **Important**: Enroll 2FA for `root@pam` before enabling this policy. Enabling it first locks out any account that hasn't enrolled — including your own.\n\n### Checking Per-User Enrollment Status\n\nFrom the web UI, **Datacenter → Permissions → Two Factor** lists all enrolled factors. 
From the CLI:\n\n```bash\n# List all Proxmox users\npveum user list\n\n# Inspect the raw TFA config (contains hashed secrets — handle with care)\ncat /etc/pve/priv/tfa.cfg\n```\n\nNever share or expose `tfa.cfg` — it contains the TOTP secrets and WebAuthn credential data for all users.\n\n## API Tokens and 2FA\n\nAPI tokens (`user@pam!tokenname`) don't support 2FA by design — they're meant for automation. This means token hygiene matters even more once 2FA is enforced for interactive logins.\n\nBest practices for API tokens:\n\n- Grant tokens **minimal permissions** — avoid `Administrator` or `PVEAdmin` roles\n- Enable privilege separation so a token can't exceed its own grants\n- Store tokens in environment variables or a secrets manager, never in plaintext config files\n- Rotate tokens periodically and audit usage in `/var/log/pve/tasks/`\n\n```bash\n# Create a scoped token with privilege separation enabled\npveum user token add automation@pve backup-token --privsep 1\n\n# Grant only the specific permission needed\npveum acl modify /storage/backups \\\n  --tokens 'automation@pve!backup-token' \\\n  --roles PVEDatastoreUser\n```\n\nWith `--privsep 1` active, the token can only use permissions explicitly assigned to it — it cannot inherit everything from the parent user account.\n\n## Recovery: Regaining Access After Losing Your Authenticator\n\nThis is the scenario most people worry about. Proxmox gives you three recovery paths in order of preference.\n\n### Option 1: Use Your Recovery Keys\n\nWhen you enrolled TOTP, Proxmox generated single-use recovery codes. If you saved them:\n\n1. On the 2FA login prompt, click **Use recovery key**\n2. Enter one of your saved codes\n3. 
Once inside, immediately re-enroll a new TOTP device and generate fresh recovery codes\n\nStore recovery codes in a password manager, or print them and keep the copy in a physically secure location — not in the same place as the device you're recovering from.\n\n### Option 2: Remove 2FA via CLI\n\nIf you have SSH or console access to the node:\n\n```bash\n# Remove all 2FA factors for a user\npveum user tfa delete root@pam\n```\n\nAfter running this, the account can log in with password only. Re-enroll immediately.\n\n### Option 3: Edit the TFA Config Directly\n\nAs a last resort with physical console access:\n\n```bash\n# Back up first\ncp /etc/pve/priv/tfa.cfg /root/tfa.cfg.bak\n\n# Edit and remove the affected user's entry\nnano /etc/pve/priv/tfa.cfg\n```\n\nThe `pveum` CLI method is safer and should be tried before manually editing config files.\n\n## Additional Security Layers to Stack with 2FA\n\nTwo-factor authentication is most effective as part of a layered approach:\n\n**Restrict port 8006 by source IP**\n\nEven with 2FA enabled, there's no reason to leave the management interface open to all subnets. 
Scope access using the Proxmox firewall:\n\n```bash\n# Allow web UI access only from your management network\npvesh create /nodes/pve/firewall/rules \\\n  --action ACCEPT --type in --proto tcp \\\n  --dport 8006 --source 192.168.1.0/24\n\n# Then drop everything else hitting 8006\npvesh create /nodes/pve/firewall/rules \\\n  --action DROP --type in --proto tcp --dport 8006\n```\n\nProxmox evaluates firewall rules top down, so confirm under **Node → Firewall** that the ACCEPT rule sits above the DROP before enabling enforcement.\n\n**Use a dedicated non-root daily account**\n\nCreate a Proxmox-realm admin account for routine work and reserve `root@pam` for break-glass access only:\n\n```bash\npveum user add admin@pve --comment \"Daily admin\"\npveum acl modify / --users admin@pve --roles Administrator\n```\n\nEnroll 2FA on the daily account and stop interactive root logins.\n\n**Monitor failed logins**\n\n```bash\n# Tail failed authentication attempts logged by pvedaemon\njournalctl -u pvedaemon --no-pager | grep -i \"authentication failure\" | tail -n 20\n```\n\nConsider shipping this log to a central SIEM or at minimum checking it weekly.\n\n## Conclusion\n\nTwo-factor authentication is one of the highest-value security changes you can make to a Proxmox VE installation. TOTP takes five minutes to set up and works with any authenticator app — there's no valid reason to leave it disabled. WebAuthn raises the bar further with phishing-resistant credentials for anyone handling production infrastructure.\n\nThe combination of 2FA, SSH key authentication, scoped API tokens, and Proxmox firewall rules closes the most common attack paths against the management plane. Enable 2FA today, save your recovery codes somewhere physically secure, then layer on the remaining controls. Your future self will appreciate it the next time a password shows up in a breach notification.\n",
            "url": "https://proxmoxpulse.com/articles/proxmox-two-factor-authentication-totp-webauthn/",
            "title": "Proxmox Two-Factor Authentication: TOTP and WebAuthn Setup",
            "summary": "Enable TOTP and WebAuthn 2FA on Proxmox VE to protect your dashboard from credential attacks — step-by-step setup, enforcement, and recovery guide.",
            "date_modified": "2026-04-17T00:00:00.000Z",
            "author": {
                "name": "Proxmox Pulse"
            },
            "tags": [
                "proxmox",
                "security",
                "two-factor-authentication",
                "totp",
                "webauthn"
            ]
        }
    ]
}