End-to-end guide to deploying PodWarden and provisioning a K3s cluster from scratch
Overview
This guide walks through a complete PodWarden deployment from bare metal (or VMs) to a working K3s cluster. It is based on real-world deployments on Proxmox VMs, but the same steps apply to bare metal servers, cloud instances, or any mix of the three.
By the end you will have:
PodWarden running on a bootstrap machine
A K3s control plane provisioned via PodWarden
One or more worker nodes joined to the cluster
Longhorn distributed storage installed and healthy
Traefik ingress controller ready for traffic
Prerequisites
Node Requirements
Each node that will join the K3s cluster needs:
Ubuntu 24.04 LTS (server or minimal install)
2 CPU cores / 4 GB RAM minimum (control plane nodes need more for etcd)
40 GB disk minimum (Longhorn reserves space for replicas)
Static IP or DHCP reservation on the LAN
SSH access with a user that has passwordless sudo
Other Linux distributions may work but are not tested. Ubuntu 24.04 is the only officially supported target.
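A quick way to check a node against these requirements before adding it (a sketch; compare the output against the minimums above):

```shell
# Run on each candidate node
echo "CPU cores: $(nproc)"                        # want >= 2
free -g | awk '/^Mem:/ {print "RAM (GB): " $2}'   # want >= 4
df -BG --output=size / | tail -n 1                # want >= 40G
. /etc/os-release && echo "OS: $PRETTY_NAME"      # want Ubuntu 24.04
```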
Bootstrap Machine
The machine where PodWarden itself runs. This can be one of the cluster nodes or a separate management host.
Docker and Docker Compose installed
Network access to all nodes via SSH (port 22)
If nodes are on different networks: Tailscale installed and authenticated
Tailscale (Optional)
Tailscale is optional but recommended if your nodes span multiple networks or locations. Install it on the bootstrap machine and all nodes before starting. PodWarden uses Tailscale for host discovery and can provision nodes across network boundaries.
Warning: Do not enable the --ssh flag on Tailscale. Tailscale SSH replaces the system SSH daemon, which breaks Ansible-based provisioning. PodWarden needs standard OpenSSH on port 22 to provision hosts. Use Tailscale only for network connectivity, not for SSH access management.
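On Ubuntu, installing Tailscale and bringing a node up without the SSH feature looks roughly like this (the auth key is a placeholder; generate one in the Tailscale admin console):

```shell
# Install Tailscale via the official convenience script
curl -fsSL https://tailscale.com/install.sh | sh

# Join the tailnet WITHOUT --ssh, so OpenSSH keeps handling port 22
sudo tailscale up --authkey tskey-auth-XXXXXXXXXXXX
```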
Installing PodWarden
The interactive installer checks prerequisites, prompts for configuration, generates .env and docker-compose.yml, pulls images, and starts PodWarden.
Default install directory: /opt/podwarden/
Manual Installation
```shell
mkdir -p /opt/podwarden && cd /opt/podwarden

# Download the production compose file
curl -fsSL https://git.mediablade.net/podwarden/podwarden/-/raw/main/docker-compose.prod.yml \
  -o docker-compose.yml

# Create .env (see next section for all variables)
cp .env.example .env

# Pull and start
docker compose pull
docker compose up -d
```
The API uses network_mode: host so it can SSH directly to target nodes for provisioning.
Environment Configuration
Edit /opt/podwarden/.env with all required settings. Every variable is explained below.
Core Settings
| Variable | Required | Default | Description |
|---|---|---|---|
| PW_POSTGRES_PASSWORD | Yes | — | Database password. Generate a strong random value. |
| PW_POSTGRES_HOST | No | pw-db | Database hostname. Leave the default for Docker Compose. |
| PW_POSTGRES_PORT | No | 5432 | Database port. Change if you have a port conflict on 5432. |
| PW_POSTGRES_DB | No | podwarden | Database name. |
| PW_POSTGRES_USER | No | podwarden | Database user. |
| PW_API_PORT | No | 8000 | API port on the host. |
| PW_UI_PORT | No | 3000 | UI port on the host. |
| NEXT_PUBLIC_PW_API_URL | Yes | — | Full URL to the API from the browser, e.g. http://10.10.0.50:8000 |
| FRONTEND_URL | Yes | — | Full URL to the UI, e.g. http://10.10.0.50:3000 |
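Put together, a minimal core section of .env might look like this (addresses and password are example values; substitute your own):

```shell
# /opt/podwarden/.env (example values)
PW_POSTGRES_PASSWORD=replace-with-a-long-random-value
NEXT_PUBLIC_PW_API_URL=http://10.10.0.50:8000
FRONTEND_URL=http://10.10.0.50:3000

# Optional overrides (defaults shown, commented out)
#PW_POSTGRES_HOST=pw-db
#PW_POSTGRES_PORT=5432
#PW_API_PORT=8000
#PW_UI_PORT=3000
```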
Encryption Key
| Variable | Required | Description |
|---|---|---|
| PW_ENCRYPTION_KEY | Yes | 32-byte key, base64-encoded. Used to encrypt secrets at rest. |
Generate it with:
```shell
openssl rand -base64 32
```
Warning: The encryption key must be base64-encoded 32 bytes. A common mistake is using openssl rand -hex 32, which produces a 64-character hex string — this is the wrong format and will cause cryptographic errors when PodWarden tries to encrypt or decrypt secrets. If you see errors related to secret decryption, check this first.
Example of a correct key: K7gNU3sdo+OL0wNhqoVWhr3g6s1xYv72ol/pe/Unols=
Example of an incorrect key (hex): 2b7e151628aed2a6abf7158809cf4f3c762e7160f38b4da56a784d9045190cfe
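A correctly formatted key decodes to exactly 32 bytes, which you can verify before starting PodWarden:

```shell
# Prints 32 for a valid key; any other number means the wrong format
echo "K7gNU3sdo+OL0wNhqoVWhr3g6s1xYv72ol/pe/Unols=" | base64 -d | wc -c
```

Run the same check against your own generated key before putting it in .env.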
Authentication
| Variable | Required | Description |
|---|---|---|
| PW_TEMP_ADMIN_USERNAME | For setup | Temporary admin username for initial access |
| PW_TEMP_ADMIN_PASSWORD | For setup | Temporary admin password |
| PW_OIDC_ISSUER_URL | For SSO | Keycloak or other OIDC provider URL |
| PW_OIDC_CLIENT_ID | For SSO | OAuth client ID |
| PW_OIDC_CLIENT_SECRET | For SSO | OAuth client secret |
| PW_OIDC_REDIRECT_URI | For SSO | OAuth callback URL |
Start with temp admin credentials for initial setup. After configuring OIDC or creating local users, remove the temp admin variables and restart.
Tailscale (Optional)
| Variable | Required | Description |
|---|---|---|
| PW_TAILSCALE_API_KEY | No | Tailscale API key for device discovery |
| PW_TAILSCALE_TAILNET | No | Your tailnet name |
| PW_HOST_TAG_FILTER | No | Only discover hosts with this Tailscale tag |
SSH / Provisioning
| Variable | Default | Description |
|---|---|---|
| PW_SSH_KEY_PATH | — | Path to the SSH private key inside the API container |
| PW_SSH_USER | root | SSH user for provisioning |
Hub Connection (Optional)
| Variable | Description |
|---|---|
| PODWARDEN_HUB_URL | Hub URL (default: https://apps.podwarden.com) |
| PODWARDEN_HUB_API_KEY | Hub API key (starts with pwc_) |
Creating the Admin User
Set PW_TEMP_ADMIN_USERNAME and PW_TEMP_ADMIN_PASSWORD in .env
Start PodWarden: docker compose up -d
Open http://<bootstrap-ip>:3000 in your browser
Log in with the temp admin credentials
Go to Settings > Users and create a permanent local user or configure OIDC
Remove PW_TEMP_ADMIN_USERNAME and PW_TEMP_ADMIN_PASSWORD from .env
Restart: docker compose restart podwarden-api
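The last two steps can be done in one go from the install directory (a sketch assuming the default paths; sed deletes both temp-admin lines in place):

```shell
cd /opt/podwarden

# Strip the temporary admin credentials from .env ...
sed -i '/^PW_TEMP_ADMIN_/d' .env

# ... then restart the API so the change takes effect
docker compose restart podwarden-api
```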
Generating and Installing SSH Keys
PodWarden provisions nodes over SSH using Ansible. You need an SSH key pair that PodWarden can use to connect to all nodes.
Generate a Key Pair in PodWarden
Go to Settings > Secrets
Click Generate SSH Key Pair
Enter a name (e.g. provisioning)
PodWarden generates an ed25519 key pair and stores both keys as encrypted secrets
Install the Public Key on Each Node
Warning: PodWarden generates the SSH key pair but does not automatically install the public key on target nodes. You must do this manually before provisioning.
Copy the public key from Settings > Secrets (the key named {name}_public), then install it on each node:
```shell
# On each target node, as the provisioning user:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-ed25519 AAAA... podwarden" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```
Or use ssh-copy-id from the bootstrap machine if you have password-based SSH access:
```shell
# From the bootstrap machine
ssh-copy-id -i /path/to/podwarden_key.pub user@node-ip
```
Verify SSH Access
Before proceeding, verify PodWarden can reach each node. From the bootstrap machine:
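A minimal check, assuming the private key was exported from PodWarden to /path/to/podwarden_key and the provisioning user is ubuntu (adjust both to match your setup):

```shell
# BatchMode fails fast instead of prompting for a password
ssh -i /path/to/podwarden_key -o BatchMode=yes ubuntu@node-ip 'hostname && whoami'
```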
Each node should respond with its hostname and the provisioning user.
Adding and Probing Hosts
Adding Hosts
There are two ways to add hosts:
Tailscale Discovery (if configured):
Go to Hosts and click Discover
PodWarden queries the Tailscale API and lists all devices matching your tag filter
Select the hosts to import
Manual Add:
Go to Hosts and click Add Host
Enter the hostname, IP address (LAN or Tailscale), and SSH user
Save
Probing Hosts
After adding hosts, probe them to collect system information:
Select a host and click Probe
PodWarden SSHes to the host and collects: OS version, CPU, RAM, disk, network interfaces, GPU info, Docker version
The host detail page shows all discovered information
Probing also auto-detects network types (LAN, mesh, public) based on the host's network interfaces. See the Networking guide for details.
Note: If probing fails, check that SSH keys are installed correctly and the user has passwordless sudo. PodWarden runs commands like lscpu, free, lsblk, and ip addr during probing.
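To reproduce a failed probe by hand, run the same commands on the node as the provisioning user; all of them should complete without a password prompt:

```shell
sudo -n true && echo "passwordless sudo: OK"
lscpu | head -n 5    # CPU info
free -h              # RAM
lsblk                # disks
ip addr              # network interfaces
```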
Provisioning the Control Plane
The control plane is the first node in your K3s cluster.
Go to Hosts and select the node you want as the control plane
Click Provision (or go to Provisioning)
Select Control Plane as the role
Choose the K3s version (latest stable is recommended)
Click Start Provisioning
PodWarden runs Ansible playbooks that:
Install K3s in server mode
Configure the node's network interfaces for flannel
Provisioning takes 2-5 minutes. Watch the provisioning log for progress.
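For reference, the server-mode install step is roughly equivalent to running the upstream K3s installer yourself (a sketch; PodWarden's playbooks pin their own version and flags):

```shell
# Simplified equivalent of 'Install K3s in server mode'
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -s - server
```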
Verify the Control Plane
After provisioning completes:
The host status should show provisioned with role control_plane
A new cluster appears in the Clusters page
The cluster detail page shows one node in Ready state
Longhorn volumes should appear in the cluster's storage section
Joining Worker Nodes
With the control plane running, add worker nodes to expand the cluster.
Go to Hosts and select a probed host
Click Provision
Select Worker as the role
Select the cluster to join (the one created by the control plane)
Click Start Provisioning
Note: The provision API uses query parameters (not a JSON body) for the worker join request. This is an implementation detail, but relevant if you are scripting provisioning via the API directly:
```
POST /api/v1/hosts/{id}/provision?role=worker&cluster_id={cluster_id}
```
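For example, with curl (the host ID 42, cluster ID 7, API URL, and bearer-token header are all illustrative; match them to your deployment and its auth scheme):

```shell
curl -X POST \
  -H "Authorization: Bearer $PW_API_TOKEN" \
  "http://10.10.0.50:8000/api/v1/hosts/42/provision?role=worker&cluster_id=7"
```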
Repeat for each additional worker node. Worker provisioning takes 1-3 minutes per node.
Node Labels and Roles
After joining, you can label nodes for workload scheduling:
Gateway nodes: Label with node-role.kubernetes.io/gateway: "true" to pin Traefik. See the Networking & Ingress guide for details.
Storage nodes: Longhorn automatically uses all nodes with available disk space.
GPU nodes: PodWarden detects NVIDIA GPUs during probing and installs drivers during provisioning if detected.
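For example, labeling a worker as a gateway node from the control plane (the node name worker-1 is illustrative):

```shell
sudo k3s kubectl label node worker-1 node-role.kubernetes.io/gateway=true
```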
Verifying the Cluster
After all nodes are provisioned, verify everything is healthy:
From the PodWarden UI
Clusters page: cluster status should be healthy, all nodes Ready
Hosts page: all provisioned hosts show their role and cluster
Click into the cluster to see node details, Longhorn status, and available StorageClasses
From the Command Line (Optional)
If you have SSH access to the control plane node:
```shell
# Check node status
sudo k3s kubectl get nodes -o wide

# Check system pods
sudo k3s kubectl get pods -n kube-system

# Check Longhorn
sudo k3s kubectl get pods -n longhorn-system

# Check StorageClasses
sudo k3s kubectl get storageclass
```
Expected output: all nodes in Ready state, all kube-system and longhorn-system pods running, and at least the longhorn StorageClass available.