Cloud Provider Support

Deploy PodWarden on AWS, GCP, Azure, Hetzner, Alibaba Cloud, and other cloud providers

Overview

PodWarden automatically detects cloud environments and configures K3s networking correctly. This guide covers what PodWarden does behind the scenes and what you need to know when deploying on cloud infrastructure.

Supported Providers

PodWarden detects and displays logos for the following platforms:

| Provider | Detection | Logo |
|---|---|---|
| AWS | EC2 metadata (IMDSv1 + IMDSv2) | AWS orange |
| Google Cloud | GCE metadata | GCP blue |
| Azure | IMDS endpoint | Azure blue |
| Hetzner | Hetzner metadata | Hetzner red |
| Alibaba Cloud | Alicloud metadata | Alicloud orange |
| Proxmox | DMI product name | Proxmox orange |
| QEMU/KVM | systemd-detect-virt | QEMU orange |
| VMware | systemd-detect-virt | Text label |
| Bare Metal | No virtualization detected | Server icon |
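For the non-cloud platforms, detection can be sketched roughly like this. This is an illustrative approximation, not PodWarden's actual code; the real checks also query the cloud metadata endpoints listed above.

```shell
#!/bin/sh
# Rough sketch of local platform detection (illustrative only).
detect_platform() {
  # DMI product name identifies Proxmox VMs
  if [ -r /sys/class/dmi/id/product_name ] &&
     grep -qi proxmox /sys/class/dmi/id/product_name 2>/dev/null; then
    echo proxmox; return
  fi
  # systemd-detect-virt distinguishes QEMU/KVM and VMware
  if command -v systemd-detect-virt >/dev/null 2>&1; then
    case "$(systemd-detect-virt 2>/dev/null)" in
      qemu|kvm) echo qemu-kvm; return ;;
      vmware)   echo vmware;   return ;;
    esac
  fi
  echo bare-metal-or-unknown
}
detect_platform
```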

The detected provider appears as a badge on the host detail page and as an icon on the hosts list.

Cloud NAT — What PodWarden Handles Automatically

On cloud providers like AWS, your server's public IP is not bound to a local network interface — it's mapped via NAT at the hypervisor level. This causes issues for K3s because:

  1. --node-ip must be an IP that's locally bound (the private VPC IP)
  2. Kubelet TLS certificates must match the actual internal IP
  3. The Kubernetes API endpoint must DNAT to a reachable address

PodWarden handles all of this automatically. During provisioning, the Ansible playbook:

  • Detects whether the target IP is locally bound or behind NAT
  • Sets --node-ip to the private/internal IP (e.g., 172.31.x.x on AWS)
  • Sets --node-external-ip to the public IP for external access
  • Writes advertise-address to the K3s config so the API endpoint routes correctly
  • Stores the detected topology in the database for the UI and health checks
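The locally-bound check in the first step can be approximated by comparing the target IP against the addresses actually configured on the host's interfaces. A simplified sketch, assuming Linux with iproute2; the function and output names are illustrative:

```shell
#!/bin/sh
# Classify a target IP as directly bound or NAT-mapped.
classify_ip() {
  target="$1"; bound_list="$2"
  if printf '%s\n' "$bound_list" | grep -qx "$target"; then
    echo direct      # IP is configured on a local interface
  else
    echo cloud_nat   # IP must be mapped at the hypervisor level
  fi
}

# Gather locally bound IPv4 addresses
bound="$(ip -o -4 addr show 2>/dev/null | awk '{print $4}' | cut -d/ -f1)"
classify_ip "18.191.200.207" "$bound"
```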

No manual flags or configuration needed.
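On an AWS host behind cloud NAT, the resulting K3s configuration ends up shaped roughly like this. The values are illustrative; the exact file PodWarden writes may differ.

```yaml
# /etc/rancher/k3s/config.yaml (illustrative values)
node-ip: 172.31.4.34              # private VPC IP, locally bound
node-external-ip: 18.191.200.207  # public NAT IP for external access
advertise-address: 172.31.4.34    # API endpoint routes via the internal IP
```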

Network Topology

After provisioning, PodWarden stores and displays the detected network topology:

| Field | Example | Description |
|---|---|---|
| Internal IP | 172.31.4.34 | Private VPC/LAN IP, used for --node-ip |
| External IP | 18.191.200.207 | Public/NAT IP, used for --node-external-ip |
| NAT Type | cloud_nat | How the host connects: direct, cloud_nat, tailscale, or unknown |
| Cloud Provider | aws | Detected cloud platform |

This information appears on the host detail page in the Network Topology card with the provider's logo.

Kubeconfig Management

PodWarden caches the kubeconfig in the database after fetching it via SSH. For cloud NAT environments, the kubeconfig uses the internal IP as the API server URL — this is the correct address for kubectl to reach the K3s API from both the same host and same-network hosts.

If you need to use a different API server URL (e.g., through a load balancer or custom DNS), you can set an API Server URL override on the cluster detail page.
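If you want to do the equivalent by hand on a local copy, the server URL in a kubeconfig can be rewritten directly. The file contents and URLs below are placeholders:

```shell
#!/bin/sh
# Create a minimal stand-in kubeconfig for illustration
cat > cluster.kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://172.31.4.34:6443
  name: default
EOF

# Point clients at a load balancer instead of the internal IP
sed -i 's#server: .*#server: https://k3s-lb.example.com:6443#' cluster.kubeconfig
grep 'server:' cluster.kubeconfig
```

With kubectl you can also pass `--server` on the command line instead of editing the file.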

Reconfigure Networking

If your host's network topology changes (e.g., you move from Tailscale to direct VPC networking, or the host gets a new IP), you can reconfigure K3s networking without wiping the cluster:

  1. Go to the host detail page
  2. Click Reconfigure Networking in the cluster membership section
  3. PodWarden will re-detect the topology, update K3s --node-ip / --node-external-ip, regenerate TLS certificates, and restart K3s

This is a non-destructive operation — your workloads continue running. The host's kubeconfig is automatically invalidated and re-fetched.

The same action is available via the API (POST /hosts/{id}/reconfigure-networking) and MCP (reconfigure_host_networking tool).
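For example, with curl. The base URL, port, and host ID are placeholders, and authentication depends on your deployment:

```shell
#!/bin/sh
HOST_ID=12
URL="http://localhost:8080/hosts/${HOST_ID}/reconfigure-networking"
# Trigger re-detection and K3s reconfiguration for the host
curl -s -X POST "$URL" || echo "request failed (is PodWarden reachable?)"
```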

Deployment Topologies

PodWarden supports all common cloud deployment patterns:

Single-node cloud (PodWarden + K3s on same host)

The simplest setup. Install PodWarden via Docker Compose on an EC2/GCE/Azure VM, then provision the same host as a K3s control plane. PodWarden detects the cloud NAT and configures everything correctly.

Multi-node VPC (no Tailscale)

Add hosts by their private VPC IPs using the Add Host button. All nodes in the same VPC can reach each other directly. No Tailscale needed.

Hybrid cloud + on-prem via Tailscale

Mix cloud and on-prem nodes using Tailscale for connectivity. PodWarden automatically tests which IP the agent can use to reach the control plane (internal → Tailscale → public) and selects the first reachable one.

Multi-cloud

Nodes across different cloud providers connected via Tailscale. Each node gets the correct --node-ip for its local network, and the Tailscale overlay handles cross-cloud communication.

Longhorn Storage on Cloud

Longhorn distributed storage works on cloud instances. After provisioning, check the Longhorn Storage card on the cluster detail page for status.

Important: Longhorn requires sufficient disk space. It reserves 15% of the disk as minimum available storage. On small cloud instances (e.g., AWS t3.micro with 8 GB disk), this threshold can prevent Longhorn from scheduling new volumes. The System Messages page will alert you if a node is not schedulable due to disk space.

You can run a storage speed benchmark from the cluster detail page to measure Longhorn's read/write performance on your cloud instance.
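To see why small instances hit the 15% threshold, work out the reserved share for an 8 GB root disk (numbers are illustrative):

```shell
#!/bin/sh
# Longhorn keeps 15% of the disk as minimum available storage.
total_mb=8192                          # 8 GB root disk (e.g., t3.micro default)
reserve_mb=$(( total_mb * 15 / 100 ))  # space that must stay free
echo "reserved: ${reserve_mb} MB"      # prints "reserved: 1228 MB"
```

After the OS and K3s itself consume their share, little of the remaining space may be schedulable for volumes.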

AWS-Specific Notes

  • Security Groups: Port 6443 (K3s API) must be open between nodes. If using Tailscale, this is handled automatically.
  • IMDSv2: PodWarden supports both IMDSv1 and IMDSv2 for metadata detection.
  • Instance types: Any instance type works. Minimum recommended: 2 vCPU, 4 GB RAM, 20 GB disk for a single-node cluster with Longhorn.
  • EBS volumes: Longhorn stores data on the root EBS volume by default. For production, consider attaching a dedicated EBS volume.
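As a sanity check, you can query the instance metadata yourself from inside the VM. The IMDSv2 flow fetches a session token first; these endpoints only answer on EC2:

```shell
#!/bin/sh
# IMDSv2: fetch a session token, then use it to read metadata.
TOKEN="$(curl -s --max-time 2 -X PUT \
  -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600' \
  http://169.254.169.254/latest/api/token || true)"
curl -s --max-time 2 -H "X-aws-ec2-metadata-token: ${TOKEN}" \
  http://169.254.169.254/latest/meta-data/local-ipv4 || true
```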

Troubleshooting

"Cluster unreachable" after provisioning

The kubeconfig may have the wrong API server URL. Check the cluster detail page — if the URL shows a public IP that's not reachable from the PodWarden host, set an API Server URL override to the internal IP.

Longhorn "not schedulable"

Usually a disk space issue. Check df -h on the host. Longhorn needs at least 15% of the disk free. Free up space or add more storage.
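A quick way to check the threshold on a Linux host (assumes GNU df):

```shell
#!/bin/sh
# Compare free space on / against Longhorn's 15% minimum-available threshold.
used_pct="$(df --output=pcent / | tail -n 1 | tr -dc '0-9')"
free_pct=$(( 100 - used_pct ))
if [ "$free_pct" -lt 15 ]; then
  echo "only ${free_pct}% free: below Longhorn's threshold"
else
  echo "${free_pct}% free: ok"
fi
```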

Provisioning fails with SSH errors

Ensure the SSH key is authorized on the target host and the SSH user has sudo access. For AWS, the default user is ubuntu (needs sudo) or root (if configured).