PodWarden
User Manual

Clusters

K3s cluster list and detail pages with nodes, endpoints, and live status

[Screenshot: PodWarden clusters page — all K3s clusters managed by PodWarden with live status indicators]

What you see

URL: /clusters (list), /clusters/[id] (detail)

The clusters page shows all K3s clusters managed by PodWarden. You can switch between two view modes using the toggle in the top-right corner:

  • List view -- A table with sortable columns.
  • Tile view -- Cards showing each cluster with key metrics at a glance.

Fields / columns

| Column | Description |
| --- | --- |
| Name | Cluster display name |
| Environment | Label such as production, staging, or development |
| Nodes | Number of nodes in the cluster (live count from the K3s API) |
| Endpoints | Number of service endpoints exposed by the cluster (live) |
| Workload pods | Number of running pods across all workload namespaces (live) |
| Kubeconfig | Indicates whether a kubeconfig is stored for this cluster |

Live columns are refreshed each time the page loads or when you click Refresh.

Available actions

| Action | Where | What it does |
| --- | --- | --- |
| Refresh | List page toolbar | Re-fetches live data (nodes, endpoints, pods) from all clusters |
| Toggle view | List page toolbar | Switches between Tile and List view modes |
| Create | List page toolbar | Opens the cluster creation form |
| Delete | List page / Detail page | Removes the cluster record. A confirmation modal shows all affected resources (assignments, ingress rules, deployments) that will be orphaned |
| View detail | Click a cluster row or tile | Opens the cluster detail page |
| CI snippet | Detail page | Generates a .gitlab-ci.yml snippet with cluster-specific values for build and deploy stages |

Delete confirmation

[Screenshot: PodWarden cluster delete confirmation modal — delete confirmation showing affected resources that will be orphaned]

Before deleting a cluster, PodWarden shows a confirmation dialog listing all resources that depend on it -- assignments, ingress rules, and deployments. This helps you understand the impact before proceeding.

Cluster detail page

URL: /clusters/[id]

[Screenshot: PodWarden cluster detail page — cluster detail view showing nodes, endpoints, and connection status]

The detail page shows comprehensive information about a single cluster:

Cluster info

  • Name and Environment label
  • API server URL -- The K3s API endpoint (auto-detected or manually overridden)
  • Effective API server URL -- The URL PodWarden actually uses for kubectl. If you set an override, it shows (override) next to the URL.
  • Kubeconfig -- Download or view the stored kubeconfig
  • Created at -- When the cluster was registered
  • Network types -- Available connectivity (public, mesh, lan)

API Server URL Override

PodWarden auto-detects the best API server URL based on the control plane host's internal IP, Tailscale IP, or SSH address. If the auto-detected URL doesn't work for your setup (e.g., you use a load balancer or custom DNS), you can set a manual override:

  • Click the API server URL to edit it
  • Enter your custom URL (e.g., https://my-lb.example.com:6443)
  • Click Reset to Auto to clear the override
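The selection order above can be pictured as a small fallback chain. The sketch below is illustrative only — the function name, argument order, and the :6443 default port are assumptions, not PodWarden's actual code:

```shell
# Hypothetical sketch of the effective API server URL selection:
# a manual override always wins; otherwise the first available host
# (internal IP, then Tailscale IP, then SSH address) is used.
effective_api_server() {
  override=$1; internal_ip=$2; tailscale_ip=$3; ssh_host=$4
  if [ -n "$override" ]; then
    echo "$override"
    return 0
  fi
  for host in "$internal_ip" "$tailscale_ip" "$ssh_host"; do
    if [ -n "$host" ]; then
      echo "https://${host}:6443"
      return 0
    fi
  done
  return 1
}

effective_api_server "" "10.0.0.5" "" "node1.example.com"
# prints https://10.0.0.5:6443
```

Setting an override is equivalent to the first branch: the auto-detected candidates are ignored entirely until you click Reset to Auto.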

Nodes

A table of all nodes in the cluster:

| Column | Description |
| --- | --- |
| Name | Kubernetes node name |
| Role | control-plane, worker, or both |
| Status | Ready, NotReady, or Unknown |
| IP | Internal IP address |
| OS | Operating system and kernel version |
| Resources | CPU and memory capacity |

Longhorn Storage

If Longhorn distributed storage is installed, a Longhorn Storage card shows the current status:

| Status | Badge | Meaning |
| --- | --- | --- |
| ready | Green | All pods running, nodes schedulable — storage is fully operational |
| starting | Amber | Longhorn is still initializing (pods not all ready yet) |
| degraded | Red | All pods running but storage nodes not schedulable (usually a disk-space issue) |

The card shows:

  • Pod count -- e.g., "19/19 ready"
  • Node schedulability -- per-node status
  • Run Speed Test button -- runs a 100 MB write/read benchmark using dd through a temporary PVC, reports throughput in MB/s

If Longhorn is not installed, this card does not appear.
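The speed test can be approximated by hand with dd against any mounted path. The sketch below is a rough stand-in, not PodWarden's implementation: run it against a directory backed by a Longhorn PVC to measure that volume (/tmp here is just a placeholder, and the bs/conv flags are assumptions):

```shell
# Rough, hand-run equivalent of the 100 MB write/read benchmark.
speed_test() {
  dir=$1
  # dd prints its throughput summary on stderr; keep only that line.
  dd if=/dev/zero of="$dir/pw-speedtest" bs=1M count=100 conv=fsync 2>&1 | tail -n 1
  dd if="$dir/pw-speedtest" of=/dev/null bs=1M 2>&1 | tail -n 1
  rm -f "$dir/pw-speedtest"
}

speed_test /tmp
```

conv=fsync forces the write to hit the backing store before dd reports its rate, which matters for a distributed volume; without it you would mostly be measuring the page cache.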

Storage classes

Lists all Kubernetes StorageClasses available in the cluster. If no storage classes exist, PodWarden shows a recommendation to install Longhorn or OpenEBS.

Endpoints

A list of service endpoints exposed by the cluster, showing the service name, namespace, port, and protocol.

Node management

The cluster detail page provides node-level operations for maintenance and workload migration:

| Action | Description |
| --- | --- |
| Cordon | Marks a node as unschedulable — no new pods will be placed on it, but existing pods continue running |
| Uncordon | Marks a cordoned node as schedulable again |
| Drain | Evicts all pods from a node. Automatically cordons the node first. Useful before hardware maintenance or node removal |
| Delete node | Removes a node from the K3s cluster |

Drain options

| Option | Default | Description |
| --- | --- | --- |
| force | false | Force drain even if pods aren't managed by a controller |
| ignore_daemonsets | true | Skip DaemonSet pods (they're expected on every node) |
| delete_emptydir_data | true | Evict pods that use emptyDir volumes (their local data is deleted) |
| timeout_seconds | 120 | How long to wait before giving up |
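These options correspond to the standard kubectl drain flags. A sketch of how the flag string might be assembled — the function is illustrative, not PodWarden's code:

```shell
# Illustrative mapping of the drain options above to kubectl drain flags.
drain_args() {
  force=$1; ignore_ds=$2; delete_emptydir=$3; timeout=$4
  args="--timeout=${timeout}s"
  [ "$force" = "true" ] && args="$args --force"
  [ "$ignore_ds" = "true" ] && args="$args --ignore-daemonsets"
  [ "$delete_emptydir" = "true" ] && args="$args --delete-emptydir-data"
  echo "$args"
}

# With the defaults from the table:
drain_args false true true 120
# prints: --timeout=120s --ignore-daemonsets --delete-emptydir-data
```

The equivalent manual command would then be `kubectl drain <node>` followed by those flags.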

Workload migration

To migrate a deployment from one node to another:

  1. Cordon the source node (prevents new scheduling)
  2. Drain the node (evicts all pods)
  3. Update the deployment's placement to the target node
  4. Redeploy the workload

With Longhorn distributed storage, persistent volumes are replicated across nodes — no manual data migration is needed. PodWarden runs a pre-flight check to verify the target node is in the volume's allowed node affinity list before deploying.

After deployment, PodWarden performs a post-deploy health check (polling pod readiness for up to 90 seconds) and transitions the deployment to the error state if the pods don't become healthy.
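The post-deploy check can be pictured as a simple poll loop. This sketch mirrors the 90-second window from the text; the function name, the 3-second interval, and the readiness command are assumptions:

```shell
# Simplified sketch of the post-deploy readiness poll. check_cmd stands in
# for a real readiness probe (e.g. kubectl get pods filtered for Ready);
# here it is any command whose exit code signals "all pods ready".
wait_ready() {
  check_cmd=$1; timeout=$2; interval=$3
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if $check_cmd; then
      echo "healthy"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "error"   # mirrors the deployment transitioning to the error state
  return 1
}

wait_ready true 90 3
# prints "healthy" on the first poll
```

A real probe would succeed only once every pod reports Ready, so a deployment that never stabilizes exhausts the window and lands in the error state.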

Status badges

| Badge | Meaning |
| --- | --- |
| connected | PodWarden can reach the K3s API |
| unreachable | API server is not responding |
| no kubeconfig | No kubeconfig stored -- cluster cannot be managed |

Related docs

Clusters | PodWarden Hub