
Networking, DNS & Ingress

Router configuration, DNS setup, Traefik customization, and the ndots problem — practical guide from real deployments

Overview

This guide covers the network path from the public internet to your workloads. It is a companion to the Ingress, Gateway Nodes & DDNS guide, which covers PodWarden's ingress features. This guide focuses on the infrastructure layer beneath: router configuration, DNS records, Traefik tuning, and common pitfalls discovered during real deployments.

Network Architecture

A typical self-hosted PodWarden deployment has this network layout:

  1. Router (pfSense, VyOS, or consumer router) — handles NAT and port forwarding
  2. Gateway node — a K3s node with Traefik that receives all inbound HTTP/HTTPS traffic
  3. Worker nodes — additional K3s nodes running workloads
  4. PodWarden bootstrap — the machine running PodWarden itself (can be a cluster node or separate)

All nodes are on the same LAN (e.g. 10.10.0.0/24). The gateway node has a predictable LAN IP that the router forwards ports to. The router's WAN interface has the public IP.

DNS Setup

Wildcard A Record

The simplest approach is a wildcard DNS record that sends all subdomains to your public IP. This avoids creating individual DNS records for each service.

In Cloudflare (or any DNS provider):

Type  Name           Content       Proxy
A     *.example.com  203.0.113.50  DNS only (gray cloud)
A     example.com    203.0.113.50  DNS only (gray cloud)

Replace 203.0.113.50 with your actual public IP. The wildcard covers all subdomains (app.example.com, grafana.example.com, etc.).

Note: Set the record to DNS only (gray cloud in Cloudflare), not proxied. Cloudflare's proxy can interfere with Let's Encrypt HTTP-01 challenges and makes PodWarden's DNS health checks show false mismatches.
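Once the records exist, you can confirm they resolve as expected from a machine outside the cluster. A quick check with dig, using the example values above (substitute your own domain and public IP):

```shell
# Any subdomain should resolve to the public IP via the wildcard record
dig +short app.example.com

# The apex record should return the same IP
dig +short example.com
```

If either query returns nothing or a Cloudflare proxy IP, re-check the record type and the proxy (gray cloud) setting.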

Automatic DNS via PodWarden

If you register your domain in PodWarden's Domains page with Cloudflare credentials, PodWarden can automatically create A records when you create ingress rules. See Ingress, Gateway Nodes & DDNS for details.

DDNS for Dynamic IPs

If your public IP changes, configure DDNS in Settings > DDNS to keep records updated. PodWarden checks your IP every 5 minutes and updates Cloudflare, DuckDNS, or custom webhook providers. See the DDNS section for setup instructions.

Port Forwarding

Your router must forward ports 80 (HTTP) and 443 (HTTPS) from the WAN interface to the gateway node's LAN IP.

pfSense

  1. Navigate to Firewall > NAT > Port Forward
  2. Create two rules:

Rule 1 — HTTP:

Field                 Value
Interface             WAN
Protocol              TCP
Destination           WAN address
Destination Port      80
Redirect Target IP    10.10.0.100 (gateway node LAN IP)
Redirect Target Port  80
Description           HTTP to K3s gateway

Rule 2 — HTTPS:

Field                 Value
Interface             WAN
Protocol              TCP
Destination           WAN address
Destination Port      443
Redirect Target IP    10.10.0.100
Redirect Target Port  443
Description           HTTPS to K3s gateway

  3. Save and apply.

VyOS

set nat destination rule 100 description 'HTTP to K3s gateway'
set nat destination rule 100 destination port '80'
set nat destination rule 100 inbound-interface name 'eth0'
set nat destination rule 100 protocol 'tcp'
set nat destination rule 100 translation address '10.10.0.100'
set nat destination rule 100 translation port '80'

set nat destination rule 110 description 'HTTPS to K3s gateway'
set nat destination rule 110 destination port '443'
set nat destination rule 110 inbound-interface name 'eth0'
set nat destination rule 110 protocol 'tcp'
set nat destination rule 110 translation address '10.10.0.100'
set nat destination rule 110 translation port '443'

commit
save

Replace eth0 with your WAN interface name and 10.10.0.100 with your gateway node's LAN IP.
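After applying the rules, verify them from a network outside your LAN (e.g. a phone on mobile data or a cloud VM) — testing from inside the LAN can give misleading results because of hairpin NAT, covered next. The domain below is the running example from this guide:

```shell
# Port 80 should reach Traefik (expect an HTTP response or redirect, not a timeout)
curl -sv http://example.com -o /dev/null 2>&1 | grep -E 'Connected to|HTTP/'

# Port 443 should complete a TCP connection and TLS handshake
curl -skv https://example.com -o /dev/null 2>&1 | grep -E 'Connected to|subject:'
```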

Hairpin NAT Warning

Warning: Do not NAT traffic that originates from the LAN subnet. If a pod on the cluster makes an HTTPS request to a domain that resolves to your public IP, the traffic will go out to the router and get NAT'd back in. This creates a hairpin loop that can cause connection failures, especially with wildcard DNS records.

The problem: With a wildcard DNS record (*.example.com pointing to your public IP), every DNS lookup for any subdomain returns your public IP — including lookups from inside the cluster. When a pod resolves api.example.com, it gets the public IP and sends the request to the router, which NATs it back to the gateway node. This works sometimes but fails under load or with certain connection tracking configurations.

The fix in pfSense:

Add an exclusion to each NAT rule so traffic from the LAN subnet is not forwarded:

  1. Edit each port forward rule
  2. Under Source, set Source Address to ! LAN net (invert match)
  3. This ensures only WAN-originated traffic is forwarded

The fix in VyOS:

Explicitly match only the WAN source:

set nat destination rule 100 source address '!10.10.0.0/24'
set nat destination rule 110 source address '!10.10.0.0/24'
commit
save

For traffic from inside the cluster to your own services, consider using Kubernetes internal DNS names (svc.cluster.local) instead of public domains.
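For example, a pod that needs to reach a Grafana instance running in the cluster can address the Service directly instead of going through the public domain (the service and namespace names here are illustrative — substitute your own):

```shell
# Internal: resolves to the ClusterIP and never leaves the cluster
curl http://grafana.monitoring.svc.cluster.local:3000/api/health

# Avoid from inside the cluster: resolves to the public IP and triggers hairpin NAT
# curl https://grafana.example.com/api/health
```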

Traefik Configuration

K3s installs Traefik as the default ingress controller via a HelmChart resource. PodWarden creates standard Kubernetes Ingress resources that Traefik picks up automatically.

Pinning Traefik to the Gateway Node

By default, K3s can schedule Traefik on any node. Since only the gateway node receives port-forwarded traffic from the router, Traefik must run on that specific node.

Step 1: Label the gateway node

kubectl label node k3s-gw node-role.kubernetes.io/gateway=true

Step 2: Patch the Traefik deployment with a nodeSelector

kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/gateway":"true"}}]'

Note: You might expect to use a HelmChartConfig to set the nodeSelector, but in practice patching the deployment directly is more reliable. See the HelmChartConfig Limitations section below.

After patching, verify Traefik is running on the gateway node:

kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik -o wide

Let's Encrypt ACME (HTTP Challenge)

Traefik obtains TLS certificates automatically via Let's Encrypt using the HTTP-01 challenge. This requires:

  1. Port 80 forwarded to the gateway node (for the ACME challenge)
  2. DNS records pointing to your public IP
  3. Traefik's certificate resolver configured (K3s does this by default)

Certificates are stored in /data/acme.json inside the Traefik pod. K3s persists this as a file on the node, so certificates survive pod restarts.

To check certificate status:

# View Traefik logs for ACME activity
kubectl -n kube-system logs -l app.kubernetes.io/name=traefik --tail=100 | grep -i acme

The ndots:5 Problem

This is a subtle issue that affects outbound HTTPS from pods when you use wildcard DNS.

The problem:

Kubernetes sets ndots:5 in pod DNS configuration by default. This means any hostname with fewer than 5 dots is first tried with the cluster search domains appended. When a pod resolves api.stripe.com (2 dots), the resolver first tries:

  1. api.stripe.com.default.svc.cluster.local
  2. api.stripe.com.svc.cluster.local
  3. api.stripe.com.cluster.local
  4. api.stripe.com (the actual query, tried last)

With a wildcard DNS record for your domain, if any of the intermediate queries leak to your external DNS (which can happen with certain CoreDNS configurations), they may resolve to your public IP instead of failing. This causes Traefik to receive HTTPS requests intended for external services.
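You can see the ndots option and the search domains that drive this expansion by inspecting any pod's resolver configuration (the pod name is illustrative; the nameserver IP shown is the typical K3s default):

```shell
kubectl exec -it some-pod -- cat /etc/resolv.conf
# Typical K3s output:
#   search default.svc.cluster.local svc.cluster.local cluster.local
#   nameserver 10.43.0.10
#   options ndots:5
```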

Symptoms:

  • Pods get TLS errors when making outbound HTTPS requests
  • Traefik logs show requests for unexpected domains
  • External API calls from pods fail intermittently

The fix: Set ndots:2 on the Traefik deployment so its DNS lookups bypass the search domain expansion:

kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/dnsConfig","value":{"options":[{"name":"ndots","value":"2"}]}}]'

Note: This fix applies specifically to Traefik. Most workload pods are not affected because their outbound traffic goes directly to the destination, not through Traefik. However, if you see DNS-related issues in other pods, applying ndots:2 via a pod's dnsConfig is the same fix.

HelmChartConfig Limitations

K3s provides a HelmChartConfig CRD to customize Helm charts deployed by K3s (like Traefik). In theory, you can set nodeSelector and dnsConfig through it:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    nodeSelector:
      node-role.kubernetes.io/gateway: "true"
    dnsConfig:
      options:
        - name: ndots
          value: "2"

Warning: In practice, HelmChartConfig's dnsConfig does not propagate to the Traefik pod spec reliably. The nodeSelector via HelmChartConfig generally works, but dnsConfig may be silently ignored depending on the Traefik Helm chart version. Use kubectl patch deployment directly for both settings to be safe.

Combined Traefik Patch

Apply both the nodeSelector and dnsConfig fixes in a single patch:

kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[
    {"op":"add","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/gateway":"true"}},
    {"op":"add","path":"/spec/template/spec/dnsConfig","value":{"options":[{"name":"ndots","value":"2"}]}}
  ]'

Verify the patch was applied:

kubectl -n kube-system get deployment traefik -o jsonpath='{.spec.template.spec.nodeSelector}'
# Expected: {"node-role.kubernetes.io/gateway":"true"}

kubectl -n kube-system get deployment traefik -o jsonpath='{.spec.template.spec.dnsConfig}'
# Expected: {"options":[{"name":"ndots","value":"2"}]}
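Patching the pod template triggers a rolling restart. Wait for it to finish, then confirm the new pod landed on the gateway node:

```shell
# Block until the new Traefik pod is up
kubectl -n kube-system rollout status deployment traefik

# The NODE column should show your gateway node
kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik -o wide
```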

Creating Ingress Rules in PodWarden

With DNS and port forwarding configured, create ingress rules in PodWarden to route traffic to your workloads.

  1. Enable the gateway. Go to Hosts, select the gateway node, and toggle Enable as Gateway Node. PodWarden detects the public IP automatically.

  2. Create an ingress rule. Go to Ingress > New Rule:

    • Domain: e.g. grafana.example.com
    • Backend: select a deployed workload or enter a manual IP:port
    • Gateway Host: select your gateway node
    • TLS: enabled (Let's Encrypt)
  3. Deploy the rule. Click the deploy button. PodWarden creates the Kubernetes Ingress and Service resources.

  4. Verify. Use the health check buttons (DNS, HTTP, TLS) to confirm everything works end-to-end.

See Ingress, Gateway Nodes & DDNS for the full reference on backend types, multi-path routing, HTTPS backends, and troubleshooting.

Configuring Domains in PodWarden

Register your domains in PodWarden for automatic DNS record management:

  1. Go to Settings > Domains (or the Domains section in the sidebar)
  2. Click Add Domain
  3. Enter the domain name (e.g. example.com)
  4. Optionally add Cloudflare credentials (Zone ID + API Token with Zone:DNS:Edit permission)
  5. Save

With Cloudflare credentials, PodWarden can automatically create and update A records when you create ingress rules. Without credentials, you manage DNS manually.
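Before saving, you can sanity-check a Cloudflare API token against Cloudflare's token-verification endpoint (replace the placeholder with your token):

```shell
curl -s -H "Authorization: Bearer <API_TOKEN>" \
  https://api.cloudflare.com/client/v4/user/tokens/verify
# A valid token returns "success": true and "status": "active"
```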

DDNS Setup

If your ISP assigns a dynamic public IP, configure DDNS to keep DNS records current:

  1. Go to Settings > DDNS
  2. Click Add Config
  3. Select your provider (Cloudflare, DuckDNS, Webhook, or Hub)
  4. Enter the required credentials
  5. PodWarden checks your IP every 5 minutes and updates records on change

See the DDNS section for provider-specific configuration.

TLS Certificate Troubleshooting

Certificate Not Issued

If you see a browser certificate error after deploying an ingress rule:

  1. Check DNS first. The domain must resolve to your gateway's public IP. Run PodWarden's DNS health check or dig +short yourdomain.com.
  2. Check port 80. Let's Encrypt needs port 80 for the HTTP-01 challenge. Verify port forwarding is configured.
  3. Check Traefik logs:
kubectl -n kube-system logs -l app.kubernetes.io/name=traefik --tail=200 | grep -i "acme\|letsencrypt\|certificate"
  4. Let's Encrypt rate limits. If you see rate limit errors, wait an hour. Avoid deploying and undeploying the same domain repeatedly.
  5. Cloudflare proxy. If the domain is proxied through Cloudflare (orange cloud), set SSL mode to "Full" so the HTTP-01 challenge can reach Traefik. Or switch to DNS-only mode.

Certificate Shows Wrong Domain

If the browser shows a certificate for *.example.com when you expect one for app.example.com, Traefik may be serving a cached wildcard cert. Check if you have a wildcard certificate configured in Traefik. For per-domain certs via Let's Encrypt, each ingress rule gets its own certificate automatically.

Self-Signed Certificate Warning

Traefik serves a default self-signed certificate before Let's Encrypt issues a real one. If you see "TRAEFIK DEFAULT CERT" in the certificate details, the ACME process has not completed. Check the steps above.
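To see exactly which certificate Traefik is serving for a domain without a browser, inspect it with openssl (the domain is the running example from this guide):

```shell
echo | openssl s_client -connect app.example.com:443 -servername app.example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
# "TRAEFIK DEFAULT CERT" in the subject means the ACME process has not completed
```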

Complete Example: Proxmox Homelab

Here is the full sequence for a Proxmox-based deployment with 3 VMs:

VM      IP           Role
k3s-cp  10.10.0.100  Control plane + gateway
k3s-w1  10.10.0.101  Worker
k3s-w2  10.10.0.102  Worker

1. Prepare VMs: Install Ubuntu 24.04, set static IPs, configure SSH keys.

2. Install PodWarden on a management machine (or on k3s-cp itself).

3. Add hosts in PodWarden, probe all three.

4. Provision control plane on k3s-cp.

5. Join workers k3s-w1 and k3s-w2 to the cluster.

6. Label the gateway node:

kubectl label node k3s-cp node-role.kubernetes.io/gateway=true

7. Patch Traefik:

kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[
    {"op":"add","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/gateway":"true"}},
    {"op":"add","path":"/spec/template/spec/dnsConfig","value":{"options":[{"name":"ndots","value":"2"}]}}
  ]'

8. Configure DNS: Create *.example.com A record pointing to your public IP.

9. Configure port forwarding on pfSense/VyOS for ports 80 and 443 to 10.10.0.100.

10. Enable gateway on k3s-cp in PodWarden.

11. Create ingress rules and deploy workloads.
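12. Verify end-to-end from outside the LAN (the domain is the example used throughout this guide):

```shell
dig +short grafana.example.com        # should return your public IP
curl -I https://grafana.example.com   # should return an HTTP status from your workload
```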

Next Steps