Networking, DNS & Ingress
Router configuration, DNS setup, Traefik customization, and the ndots problem — practical guide from real deployments
Overview
This guide covers the network path from the public internet to your workloads. It is a companion to the Ingress, Gateway Nodes & DDNS guide, which covers PodWarden's ingress features. This guide focuses on the infrastructure layer beneath: router configuration, DNS records, Traefik tuning, and common pitfalls discovered during real deployments.
Network Architecture
A typical self-hosted PodWarden deployment has this network layout:
- Router (pfSense, VyOS, or consumer router) — handles NAT and port forwarding
- Gateway node — a K3s node with Traefik that receives all inbound HTTP/HTTPS traffic
- Worker nodes — additional K3s nodes running workloads
- PodWarden bootstrap — the machine running PodWarden itself (can be a cluster node or separate)
All nodes are on the same LAN (e.g. 10.10.0.0/24). The gateway node has a predictable LAN IP that the router forwards ports to. The router's WAN interface has the public IP.
DNS Setup
Wildcard A Record
The simplest approach is a wildcard DNS record that sends all subdomains to your public IP. This avoids creating individual DNS records for each service.
In Cloudflare (or any DNS provider):
| Type | Name | Content | Proxy |
|---|---|---|---|
| A | *.example.com | 203.0.113.50 | DNS only (gray cloud) |
| A | example.com | 203.0.113.50 | DNS only (gray cloud) |
Replace 203.0.113.50 with your actual public IP. The wildcard covers all subdomains (app.example.com, grafana.example.com, etc.).
Note: Set the record to DNS only (gray cloud in Cloudflare), not proxied. Cloudflare's proxy can interfere with Let's Encrypt HTTP-01 challenges and makes PodWarden's DNS health checks show false mismatches.
Automatic DNS via PodWarden
If you register your domain in PodWarden's Domains page with Cloudflare credentials, PodWarden can automatically create A records when you create ingress rules. See Ingress, Gateway Nodes & DDNS for details.
DDNS for Dynamic IPs
If your public IP changes, configure DDNS in Settings > DDNS to keep records updated. PodWarden checks your IP every 5 minutes and updates Cloudflare, DuckDNS, or custom webhook providers. See the DDNS section for setup instructions.
Port Forwarding
Your router must forward ports 80 (HTTP) and 443 (HTTPS) from the WAN interface to the gateway node's LAN IP.
pfSense
- Navigate to Firewall > NAT > Port Forward
- Create two rules:
Rule 1 — HTTP:
| Field | Value |
|---|---|
| Interface | WAN |
| Protocol | TCP |
| Destination | WAN address |
| Destination Port | 80 |
| Redirect Target IP | 10.10.0.100 (gateway node LAN IP) |
| Redirect Target Port | 80 |
| Description | HTTP to K3s gateway |
Rule 2 — HTTPS:
| Field | Value |
|---|---|
| Interface | WAN |
| Destination Port | 443 |
| Redirect Target IP | 10.10.0.100 |
| Redirect Target Port | 443 |
| Description | HTTPS to K3s gateway |
- Save and apply.
VyOS
```
set nat destination rule 100 description 'HTTP to K3s gateway'
set nat destination rule 100 destination port '80'
set nat destination rule 100 inbound-interface name 'eth0'
set nat destination rule 100 protocol 'tcp'
set nat destination rule 100 translation address '10.10.0.100'
set nat destination rule 100 translation port '80'
set nat destination rule 110 description 'HTTPS to K3s gateway'
set nat destination rule 110 destination port '443'
set nat destination rule 110 inbound-interface name 'eth0'
set nat destination rule 110 protocol 'tcp'
set nat destination rule 110 translation address '10.10.0.100'
set nat destination rule 110 translation port '443'
commit
save
```

Replace `eth0` with your WAN interface name and `10.10.0.100` with your gateway node's LAN IP.
Hairpin NAT Warning
Warning: Do not NAT traffic that originates from the LAN subnet. If a pod on the cluster makes an HTTPS request to a domain that resolves to your public IP, the traffic will go out to the router and get NAT'd back in. This creates a hairpin loop that can cause connection failures, especially with wildcard DNS records.
The problem: With a wildcard DNS record (*.example.com pointing to your public IP), every DNS lookup for any subdomain returns your public IP — including lookups from inside the cluster. When a pod resolves api.example.com, it gets the public IP, sends the request to the router, which NAT's it back to the gateway node. This works sometimes but fails under load or with certain connection tracking configurations.
The fix in pfSense:
Add an exclusion to each NAT rule so traffic from the LAN subnet is not forwarded:
- Edit each port forward rule
- Under Source, set Source Address to `LAN net` and check Invert match (displayed as `! LAN net`)
- This ensures only WAN-originated traffic is forwarded
The fix in VyOS:
Explicitly match only the WAN source:
```
set nat destination rule 100 source address '!10.10.0.0/24'
set nat destination rule 110 source address '!10.10.0.0/24'
commit
save
```

For traffic from inside the cluster to your own services, consider using Kubernetes internal DNS names (`svc.cluster.local`) instead of public domains.
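The cluster-internal name for a Service follows a fixed pattern; a tiny helper (illustrative only) makes the shape explicit:

```python
def cluster_dns_name(service: str, namespace: str = "default") -> str:
    # Kubernetes gives every Service a stable in-cluster DNS name:
    #   <service>.<namespace>.svc.cluster.local
    # Using this instead of the public domain keeps traffic on the LAN
    # and avoids the hairpin NAT path entirely.
    return f"{service}.{namespace}.svc.cluster.local"

# A pod talking to Grafana in the "monitoring" namespace would use:
#   http://grafana.monitoring.svc.cluster.local
```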
Traefik Configuration
K3s installs Traefik as the default ingress controller via a HelmChart resource. PodWarden creates standard Kubernetes Ingress resources that Traefik picks up automatically.
Pinning Traefik to the Gateway Node
By default, K3s can schedule Traefik on any node. Since only the gateway node receives port-forwarded traffic from the router, Traefik must run on that specific node.
Step 1: Label the gateway node
```
kubectl label node k3s-gw node-role.kubernetes.io/gateway=true
```

Step 2: Patch the Traefik deployment with a nodeSelector
```
kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/gateway":"true"}}]'
```

Note: You might expect to use a HelmChartConfig to set the nodeSelector, but in practice patching the deployment directly is more reliable. See the HelmChartConfig Limitations section below.
After patching, verify Traefik is running on the gateway node:
```
kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik -o wide
```

Let's Encrypt ACME (HTTP Challenge)
Traefik obtains TLS certificates automatically via Let's Encrypt using the HTTP-01 challenge. This requires:
- Port 80 forwarded to the gateway node (for the ACME challenge)
- DNS records pointing to your public IP
- Traefik's certificate resolver configured (K3s does this by default)
Certificates are stored in /data/acme.json inside the Traefik pod. K3s persists this as a file on the node, so certificates survive pod restarts.
To check certificate status:
```
# View Traefik logs for ACME activity
kubectl -n kube-system logs -l app.kubernetes.io/name=traefik --tail=100 | grep -i acme
```

The ndots:5 Problem
This is a subtle issue that affects outbound HTTPS from pods when you use wildcard DNS.
The problem:
Kubernetes sets ndots:5 in pod DNS configuration by default. This means any hostname with fewer than 5 dots is first tried with cluster search domains appended. When a pod resolves api.stripe.com (2 dots, less than 5), the resolver first tries:
- api.stripe.com.default.svc.cluster.local
- api.stripe.com.svc.cluster.local
- api.stripe.com.cluster.local
- api.stripe.com (the actual query, tried last)
With a wildcard DNS record for your domain, if any of the intermediate queries leak to your external DNS (which can happen with certain CoreDNS configurations), they may resolve to your public IP instead of failing. This causes Traefik to receive HTTPS requests intended for external services.
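The expansion order can be reproduced with a simplified Python model of how the resolver applies `ndots` and the search list (this mimics glibc's behavior; it is not the real implementation):

```python
# Simplified model of resolver ndots + search-domain behavior:
# if the name has fewer than `ndots` dots, the search domains are
# tried first and the literal name last; otherwise the literal name
# is tried first.
def query_order(name: str, search: list[str], ndots: int = 5) -> list[str]:
    if name.endswith("."):          # fully qualified: no expansion at all
        return [name]
    expanded = [f"{name}.{dom}" for dom in search]
    if name.count(".") < ndots:
        return expanded + [name]    # search domains first (the problem case)
    return [name] + expanded        # literal name first

search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]

# ndots=5: "api.stripe.com" (2 dots) goes through all search domains first.
print(query_order("api.stripe.com", search, ndots=5))

# ndots=2: 2 dots is no longer below the threshold, so the literal
# name is queried directly first -- which is why the fix works.
print(query_order("api.stripe.com", search, ndots=2))
```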
Symptoms:
- Pods get TLS errors when making outbound HTTPS requests
- Traefik logs show requests for unexpected domains
- External API calls from pods fail intermittently
The fix: Set ndots:2 on the Traefik deployment so its DNS lookups bypass the search domain expansion:
```
kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/dnsConfig","value":{"options":[{"name":"ndots","value":"2"}]}}]'
```

Note: This fix applies specifically to Traefik. Most workload pods are not affected because their outbound traffic goes directly to the destination, not through Traefik. However, if you see DNS-related issues in other pods, applying `ndots:2` via a pod's `dnsConfig` is the same fix.
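The same setting expressed declaratively in a workload's manifest looks like this (a pod-template fragment only; the surrounding Deployment is assumed):

```yaml
# Fragment of a Deployment's pod template spec.
spec:
  template:
    spec:
      dnsConfig:
        options:
          - name: ndots
            value: "2"
```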
HelmChartConfig Limitations
K3s provides a HelmChartConfig CRD to customize Helm charts deployed by K3s (like Traefik). In theory, you can set nodeSelector and dnsConfig through it:
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    nodeSelector:
      node-role.kubernetes.io/gateway: "true"
    dnsConfig:
      options:
        - name: ndots
          value: "2"
```

Warning: In practice, HelmChartConfig's `dnsConfig` does not propagate to the Traefik pod spec reliably. The nodeSelector via HelmChartConfig generally works, but `dnsConfig` may be silently ignored depending on the Traefik Helm chart version. Use `kubectl patch deployment` directly for both settings to be safe.
Combined Traefik Patch
Apply both the nodeSelector and dnsConfig fixes in a single patch:
```
kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[
    {"op":"add","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/gateway":"true"}},
    {"op":"add","path":"/spec/template/spec/dnsConfig","value":{"options":[{"name":"ndots","value":"2"}]}}
  ]'
```

Verify the patch was applied:

```
kubectl -n kube-system get deployment traefik -o jsonpath='{.spec.template.spec.nodeSelector}'
# Expected: {"node-role.kubernetes.io/gateway":"true"}
kubectl -n kube-system get deployment traefik -o jsonpath='{.spec.template.spec.dnsConfig}'
# Expected: {"options":[{"name":"ndots","value":"2"}]}
```

Creating Ingress Rules in PodWarden
With DNS and port forwarding configured, create ingress rules in PodWarden to route traffic to your workloads.
1. Enable the gateway. Go to Hosts, select the gateway node, and toggle Enable as Gateway Node. PodWarden detects the public IP automatically.
2. Create an ingress rule. Go to Ingress > New Rule:
   - Domain: e.g. `grafana.example.com`
   - Backend: select a deployed workload or enter a manual IP:port
   - Gateway Host: select your gateway node
   - TLS: enabled (Let's Encrypt)
3. Deploy the rule. Click the deploy button. PodWarden creates the Kubernetes Ingress and Service resources.
4. Verify. Use the health check buttons (DNS, HTTP, TLS) to confirm everything works end-to-end.
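Under the hood, a deployed rule corresponds to a standard Kubernetes Ingress roughly like the following. This is an illustrative sketch; the resource names, namespace, and port are examples, and the exact manifest PodWarden generates may differ:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana            # illustrative name
  namespace: default
spec:
  ingressClassName: traefik
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana   # Service created alongside the rule
                port:
                  number: 3000
  tls:
    - hosts:
        - grafana.example.com   # triggers certificate issuance for this host
```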
See Ingress, Gateway Nodes & DDNS for the full reference on backend types, multi-path routing, HTTPS backends, and troubleshooting.
Configuring Domains in PodWarden
Register your domains in PodWarden for automatic DNS record management:
- Go to Settings > Domains (or the Domains section in the sidebar)
- Click Add Domain
- Enter the domain name (e.g. `example.com`)
- Optionally add Cloudflare credentials (Zone ID + API Token with `Zone:DNS:Edit` permission)
- Save
With Cloudflare credentials, PodWarden can automatically create and update A records when you create ingress rules. Without credentials, you manage DNS manually.
DDNS Setup
If your ISP assigns a dynamic public IP, configure DDNS to keep DNS records current:
- Go to Settings > DDNS
- Click Add Config
- Select your provider (Cloudflare, DuckDNS, Webhook, or Hub)
- Enter the required credentials
- PodWarden checks your IP every 5 minutes and updates records on change
See the DDNS section for provider-specific configuration.
TLS Certificate Troubleshooting
Certificate Not Issued
If you see a browser certificate error after deploying an ingress rule:
- Check DNS first. The domain must resolve to your gateway's public IP. Run PodWarden's DNS health check or `dig +short yourdomain.com`.
- Check port 80. Let's Encrypt needs port 80 for the HTTP-01 challenge. Verify port forwarding is configured.
- Check Traefik logs:

```
kubectl -n kube-system logs -l app.kubernetes.io/name=traefik --tail=200 | grep -i "acme\|letsencrypt\|certificate"
```

- Let's Encrypt rate limits. If you see rate limit errors, wait an hour. Avoid deploying and undeploying the same domain repeatedly.
- Cloudflare proxy. If the domain is proxied through Cloudflare (orange cloud), set SSL mode to "Full" so the HTTP-01 challenge can reach Traefik. Or switch to DNS-only mode.
Certificate Shows Wrong Domain
If the browser shows a certificate for *.example.com when you expect one for app.example.com, Traefik may be serving a cached wildcard cert. Check if you have a wildcard certificate configured in Traefik. For per-domain certs via Let's Encrypt, each ingress rule gets its own certificate automatically.
Self-Signed Certificate Warning
Traefik serves a default self-signed certificate before Let's Encrypt issues a real one. If you see "TRAEFIK DEFAULT CERT" in the certificate details, the ACME process has not completed. Check the steps above.
Complete Example: Proxmox Homelab
Here is the full sequence for a Proxmox-based deployment with 3 VMs:
| VM | IP | Role |
|---|---|---|
| k3s-cp | 10.10.0.100 | Control plane + gateway |
| k3s-w1 | 10.10.0.101 | Worker |
| k3s-w2 | 10.10.0.102 | Worker |
1. Prepare VMs: Install Ubuntu 24.04, set static IPs, configure SSH keys.
2. Install PodWarden on a management machine (or on k3s-cp itself).
3. Add hosts in PodWarden, probe all three.
4. Provision control plane on k3s-cp.
5. Join workers k3s-w1 and k3s-w2 to the cluster.
6. Label the gateway node:
```
kubectl label node k3s-cp node-role.kubernetes.io/gateway=true
```

7. Patch Traefik:
```
kubectl -n kube-system patch deployment traefik \
  --type=json \
  -p='[
    {"op":"add","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/gateway":"true"}},
    {"op":"add","path":"/spec/template/spec/dnsConfig","value":{"options":[{"name":"ndots","value":"2"}]}}
  ]'
```

8. Configure DNS: Create a `*.example.com` A record pointing to your public IP.
9. Configure port forwarding on pfSense/VyOS for ports 80 and 443 to 10.10.0.100.
10. Enable gateway on k3s-cp in PodWarden.
11. Create ingress rules and deploy workloads.
Next Steps
- Infrastructure Setup Guide — initial PodWarden installation and cluster provisioning
- Storage & Backup Configuration — Longhorn, NFS, S3, and backup policies
- Networking — network types, deploy-time checks, mixed-network clusters