
Nginx Config

Craig Nielsen
February 23, 2026
Let's take a look at the high-level Nginx setup for one of our clients.
Nginx Server Setup: High-Level Overview
nginx.conf
The main nginx.conf file lives at /etc/nginx/nginx.conf and defines global behaviour for the Nginx process:
- Worker processes: set to "auto" (scales to available CPU cores)
- Worker connections: 768 per worker
- Gzip compression: enabled; see section below
- SSL: TLSv1.2 and TLSv1.3 enabled; server cipher preference on
- Virtual host loading: includes all files from /etc/nginx/conf.d/*.conf and /etc/nginx/sites-enabled/*
Gzip Compression
When a browser sends a request, it includes an Accept-Encoding: gzip header to signal that it can handle compressed responses. With "gzip on" in nginx.conf, Nginx compresses the response body before sending it, and the browser decompresses it transparently on receipt.
This is useful because text-based assets - HTML, CSS, JavaScript, and JSON - often compress to 20–30% of their original size. A 100 KB JavaScript file might become ~25 KB over the wire, reducing both bandwidth consumption and page load time. Binary files (images, videos, already-compressed archives) gain little from gzip and are typically excluded.
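The effect is easy to demonstrate locally with the gzip command-line tool, which uses the same compression levels as Nginx's gzip_comp_level. The file path and repetition count below are purely illustrative:

```shell
# Build a repetitive ~44 KB JavaScript-like sample, compress it at
# level 6, and compare the before/after sizes.
printf 'function hello() { console.log("hello"); }\n%.0s' $(seq 1 1000) > /tmp/sample.js
gzip -kf -6 /tmp/sample.js   # -k keeps the original, -f overwrites any old .gz
wc -c /tmp/sample.js /tmp/sample.js.gz
```

Highly repetitive text like this compresses far below the 20–30% figure quoted above; real minified JavaScript sits closer to it.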
The current config enables gzip at a basic level. The commented-out directives beneath it allow fine-grained control:
| Directive | Purpose |
|---|---|
| gzip_vary | Adds a "Vary: Accept-Encoding" header so CDNs and caches store both compressed and uncompressed versions |
| gzip_comp_level | Compression effort (1–9); higher = smaller files but more CPU; level 6 is a common sweet spot |
| gzip_types | Limits compression to specified MIME types - important to exclude binary formats that don't benefit |
| gzip_min_length | Skips compression for very small responses where the overhead isn't worth it |
These can be uncommented and tuned when optimising for production traffic.
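Uncommented and tuned, the block might look like the following sketch. The values are reasonable starting points, not the client's actual settings:

```nginx
gzip on;
gzip_vary on;
gzip_comp_level 6;
gzip_min_length 256;
# text/html is always compressed when gzip is on, so it is not listed here
gzip_types text/plain text/css application/json application/javascript
           text/xml application/xml application/xml+rss image/svg+xml;
```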

sites-available vs sites-enabled
The two directories follow the standard Debian/Ubuntu Nginx convention:
| Directory | Purpose |
|---|---|
| sites-available | Stores all server block configuration files, both active and inactive. This is the library of every site configured on the server. |
| sites-enabled | Contains only the configurations that are currently active. Nginx reads from here at startup. |
Enabling and disabling sites
A site is enabled by creating a symbolic link from sites-enabled pointing back to the file in sites-available. This means there is only ever one copy of the config; you are just controlling whether Nginx loads it.
```shell
# Enable a site
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite

# Disable a site (remove the symlink, leave the original intact)
sudo rm /etc/nginx/sites-enabled/mysite

# Reload Nginx to apply the change
sudo nginx -t && sudo systemctl reload nginx
```
nginx -t performs a syntax check before the reload so a broken config does not bring the server down.
Static Sites
Static sites serve pre-built HTML/CSS/JS files directly from disk. No application server or proxy is needed — Nginx reads the files and sends them straight to the client.
Root directory convention: place each site's files under /var/www/<sitename>/
Example server block:
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    root /var/www/example;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/example_access.log;
    error_log  /var/log/nginx/example_error.log;
}
```
Key points:
- "root" points to the directory containing "index.html" and all assets
- "try_files $uri $uri/ =404" attempts to serve the exact file path, then a directory index, and returns 404 if neither exists
- A non-standard port (e.g. "8083") can be used when the site sits behind another proxy or is for internal use only
- Per-site log files make debugging straightforward
Proxied Services — Connection to the k3s Cluster
Some server blocks act as reverse proxies, forwarding incoming HTTP requests through to backend services running inside a k3s Kubernetes cluster on the internal network.
How requests reach the cluster
Nginx forwards to an internal IP address on a specific port. That port is exposed by a Kubernetes "Service" of type "LoadBalancer", managed by the cluster's load balancer controller. The general flow is:
Internet → Nginx (public-facing, port 80) → LoadBalancer IP:port (internal network) → Kubernetes Service → Pod(s) running the application
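A minimal proxied server block following this flow might look like the sketch below, using the LoadBalancer address 192.168.2.6:8003 that appears in the service YAML later in this section. The hostname is illustrative:

```nginx
server {
    listen 80;
    server_name app.example.com;   # illustrative hostname

    location / {
        # Forward to the LoadBalancer IP:port exposed by the k3s Service
        proxy_pass http://192.168.2.6:8003;
    }
}
```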
Identifying the load balancer in use
From a service definition (kubectl get service <name> -o yaml), the key section to examine is:
```yaml
status:
  loadBalancer:
    ingress:
    - ip: 192.168.2.6
      ipMode: VIP
```
The "ipMode: VIP" value is characteristic of k3s's built-in load balancer controller, ServiceLB (previously known as Klipper). The k3s documentation confirms that when ServiceLB assigns an ingress IP from a node's external IP it sets "ipMode: VIP". The cluster IP falling in the "10.43.x.x" range is also consistent with k3s's default service CIDR, further pointing to a vanilla k3s installation with ServiceLB active.
To confirm definitively:
```shell
# Look for svclb DaemonSet pods; these are created by ServiceLB for each LoadBalancer service
kubectl get daemonset -n kube-system | grep svclb

# Alternatively, inspect a specific service
kubectl describe service <service-name>
```
If "svclb-*" DaemonSet pods appear in kube-system, ServiceLB is confirmed. If you instead see MetalLB or kube-vip pods in kube-system, a replacement controller has been installed.
How ServiceLB works
For each "Service" of type "LoadBalancer", ServiceLB creates a DaemonSet in the "kube-system" namespace. The DaemonSet runs a small pod on every eligible node using "hostPort", which binds the service port directly to the node's network interface. This makes the service reachable at the node's IP on that port — no external hardware or cloud provider required.
The service YAML shows the result of this clearly:
```yaml
ports:
- name: https
  nodePort: 32090   # Random high port allocated by Kubernetes
  port: 8003        # Port Nginx connects to (the LoadBalancer port)
  targetPort: 8000  # Port the application pod actually listens on
```
Traffic flow in detail:
- Nginx connects to 192.168.2.6:8003
- ServiceLB receives it via "hostPort" on the node
- Kubernetes routes it to the Service's "clusterIP"
- kube-proxy forwards it to a healthy pod on "targetPort 8000"
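The last hop can be inspected directly: Kubernetes will list the pod IP:targetPort pairs a Service forwards to (<service-name> is a placeholder):

```shell
# Show the pod endpoints backing the Service
kubectl get endpoints <service-name> -o wide
```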
Standard proxy configuration
All proxied location blocks use the same set of headers:
```nginx
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;

# WebSocket upgrade support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```
X-Forwarded-For and X-Real-IP pass the real client IP through to the backend so application logs show the originating address rather than the Nginx host IP. The Upgrade/Connection headers enable WebSocket connections where needed.
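Since every proxied location repeats the same directives, they are commonly factored into a shared snippet and pulled in with include. The snippet path below is an assumption for illustration, not necessarily what this server uses:

```nginx
location / {
    proxy_pass http://192.168.2.6:8003;
    # Shared header/WebSocket settings, saved once and reused per location
    # (proxy_common.conf is a hypothetical file name)
    include /etc/nginx/snippets/proxy_common.conf;
}
```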
Catch-All / Default Behaviour
A final server block catches any request that does not match a configured server_name:
```nginx
server {
    listen 80 default_server;   # default_server makes this the catch-all for unmatched hosts
    server_name _;
    return 444;                 # Close connection without response
}
```
Returning 444 (an Nginx-specific non-standard code) drops the TCP connection immediately, giving no information to scanners or bots probing unknown hostnames.
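The behaviour is easy to observe from any machine that can reach the server (<server-ip> is a placeholder):

```shell
# Send a Host header that no server_name matches; Nginx closes the
# connection without responding, so curl reports an empty reply
curl -v http://<server-ip>/ -H "Host: unknown.example"
```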
Log Locations
Global and per-site logs are written to /var/log/nginx/. The global access and error logs capture everything not overridden by a site-specific access_log or error_log directive. Per-site log files are defined inside each server block and are recommended for easier per-service debugging.
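Per-site logs also make quick ad-hoc analysis easy. For example, a one-liner to summarise response status codes, assuming the default combined log format (the log path is illustrative):

```shell
# Count responses per status code; $9 is the status field in the
# default combined log format
awk '{ counts[$9]++ } END { for (c in counts) print c, counts[c] }' /var/log/nginx/example_access.log
```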