Nginx vs Traefik vs Caddy: Best Reverse Proxy for Self-Hosting in 2026



Every self-hosted setup eventually hits the same problem: you have five services running on different ports, and you are tired of typing 192.168.1.50:8096 to watch a movie and 192.168.1.50:3000 to check your dashboards. You want clean URLs. You want HTTPS. You want one entry point that routes traffic to the right container based on the domain name.

That is what a reverse proxy does, and in 2026 there are three serious options: Nginx, the battle-tested veteran that powers a third of the internet; Traefik, the Docker-native upstart built for containers from the ground up; and Caddy, the elegant newcomer that made automatic HTTPS its entire identity.

Each one can do the job. Each one has tradeoffs that matter depending on your setup, your experience level, and how many services you are running. This guide puts all three side by side with real configuration examples, performance data, and honest recommendations so you can pick one and move on.

TL;DR

  • Nginx is best for: experienced sysadmins, high-traffic production environments, setups that need maximum performance and granular control. Steepest learning curve, most documentation available.
  • Traefik is best for: Docker-heavy environments where services spin up and down frequently. Automatic service discovery eliminates manual config updates. Slight performance overhead is irrelevant for home lab traffic volumes.
  • Caddy is best for: everyone else. Automatic HTTPS with zero configuration, clean syntax, good enough performance for 99% of self-hosted setups. If you are reading this article because you have never set up a reverse proxy before, start here.
  • Nginx Proxy Manager exists as a GUI wrapper around Nginx, but it adds complexity, limits flexibility, and tends to break on upgrades. Learn the real tools instead.

Quick Comparison Table

| Feature | Nginx | Traefik | Caddy |
| --- | --- | --- | --- |
| Language | C | Go | Go |
| First Release | 2004 | 2016 | 2015 |
| License | BSD-2-Clause | MIT | Apache 2.0 |
| Automatic HTTPS | No (manual certbot) | Yes (Let’s Encrypt) | Yes (Let’s Encrypt + ZeroSSL) |
| Docker Service Discovery | No | Yes (built-in) | Via plugin (caddy-docker-proxy) |
| Config Format | Custom directive syntax | YAML / TOML / CLI flags | Caddyfile (custom) or JSON |
| Hot Reload | nginx -s reload | Automatic | Automatic (API) |
| Dashboard | No (third-party) | Yes (built-in web UI) | No (API only) |
| Memory Usage (idle) | ~2-5 MB | ~30-50 MB | ~15-30 MB |
| Memory Usage (under load) | ~10-30 MB | ~80-150 MB | ~40-80 MB |
| Requests/sec (static) | ~50,000+ | ~30,000+ | ~35,000+ |
| WebSocket Support | Yes (manual config) | Yes (automatic) | Yes (automatic) |
| gRPC Support | Yes | Yes | Yes |
| Rate Limiting | Yes | Yes (middleware) | Yes (module) |
| Basic Auth | Yes | Yes (middleware) | Yes |
| Load Balancing | Yes (advanced) | Yes | Yes |
| Health Checks | Paid (Nginx Plus) | Yes | Yes |
| Config Complexity | High | Medium | Low |

What Is a Reverse Proxy and Why You Need One

A reverse proxy sits between the internet and your services. When a request comes in for photos.yourdomain.com, the reverse proxy looks at the hostname, decides which backend service should handle it, forwards the request, and returns the response.

Without a reverse proxy, you have two bad options. You either expose each service on a different port (ugly, hard to remember, and many networks block non-standard ports) or you run everything on port 80/443 and have a routing problem.

A reverse proxy solves this by letting every service share ports 80 and 443. It handles TLS termination (so your backend services do not need to deal with certificates), adds security headers, and can do things like rate limiting, IP whitelisting, and basic authentication.
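You can watch this routing decision from the command line by sending the same IP two different Host headers. The addresses below are placeholders based on the examples in the intro:

```shell
# Same IP, different Host header: the proxy picks the backend by hostname
curl -H "Host: media.yourdomain.com" http://192.168.1.50/
curl -H "Host: cloud.yourdomain.com" http://192.168.1.50/
```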

In a self-hosted Docker environment, a reverse proxy is not optional. It is infrastructure. The question is which one to use.

Nginx: The Reliable Workhorse

Nginx (pronounced “engine-x”) has been around since 2004. Igor Sysoev built it to solve the C10K problem — handling ten thousand concurrent connections on commodity hardware — and it did that so well that it now serves roughly 34% of all websites on the internet.

Nginx Architecture

Nginx uses an event-driven, asynchronous architecture with a master process and multiple worker processes. Each worker can handle thousands of simultaneous connections using non-blocking I/O. This is why Nginx uses so little memory compared to thread-per-connection servers like Apache.

The downside of this architecture is that Nginx does not natively support dynamic configuration. When you change the config, you reload the process. In practice this takes milliseconds and does not drop connections, but it means there is no built-in way to automatically detect new Docker containers and route to them.
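In practice, the reload cycle looks like this. Always validate before reloading, because a reload with a broken config is refused while a full restart with one leaves Nginx down (the `nginx-proxy` container name matches the compose example below):

```shell
# Bare metal: test the config, then reload workers gracefully
nginx -t && nginx -s reload

# Docker: the same thing inside the container
docker exec nginx-proxy nginx -t && docker exec nginx-proxy nginx -s reload
```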

Nginx Setup with Docker Compose

Here is a minimal Nginx reverse proxy configuration for routing to two services:

# docker-compose.yml
services:
  nginx:
    image: nginx:1.27-alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    restart: unless-stopped
    networks:
      - proxy

  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    networks:
      - proxy

  nextcloud:
    image: nextcloud:29-apache
    container_name: nextcloud
    networks:
      - proxy

networks:
  proxy:
    name: proxy

The main Nginx configuration:

# nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent"';
    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    client_max_body_size 100M;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml text/javascript image/svg+xml;

    include /etc/nginx/conf.d/*.conf;
}

And the per-site configuration files:

# nginx/conf.d/jellyfin.conf
server {
    listen 80;
    server_name media.yourdomain.com;

    location / {
        proxy_pass http://jellyfin:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
# nginx/conf.d/nextcloud.conf
server {
    listen 80;
    server_name cloud.yourdomain.com;

    client_max_body_size 10G;

    location / {
        proxy_pass http://nextcloud:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Adding TLS to Nginx with Certbot

Nginx does not handle TLS certificates on its own. You need certbot or a similar ACME client:

# Install certbot
sudo apt install certbot python3-certbot-nginx

# Obtain certificate (Nginx must be running and domain DNS must point to your server)
sudo certbot --nginx -d media.yourdomain.com -d cloud.yourdomain.com

# Auto-renewal is typically set up via cron or systemd timer
sudo certbot renew --dry-run

Certbot modifies your Nginx config files to add the SSL directives. This works, but it introduces another moving part that can break during updates, and renewal can still fail silently: certbot installs a systemd timer, but DNS changes, firewall rules, and ACME rate limits can all trip it up without warning.
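A quick way to keep an eye on expiry without waiting for a browser error, using the example domain from the configs above as a placeholder:

```shell
# Print the expiry date of the certificate currently being served
echo | openssl s_client -connect media.yourdomain.com:443 \
    -servername media.yourdomain.com 2>/dev/null | openssl x509 -noout -enddate
```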

Nginx Pros and Cons

Pros:

  • Fastest raw performance of the three by a meaningful margin
  • Lowest memory usage, especially under load
  • Most documentation, tutorials, and Stack Overflow answers of any web server
  • Battle-tested at enormous scale (Netflix, Cloudflare, and thousands of high-traffic sites)
  • Extremely stable — upgrades rarely break things
  • Highly configurable with granular control over every aspect of request handling

Cons:

  • No automatic TLS — you manage certificates separately
  • No Docker service discovery — every new service requires a config file and a reload
  • Configuration syntax is its own language with non-obvious semantics
  • WebSocket proxying requires explicit configuration that is easy to forget
  • Health checks for upstream servers require Nginx Plus (paid)
  • No built-in dashboard or API for dynamic configuration

Traefik: Built for Containers

Traefik (pronounced “traffic”) was built from the ground up for containerized environments. Its killer feature is automatic service discovery: Traefik watches the Docker socket for container events and automatically creates routing rules when new containers start. No config files to edit, no reloads to trigger.

Traefik Architecture

Traefik uses a provider model. Providers are sources of configuration — Docker, Kubernetes, Consul, file-based config, and more. The Docker provider watches for containers with specific labels and automatically generates routes, middleware chains, and TLS certificates.

This architecture means Traefik is inherently dynamic. You define routing rules as Docker labels on your containers, and Traefik picks them up instantly. When a container stops, the route disappears. When it starts again, the route comes back.

Traefik Setup with Docker Compose

# docker-compose.yml
services:
  traefik:
    image: traefik:v3.2
    container_name: traefik
    command:
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.letsencrypt.acme.email=you@yourdomain.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
      # Redirect all HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    labels:
      # Dashboard
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.yourdomain.com`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.dashboard.middlewares=auth"
      - "traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$xyz$$hashedpassword"
    restart: unless-stopped
    networks:
      - proxy

networks:
  proxy:
    name: proxy
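The hashed password in the dashboard's basicauth label can be generated with htpasswd (from the apache2-utils package on Debian/Ubuntu). The `$` characters must be doubled so Docker Compose does not treat them as variable interpolation; `changeme` is a placeholder:

```shell
# Generate an htpasswd entry (apr1/MD5 by default), escaping $ for compose
htpasswd -nb admin 'changeme' | sed -e 's/\$/\$\$/g'
```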

Now, to add a service, you just add labels to its container definition:

# In any docker-compose.yml, on any project
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`media.yourdomain.com`)"
      - "traefik.http.routers.jellyfin.tls.certresolver=letsencrypt"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
    networks:
      - proxy

  nextcloud:
    image: nextcloud:29-apache
    container_name: nextcloud
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.yourdomain.com`)"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
      - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
      - "traefik.http.routers.nextcloud.middlewares=nextcloud-headers"
      - "traefik.http.middlewares.nextcloud-headers.headers.stsSeconds=31536000"
      - "traefik.http.middlewares.nextcloud-headers.headers.customRequestHeaders.X-Forwarded-Proto=https"
    networks:
      - proxy

networks:
  proxy:
    external: true

That is it. No separate config files to write, no proxy reload to trigger. Start the container and Traefik picks it up, provisions a TLS certificate, and begins routing traffic. Stop the container and the route disappears.
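To confirm Traefik actually registered a route, you can query its API. This sketch assumes you have temporarily enabled `--api.insecure=true` and published port 8080 for debugging, which the compose file above deliberately does not do:

```shell
# List all registered HTTP routers; do not leave the insecure API
# enabled on anything reachable from outside your LAN
curl -s http://localhost:8080/api/http/routers
```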

Adding Middleware in Traefik

Traefik middleware is applied via labels. Here are some common patterns:

# Rate limiting
- "traefik.http.middlewares.ratelimit.ratelimit.average=100"
- "traefik.http.middlewares.ratelimit.ratelimit.burst=50"

# IP whitelist (only allow local network)
- "traefik.http.middlewares.local-only.ipallowlist.sourcerange=192.168.1.0/24,10.0.0.0/8"

# Basic auth
- "traefik.http.middlewares.myauth.basicauth.users=admin:$$apr1$$hash"

# Redirect regex
- "traefik.http.middlewares.redirect.redirectregex.regex=^https://old.example.com/(.*)"
- "traefik.http.middlewares.redirect.redirectregex.replacement=https://new.example.com/$${1}"

# Compress responses
- "traefik.http.middlewares.compress.compress=true"

# Chain multiple middleware
- "traefik.http.routers.myservice.middlewares=ratelimit,compress,myauth"

Traefik Pros and Cons

Pros:

  • Automatic Docker service discovery — the main reason people choose Traefik
  • Built-in Let’s Encrypt integration with automatic renewal
  • Built-in dashboard for monitoring routes, services, and middleware
  • Middleware system is powerful and composable
  • WebSocket and gRPC support works automatically without extra configuration
  • Excellent Kubernetes support with IngressRoute CRDs
  • Active development with frequent releases

Cons:

  • Higher memory usage than Nginx or Caddy (Go garbage collector, service discovery overhead)
  • Docker socket access is a security concern — a compromised Traefik container could control all your Docker containers
  • Label syntax is verbose and hard to debug (typo in a label = silent failure)
  • Configuration split across many docker-compose files can be hard to audit
  • Documentation is comprehensive but sometimes hard to navigate
  • Performance is measurably slower than Nginx for high-throughput static file serving
  • The v2 to v3 migration broke many existing configurations

Caddy: Automatic HTTPS, Zero Hassle

Caddy’s thesis is simple: HTTPS should be the default, and configuring a web server should not require a manual. It was the first major web server to provision TLS certificates automatically, and its configuration file (the Caddyfile) reads almost like English.

Caddy Architecture

Caddy is written in Go and uses a modular architecture. The core handles HTTP serving and TLS, and everything else is a module — reverse proxying, file serving, markdown rendering, authentication, rate limiting. Modules can be added at build time or loaded dynamically.

Under the hood, Caddy is configured via JSON. The Caddyfile is a human-friendly adapter that compiles down to this JSON. You can use either format, but 95% of users use the Caddyfile and never touch the JSON.
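You can see the JSON a Caddyfile compiles to, and sanity-check a config before deploying it, with two built-in subcommands:

```shell
# Show the JSON equivalent of a Caddyfile, then validate it
caddy adapt --config ./Caddyfile --pretty
caddy validate --config ./Caddyfile
```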

Caddy manages certificates from both Let’s Encrypt and ZeroSSL, with automatic fallback. It handles OCSP stapling, certificate renewal, and even provisions certificates for internal (non-public) hostnames using an internal CA. The TLS story is genuinely best-in-class.

Caddy Setup with Docker Compose

# docker-compose.yml
services:
  caddy:
    image: caddy:2.9-alpine
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"  # HTTP/3
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy_data:/data
      - ./caddy_config:/config
    restart: unless-stopped
    networks:
      - proxy

  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    networks:
      - proxy

  nextcloud:
    image: nextcloud:29-apache
    container_name: nextcloud
    networks:
      - proxy

networks:
  proxy:
    name: proxy

And the Caddyfile:

# Caddyfile
media.yourdomain.com {
    reverse_proxy jellyfin:8096
}

cloud.yourdomain.com {
    reverse_proxy nextcloud:80

    request_body {
        max_size 10GB
    }

    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}

That is the entire configuration. HTTPS is automatic. Certificates are provisioned from Let’s Encrypt as soon as Caddy starts. HTTP/2 and HTTP/3 are enabled by default. HTTP is automatically redirected to HTTPS. No certbot, no cron jobs, no certificate paths to manage.

Compare this to the Nginx setup above. The Caddy config is under 20 lines. The equivalent Nginx setup is 60+ lines across multiple files plus a separate certbot installation. That difference compounds as you add more services.

Advanced Caddyfile Patterns

# Basic authentication
admin.yourdomain.com {
    basic_auth {
        admin $2a$14$hashedpasswordhere
    }
    reverse_proxy admin-panel:3000
}

# Rate limiting (requires caddy-ratelimit plugin)
api.yourdomain.com {
    rate_limit {
        zone dynamic {
            key    {remote_host}
            events 100
            window 1m
        }
    }
    reverse_proxy api-server:8080
}

# Wildcard certificate with multiple subdomains
*.yourdomain.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }

    @media host media.yourdomain.com
    handle @media {
        reverse_proxy jellyfin:8096
    }

    @cloud host cloud.yourdomain.com
    handle @cloud {
        reverse_proxy nextcloud:80
    }

    @git host git.yourdomain.com
    handle @git {
        reverse_proxy gitea:3000
    }

    handle {
        abort
    }
}

# Load balancing between multiple backends
app.yourdomain.com {
    reverse_proxy app-1:8080 app-2:8080 app-3:8080 {
        lb_policy round_robin
        health_uri /health
        health_interval 30s
    }
}

# File server with directory listing
files.yourdomain.com {
    root * /srv/files
    file_server browse
    basic_auth {
        user $2a$14$hashedpassword
    }
}

Caddy Docker Proxy Plugin

For Traefik-like Docker label support, there is the caddy-docker-proxy plugin:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:2.9
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
    restart: unless-stopped
    networks:
      - proxy

  jellyfin:
    image: jellyfin/jellyfin:latest
    labels:
      caddy: media.yourdomain.com
      caddy.reverse_proxy: "{{upstreams 8096}}"
    networks:
      - proxy

This gives you Docker service discovery similar to Traefik, but with Caddy’s automatic TLS handling underneath.

Caddy Pros and Cons

Pros:

  • Automatic HTTPS with zero configuration — genuinely zero, not “just add these 5 lines”
  • HTTP/3 support enabled by default
  • Caddyfile syntax is the most readable config format of any web server
  • Excellent TLS implementation with OCSP stapling, automatic renewal, and fallback CAs
  • Lower memory usage than Traefik
  • Full JSON API for programmatic configuration changes
  • Active development with a responsive maintainer (Matt Holt)
  • Internal CA for issuing certificates to internal services

Cons:

  • No built-in Docker service discovery (requires plugin)
  • Smaller community than Nginx (fewer tutorials, fewer Stack Overflow answers)
  • Plugin ecosystem is smaller than Nginx’s module ecosystem
  • Not as battle-tested at extreme scale (though more than adequate for self-hosting)
  • The Caddyfile has some quirks with complex configurations that push you toward JSON
  • No built-in dashboard (you monitor via the API or external tools)

Head-to-Head: Configuration Complexity

Let us compare the exact same task across all three: reverse proxy to a service with HTTPS, WebSocket support, custom headers, and basic authentication.

Nginx (27 lines + certbot setup)

server {
    listen 443 ssl;
    http2 on;
    server_name app.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/app.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.yourdomain.com/privkey.pem;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Content-Type-Options "nosniff" always;

    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    listen 80;
    server_name app.yourdomain.com;
    return 301 https://$host$request_uri;
}

Plus you need to create the .htpasswd file, install certbot, run it to provision the certificate, and set up renewal.

Traefik (12 label lines, but verbose)

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.app.rule=Host(`app.yourdomain.com`)"
  - "traefik.http.routers.app.tls.certresolver=letsencrypt"
  - "traefik.http.services.app.loadbalancer.server.port=3000"
  - "traefik.http.routers.app.middlewares=app-auth,app-headers"
  - "traefik.http.middlewares.app-auth.basicauth.users=admin:$$apr1$$hash"
  - "traefik.http.middlewares.app-headers.headers.stsSeconds=31536000"
  - "traefik.http.middlewares.app-headers.headers.contentTypeNosniff=true"

WebSocket support is automatic. TLS is automatic. But the label syntax is painful to write and debug.

Caddy (10 lines)

app.yourdomain.com {
    basic_auth {
        admin $2a$14$hashedpassword
    }

    header {
        Strict-Transport-Security "max-age=31536000"
        X-Content-Type-Options "nosniff"
    }

    reverse_proxy app:3000
}

HTTPS is automatic. WebSocket proxying is automatic. HTTP to HTTPS redirect is automatic. HTTP/2 and HTTP/3 are automatic.

The difference is stark. Caddy’s configuration is roughly one-third the length of Nginx’s and significantly more readable than Traefik’s labels.

Performance Comparison

For most self-hosted setups, performance differences between these three are irrelevant. Your home lab is not serving 50,000 requests per second. But the data is worth examining because it reveals architectural tradeoffs.

These numbers represent approximate throughput for a simple reverse proxy workload (small JSON responses from a backend), measured using wrk with 100 concurrent connections on modern hardware:

| Metric | Nginx | Traefik | Caddy |
| --- | --- | --- | --- |
| Requests/sec (JSON proxy) | ~45,000 | ~28,000 | ~32,000 |
| Latency p50 | 0.8ms | 1.4ms | 1.1ms |
| Latency p99 | 3.2ms | 8.5ms | 5.1ms |
| Memory (idle) | 3 MB | 40 MB | 20 MB |
| Memory (10k concurrent) | 15 MB | 120 MB | 55 MB |
| CPU (10k concurrent) | ~8% | ~22% | ~14% |
| Static file serving | ~65,000 req/s | ~35,000 req/s | ~42,000 req/s |
| TLS handshake overhead | Minimal | Moderate | Minimal |
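For reference, numbers like these typically come from a wrk invocation along these lines (the URL is a placeholder); absolute values shift considerably with hardware and backend:

```shell
# 4 threads, 100 open connections, 30 seconds, with latency percentiles
wrk -t4 -c100 -d30s --latency http://192.168.1.50/api/health
```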

Nginx wins on raw throughput because C is faster than Go for this workload, and its event-driven architecture has been optimized for two decades. Traefik is slowest because it does more work per request (service discovery checks, middleware chain evaluation, metrics collection). Caddy falls in between.

For self-hosting, these numbers are academic. If you are serving fewer than 1,000 requests per second — and you almost certainly are — all three perform identically in practice. The performance difference matters at scale, which is why Nginx dominates production infrastructure at companies like Netflix and Cloudflare. It does not matter for your Jellyfin and Nextcloud setup.

TLS and Certificate Management

This is where the three diverge most significantly.

Nginx: Manual Everything

Nginx has no built-in ACME client. You provision certificates externally (usually with certbot) and point Nginx at the certificate files. You are responsible for renewal, and if renewal fails silently (which happens), your sites go down with certificate errors.

You can mitigate this with scripts and monitoring, but it is an operational burden that the other two have eliminated entirely.

Traefik: Automatic via Resolvers

Traefik has a built-in ACME client that supports HTTP-01, TLS-ALPN-01, and DNS-01 challenges. You configure a certificate resolver once in the Traefik static config, and then reference it from any router. Certificates are provisioned automatically and stored in acme.json.

# Traefik static config
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@yourdomain.com
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web

The DNS-01 challenge support is particularly useful for wildcard certificates and for services behind a firewall that cannot respond to HTTP challenges. Traefik supports dozens of DNS providers including Cloudflare, AWS Route 53, and Google Cloud DNS.

Caddy: Best in Class

Caddy manages TLS certificates with zero configuration. Point a domain at your server, add it to the Caddyfile, and Caddy handles the rest. It uses both Let’s Encrypt and ZeroSSL as certificate authorities with automatic failover. It handles OCSP stapling. It renews certificates well before expiration. It supports HTTP/3 with automatic QUIC certificate management.

For internal services that do not have public DNS, Caddy can act as its own certificate authority, issuing trusted certificates for internal hostnames. This is a feature no other reverse proxy offers out of the box.

# Internal service with auto-generated internal certificate
internal.home.lan {
    tls internal
    reverse_proxy homeassistant:8123
}

Verdict on TLS: Caddy wins. It is not close.

Docker and Container Integration

Nginx: No Native Integration

Nginx does not know Docker exists. You write config files, mount them into the container, and reload. If you add a new service, you write a new config file and reload Nginx. If you remove a service, you delete the config file and reload.

Tools like nginx-proxy (jwilder/nginx-proxy) and nginx-proxy-manager add Docker integration by watching for container events and generating Nginx configs automatically. These work but add another layer of complexity and another container that can fail.

Traefik: Native Docker Provider

This is Traefik’s strongest selling point. It watches the Docker socket directly and creates routes from container labels. Adding a new service is as simple as adding labels to your docker-compose.yml and starting the container. Traefik picks it up within seconds.

The tradeoff is security: Traefik needs access to the Docker socket (/var/run/docker.sock), which effectively gives it root-level access to your Docker host. You can mitigate this with a Docker socket proxy like Tecnativa’s docker-socket-proxy, which filters API calls:

services:
  dockerproxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - CONTAINERS=1
      - SERVICES=0
      - TASKS=0
      - NETWORKS=0
      - VOLUMES=0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy

  traefik:
    image: traefik:v3.2
    depends_on:
      - dockerproxy
    command:
      - "--providers.docker.endpoint=tcp://dockerproxy:2375"
      # ... rest of config
    networks:
      - proxy

Caddy: Plugin-Based Docker Support

Caddy’s Docker integration comes through the caddy-docker-proxy plugin. It works similarly to Traefik’s label system but with simpler label syntax. It is not as mature as Traefik’s Docker provider, but it covers the common use cases well.

For most self-hosters, the choice is between Traefik’s native discovery and Caddy’s static Caddyfile approach. If you add and remove services frequently, Traefik is more convenient. If your services are relatively stable, a Caddyfile that you edit occasionally is simpler and easier to audit.

Middleware, Plugins, and Extensibility

Nginx

Nginx has a massive module ecosystem, but modules are compiled into the binary at build time. The official nginx:alpine Docker image includes the most common modules, but if you need something exotic, you are building a custom image. Third-party modules include Lua scripting (OpenResty), ModSecurity WAF, GeoIP, and hundreds more.

Traefik

Traefik middleware is built-in and applied via labels or static config. Available middleware includes rate limiting, circuit breaking, IP whitelisting, basic auth, digest auth, forward auth (delegate authentication to an external service like Authelia or Authentik), headers manipulation, path stripping, retry logic, and compression.

The plugin system (Traefik Pilot, now called Traefik Hub) allows community plugins, but the ecosystem is small compared to Nginx.

Caddy

Caddy modules are added at build time using xcaddy, or you can use pre-built images that include popular modules. The module ecosystem includes DNS providers for TLS challenges, rate limiting, security headers, Cloudflare integration, Docker proxy support, and more.

# Build Caddy with custom modules
xcaddy build \
    --with github.com/caddy-dns/cloudflare \
    --with github.com/mholt/caddy-ratelimit \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2
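In a Docker environment, the usual pattern is a two-stage build with the official builder image, here sketched with the same modules as above:

```dockerfile
# Build a custom Caddy binary with extra modules, then copy it
# into the slim runtime image
FROM caddy:2.9-builder AS builder
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2

FROM caddy:2.9
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```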

Monitoring and Observability

| Feature | Nginx | Traefik | Caddy |
| --- | --- | --- | --- |
| Built-in Dashboard | No | Yes (web UI) | No |
| Prometheus Metrics | Via exporter | Built-in | Built-in |
| Access Logs | Yes (file) | Yes (stdout/file) | Yes (structured JSON) |
| Tracing (OpenTelemetry) | No | Yes | Yes (module) |
| Health Check Endpoint | No | Yes (/ping) | No (but easy to add) |

Traefik has the best built-in monitoring story with its web dashboard showing active routers, services, middleware, and health status. Caddy and Nginx both require external tools for visualization, but both export Prometheus metrics that work well with Grafana.

Community and Ecosystem

| Metric | Nginx | Traefik | Caddy |
| --- | --- | --- | --- |
| GitHub Stars | 26k+ (mirror) | 52k+ | 60k+ |
| Stack Overflow Questions | 80,000+ | 6,000+ | 3,000+ |
| Docker Hub Pulls | 1B+ | 3B+ | 500M+ |
| Reddit Community | r/nginx (small) | r/traefik (active) | r/caddy (small) |
| Official Docs Quality | Good (but dated) | Comprehensive | Excellent |
| Tutorial Availability | Abundant | Good | Growing |

Nginx has the largest ecosystem by far, simply because it has been around since 2004. If you have a problem with Nginx, someone has already asked about it on Stack Overflow. Traefik and Caddy have smaller but more focused communities, and both have excellent official documentation.

Common Pitfalls and Troubleshooting

Nginx Pitfalls

  1. Forgetting WebSocket headers. The Upgrade and Connection headers must be explicitly set for WebSocket proxying. Every new user hits this when Nextcloud or Home Assistant WebSocket connections fail.

  2. client_max_body_size defaults to 1MB. If file uploads fail silently, this is almost always the cause.

  3. Certbot renewal failing silently. Check systemctl status certbot.timer and test with certbot renew --dry-run periodically.

  4. Permission issues with config files. Nginx runs as root for the master process but workers run as nginx. File permissions matter.

Traefik Pitfalls

  1. Typos in labels cause silent failures. There is no validation of label names. traefik.http.routers.app.rule works; traefik.http.router.app.rule (missing ‘s’) silently does nothing.

  2. Docker networks. Traefik and your services must be on the same Docker network. If routing fails, check network connectivity first.

  3. acme.json permissions. The ACME storage file must have 600 permissions or Traefik refuses to start: chmod 600 acme.json.

  4. Middleware name collisions. Middleware names are global across the Docker provider, so a rate-limiting middleware defined in one compose file can silently conflict with a middleware of the same name defined in another.
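
A compose service that avoids the first two pitfalls looks like this. It is an illustrative sketch — the service name, image, domain, and network name are placeholders:

```yaml
services:
  app:
    image: nginx:alpine            # placeholder workload
    networks:
      - proxy                      # must be a network Traefik is attached to (pitfall 2)
    labels:
      - traefik.enable=true
      # Note the plural "routers" — the singular form is silently ignored (pitfall 1)
      - traefik.http.routers.app.rule=Host(`app.example.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.services.app.loadbalancer.server.port=80

networks:
  proxy:
    external: true                 # the shared network Traefik lives on
```

And for pitfall 3, before first start: `touch acme.json && chmod 600 acme.json`.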

Caddy Pitfalls

  1. DNS propagation. Caddy tries to provision certificates immediately. If DNS has not propagated yet, it will fail and retry on a backoff schedule. Be patient.

  2. Caddyfile formatting. The Caddyfile is whitespace-sensitive in places. Use caddy fmt to auto-format.

  3. Custom builds for plugins. If you need DNS challenge support or Docker proxy, you need a custom Caddy build. The default Docker image does not include these modules.

  4. Data volume persistence. Caddy stores certificates in /data. If this is not a persistent volume, certificates will be re-provisioned on every container restart, and you will hit Let’s Encrypt rate limits.
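
Pitfall 4 in particular is cheap to avoid up front. A minimal Caddy compose file with persistent storage — image tag and paths are the commonly used defaults, adjust to taste:

```yaml
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data       # certificates live here (pitfall 4)
      - caddy_config:/config   # runtime config state

volumes:
  caddy_data:
  caddy_config:
```

With named volumes, certificates survive container recreation, so you never burn through Let's Encrypt rate limits on a routine `docker compose up -d`.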

Verdict: Which One Should You Use?

Choose Nginx If:

  • You are already familiar with Nginx configuration and have muscle memory for the syntax
  • You are running a high-traffic production environment where every millisecond of latency matters
  • You need advanced features like complex URL rewriting, Lua scripting, or WAF integration
  • You are comfortable managing TLS certificates separately
  • You want maximum control over every aspect of request handling
  • You are deploying on bare metal without Docker

Choose Traefik If:

  • You run many Docker containers and add/remove services frequently
  • You want automatic service discovery without editing config files
  • You use Docker Compose for everything and want routing defined alongside service definitions
  • You appreciate having a built-in dashboard for monitoring
  • You are running a Kubernetes cluster (Traefik is an excellent ingress controller)
  • You are willing to accept slightly higher resource usage for operational convenience

Choose Caddy If:

  • You are new to reverse proxies and want the simplest possible setup
  • Automatic HTTPS is a high priority and you do not want to think about certificates ever
  • You have a relatively stable set of services that do not change frequently
  • You want clean, readable configuration files
  • You value HTTP/3 support
  • You want the best TLS implementation available without any effort
  • You are setting up your first home lab or self-hosted environment

The Honest Recommendation

For a new self-hosted setup in 2026, start with Caddy. The automatic HTTPS alone saves hours of initial setup and eliminates an entire category of operational failures (expired certificates). The Caddyfile is readable, the defaults are sensible, and the performance is more than sufficient.

If you find yourself managing 15+ Docker services and constantly adding new ones, consider switching to Traefik. The automatic service discovery genuinely reduces friction at scale.

If you are already an Nginx expert and have existing configs that work, there is no compelling reason to switch. Nginx continues to be fast, stable, and well-supported.

Do not use Nginx Proxy Manager. It wraps Nginx in a GUI that obscures what is happening, makes debugging harder, and tends to break on updates. If you want a GUI, Traefik’s built-in dashboard is a better option. But the best investment is learning to read and write config files directly — it is a skill that pays dividends across your entire infrastructure.

Final Thoughts

The reverse proxy landscape in 2026 is remarkably healthy. All three options are actively maintained, well-documented, and capable of handling anything a self-hoster will throw at them. The differences are real but contextual — what matters most depends on your experience level, your infrastructure, and your tolerance for configuration complexity.

The worst choice is no choice: running services on random ports without HTTPS because setting up a reverse proxy felt intimidating. Any of these three tools will have you up and running in under an hour, and once configured, they require almost no ongoing maintenance.

Pick one, set it up, and move on to the interesting part — the services themselves.