Docker Security Best Practices: A Complete Guide for 2026
Docker makes deploying software almost frictionless. That is the problem.
The same simplicity that lets you spin up a PostgreSQL instance with a single command also lets you run a container as root with full access to the host network, no resource limits, and secrets hardcoded in the image. Most Docker tutorials skip security entirely because it gets in the way of the “look how easy this is” narrative. The result is millions of containers running in production with configurations that would make a security auditor physically ill.
This guide fixes that. We are going to cover every layer of Docker security, from how you build images to how you run, network, and monitor containers. Each section includes concrete examples — bad configurations alongside their secure alternatives — so you can audit your own setup as you read.
This is not theoretical hand-waving. Every recommendation here is something you can implement today, and most of them add zero operational overhead once set up.
Table of Contents
- TL;DR: The Docker Security Checklist
- Threat Model: What Are You Defending Against?
- Image Security: Build It Right
- Image Scanning and Vulnerability Management
- Rootless Containers and User Namespaces
- Secrets Management: Stop Putting Passwords in Environment Variables
- Network Isolation: Not Everything Needs to Talk to Everything
- Resource Limits: Preventing Container Escape via Exhaustion
- Runtime Security: What Happens After the Container Starts
- Docker Socket Security: The Keys to the Kingdom
- Logging and Auditing
- Security Scanning in CI/CD
- Docker Compose Security Patterns
- Common Mistakes and How to Fix Them
- Final Thoughts
TL;DR: The Docker Security Checklist
If you are in a hurry, here is the checklist. Each item is explained in detail below.
- Use minimal base images (distroless, Alpine, or scratch)
- Run multi-stage builds to exclude build tools from final images
- Scan images for vulnerabilities before deployment (Trivy, Grype, or Snyk)
- Never run containers as root — use USER in Dockerfiles and rootless Docker
- Never put secrets in images, environment variables, or Docker Compose files
- Use Docker secrets, mounted files, or a secrets manager (Vault, Infisical)
- Create custom Docker networks and isolate services that do not need to communicate
- Set memory and CPU limits on every container
- Drop all Linux capabilities and add back only what you need
- Make the root filesystem read-only where possible
- Never mount the Docker socket into a container unless absolutely necessary
- Enable Docker Content Trust for image signing
- Use security profiles (seccomp, AppArmor) — Docker’s defaults are good, do not disable them
- Log container output centrally and monitor for anomalies
- Keep Docker Engine, images, and dependencies updated
Threat Model: What Are You Defending Against?
Before hardening anything, you need to understand the threats:
Container escape: An attacker inside a container exploits a kernel vulnerability or misconfiguration to gain access to the host system. This is the worst-case scenario and the one that gets security researchers published at conferences.
Vulnerable dependencies: Your application image contains libraries with known CVEs. An attacker exploits a vulnerability in a dependency you did not even know you were shipping.
Supply chain attacks: A base image or dependency is compromised at the source. You pull node:20 and it includes a backdoor because the registry or build pipeline was compromised.
Secrets exposure: Database passwords, API keys, and tokens leak through environment variables, image layers, logs, or process listings.
Lateral movement: An attacker compromises one container and uses the network access to attack other containers or services. Your Redis instance did not need to be reachable from your frontend container, but the default bridge network made everything talk to everything.
Resource exhaustion: A compromised or buggy container consumes all available CPU, memory, or disk, taking down other containers or the host itself.
Each section of this guide addresses one or more of these threats.
Image Security: Build It Right
Choose Minimal Base Images
Every package in your base image is additional attack surface. A full Ubuntu image ships with apt, a shell, and hundreds of packages your application does not need, and most Dockerfiles immediately install more on top. If an attacker gets code execution inside the container, all those tools help them pivot.
Bad: Full OS base image
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y python3 python3-pip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . /app
CMD ["python3", "/app/main.py"]
This image includes a package manager, a shell, networking tools, and hundreds of libraries. The resulting image is 400-800MB.
Good: Minimal base image
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /app/deps /usr/local/lib/python3.12/site-packages/
COPY . .
USER 1000:1000
CMD ["python", "main.py"]
Best: Distroless image
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt
FROM gcr.io/distroless/python3-debian12
WORKDIR /app
COPY --from=builder /app/deps /usr/local/lib/python3.12/site-packages/
COPY . .
USER 1000:1000
CMD ["main.py"]
Google’s distroless images contain only your application and its runtime dependencies. No shell, no package manager, no curl, no ls. If an attacker gets code execution, they have almost nothing to work with. The image is also 50-80% smaller.
For Go applications, you can go even further:
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /server .
FROM scratch
COPY --from=builder /server /server
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
USER 1000:1000
ENTRYPOINT ["/server"]
The scratch base image is literally empty. Your final image contains only the compiled binary and CA certificates. The image might be 10-15MB total.
Multi-Stage Builds Are Not Optional
Multi-stage builds are the single most impactful security improvement you can make to a Dockerfile. They ensure your final image does not contain build tools, source code, test fixtures, or intermediate artifacts.
Bad: Single-stage build
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
This image contains npm, the full Node.js development toolchain, your node_modules (including devDependencies), your source code, and your built output. If you authenticated against a private registry during the build, your npm credentials may be sitting in a layer as well.
Good: Multi-stage build
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Remove devDependencies so only production packages reach the final stage
RUN npm prune --omit=dev
FROM node:20-slim
WORKDIR /app
RUN groupadd -r appuser && useradd -r -g appuser -s /bin/false appuser
COPY --from=builder --chown=appuser:appuser /app/dist ./dist
COPY --from=builder --chown=appuser:appuser /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appuser /app/package.json ./
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
The final image has only production dependencies, built output, and nothing else.
Pin Your Base Image Versions
Bad:
FROM node:latest
Good:
FROM node:20.11.1-slim@sha256:abc123...
Using latest means your build is not reproducible and could pull a compromised image. Pinning to a specific version with a SHA256 digest ensures you get exactly the image you tested. Yes, this means you need to update the digest when you want to upgrade, but that is the point — upgrades should be deliberate, not accidental.
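If you are not sure which digest to pin, you can read it off an image you have already pulled and tested. A quick sketch (the tag is just the example from above):
docker pull node:20.11.1-slim
docker inspect --format '{{index .RepoDigests 0}}' node:20.11.1-slim
# prints node@sha256:<digest to pin in your FROM line>
# Or query the registry without pulling:
docker buildx imagetools inspect node:20.11.1-slim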
Use .dockerignore
Every COPY . . instruction copies everything in the build context into the image. Without a .dockerignore, that includes .git, .env files, node_modules, test data, and anything else in the directory.
# .dockerignore
.git
.gitignore
.env
.env.*
*.md
docker-compose*.yml
Dockerfile*
.dockerignore
node_modules
tests/
coverage/
.vscode/
.idea/
*.log
This is not just a size optimization. It prevents secrets in .env files from ending up in your image layers.
Image Scanning and Vulnerability Management
Why You Need Scanning
In 2026, the average Docker image contains 30-100 packages with known vulnerabilities. Most are low or medium severity and do not affect your specific use case. But some are critical — and you will not know unless you scan.
Trivy: The Standard
Trivy by Aqua Security has become the de facto standard for container image scanning. It is open source, fast, and integrates with everything.
# Scan a local image
trivy image myapp:latest
# Scan with severity filter
trivy image --severity HIGH,CRITICAL myapp:latest
# Scan and fail on critical vulnerabilities (for CI/CD)
trivy image --exit-code 1 --severity CRITICAL myapp:latest
# Scan a Dockerfile for misconfigurations
trivy config Dockerfile
# Scan a running container's filesystem
trivy rootfs /path/to/container/rootfs
Example output:
myapp:latest (debian 12.4)
Total: 3 (HIGH: 2, CRITICAL: 1)
+----------------+------------------+----------+-------------------+
| LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION |
+----------------+------------------+----------+-------------------+
| libssl3 | CVE-2024-XXXX | CRITICAL | 3.0.11-1 |
| libexpat1 | CVE-2024-YYYY | HIGH | 2.5.0-1 |
| zlib1g | CVE-2024-ZZZZ | HIGH | 1.2.13-1 |
+----------------+------------------+----------+-------------------+
Grype: The Alternative
Grype by Anchore is another excellent option. It has a slightly different vulnerability database and sometimes catches things Trivy misses (and vice versa). Running both is a reasonable approach for high-security environments.
# Install Grype
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
# Scan an image
grype myapp:latest
# Fail on high or critical
grype myapp:latest --fail-on high
Automated Scanning in CI/CD
Scanning should not be a manual step. Here is a GitHub Actions workflow (also works with Gitea Actions):
name: Security Scan
on:
push:
branches: [main]
pull_request:
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build image
run: docker build -t myapp:${{ github.sha }} .
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: myapp:${{ github.sha }}
format: table
exit-code: 1
severity: CRITICAL,HIGH
- name: Run Trivy config scanner
uses: aquasecurity/trivy-action@master
with:
scan-type: config
scan-ref: .
exit-code: 1
What to Do About Vulnerabilities
Not every CVE requires action. Here is a practical triage approach:
- Critical + exploitable + in your attack surface: Fix immediately. Update the base image or the specific package.
- High + in a package your app actually uses: Fix in your next release cycle.
- Medium/Low, or in a package your app does not use: Track and fix when convenient.
- False positives: Trivy supports .trivyignore files and Grype supports ignore rules to suppress known false positives. Document why you are ignoring each finding (see the example below).
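A .trivyignore file is just a list of CVE IDs, one per line, and comments are a good place to record the justification. A minimal sketch, reusing the placeholder CVE IDs from the earlier output:
# .trivyignore
# Not exploitable: the vulnerable code path requires a feature we do not enable
CVE-2024-XXXX
# Accepted until the next base image update
CVE-2024-YYYY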
Rootless Containers and User Namespaces
The Root Problem
By default, processes inside a Docker container run as root. This root user is the same root as on the host system, constrained only by namespaces and capabilities. If a container escape vulnerability exists, the attacker lands on the host as root.
Level 1: USER Instruction in Dockerfile
The simplest fix is adding a non-root user to your Dockerfile:
Bad:
FROM node:20-slim
WORKDIR /app
COPY . .
RUN npm ci
# Runs as root by default
CMD ["node", "server.js"]
Good:
FROM node:20-slim
WORKDIR /app
COPY . .
RUN npm ci
# Create a non-root user
RUN groupadd -r appuser && useradd -r -g appuser -d /app -s /bin/false appuser
RUN chown -R appuser:appuser /app
USER appuser
CMD ["node", "server.js"]
Or even simpler with numeric UIDs (which work even in distroless images):
USER 1000:1000
CMD ["node", "server.js"]
Level 2: Rootless Docker Mode
Rootless Docker runs the entire Docker daemon as a non-root user. This means even if a container escape occurs, the attacker lands as an unprivileged user on the host.
# Install rootless Docker (on a system with Docker already installed)
dockerd-rootless-setuptool.sh install
# Or install from scratch
curl -fsSL https://get.docker.com/rootless | sh
# Set environment variables (add to .bashrc)
export PATH=/home/youruser/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
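Once the rootless daemon is running, you can confirm which mode your client is actually talking to:
docker info --format '{{.SecurityOptions}}'
# the list should include name=rootless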
Rootless mode has some limitations:
- Cannot bind to ports below 1024 (use a reverse proxy)
- Some storage drivers are not available
- No support for --network host
- Overlay filesystem requires kernel 5.11+ (not an issue in 2026)
For most self-hosting scenarios, these limitations do not matter.
Level 3: User Namespace Remapping
If rootless mode is too restrictive, user namespace remapping gives you a middle ground. The Docker daemon runs as root, but UID 0 inside the container maps to an unprivileged UID on the host.
// /etc/docker/daemon.json
{
"userns-remap": "default"
}
After restarting Docker, containers that think they are running as root (UID 0) are actually running as a high-numbered unprivileged user on the host. This breaks container escape because the “root” that lands on the host has no privileges.
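You can see the remapping in action after the restart. A sketch, assuming the default dockremap user (the exact UID range differs per host):
grep dockremap /etc/subuid
# dockremap:100000:65536   (example range)
docker run -d --name userns-test alpine sleep 300
ps -o user:12,pid,cmd -C sleep
# the sleep process shows up on the host as UID 100000, not root
docker rm -f userns-test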
Comparison
| Approach | Ease of Setup | Security Level | Compatibility |
|---|---|---|---|
| USER in Dockerfile | Simple | Medium | Full |
| Rootless Docker | Moderate | High | Some limitations |
| User namespace remapping | Moderate | High | Most workloads |
| USER + Rootless | Moderate | Highest | Some limitations |
Use all of them together for defense in depth. The USER instruction protects you even if someone runs your image without rootless mode.
Secrets Management: Stop Putting Passwords in Environment Variables
The Problem with Environment Variables
This is how most people handle secrets in Docker:
Bad: Secrets in docker-compose.yml
services:
app:
image: myapp:latest
environment:
- DATABASE_URL=postgres://admin:SuperSecret123@db:5432/myapp
- API_KEY=sk-live-abc123def456
- JWT_SECRET=my-jwt-secret-dont-look
Problems with this approach:
- Docker Compose files get committed to Git. Your passwords are now in version control forever.
- Environment variables are visible via docker inspect. Anyone with Docker socket access can read every secret.
- Environment variables appear in /proc/1/environ inside the container. Any process can read them.
- Child processes inherit environment variables. If your app spawns a subprocess, it gets all your secrets.
- Logging frameworks often dump environment variables in debug output or crash reports.
Solution 1: Docker Secrets (Swarm Mode)
Docker Secrets is the built-in solution, but it requires Swarm mode. For Docker Compose, you can use file-based secrets:
services:
app:
image: myapp:latest
secrets:
- db_password
- api_key
environment:
- DATABASE_HOST=db
- DATABASE_USER=admin
- DATABASE_NAME=myapp
db:
image: postgres:16
secrets:
- db_password
environment:
- POSTGRES_PASSWORD_FILE=/run/secrets/db_password
secrets:
db_password:
file: ./secrets/db_password.txt
api_key:
file: ./secrets/api_key.txt
Secrets are mounted as files at /run/secrets/<secret_name> inside the container. Your application reads the file instead of an environment variable. In Swarm mode the secret files live on a tmpfs (in-memory filesystem) and are never written to disk; with plain Docker Compose they are bind mounts of the files you define, so keep those source files locked down on the host.
The secrets/ directory should be in your .gitignore and .dockerignore.
Solution 2: Mounted Secret Files with Proper Permissions
If you do not want to use Docker Secrets, mount individual files with restricted permissions:
services:
app:
image: myapp:latest
volumes:
- ./secrets/db_password.txt:/run/secrets/db_password:ro
- ./secrets/api_key.txt:/run/secrets/api_key:ro
The :ro flag makes the mount read-only inside the container. Combine this with a non-root user so the process cannot modify or escalate through the secret files.
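On the host side, keep the permissions on those source files tight so only the deploy user can read them. A minimal sketch:
chmod 700 ./secrets
chmod 400 ./secrets/db_password.txt ./secrets/api_key.txt
ls -l ./secrets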
Solution 3: External Secrets Managers
For production environments, use a dedicated secrets manager:
HashiCorp Vault (self-hosted):
services:
vault:
image: hashicorp/vault:latest
cap_add:
- IPC_LOCK
environment:
- VAULT_ADDR=http://0.0.0.0:8200
volumes:
- vault-data:/vault/data
ports:
- "8200:8200"
Infisical (self-hosted, open source):
services:
infisical:
image: infisical/infisical:latest
environment:
- ENCRYPTION_KEY=your-encryption-key
ports:
- "8080:8080"
Both integrate with Docker via init containers or sidecar patterns that fetch secrets at runtime.
Solution 4: SOPS with Age Encryption
For smaller setups where a full secrets manager is overkill, Mozilla SOPS with Age encryption lets you store encrypted secrets in Git:
# Generate an Age key
age-keygen -o key.txt
# Encrypt a secrets file
sops --encrypt --age $(cat key.txt | grep "public key" | awk '{print $NF}') \
secrets.yaml > secrets.enc.yaml
# Decrypt at deploy time
sops --decrypt secrets.enc.yaml > secrets.yaml
The encrypted file is safe to commit. The private key stays on your deployment server and never enters version control.
Never Bake Secrets into Images
Bad:
ENV API_KEY=sk-live-abc123def456
Even if you unset the variable later in the Dockerfile, it exists in a previous layer. Anyone who pulls your image can extract it:
docker history myapp:latest
docker save myapp:latest | tar -xf -
# Each layer is a tarball you can inspect
Also bad: Using build arguments for secrets
ARG DB_PASSWORD
RUN echo "password=$DB_PASSWORD" > /etc/myapp.conf
Build arguments are visible in the image history. Use BuildKit’s --mount=type=secret instead:
# syntax=docker/dockerfile:1
# Use the secret only within the RUN command; redirecting it into a file
# in the image would bake it into a layer anyway. (provision.sh is illustrative.)
RUN --mount=type=secret,id=db_password \
    DB_PASSWORD="$(cat /run/secrets/db_password)" ./scripts/provision.sh
docker build --secret id=db_password,src=./secrets/db_password.txt .
The secret is available during the build step but is never written to any image layer.
Network Isolation: Not Everything Needs to Talk to Everything
The Default Bridge Problem
When you run docker compose up, all services in the Compose file land on the same default network. Every container can reach every other container. Your frontend, your database, your Redis cache, and your monitoring agent can all talk to each other.
This means if an attacker compromises your frontend container, they can directly access your database. That is bad.
Create Explicit Networks
Bad: Everything on one network
services:
frontend:
image: nginx:alpine
ports:
- "80:80"
api:
image: myapi:latest
db:
image: postgres:16
redis:
image: redis:7-alpine
Good: Segmented networks
services:
frontend:
image: nginx:alpine
ports:
- "80:80"
networks:
- frontend
api:
image: myapi:latest
networks:
- frontend
- backend
db:
image: postgres:16
networks:
- backend
redis:
image: redis:7-alpine
networks:
- backend
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No external access
Now the frontend can only talk to the API. The API can talk to both the frontend network and the backend network. The database and Redis are on the backend network, which is marked internal: true — meaning containers on this network cannot reach the internet. If someone compromises the database container, they cannot phone home or download additional tools.
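You can verify the segmentation from inside the containers. A quick sketch, assuming the images include busybox's nslookup (nginx:alpine does; adjust for whatever DNS tool your images carry):
docker compose exec frontend nslookup db
# fails: frontend and db share no network, so the name does not resolve
docker compose exec api nslookup db
# succeeds: api and db are both on the backend network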
Disable Inter-Container Communication (ICC)
For even stricter isolation, you can disable ICC on the default bridge:
// /etc/docker/daemon.json
{
"icc": false
}
With ICC disabled, containers on the default bridge can only reach each other through explicitly published ports. Note that this setting affects only the default bridge; Compose creates its own user-defined networks, where containers can still talk freely, so for practical deployments rely on network segmentation instead.
Do Not Use Host Networking
# Bad - container shares the host's network stack
services:
app:
image: myapp:latest
network_mode: host
Host networking removes all network isolation. The container can see all host ports, bind to any interface, and sniff traffic. Only use it when absolutely necessary (some monitoring tools require it) and never for application containers.
Restrict Published Ports to Specific Interfaces
Bad:
ports:
- "5432:5432" # Exposes PostgreSQL to all interfaces, including the internet
Good:
ports:
- "127.0.0.1:5432:5432" # Only accessible from localhost
If your database only needs to be reached by other containers on the same Docker network, do not publish the port at all. Docker’s internal DNS handles container-to-container communication without publishing ports.
Resource Limits: Preventing Container Escape via Exhaustion
Memory Limits
A container without memory limits can consume all available host RAM, triggering the OOM killer and potentially taking down other containers or the host itself.
services:
app:
image: myapp:latest
deploy:
resources:
limits:
memory: 512M
reservations:
memory: 256M
Or with the older mem_limit syntax that works without Swarm:
services:
app:
image: myapp:latest
mem_limit: 512m
memswap_limit: 512m # Prevent using swap as a workaround
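You can confirm the cap took effect on a running container:
docker stats --no-stream $(docker compose ps -q app)
# the MEM USAGE / LIMIT column should show the 512MiB ceiling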
CPU Limits
services:
app:
image: myapp:latest
deploy:
resources:
limits:
cpus: "1.0" # Maximum 1 CPU core
reservations:
cpus: "0.25" # Guaranteed 0.25 cores
Disk and PID Limits
Limit the number of processes a container can spawn (prevents fork bombs):
services:
app:
image: myapp:latest
pids_limit: 100
For disk limits, use Docker’s storage driver options or --storage-opt size=10G if your storage driver supports it.
Restart Policies
A crash-looping container that restarts aggressively can itself become a denial-of-service attack on the host:
services:
app:
image: myapp:latest
restart: unless-stopped # Good default
# Or for more control:
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
Runtime Security: What Happens After the Container Starts
Drop Capabilities
Linux capabilities are a fine-grained way to grant specific root-like powers. Docker grants a default set of capabilities to containers, including some you probably do not need.
Drop all and add back only what you need:
services:
app:
image: myapp:latest
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE # Only if you need to bind to ports < 1024
The default capabilities Docker grants include CHOWN, DAC_OVERRIDE, FSETID, FOWNER, MKNOD, NET_RAW, SETGID, SETUID, SETFCAP, SETPCAP, NET_BIND_SERVICE, SYS_CHROOT, KILL, and AUDIT_WRITE. Most applications need almost none of these.
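To see what a container actually ended up with, inspect the HostConfig or decode the capability bitmask of PID 1. A sketch (grep may not exist in minimal images, and capsh comes from the libcap utilities, so run it on the host; the mask shown is an example):
docker inspect --format 'drop={{.HostConfig.CapDrop}} add={{.HostConfig.CapAdd}}' app
docker exec app grep CapEff /proc/1/status
# CapEff: 0000000000000400   (example value)
capsh --decode=0000000000000400
# 0x0000000000000400=cap_net_bind_service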
Read-Only Root Filesystem
If your application does not need to write to the filesystem, make it read-only:
services:
app:
image: myapp:latest
read_only: true
tmpfs:
- /tmp:size=100M
- /var/run:size=10M
volumes:
- app-data:/app/data # Only writable path
This prevents an attacker from writing scripts, downloading tools, or modifying the application binary. The tmpfs mounts provide writable scratch space that exists only in memory.
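A quick way to confirm the read-only root filesystem behaves as intended, assuming a shell and touch exist in the image:
docker exec app touch /should-fail
# touch: cannot touch '/should-fail': Read-only file system
docker exec app touch /tmp/scratch && echo "tmpfs is writable"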
Security Options: Seccomp and AppArmor
Docker applies a default seccomp profile that blocks about 44 dangerous syscalls (including reboot, mount, swapon, and clock_settime). Never disable it:
Bad:
services:
app:
image: myapp:latest
security_opt:
- seccomp:unconfined # Disables seccomp protection
Good: Use the default or a custom profile
services:
app:
image: myapp:latest
security_opt:
- seccomp:./seccomp-profile.json # Custom restrictive profile
- apparmor:docker-default # AppArmor (default on Ubuntu)
You can generate a custom seccomp profile using tools like oci-seccomp-bpf-hook that traces your application’s actual syscall usage and creates a profile that allows only those specific calls.
No New Privileges
Prevent processes from gaining additional privileges through setuid binaries or capability elevation:
services:
app:
image: myapp:latest
security_opt:
- no-new-privileges:true
This should be enabled on every container that does not specifically need privilege escalation.
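You can verify the flag is set on the container's init process (again assuming grep exists in the image):
docker exec app grep NoNewPrivs /proc/1/status
# NoNewPrivs:     1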
Docker Socket Security: The Keys to the Kingdom
Why the Docker Socket Is Dangerous
The Docker socket (/var/run/docker.sock) gives full control over the Docker daemon. Mounting it into a container is equivalent to giving that container root access to the host:
# From inside a container with the Docker socket mounted:
docker run -it --privileged --pid=host ubuntu nsenter -t 1 -m -u -i -n -- bash
# You now have a root shell on the host
When You Must Mount the Socket
Some tools legitimately need Docker socket access: Traefik (for auto-discovery), Portainer, Watchtower, and CI/CD runners. For these cases:
- Use a Docker socket proxy like Tecnativa’s docker-socket-proxy:
services:
socket-proxy:
image: tecnativa/docker-socket-proxy
environment:
- CONTAINERS=1 # Allow listing containers
- NETWORKS=1 # Allow listing networks
- SERVICES=0 # Deny service operations
- TASKS=0 # Deny task operations
- POST=0 # Deny all POST requests (read-only)
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
- socket-proxy
traefik:
image: traefik:v3
environment:
- DOCKER_HOST=tcp://socket-proxy:2375
networks:
- socket-proxy
- web
# Note: no docker.sock mount!
networks:
socket-proxy:
internal: true
web:
The socket proxy exposes only the Docker API endpoints you explicitly enable. Traefik can read container labels but cannot create, modify, or delete containers.
- Mount read-only if a proxy is not an option:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
This provides only marginal protection. The :ro flag restricts filesystem operations on the socket file, but the Docker API still accepts requests, including writes, over the socket. Prefer the proxy whenever you can.
Logging and Auditing
Container Logging
Configure a logging driver that captures stdout/stderr from all containers:
// /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
Without max-size, container logs grow indefinitely and can fill your disk. This is a security issue (disk exhaustion denial of service) and an operational issue.
For centralized logging, consider the syslog, fluentd, or loki logging drivers that ship logs to a central aggregation system.
Audit Docker Daemon Activity
On Linux systems with auditd, add rules to monitor Docker-related activity:
# /etc/audit/rules.d/docker.rules
-w /usr/bin/docker -p wa -k docker
-w /var/lib/docker -p wa -k docker
-w /etc/docker -p wa -k docker
-w /usr/lib/systemd/system/docker.service -p wa -k docker
-w /etc/default/docker -p wa -k docker
-w /etc/docker/daemon.json -p wa -k docker
-w /var/run/docker.sock -p wa -k docker
This creates audit log entries whenever anyone interacts with Docker binaries, configuration, or the socket.
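Load the rules and confirm they are active; later you can query the audit log by the docker key:
augenrules --load
auditctl -l | grep docker
ausearch -k docker --start today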
Monitor for Suspicious Activity
Use Falco (open source runtime security) to detect anomalous container behavior:
services:
falco:
image: falcosecurity/falco:latest
privileged: true
volumes:
- /var/run/docker.sock:/host/var/run/docker.sock:ro
- /proc:/host/proc:ro
- /boot:/host/boot:ro
- /lib/modules:/host/lib/modules:ro
- ./falco-rules.yaml:/etc/falco/falco_rules.local.yaml
Falco detects events like:
- A shell is spawned inside a container
- A container reads sensitive host files
- An unexpected network connection is made
- A process runs with unexpected privileges
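The falco-rules.yaml mounted above is where your local rules go. A minimal sketch of a custom rule using Falco's standard macros (the rule name and message are illustrative):
- rule: Shell spawned in a container
  desc: Detect an interactive shell starting inside any container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell started in container (container=%container.name user=%user.name cmd=%proc.cmdline)"
  priority: WARNING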
Security Scanning in CI/CD
A Complete Security Pipeline
Here is a comprehensive CI/CD pipeline that covers image security:
name: Security Pipeline
on:
push:
branches: [main]
pull_request:
jobs:
dockerfile-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: hadolint/hadolint-action@v3
with:
dockerfile: Dockerfile
failure-threshold: warning
build-and-scan:
runs-on: ubuntu-latest
needs: dockerfile-lint
steps:
- uses: actions/checkout@v4
- name: Build image
run: docker build -t myapp:${{ github.sha }} .
- name: Trivy vulnerability scan
uses: aquasecurity/trivy-action@master
with:
image-ref: myapp:${{ github.sha }}
format: sarif
output: trivy-results.sarif
severity: CRITICAL,HIGH
- name: Trivy config scan
uses: aquasecurity/trivy-action@master
with:
scan-type: config
scan-ref: .
format: table
exit-code: 1
- name: Grype scan
uses: anchore/scan-action@v4
with:
image: myapp:${{ github.sha }}
fail-build: true
severity-cutoff: high
- name: Dockle image lint
run: |
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
goodwithtech/dockle:latest myapp:${{ github.sha }}
This pipeline:
- Lints the Dockerfile with Hadolint (catches common mistakes)
- Scans the built image with Trivy (vulnerability database)
- Scans infrastructure files with Trivy (Dockerfile and Compose misconfigurations)
- Double-checks with Grype (different vulnerability database)
- Lints the image with Dockle (Docker CIS benchmark checks)
Docker Compose Security Patterns
The Fully Hardened Docker Compose Service
Here is what a production-hardened service looks like in Docker Compose:
services:
app:
image: myapp:1.2.3@sha256:abc123...
container_name: myapp
restart: unless-stopped
# User and privileges
user: "1000:1000"
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
read_only: true
# Resource limits
mem_limit: 512m
memswap_limit: 512m
pids_limit: 100
cpus: 1.0
# Filesystem
tmpfs:
- /tmp:size=50M,noexec,nosuid
volumes:
- app-data:/app/data
# Network
networks:
- backend
ports:
- "127.0.0.1:3000:3000"
# Secrets
secrets:
- db_password
- api_key
# Health check
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Logging
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
secrets:
db_password:
file: ./secrets/db_password.txt
api_key:
file: ./secrets/api_key.txt
networks:
backend:
internal: true
volumes:
app-data:
This service:
- Uses a pinned image with digest verification
- Runs as a non-root user
- Cannot gain new privileges
- Has all capabilities dropped
- Has a read-only root filesystem with tmpfs for temporary files
- Has memory, swap, CPU, and PID limits
- Is on an internal-only network
- Publishes ports only on localhost
- Uses file-based secrets instead of environment variables
- Has a health check for monitoring
- Has log rotation configured
Common Mistakes and How to Fix Them
Mistake 1: Running Docker with --privileged
# NEVER do this unless you absolutely know why
services:
app:
image: myapp:latest
privileged: true
Privileged mode disables all security features: capabilities, seccomp, AppArmor, and device cgroup restrictions. The container has full access to the host. There are very few legitimate reasons to use this — Falco for security monitoring and some hardware-access containers are about it.
Mistake 2: Using latest Tag in Production
# Bad
image: postgres:latest
# Good
image: postgres:16.2-alpine@sha256:abc123...
The latest tag is mutable. It could point to a different image tomorrow. Pin to a specific version and digest for reproducibility and security.
Mistake 3: Not Updating Images
Pinning versions does not mean never updating. Set a schedule to review and update base images at least monthly. Use tools like Renovate Bot or Dependabot to automate dependency update PRs.
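If your repository is hosted somewhere Dependabot runs, a minimal configuration that watches your Dockerfile base images looks like this (a sketch; Renovate offers an equivalent Docker datasource):
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"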
Mistake 4: Storing Data in Containers Instead of Volumes
Containers are ephemeral. If you write data inside the container filesystem, you lose it whenever the container is removed or recreated, and you also make the container's writable layer grow, which hurts performance and widens the attack surface.
Always use named volumes or bind mounts for persistent data.
Mistake 5: Ignoring Health Checks
Without health checks, Docker has no way to know if your application is actually working. A container can be “running” but completely broken. Health checks enable automatic restart of unhealthy containers and proper load balancing.
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
For images without curl, use wget, a custom binary, or /dev/tcp:
healthcheck:
test: ["CMD-SHELL", "wget --spider -q http://localhost:3000/health || exit 1"]
Final Thoughts
Docker security is not a single configuration you apply once. It is a set of practices layered on top of each other:
- Build secure images: Minimal base, multi-stage builds, no secrets baked in, pinned versions.
- Scan everything: Vulnerability scanning in CI/CD, Dockerfile linting, config scanning.
- Run with least privilege: Non-root user, dropped capabilities, read-only filesystem, no-new-privileges.
- Manage secrets properly: Never in environment variables or Compose files, use Docker secrets or a secrets manager.
- Isolate networks: Separate frontend and backend networks, mark internal networks, bind to localhost.
- Limit resources: Memory, CPU, PIDs, and disk to prevent exhaustion attacks.
- Monitor runtime: Logging, auditing, and tools like Falco for anomaly detection.
- Protect the socket: Never mount it directly, use a proxy, and audit access.
None of these steps are difficult individually. The challenge is applying them consistently across every container in your stack. Start with the highest-impact items — non-root users, image scanning, and network isolation — and work your way through the rest.
Your containers are only as secure as the weakest one in your stack. One misconfigured container with the Docker socket mounted and running as root can undo every other precaution you have taken. Treat security as a property of the entire system, not individual containers, and build the habits to maintain it over time.