How to Back Up Docker Containers & Volumes: A Complete Guide for 2026
You have spent a weekend setting up your self-hosted stack. Nextcloud is syncing files. Vaultwarden is storing passwords. Immich is managing 50,000 photos. Jellyfin has your entire media library organized with custom metadata. Everything works perfectly.
Then a disk fails. Or you accidentally run docker compose down -v in the wrong directory. Or a container update corrupts a database. Or your kid trips over the power cable during a write operation. The result is the same: your data is gone, your configuration is gone, and that perfect weekend of work needs to be repeated from scratch — if you can even remember how you set everything up.
Backups are the least exciting part of running a homelab and the most important. This guide covers every method for backing up Docker containers, volumes, and Compose stacks. We start with simple manual approaches, build up to automated scripts, and finish with dedicated backup tools like Restic, Borg, and Duplicati. By the end, you will have a backup strategy that runs automatically and lets you recover from any failure.
TL;DR
- Docker volumes are your data. Containers are disposable. Images are downloadable. Volumes hold everything that matters.
- docker commit saves container state but is not a real backup strategy. Use it for quick snapshots, not long-term protection.
- Tar-based volume backups are the simplest reliable method. Mount the volume in a temporary container, compress it, store the archive.
- Back up your docker-compose.yml files and .env files. These are your infrastructure-as-code. Without them, you cannot recreate your stack.
- Automate with cron + bash scripts for a zero-cost, zero-dependency solution.
- Use Restic or Borg for incremental, encrypted, deduplicated backups to local disks, NAS, or cloud storage.
- Use Duplicati if you want a web-based GUI for managing backups without writing scripts.
- Test your backups. A backup you have never restored is not a backup. It is a hope.
What Exactly Needs to Be Backed Up?
Before writing scripts, you need to understand what Docker actually stores and where.
Docker Volumes
This is where your data lives. When a Compose file maps a named volume like nextcloud_data:/var/www/html, the actual files are stored on your host at /var/lib/docker/volumes/nextcloud_data/_data/. This directory contains databases, uploaded files, configuration, and everything else your applications create.
This is the most critical thing to back up.
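If you are not sure where a given volume lives, Docker will tell you. A quick check, assuming a volume named nextcloud_data exists on your host:
# List all volumes, then print the host path (mountpoint) of one of them
docker volume ls
docker volume inspect nextcloud_data --format '{{ .Mountpoint }}'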
Bind Mounts
If your Compose file uses bind mounts like ./config:/app/config, the data is stored in whatever host directory you specified. These are just regular directories on your filesystem and can be backed up with any standard file-based backup tool.
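Because bind mounts are plain host directories, a one-line tar is enough; the paths below are examples matching the ./config mapping above:
# Archive a bind-mounted config directory like any other folder
tar czf /backup/myapp-config-$(date +%Y-%m-%d).tar.gz -C /home/jerry/docker-services/myapp config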
Docker Compose Files
Your docker-compose.yml files define your entire infrastructure. They specify which images to use, what environment variables to set, which ports to expose, and how volumes are mapped. Losing these files means recreating your entire stack from memory or documentation.
Environment Files
Many Compose setups use .env files for secrets: database passwords, API keys, encryption keys. These are small text files, easy to overlook, and catastrophic to lose. If you lose the encryption key for your Vaultwarden database, the backup of that database is useless.
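A quick inventory of the Compose and .env files you need to protect helps here. This sketch assumes your stacks live under ~/docker-services, as in the scripts later in this guide:
# Find every Compose file and .env file across your service directories
find ~/docker-services -maxdepth 2 \( -name 'docker-compose.y*ml' -o -name '.env' \) | sort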
Container State (Usually Not Important)
The container filesystem — everything not in a volume or bind mount — is ephemeral. It gets recreated every time you pull a new image and restart the container. In most cases, you do not need to back this up. The exception is containers where someone (possibly you) installed something manually inside the running container instead of using a volume. This is bad practice, but it happens.
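If you suspect a container has been modified by hand, docker diff shows every file added, changed, or deleted in its filesystem outside of volumes, which tells you whether anything there is worth capturing:
# A = added, C = changed, D = deleted inside the container filesystem
docker diff my-container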
Method 1: Docker Commit (Quick Snapshots)
docker commit creates a new image from a running container’s current state. It captures the container’s filesystem (not volumes) as a new Docker image that you can save, tag, and reload later.
When to Use It
- You made manual changes inside a container and want to capture them before an update.
- You need a quick checkpoint before running a risky operation.
- You want to copy a configured container to another machine.
When NOT to Use It
- As your primary backup strategy. docker commit does not capture volumes, which is where your actual data lives.
- For database containers. A committed image captures the filesystem at an arbitrary moment. If the database was mid-write, the committed state may be corrupted.
How It Works
# Snapshot a running container
docker commit my-container my-container-backup:2026-02-17
# List your saved images
docker images | grep backup
# Save the image to a tar file
docker save my-container-backup:2026-02-17 | gzip > my-container-backup-2026-02-17.tar.gz
# Restore the image on the same or different machine
docker load < my-container-backup-2026-02-17.tar.gz
# Run a container from the backup image
docker run -d --name restored-container my-container-backup:2026-02-17
Practical Example: Snapshot All Running Containers
#!/bin/bash
# snapshot-containers.sh
# Creates a committed image of every running container
BACKUP_DIR="/backup/docker-snapshots"
DATE=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"
for CONTAINER in $(docker ps --format '{{.Names}}'); do
echo "Committing $CONTAINER..."
docker commit "$CONTAINER" "${CONTAINER}-backup:${DATE}"
echo "Saving ${CONTAINER}-backup:${DATE} to tar..."
docker save "${CONTAINER}-backup:${DATE}" | gzip > "${BACKUP_DIR}/${CONTAINER}-${DATE}.tar.gz"
# Clean up the intermediate image
docker rmi "${CONTAINER}-backup:${DATE}" 2>/dev/null
echo "Done: ${BACKUP_DIR}/${CONTAINER}-${DATE}.tar.gz"
done
echo "All container snapshots saved to $BACKUP_DIR"
Remember: This does not back up volumes. Do not rely on this alone.
Method 2: Volume Backups with Tar
This is the bread and butter of Docker backup. The idea is simple: mount a Docker volume into a temporary container alongside a backup directory, then compress the volume contents into a tar archive.
Backing Up a Single Volume
# Back up a named volume to a tar.gz file
docker run --rm \
-v nextcloud_data:/source:ro \
-v /backup/docker-volumes:/backup \
alpine \
tar czf /backup/nextcloud_data-$(date +%Y-%m-%d).tar.gz -C /source .
What this does:
- Starts a temporary Alpine Linux container (tiny, ~5 MB).
- Mounts the nextcloud_data volume as read-only at /source.
- Mounts your host backup directory at /backup.
- Compresses the entire volume into a timestamped tar.gz file.
- Removes the temporary container when it exits (--rm).
Restoring a Volume from Backup
# Create a fresh volume (or use the existing one)
docker volume create nextcloud_data
# Restore from the tar.gz backup
docker run --rm \
-v nextcloud_data:/target \
-v /backup/docker-volumes:/backup:ro \
alpine \
sh -c "cd /target && tar xzf /backup/nextcloud_data-2026-02-17.tar.gz"
Backing Up All Volumes at Once
#!/bin/bash
# backup-all-volumes.sh
# Backs up every Docker volume to a timestamped tar.gz file
BACKUP_DIR="/backup/docker-volumes"
DATE=$(date +%Y-%m-%d_%H-%M)
mkdir -p "$BACKUP_DIR"
for VOLUME in $(docker volume ls --format '{{.Name}}'); do
echo "Backing up volume: $VOLUME"
docker run --rm \
-v "${VOLUME}:/source:ro" \
-v "${BACKUP_DIR}:/backup" \
alpine \
tar czf "/backup/${VOLUME}-${DATE}.tar.gz" -C /source .
SIZE=$(du -sh "${BACKUP_DIR}/${VOLUME}-${DATE}.tar.gz" | cut -f1)
echo " Done: ${VOLUME}-${DATE}.tar.gz ($SIZE)"
done
echo ""
echo "All volumes backed up to $BACKUP_DIR"
ls -lh "$BACKUP_DIR"/*-${DATE}.tar.gz
Handling Database Volumes Safely
Backing up a database volume while the database is running is risky. The tar archive might capture files mid-write, resulting in a corrupted backup. There are two approaches to handle this.
Option A: Stop the container first (safest, causes downtime)
#!/bin/bash
# backup-with-stop.sh
# Stops the container, backs up its volumes, then restarts
CONTAINER="nextcloud-db"
VOLUME="nextcloud_db_data"
BACKUP_DIR="/backup/docker-volumes"
DATE=$(date +%Y-%m-%d_%H-%M)
echo "Stopping $CONTAINER..."
docker stop "$CONTAINER"
echo "Backing up $VOLUME..."
docker run --rm \
-v "${VOLUME}:/source:ro" \
-v "${BACKUP_DIR}:/backup" \
alpine \
tar czf "/backup/${VOLUME}-${DATE}.tar.gz" -C /source .
echo "Starting $CONTAINER..."
docker start "$CONTAINER"
echo "Backup complete: ${BACKUP_DIR}/${VOLUME}-${DATE}.tar.gz"
Option B: Use the database’s native dump tool (no downtime)
# PostgreSQL dump
docker exec nextcloud-db pg_dumpall -U postgres | gzip > /backup/nextcloud-db-$(date +%Y-%m-%d).sql.gz
# MySQL/MariaDB dump
docker exec nextcloud-db mariadb-dump -u root --password="$DB_PASSWORD" --all-databases | gzip > /backup/nextcloud-db-$(date +%Y-%m-%d).sql.gz
# MongoDB dump
docker exec mongodb mongodump --archive | gzip > /backup/mongodb-$(date +%Y-%m-%d).archive.gz
# Redis dump (copy the RDB file)
docker exec redis redis-cli BGSAVE
sleep 2
docker cp redis:/data/dump.rdb /backup/redis-dump-$(date +%Y-%m-%d).rdb
Option B is almost always the better choice. Database dump tools produce consistent snapshots while the database is running. The dump can be imported into any compatible database instance, even a different version. Tar-based volume backups, by contrast, produce a binary copy of the data directory that is tightly coupled to the exact database version and may be inconsistent if the database was active.
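Restoring these dumps is just as simple. A sketch using the container names from the examples above:
# PostgreSQL: stream the dump into psql inside the container
gunzip -c /backup/nextcloud-db-2026-02-17.sql.gz | docker exec -i nextcloud-db psql -U postgres
# MySQL/MariaDB: stream the dump into the client
gunzip -c /backup/nextcloud-db-2026-02-17.sql.gz | docker exec -i nextcloud-db mariadb -u root --password="$DB_PASSWORD"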
Method 3: Docker Compose Backup Scripts
A complete backup is more than volumes. You need your Compose files, environment files, and any custom configuration. Here is a comprehensive script that backs up everything.
Full Stack Backup Script
#!/bin/bash
# docker-full-backup.sh
# Complete backup of all Docker Compose services, volumes, and configs
set -euo pipefail
# ========== CONFIGURATION ==========
SERVICES_DIR="$HOME/docker-services" # Where your docker-compose.yml files live
BACKUP_DIR="/backup/docker-full"
RETENTION_DAYS=30 # Delete backups older than this
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_PATH="${BACKUP_DIR}/${DATE}"
LOG_FILE="${BACKUP_DIR}/backup-${DATE}.log"
# ========== FUNCTIONS ==========
log() {
echo "[$(date '+%H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
backup_volume() {
local volume="$1"
log " Backing up volume: $volume"
docker run --rm \
-v "${volume}:/source:ro" \
-v "${BACKUP_PATH}/volumes:/backup" \
alpine \
tar czf "/backup/${volume}.tar.gz" -C /source . 2>/dev/null
local size=$(du -sh "${BACKUP_PATH}/volumes/${volume}.tar.gz" 2>/dev/null | cut -f1)
log " Volume $volume backed up ($size)"
}
backup_database() {
local container="$1"
local db_type="$2"
case "$db_type" in
postgres)
log " Dumping PostgreSQL from $container..."
docker exec "$container" pg_dumpall -U postgres 2>/dev/null | \
gzip > "${BACKUP_PATH}/databases/${container}-postgres.sql.gz"
;;
mysql|mariadb)
log " Dumping MySQL/MariaDB from $container..."
docker exec "$container" mariadb-dump -u root \
--password="${DB_ROOT_PASSWORD:-}" --all-databases 2>/dev/null | \
gzip > "${BACKUP_PATH}/databases/${container}-mysql.sql.gz"
;;
*)
log " Unknown database type: $db_type for $container"
;;
esac
}
# ========== MAIN ==========
mkdir -p "${BACKUP_PATH}"/{compose,volumes,databases,env}
log "Starting full Docker backup to ${BACKUP_PATH}"
# 1. Back up all docker-compose.yml and .env files
log "Step 1: Backing up Compose files and environment files..."
if [ -d "$SERVICES_DIR" ]; then
for service_dir in "$SERVICES_DIR"/*/; do
service_name=$(basename "$service_dir")
mkdir -p "${BACKUP_PATH}/compose/${service_name}"
# Copy docker-compose.yml
if [ -f "${service_dir}/docker-compose.yml" ]; then
cp "${service_dir}/docker-compose.yml" "${BACKUP_PATH}/compose/${service_name}/"
log " Copied ${service_name}/docker-compose.yml"
fi
# Copy .env file (if exists)
if [ -f "${service_dir}/.env" ]; then
cp "${service_dir}/.env" "${BACKUP_PATH}/env/${service_name}.env"
log " Copied ${service_name}/.env"
fi
# Copy any other .yml or .yaml config files
for config in "${service_dir}"/*.yml "${service_dir}"/*.yaml; do
if [ -f "$config" ] && [ "$(basename "$config")" != "docker-compose.yml" ]; then
cp "$config" "${BACKUP_PATH}/compose/${service_name}/"
log " Copied ${service_name}/$(basename "$config")"
fi
done
done
fi
# 2. Back up database containers first (with native dump tools)
log "Step 2: Backing up databases..."
for container in $(docker ps --format '{{.Names}}'); do
image=$(docker inspect --format '{{.Config.Image}}' "$container" 2>/dev/null)
case "$image" in
*postgres*)
backup_database "$container" "postgres"
;;
*mariadb*|*mysql*)
backup_database "$container" "mariadb"
;;
esac
done
# 3. Back up all named volumes
log "Step 3: Backing up Docker volumes..."
for volume in $(docker volume ls --format '{{.Name}}'); do
# Skip anonymous volumes (64-char hex strings)
if [[ ${#volume} -lt 64 ]] || [[ "$volume" =~ [^a-f0-9] ]]; then
backup_volume "$volume"
else
log " Skipping anonymous volume: ${volume:0:12}..."
fi
done
# 4. Save Docker network information
log "Step 4: Saving network configuration..."
docker network ls --format '{{.Name}}\t{{.Driver}}\t{{.Scope}}' > "${BACKUP_PATH}/networks.txt"
log " Network list saved"
# 5. Save list of running containers and their images
log "Step 5: Saving container inventory..."
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}' > "${BACKUP_PATH}/containers.txt"
docker images --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}' > "${BACKUP_PATH}/images.txt"
log " Container and image lists saved"
# 6. Create a single compressed archive of everything
log "Step 6: Creating final archive..."
cd "$BACKUP_DIR"
tar czf "docker-backup-${DATE}.tar.gz" "${DATE}/"
FINAL_SIZE=$(du -sh "docker-backup-${DATE}.tar.gz" | cut -f1)
log " Final archive: docker-backup-${DATE}.tar.gz ($FINAL_SIZE)"
# 7. Clean up the uncompressed directory
rm -rf "${BACKUP_PATH}"
log " Cleaned up temporary files"
# 8. Remove old backups
log "Step 7: Cleaning up old backups..."
find "$BACKUP_DIR" -name "docker-backup-*.tar.gz" -mtime +${RETENTION_DAYS} -delete
find "$BACKUP_DIR" -name "backup-*.log" -mtime +${RETENTION_DAYS} -delete
log " Removed backups older than ${RETENTION_DAYS} days"
log "Backup complete!"
Restoring from a Full Backup
#!/bin/bash
# docker-full-restore.sh
# Restore a full Docker backup
set -euo pipefail
BACKUP_FILE="$1"
RESTORE_DIR="/tmp/docker-restore"
SERVICES_DIR="$HOME/docker-services"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup-file.tar.gz>"
exit 1
fi
echo "Extracting backup..."
mkdir -p "$RESTORE_DIR"
tar xzf "$BACKUP_FILE" -C "$RESTORE_DIR"
# Find the date-stamped directory inside
BACKUP_DIR=$(find "$RESTORE_DIR" -maxdepth 1 -type d -not -path "$RESTORE_DIR" | head -1)
# 1. Restore Compose files and .env files
echo "Restoring Compose files..."
if [ -d "${BACKUP_DIR}/compose" ]; then
for service_dir in "${BACKUP_DIR}/compose"/*/; do
service_name=$(basename "$service_dir")
mkdir -p "${SERVICES_DIR}/${service_name}"
cp "${service_dir}"/* "${SERVICES_DIR}/${service_name}/"
echo " Restored ${service_name} Compose files"
done
fi
echo "Restoring .env files..."
if [ -d "${BACKUP_DIR}/env" ]; then
for env_file in "${BACKUP_DIR}/env"/*.env; do
service_name=$(basename "$env_file" .env)
mkdir -p "${SERVICES_DIR}/${service_name}"
cp "$env_file" "${SERVICES_DIR}/${service_name}/.env"
echo " Restored ${service_name}/.env"
done
fi
# 2. Restore volumes
echo "Restoring volumes..."
if [ -d "${BACKUP_DIR}/volumes" ]; then
for volume_archive in "${BACKUP_DIR}/volumes"/*.tar.gz; do
volume_name=$(basename "$volume_archive" .tar.gz)
echo " Restoring volume: $volume_name"
docker volume create "$volume_name" 2>/dev/null || true
docker run --rm \
-v "${volume_name}:/target" \
-v "$(dirname "$volume_archive"):/backup:ro" \
alpine \
sh -c "cd /target && tar xzf /backup/$(basename "$volume_archive")"
done
fi
# 3. Restore databases (manual step -- print instructions)
echo ""
echo "=========================================="
echo "Database dumps are in: ${BACKUP_DIR}/databases/"
echo "Restore them after starting the database containers:"
echo ""
if ls "${BACKUP_DIR}/databases"/*-postgres.sql.gz 1>/dev/null 2>&1; then
echo "PostgreSQL:"
for dump in "${BACKUP_DIR}/databases"/*-postgres.sql.gz; do
container=$(basename "$dump" -postgres.sql.gz)
echo " gunzip -c $dump | docker exec -i $container psql -U postgres"
done
fi
if ls "${BACKUP_DIR}/databases"/*-mysql.sql.gz 1>/dev/null 2>&1; then
echo "MySQL/MariaDB:"
for dump in "${BACKUP_DIR}/databases"/*-mysql.sql.gz; do
container=$(basename "$dump" -mysql.sql.gz)
echo " gunzip -c $dump | docker exec -i $container mariadb -u root -p"
done
fi
echo "=========================================="
# Clean up
rm -rf "$RESTORE_DIR"
echo "Restore complete!"
Method 4: Automated Scheduling with Cron
Scripts are only useful if they run automatically. Cron is the simplest way to schedule Docker backups.
Setting Up Cron Jobs
# Open the crontab editor
crontab -e
# Add one or more of these lines:
# Full backup every night at 2:00 AM
0 2 * * * /home/jerry/scripts/docker-full-backup.sh >> /var/log/docker-backup.log 2>&1
# Volume-only backup every 6 hours
0 */6 * * * /home/jerry/scripts/backup-all-volumes.sh >> /var/log/docker-volume-backup.log 2>&1
# Database dumps every hour
0 * * * * /home/jerry/scripts/backup-databases.sh >> /var/log/docker-db-backup.log 2>&1
A Smarter Cron Wrapper with Notifications
#!/bin/bash
# backup-cron-wrapper.sh
# Runs the backup script and sends a notification on failure
set -euo pipefail
SCRIPT="/home/jerry/scripts/docker-full-backup.sh"
LOG_DIR="/var/log/docker-backup"
DATE=$(date +%Y-%m-%d)
LOG_FILE="${LOG_DIR}/backup-${DATE}.log"
# Optional: Healthcheck URL (Uptime Kuma, Healthchecks.io, etc.)
HEALTHCHECK_URL="https://hc-ping.com/YOUR-UUID-HERE"
mkdir -p "$LOG_DIR"
# Run the backup
if bash "$SCRIPT" >> "$LOG_FILE" 2>&1; then
echo "[$(date)] Backup succeeded" >> "$LOG_FILE"
# Ping healthcheck on success
curl -fsS -m 10 --retry 5 "${HEALTHCHECK_URL}" > /dev/null 2>&1 || true
else
EXIT_CODE=$?
echo "[$(date)] Backup FAILED with exit code $EXIT_CODE" >> "$LOG_FILE"
# Ping healthcheck with failure
curl -fsS -m 10 --retry 5 "${HEALTHCHECK_URL}/fail" > /dev/null 2>&1 || true
# Optional: Send notification via ntfy.sh
curl -s -d "Docker backup failed on $(hostname) at $(date). Check $LOG_FILE" \
"https://ntfy.sh/your-backup-alerts" > /dev/null 2>&1 || true
fi
# Clean up old logs (keep 30 days)
find "$LOG_DIR" -name "backup-*.log" -mtime +30 -delete 2>/dev/null || true
Using Systemd Timers Instead of Cron
Systemd timers offer better logging, dependency management, and failure handling than cron. Here is how to set up a backup timer:
# /etc/systemd/system/docker-backup.service
[Unit]
Description=Docker Full Backup
After=docker.service
[Service]
Type=oneshot
User=jerry
ExecStart=/home/jerry/scripts/docker-full-backup.sh
StandardOutput=journal
StandardError=journal
# /etc/systemd/system/docker-backup.timer
[Unit]
Description=Run Docker backup nightly
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=900
[Install]
WantedBy=timers.target
Enable the timer:
sudo systemctl daemon-reload
sudo systemctl enable --now docker-backup.timer
# Check timer status
systemctl list-timers docker-backup.timer
# View backup logs
journalctl -u docker-backup.service
Method 5: Restic (Incremental, Encrypted, Deduplicated)
Restic is a modern backup tool that handles everything the bash scripts above do, but better. It provides incremental backups (only changed data is stored), encryption (backups are encrypted at rest), deduplication (identical data blocks are stored once), and support for dozens of storage backends including local disks, SFTP, S3, Backblaze B2, and more.
Why Restic for Docker Backups
The tar-based approach works, but it has problems at scale:
- Full copies every time. Each backup is a complete archive. If your volumes total 100 GB, each backup is 100 GB (compressed). After a week, you have 700 GB of backups that are 99% identical.
- No encryption. Tar files are unencrypted. If you store them on a NAS or cloud storage, anyone who gains access can read your data.
- No integrity checking. You will not know a backup is corrupted until you try to restore it.
Restic solves all of these. A typical Restic backup of 100 GB of Docker volumes might use 105 GB for the initial backup and only 1-5 GB per additional daily backup (depending on how much data changed).
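You can verify the savings yourself after a few runs: restic stats reports both the logical size of your snapshots and the deduplicated space they actually occupy.
# Total size the snapshots would take if restored (logical data)
restic -r /backup/restic-docker stats --mode restore-size
# Actual deduplicated, compressed data stored in the repository
restic -r /backup/restic-docker stats --mode raw-data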
Installing Restic
# Ubuntu/Debian
sudo apt install restic
# Or download a release binary directly (adjust the version to the current release)
wget https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_amd64.bz2
bunzip2 restic_*.bz2
chmod +x restic_*
sudo mv restic_* /usr/local/bin/restic
Setting Up a Restic Repository
# Initialize a local backup repository
restic init --repo /backup/restic-docker
# Or use an SFTP remote
restic init --repo sftp:user@nas.local:/backup/restic-docker
# Or use Backblaze B2
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
restic init --repo b2:your-bucket-name:docker-backup
Restic will ask for a password. Store this password securely. Without it, your backups are permanently inaccessible.
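For unattended backups, store the password in a file only root (or your backup user) can read and point Restic at it. This is what the script below assumes via RESTIC_PASSWORD_FILE:
# Store the repository password where the backup script can read it, but nobody else
echo 'your-repository-password' > /home/jerry/.restic-password
chmod 600 /home/jerry/.restic-password
export RESTIC_PASSWORD_FILE=/home/jerry/.restic-password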
Backing Up Docker Volumes with Restic
#!/bin/bash
# restic-docker-backup.sh
# Incremental, encrypted backup of all Docker volumes using Restic
set -euo pipefail
export RESTIC_REPOSITORY="/backup/restic-docker"
export RESTIC_PASSWORD_FILE="/home/jerry/.restic-password"
MOUNT_DIR="/tmp/docker-volume-backup"
SERVICES_DIR="$HOME/docker-services"
mkdir -p "$MOUNT_DIR"
# 1. Export all Docker volumes to temporary directory
echo "Exporting Docker volumes..."
for VOLUME in $(docker volume ls --format '{{.Name}}'); do
# Skip anonymous volumes
if [[ ${#VOLUME} -ge 64 ]] && [[ ! "$VOLUME" =~ [^a-f0-9] ]]; then
continue
fi
VOLUME_DIR="${MOUNT_DIR}/volumes/${VOLUME}"
mkdir -p "$VOLUME_DIR"
docker run --rm \
-v "${VOLUME}:/source:ro" \
-v "${VOLUME_DIR}:/target" \
alpine \
sh -c "cp -a /source/. /target/"
echo " Exported: $VOLUME"
done
# 2. Dump databases
echo "Dumping databases..."
mkdir -p "${MOUNT_DIR}/databases"
for CONTAINER in $(docker ps --format '{{.Names}}'); do
IMAGE=$(docker inspect --format '{{.Config.Image}}' "$CONTAINER" 2>/dev/null)
case "$IMAGE" in
*postgres*)
docker exec "$CONTAINER" pg_dumpall -U postgres 2>/dev/null | \
gzip > "${MOUNT_DIR}/databases/${CONTAINER}-postgres.sql.gz"
echo " Dumped PostgreSQL: $CONTAINER"
;;
*mariadb*|*mysql*)
docker exec "$CONTAINER" mariadb-dump -u root \
--password="${DB_ROOT_PASSWORD:-}" --all-databases 2>/dev/null | \
gzip > "${MOUNT_DIR}/databases/${CONTAINER}-mysql.sql.gz"
echo " Dumped MariaDB/MySQL: $CONTAINER"
;;
esac
done
# 3. Copy Compose files
echo "Copying Compose files..."
if [ -d "$SERVICES_DIR" ]; then
cp -r "$SERVICES_DIR" "${MOUNT_DIR}/compose-files"
fi
# 4. Run Restic backup
echo "Running Restic backup..."
restic backup "$MOUNT_DIR" \
--tag docker \
--tag "$(date +%Y-%m-%d)" \
--verbose
# 5. Clean up temporary files
rm -rf "$MOUNT_DIR"
# 6. Apply retention policy
echo "Applying retention policy..."
restic forget \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--prune
# 7. Verify backup integrity
echo "Verifying backup..."
restic check
echo "Restic backup complete!"
Restoring from Restic
# List available snapshots
restic -r /backup/restic-docker snapshots
# Restore the latest snapshot to a directory
restic -r /backup/restic-docker restore latest --target /tmp/restic-restore
# Restore a specific snapshot
restic -r /backup/restic-docker restore abc123de --target /tmp/restic-restore
# Restore only specific paths (e.g., one volume)
restic -r /backup/restic-docker restore latest \
--target /tmp/restic-restore \
--include "/tmp/docker-volume-backup/volumes/nextcloud_data"
# Mount a backup as a filesystem (browse before restoring)
mkdir /tmp/restic-mount
restic -r /backup/restic-docker mount /tmp/restic-mount
# Browse the backup at /tmp/restic-mount, then unmount:
umount /tmp/restic-mount
Method 6: Borg Backup (Deduplication Champion)
Borg is the other major player in the deduplicated backup space. It predates Restic and is particularly strong at deduplication — Borg’s compression and dedup algorithms are some of the most space-efficient available. Where Restic emphasizes simplicity and cloud backend support, Borg emphasizes raw efficiency and local/SSH storage.
Installing Borg
# Ubuntu/Debian
sudo apt install borgbackup
# Verify installation
borg --version
Setting Up a Borg Repository
# Initialize a local repository with encryption
borg init --encryption=repokey /backup/borg-docker
# Initialize a remote repository over SSH
borg init --encryption=repokey ssh://user@nas.local/backup/borg-docker
Borg will ask for a passphrase. Back up the passphrase and the key. Run borg key export /backup/borg-docker and store the output securely.
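A minimal sketch of exporting the key, including a printable paper copy; keep both somewhere that does not depend on this repository surviving:
# Export the repository key to a file (store it with your passphrase, off the backup disk)
borg key export /backup/borg-docker /root/borg-docker.key
# Or produce a printable paper-backup version
borg key export --paper /backup/borg-docker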
Docker Backup Script with Borg
#!/bin/bash
# borg-docker-backup.sh
# Deduplicated, encrypted Docker backup using Borg
set -euo pipefail
export BORG_REPO="/backup/borg-docker"
export BORG_PASSPHRASE="your-secure-passphrase" # Or use BORG_PASSCOMMAND
MOUNT_DIR="/tmp/docker-volume-backup"
SERVICES_DIR="$HOME/docker-services"
DATE=$(date +%Y-%m-%d_%H-%M)
mkdir -p "$MOUNT_DIR"
# 1. Export volumes
echo "Exporting Docker volumes..."
for VOLUME in $(docker volume ls --format '{{.Name}}'); do
if [[ ${#VOLUME} -ge 64 ]] && [[ ! "$VOLUME" =~ [^a-f0-9] ]]; then
continue
fi
VOLUME_DIR="${MOUNT_DIR}/volumes/${VOLUME}"
mkdir -p "$VOLUME_DIR"
docker run --rm \
-v "${VOLUME}:/source:ro" \
-v "${VOLUME_DIR}:/target" \
alpine \
sh -c "cp -a /source/. /target/"
echo " Exported: $VOLUME"
done
# 2. Dump databases
echo "Dumping databases..."
mkdir -p "${MOUNT_DIR}/databases"
for CONTAINER in $(docker ps --format '{{.Names}}'); do
IMAGE=$(docker inspect --format '{{.Config.Image}}' "$CONTAINER" 2>/dev/null)
case "$IMAGE" in
*postgres*)
docker exec "$CONTAINER" pg_dumpall -U postgres 2>/dev/null | \
gzip > "${MOUNT_DIR}/databases/${CONTAINER}-postgres.sql.gz"
echo " Dumped: $CONTAINER (PostgreSQL)"
;;
*mariadb*|*mysql*)
docker exec "$CONTAINER" mariadb-dump -u root \
--password="${DB_ROOT_PASSWORD:-}" --all-databases 2>/dev/null | \
gzip > "${MOUNT_DIR}/databases/${CONTAINER}-mysql.sql.gz"
echo " Dumped: $CONTAINER (MariaDB/MySQL)"
;;
esac
done
# 3. Copy Compose files
if [ -d "$SERVICES_DIR" ]; then
cp -r "$SERVICES_DIR" "${MOUNT_DIR}/compose-files"
fi
# 4. Create Borg archive
echo "Creating Borg archive..."
borg create \
--verbose \
--stats \
--compression zstd,3 \
"::docker-{now}" \
"$MOUNT_DIR"
# 5. Clean up temp files
rm -rf "$MOUNT_DIR"
# 6. Prune old archives
echo "Pruning old archives..."
borg prune \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--stats
# 7. Compact the repository (free disk space from pruned archives)
borg compact
echo "Borg backup complete!"
Restoring from Borg
# List archives
borg list /backup/borg-docker
# Show archive contents
borg list /backup/borg-docker::docker-2026-02-17T02:00:00
# Restore an archive
cd /tmp
borg extract /backup/borg-docker::docker-2026-02-17T02:00:00
# Mount an archive for browsing
mkdir /tmp/borg-mount
borg mount /backup/borg-docker::docker-2026-02-17T02:00:00 /tmp/borg-mount
# Browse, then unmount:
borg umount /tmp/borg-mount
Restic vs Borg: Which to Choose?
| Feature | Restic | Borg |
|---|---|---|
| Cloud backends | S3, B2, Azure, GCS, SFTP, local | Local, SSH/SFTP only |
| Deduplication | Good | Excellent (best in class) |
| Compression | zstd (since 0.14) | lz4, zstd, zlib, lzma |
| Encryption | AES-256 (always on) | AES-256 (configurable) |
| Speed | Fast | Slightly faster for local |
| Memory usage | Higher | Lower |
| Restore granularity | File-level | File-level |
| FUSE mount | Yes | Yes |
| Written in | Go | Python/C |
| Best for | Cloud backups, simplicity | Local/SSH, maximum dedup |
Choose Restic if you want to back up to cloud storage (S3, Backblaze B2) or prefer a simpler setup. Choose Borg if you back up to a local disk or NAS over SSH and want the best space efficiency.
Method 7: Duplicati (GUI-Based Backup)
Not everyone wants to write bash scripts. Duplicati is a backup tool with a full web-based interface. You configure backups through a browser — select source directories, choose a destination, set a schedule, enable encryption — and Duplicati handles the rest.
Docker Setup for Duplicati
# ~/docker-services/duplicati/docker-compose.yml
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    restart: unless-stopped
    ports:
      - "8200:8200"
    volumes:
      - duplicati_config:/config
      - /var/lib/docker/volumes:/source/docker-volumes:ro
      - /home/jerry/docker-services:/source/compose-files:ro
      - /backup/duplicati:/backups
    environment:
      - PUID=0   # Root access needed to read Docker volumes
      - PGID=0
      - TZ=America/New_York

volumes:
  duplicati_config:
mkdir -p ~/docker-services/duplicati
cd ~/docker-services/duplicati
# Save the docker-compose.yml above
docker compose up -d
Open http://your-server-ip:8200.
Configuring Duplicati for Docker Backups
- Click “Add backup” and select “Configure a new backup.”
- Give it a name like “Docker Volumes Backup.”
- Set encryption (AES-256 recommended). Choose a strong passphrase.
- Select your backup destination:
  - Local folder: /backups (mapped to /backup/duplicati on the host)
  - S3-compatible storage: Enter your endpoint, bucket, and credentials
  - Backblaze B2: Enter your application key and bucket name
  - SFTP: Enter your NAS connection details
- Select source data: /source/docker-volumes and /source/compose-files
- Set the schedule (e.g., daily at 2:00 AM)
- Set retention (e.g., keep 7 daily, 4 weekly, 6 monthly backups)
- Save and run the first backup
Pros and Cons of Duplicati
Pros:
- Full web UI. No scripts, no cron, no command line.
- Supports virtually every cloud storage backend.
- Built-in encryption, compression, and deduplication.
- Email notifications for backup success/failure.
- Restore individual files through the web interface.
Cons:
- Heavier resource usage than command-line tools.
- The web UI can feel slow with large backup sets.
- Cannot perform database dumps. You still need a script or cron job to dump databases before Duplicati runs (see the example after this list).
- Runs as a Docker container itself, which creates a chicken-and-egg problem (who backs up Duplicati?).
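One way to plug the database-dump gap: schedule a small dump script shortly before Duplicati's backup window. The script path and times below are examples.
# crontab entry: dump databases at 1:30 AM so a 2:00 AM Duplicati job picks up fresh dumps
30 1 * * * /home/jerry/scripts/backup-databases.sh >> /var/log/docker-db-backup.log 2>&1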
Building a Complete Backup Strategy
Individual methods are tools. A backup strategy is a plan. Here is how to combine the methods above into a reliable, layered approach.
The 3-2-1 Rule
The gold standard for backup strategies:
- 3 copies of your data (the original plus two backups)
- 2 different storage media (e.g., local disk + cloud, or NVMe + external HDD)
- 1 off-site copy (cloud storage, a friend’s NAS, or a remote server)
Recommended Strategy for a Homelab
| What | How | Frequency | Destination | Retention |
|---|---|---|---|---|
| Database dumps | Native dump tools via script | Every 6 hours | Local disk | 7 days |
| Docker volumes | Restic or Borg | Daily at 2 AM | Local NAS | 7 daily, 4 weekly, 6 monthly |
| Compose files + .env | Restic (same job as volumes) | Daily at 2 AM | Local NAS | Same as above |
| Off-site copy | Restic to B2 or S3 | Daily at 4 AM | Cloud storage | 7 daily, 4 weekly, 3 monthly |
| Full system image | Proxmox backup or Clonezilla | Weekly | External USB drive | 4 copies |
Implementation Checklist
Here is the practical order of operations to implement this strategy:
- Start with database dumps. Write a script that dumps every database container, schedule it with cron every 6 hours. This takes 15 minutes and protects the data most vulnerable to corruption.
- Add volume backups with Restic or Borg. Initialize a repository on your NAS or an external drive. Write the backup script, schedule it nightly. This takes 30 minutes.
- Back up your Compose files to Git. Create a private repository (Gitea, GitHub, GitLab) and push your docker-services directory to it. This gives you version history and off-site storage for your infrastructure configuration:
cd ~/docker-services
git init
echo ".env" >> .gitignore   # Don't commit secrets to Git
git add .
git commit -m "Initial commit of Docker Compose configurations"
git remote add origin git@your-gitea-instance:jerry/docker-services.git
git push -u origin main
- Add off-site backup. Create a Backblaze B2 or Wasabi account ($6/TB/month). Configure a second Restic repository targeting cloud storage. Schedule it to run a few hours after the local backup.
- Set up monitoring. Use Healthchecks.io (free tier) or your self-hosted Uptime Kuma to monitor backup job execution. If a backup fails to run or fails to complete, you get an alert.
- Test a restore. Pick a service, pretend it is gone, and restore it from backup. Do this once a month. Document the steps so you can follow them under stress.
Testing Your Backups
This section matters more than everything above it combined. A backup that has never been tested is not a backup.
Monthly Restore Drill
Pick one service per month and practice a full restore:
# 1. Create a test directory
mkdir -p /tmp/restore-test
# 2. List available backups
restic -r /backup/restic-docker snapshots
# 3. Restore the latest backup
restic -r /backup/restic-docker restore latest --target /tmp/restore-test
# 4. Verify the data
ls -la /tmp/restore-test/tmp/docker-volume-backup/volumes/
# Check that files are present and sizes look correct
# 5. For database dumps, try importing into a temporary container
docker run --rm -d --name test-postgres -e POSTGRES_PASSWORD=test postgres:16
sleep 5
gunzip -c /tmp/restore-test/tmp/docker-volume-backup/databases/nextcloud-db-postgres.sql.gz | \
docker exec -i test-postgres psql -U postgres
docker stop test-postgres
# 6. Clean up
rm -rf /tmp/restore-test
Automated Integrity Checks
# Add to your cron (weekly)
# Restic (supply the repository password, e.g. via RESTIC_PASSWORD_FILE)
0 6 * * 0 RESTIC_PASSWORD_FILE=/home/jerry/.restic-password restic -r /backup/restic-docker check --read-data-subset=5% >> /var/log/restic-check.log 2>&1
# Borg (needs BORG_PASSPHRASE or BORG_PASSCOMMAND set in the environment)
0 6 * * 0 borg check --verify-data /backup/borg-docker >> /var/log/borg-check.log 2>&1
Common Mistakes to Avoid
1. Backing up only volumes, not Compose files.
Your docker-compose.yml and .env files are your infrastructure definition. Without them, you have data but no way to reconstruct the services that use it. Back up everything.
2. Not dumping databases before volume backup.
A tar of a PostgreSQL data directory taken while the database is running may be corrupted. Always use pg_dumpall, mariadb-dump, or equivalent tools for database backups.
3. Storing backups on the same disk as the data. If the disk fails, you lose both the original and the backup. Always store backups on a physically separate device.
4. Never testing restores. The only way to know a backup works is to restore it. Schedule monthly restore drills.
5. Not encrypting off-site backups. If your backups contain passwords, personal photos, or sensitive documents and you store them in the cloud, encrypt them. Restic and Borg both encrypt by default. Tar does not (see the gpg example at the end of this section).
6. Not monitoring backup jobs. A cron job that fails silently is worse than no backup at all, because you believe you are protected when you are not. Use health checks to confirm backups actually ran.
7. Forgetting the encryption passphrase. If you lose the passphrase for your Restic or Borg repository, your backups are gone. Store the passphrase in your password manager (Vaultwarden), and store a physical copy in a safe place.
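If you do ship plain tar archives off-site (mistake 5), a stopgap is to encrypt them symmetrically with gpg before upload; the filename below is an example:
# Encrypt a tar archive with a passphrase before uploading it anywhere
gpg --symmetric --cipher-algo AES256 nextcloud_data-2026-02-17.tar.gz
# Produces nextcloud_data-2026-02-17.tar.gz.gpg; decrypt with:
# gpg --output nextcloud_data-2026-02-17.tar.gz --decrypt nextcloud_data-2026-02-17.tar.gz.gpg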
Conclusion
Docker backup is not glamorous, but it is the difference between a minor inconvenience and a catastrophic loss. The container ecosystem makes deployment easy. It does not make data protection automatic.
The approach you choose depends on your comfort level and scale:
- Just starting out? Write a bash script that dumps databases and tars volumes. Schedule it with cron. This takes 30 minutes and covers 90% of failure scenarios.
- Want it done right? Use Restic with a local repository and a cloud repository. You get incremental, encrypted, deduplicated backups with a simple CLI. This takes an hour to set up.
- Prefer a GUI? Deploy Duplicati alongside your other services and configure everything through the web interface. Still write a script for database dumps.
- Running at scale? Borg for local dedup efficiency, Restic for cloud replication, systemd timers for scheduling, Healthchecks.io for monitoring.
The most important thing is to start. A simple tar backup running every night is infinitely better than a perfect Restic/Borg/Duplicati strategy that you plan to set up “next weekend.” Back up your data today. Test the restore tomorrow. Then improve incrementally.
Your future self, staring at a failed disk at 11 PM on a Sunday, will thank you.