Kubernetes vs Docker Swarm vs Nomad: Container Orchestration for Homelabs in 2026

You have three or four machines in a closet, a handful of Docker containers running your media server, your password manager, and a few other services. Everything works until one of the machines reboots after a kernel update and half your stack goes down because you forgot which containers run where. You start thinking about orchestration. You want something that keeps your containers running, spreads them across nodes, handles failover, and does not require a full-time SRE to maintain.

In 2026, three container orchestration platforms dominate the conversation: Kubernetes (K8s), the industry juggernaut that runs most of the cloud; Docker Swarm, the built-in clustering mode that ships with every Docker installation; and HashiCorp Nomad, the flexible scheduler that can orchestrate containers, VMs, and raw binaries without the overhead of a full Kubernetes cluster.

Each one can solve the problem. Each one comes with tradeoffs that look very different when your “cluster” is three mini PCs on a shelf instead of a thousand nodes in a data center. This guide compares all three from the perspective of someone running a homelab or a small business, not an enterprise team with a dedicated platform engineering department.


TL;DR

  • Kubernetes (via K3s or Talos Linux) is the right choice if you want to learn the industry standard, plan to run 20+ services, or want access to the massive Helm chart ecosystem. It is the most complex option but also the most capable.
  • Docker Swarm is the right choice if you already know Docker Compose, want orchestration in under five minutes, and are running fewer than 15 services across 2-5 nodes. It is the easiest to set up but has the smallest ecosystem and fewest features.
  • Nomad is the right choice if you want something more capable than Swarm without the complexity of Kubernetes, need to orchestrate non-container workloads, or value operational simplicity. It sits in the sweet spot between the other two.

Quick Comparison Table

Feature | Kubernetes (K3s) | Docker Swarm | Nomad
First Release | 2014 | 2016 (Swarm mode) | 2015
Developed By | CNCF / Google origin | Docker Inc | HashiCorp
Language | Go | Go | Go
License | Apache 2.0 | Apache 2.0 | BSL 1.1 (source-available; free for personal use)
Min Nodes | 1 | 1 | 1
Control Plane RAM | ~512 MB (K3s) | ~50 MB (built into Docker) | ~100 MB
Learning Curve | Steep | Gentle | Moderate
Config Format | YAML manifests / Helm | Docker Compose YAML | HCL (HashiCorp Config Language)
Service Discovery | CoreDNS (built-in) | Built-in DNS | Consul (separate) or built-in
Secret Management | Built-in (etcd) | Built-in (encrypted Raft) | Vault integration or built-in
Auto-scaling | Yes (HPA, VPA, KEDA) | No | Yes (with external autoscaler)
Rolling Updates | Yes (fine-grained) | Yes (basic) | Yes (canary, blue/green)
Non-Container Workloads | Limited (via KubeVirt) | No | Yes (exec, Java, QEMU, raw_exec)
GUI Dashboard | Many options (Lens, Rancher, K9s) | Portainer | Nomad UI (built-in)
Helm/Package Ecosystem | Massive (thousands of charts) | None | Growing (Nomad Pack)
Community Size | Enormous | Small and shrinking | Moderate
Homelab Popularity | Very high (via K3s) | Moderate | Growing

What Container Orchestration Actually Does for You

Before diving into the three options, it is worth being clear about what problem you are solving. Container orchestration gives you:

Scheduling. You tell the orchestrator “run three copies of this container” and it figures out which nodes have enough CPU and RAM to place them. You stop thinking about which machine runs what.

Self-healing. When a container crashes or a node goes offline, the orchestrator notices and restarts the workload somewhere else. Your services stay up without you getting paged at 2 AM.

Service discovery and load balancing. Containers can find each other by name instead of hardcoded IP addresses. Traffic gets distributed across healthy instances automatically.

Rolling updates. You push a new image version and the orchestrator replaces containers one at a time, rolling back if health checks fail. No downtime during updates.

Declarative configuration. You describe what you want the system to look like, and the orchestrator makes it happen. If someone deletes a container, the orchestrator recreates it. If a node comes back online after a reboot, it rejoins the cluster and picks up workloads.

For a homelab with 5-10 containers on a single machine, you do not need any of this. Docker Compose on one box is fine. Orchestration starts paying for itself when you have multiple nodes and you want reliability without manual intervention.

Kubernetes: The Industry Standard

Kubernetes was designed by Google to orchestrate containers at planet scale. It is the standard that every cloud provider implements, every DevOps job listing requires, and every CNCF project integrates with. It is also massive. Full upstream Kubernetes was never designed for a three-node homelab.

That is where lightweight distributions come in. In 2026, the two dominant options for homelabs are:

  • K3s by SUSE (formerly Rancher Labs): strips out cloud provider code, replaces etcd with embedded SQLite (single-node) or embedded etcd (multi-node), and ships as a single binary under 100 MB. This is what most homelab Kubernetes users run.
  • Talos Linux by Sidero Labs: a purpose-built Linux distro that runs nothing except Kubernetes. No SSH, no shell, no package manager. You manage it entirely through an API. Extremely secure and immutable, but more opinionated.

What Makes K8s Shine in a Homelab

The ecosystem is the killer feature. Want to deploy Jellyfin, Nextcloud, Home Assistant, and 20 other apps? There are Helm charts for all of them. Want GitOps? Install Flux or ArgoCD and manage your entire cluster through a Git repository. Want monitoring? The kube-prometheus-stack Helm chart gives you Prometheus, Grafana, and Alertmanager in one command.
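As a concrete example, the monitoring stack mentioned above really is a couple of commands; a sketch using the official community chart (the release name and namespace are arbitrary choices):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace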

No other orchestrator comes close to this breadth of tooling.

What Makes K8s Painful in a Homelab

The abstraction layers are deep. You need to understand Pods, Deployments, Services, Ingress, PersistentVolumeClaims, ConfigMaps, Secrets, Namespaces, and ServiceAccounts before you can deploy a basic web app. Each concept is well-documented, but the sheer volume of concepts creates a steep learning curve.

Debugging is harder than with plain Docker. When a container does not start, you are running kubectl describe pod, checking events, reading logs from init containers, and inspecting resource quotas. With Docker, you run docker logs and see the error.
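A typical troubleshooting session looks something like this (the pod name is illustrative):

# Find the pod, inspect its events, then read its logs
kubectl -n media get pods
kubectl -n media describe pod jellyfin-7c9f8d6b5-abcde
kubectl -n media logs jellyfin-7c9f8d6b5-abcde --previous
kubectl -n media get events --sort-by=.metadata.creationTimestamp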

Minimal K3s Setup

Install K3s on your first node (this becomes the control plane):

# On node 1 (control plane)
curl -sfL https://get.k3s.io | sh -

# Verify it is running
sudo kubectl get nodes

Join additional worker nodes:

# Get the token from the control plane node
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node, replace TOKEN and CONTROL_PLANE_IP
curl -sfL https://get.k3s.io | K3S_URL=https://CONTROL_PLANE_IP:6443 K3S_TOKEN=TOKEN sh -

Deploy a basic application:

# jellyfin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
          volumeMounts:
            - name: config
              mountPath: /config
            - name: media
              mountPath: /media
      volumes:
        - name: config
          hostPath:
            path: /opt/jellyfin/config
        - name: media
          hostPath:
            path: /mnt/media
---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: media
spec:
  selector:
    app: jellyfin
  ports:
    - port: 8096
      targetPort: 8096
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
  namespace: media
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: jellyfin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
  tls:
    - hosts:
        - jellyfin.example.com
      secretName: jellyfin-tls

Create the namespace and apply the manifest:

kubectl create namespace media
kubectl apply -f jellyfin.yaml

That is 70 lines of YAML for one service. Kubernetes is verbose. Helm charts and Kustomize overlays help manage this, but you are still operating in a fundamentally more complex environment than Docker Compose.

Docker Swarm: The Built-In Option

Docker Swarm mode has been built into the Docker Engine since 2016. You do not install anything extra. You run docker swarm init and your single Docker host becomes a one-node Swarm cluster. Add more nodes with docker swarm join and you have a multi-node cluster that understands Docker Compose files natively.

What Makes Swarm Shine in a Homelab

Zero learning curve if you know Docker Compose. Your existing docker-compose.yml files work with Swarm using docker stack deploy. You add a few keys like deploy.replicas and deploy.placement, and your single-host Compose files become multi-node orchestrated deployments.

Minimal resource overhead. Swarm’s control plane is part of the Docker daemon. It adds almost nothing to memory consumption. On a Raspberry Pi cluster where every megabyte of RAM counts, this matters.

Simple networking. Swarm creates an overlay network that spans all nodes. Containers on different machines can talk to each other by service name, and Swarm handles the routing mesh so that any node can accept traffic for any service.
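For example, a service published on one port is reachable through every node in the swarm, even nodes that run none of its replicas (the IP is illustrative):

# Publish a test service, then hit any node in the swarm
docker service create --name whoami --replicas 2 --publish 8080:80 traefik/whoami
curl http://192.168.1.11:8080   # routing mesh forwards to a healthy replica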

What Makes Swarm Painful in a Homelab

Docker has effectively abandoned it. Docker Inc stopped active development on Swarm years ago. It still works, and it receives security patches, but there are no new features. The ecosystem has stagnated. There is no equivalent to Helm charts. Community tooling is limited to Portainer and a handful of unmaintained projects.

No auto-scaling. Swarm does not scale replicas based on CPU or memory usage. You set a fixed replica count and that is what you get.

Limited deployment strategies. Rolling updates work, but there is no canary deployment, no blue/green deployment, and no traffic splitting. If a rolling update goes wrong, you roll back manually.

Secret rotation requires redeployment. You cannot update a secret on a running service. You have to remove the old secret, create a new one, and redeploy.
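The usual workaround is to version your secrets and swap them in a single service update; a sketch with illustrative names:

# Create the new version, then swap it into the running service
echo "new-password" | docker secret create db_password_v2 -
docker service update \
  --secret-rm db_password_v1 \
  --secret-add source=db_password_v2,target=db_password \
  myapp_db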

Minimal Docker Swarm Setup

Initialize the swarm on your first node:

# On node 1 (manager)
docker swarm init --advertise-addr 192.168.1.10

This outputs a join command. Run it on your other nodes:

# On each worker node
docker swarm join --token SWMTKN-1-xxxxx 192.168.1.10:2377

For high availability, promote additional managers:

# Get the manager join token
docker swarm join-token manager

# On node 2 and node 3, run the manager join command
docker swarm join --token SWMTKN-1-xxxxx-manager-token 192.168.1.10:2377

Deploy a service using a Compose file:

# docker-compose.yml (Swarm stack)
version: "3.8"

services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - jellyfin_config:/config
      - /mnt/media:/media:ro
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.media == true
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
    networks:
      - homelab

  whoami:
    image: traefik/whoami
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
    networks:
      - homelab

networks:
  homelab:
    driver: overlay

volumes:
  jellyfin_config:
    driver: local

# Deploy the stack
docker stack deploy -c docker-compose.yml homelab

# Check running services
docker service ls

# View logs
docker service logs homelab_jellyfin

# Scale a service
docker service scale homelab_whoami=5

That is it. If you already know Docker Compose, you just learned 80% of Docker Swarm. The simplicity is genuinely appealing.

Nomad: The Lightweight Scheduler

HashiCorp Nomad is a workload orchestrator that takes a different approach from both Kubernetes and Swarm. Where Kubernetes tries to be an entire platform and Swarm tries to be invisible, Nomad focuses on being a great scheduler and lets you plug in other tools for everything else.

Nomad uses HCL (HashiCorp Configuration Language) for job definitions instead of YAML. It can orchestrate Docker containers, but also raw binaries, Java applications, QEMU virtual machines, and anything else you can run on Linux. This flexibility is unusual and genuinely useful in a homelab where not everything runs in a container.

What Makes Nomad Shine in a Homelab

Single binary, low overhead. The Nomad agent is one binary. It uses about 100 MB of RAM for the server and less for clients. There is no etcd, no CoreDNS, no kube-proxy. The operational surface area is tiny.

Mixed workloads. If you want to schedule a Docker container alongside a raw Go binary and a QEMU VM, Nomad handles all three. Kubernetes can only do this with significant extensions.

Operational simplicity. Upgrades are straightforward: replace the binary and restart. Cluster management is minimal. The built-in web UI shows job status, allocations, and logs without installing anything extra.

Excellent deployment strategies. Nomad supports canary deployments, blue/green deployments, and rolling updates with automatic rollback out of the box. For a homelab, rolling updates with health checks are the most useful, and Nomad handles them well.

What Makes Nomad Painful in a Homelab

Smaller ecosystem. There is no equivalent to Helm’s thousands of charts. You write your own job files. Nomad Pack exists but has a fraction of the packages available in Helm. You will be writing more configuration from scratch.

Service discovery requires Consul (or workarounds). Nomad’s built-in service discovery (added in v1.3) handles basic use cases, but for advanced service mesh features you need HashiCorp Consul running alongside Nomad. That is another binary to manage, another cluster to maintain, and another thing to debug when it breaks.
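For basic cases, the built-in catalog can be queried straight from the CLI (available since Nomad 1.3; the service name here is illustrative):

# List registered services and show where one is running
nomad service list
nomad service info jellyfin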

Licensing concerns. HashiCorp moved Nomad to the Business Source License (BSL 1.1) in 2023. For personal and small business use, this changes nothing. You can run it freely. But if you are building a competing product or hosting it as a service, the license restricts that. OpenTofu forked Terraform over this same license change, and some in the community are wary. For homelab use, the BSL license is not a practical issue.

Less community content. Kubernetes has thousands of blog posts, YouTube tutorials, and Stack Overflow answers for every problem. Nomad has good official documentation, but community content is thinner. When you hit an edge case, you may be reading source code or GitHub issues instead of a tutorial.

Minimal Nomad Setup

Install Nomad on all nodes:

# Add HashiCorp repository (Ubuntu/Debian)
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install nomad

Create a server configuration on your first node:

# /etc/nomad.d/server.hcl
datacenter = "homelab"
data_dir   = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 1  # Set to 3 for HA with 3 servers
}

client {
  enabled = true  # This node is both server and client

  host_volume "media" {
    path      = "/mnt/media"
    read_only = true
  }

  host_volume "jellyfin-config" {
    path      = "/opt/jellyfin/config"
    read_only = false
  }
}

plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}

Start Nomad:

sudo systemctl enable nomad
sudo systemctl start nomad

# Check cluster status
nomad server members
nomad node status

For worker nodes, use a client-only configuration:

# /etc/nomad.d/client.hcl
datacenter = "homelab"
data_dir   = "/opt/nomad/data"

client {
  enabled = true
  servers = ["192.168.1.10:4647"]
}

plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}

Deploy a job:

# jellyfin.nomad.hcl
job "jellyfin" {
  datacenters = ["homelab"]
  type        = "service"

  group "media" {
    count = 1

    volume "config" {
      type   = "host"
      source = "jellyfin-config"
    }

    volume "media" {
      type      = "host"
      source    = "media"
      read_only = true
    }

    network {
      port "http" {
        static = 8096
      }
    }

    service {
      name = "jellyfin"
      port = "http"

      check {
        type     = "http"
        path     = "/health"
        interval = "30s"
        timeout  = "5s"
      }
    }

    task "jellyfin" {
      driver = "docker"

      config {
        image = "jellyfin/jellyfin:latest"
        ports = ["http"]
      }

      volume_mount {
        volume      = "config"
        destination = "/config"
      }

      volume_mount {
        volume      = "media"
        destination = "/media"
      }

      resources {
        cpu    = 2000  # 2 GHz
        memory = 2048  # 2 GB
      }
    }
  }

  update {
    max_parallel     = 1
    health_check     = "checks"
    min_healthy_time = "30s"
    healthy_deadline = "5m"
    auto_revert      = true
  }
}

nomad job run jellyfin.nomad.hcl

# Check job status
nomad job status jellyfin

# View logs
nomad alloc logs <alloc-id>

The HCL syntax is more verbose than a Docker Compose file but less verbose than Kubernetes YAML. The update stanza with auto_revert = true is a standout feature: if a deployment fails health checks, Nomad automatically rolls back to the previous version.

Learning Curve Comparison

This is where the three options diverge most dramatically for homelab users.

Docker Swarm: Days

If you can write a Docker Compose file, you can use Swarm. The additional concepts are: managers vs workers, overlay networks, stack deploy vs compose up, and the deploy key in Compose files. You can learn all of this in a weekend afternoon.

Core commands to learn: docker swarm init, docker swarm join, docker stack deploy, docker service ls, docker service logs, docker service scale.

Nomad: Weeks

Nomad introduces new concepts: jobs, task groups, tasks, allocations, evaluations, deployments, and the HCL configuration language. None of these are difficult individually, but together they form a new mental model. If you have used Terraform, HCL will feel familiar. If not, it takes a few days to get comfortable.

The official Learn Nomad tutorials on the HashiCorp website are excellent and will get you from zero to running in a few hours.

Kubernetes: Weeks to Months

Kubernetes has the steepest learning curve of any infrastructure tool in common use. The core concepts (Pods, Deployments, Services, Ingress, Namespaces, ConfigMaps, Secrets, PVCs) take a week or two to understand. Then you need to learn about Helm for package management, Ingress controllers for external access, storage classes for persistent data, RBAC for security, and the debugging workflow of kubectl describe, kubectl logs, and kubectl get events.

The payoff is real: once you understand Kubernetes, you understand the system that runs most of the world’s cloud infrastructure. But the honest investment for a homelab is 2-4 weekends of focused learning before you feel productive.

Learning Curve Summary

Milestone | Docker Swarm | Nomad | Kubernetes (K3s)
First service deployed | 30 minutes | 2 hours | 4 hours
Comfortable daily use | 1-2 days | 1-2 weeks | 3-4 weeks
Confident troubleshooting | 1 week | 2-3 weeks | 2-3 months
Advanced features mastered | 2 weeks | 1-2 months | 6+ months

Resource Overhead: What Each Platform Costs You

Homelab hardware is finite. Every megabyte of RAM the orchestrator uses is a megabyte you cannot use for Jellyfin, Nextcloud, or your game server. Here is what each platform actually consumes.

Docker Swarm

Swarm’s control plane is part of the Docker daemon. On a manager node, expect Docker to use about 100-150 MB of RAM total (daemon + Swarm management). On worker nodes, the overhead is negligible because they are just running the standard Docker daemon.

Nomad

The Nomad server process uses about 100-150 MB of RAM in a small cluster. Client agents use about 50-70 MB. If you run Consul alongside it for service discovery, add another 100-150 MB for Consul. Total for a combined server+client+consul node: roughly 300-350 MB.

Kubernetes (K3s)

K3s is significantly lighter than upstream Kubernetes. The K3s server process uses about 512 MB of RAM on a control plane node. Worker nodes running the K3s agent use about 200-300 MB. On top of that, system components like CoreDNS, Traefik (included by default), and the local-path-provisioner add another 100-200 MB. Total for a control plane node: roughly 700 MB-1 GB.

Full upstream Kubernetes with etcd, kube-apiserver, kube-controller-manager, kube-scheduler, CoreDNS, and kube-proxy can easily consume 2-3 GB of RAM on control plane nodes. Do not run upstream K8s in a homelab. Use K3s or Talos.

Resource Overhead Summary

Platform | Control Plane RAM | Worker Node RAM | Disk Usage
Docker Swarm | ~150 MB | ~50 MB | ~0 (part of Docker)
Nomad (+ Consul) | ~300 MB | ~120 MB | ~200 MB
K3s | ~700 MB | ~250 MB | ~500 MB
Upstream K8s | ~2.5 GB | ~500 MB | ~2 GB

For a cluster of Raspberry Pi 4 boards with 4 GB of RAM each, Swarm leaves the most room for actual workloads. For mini PCs with 16-32 GB of RAM, the overhead difference is irrelevant and you should choose based on features.

High Availability and Failover

Docker Swarm

Swarm uses the Raft consensus protocol for manager nodes. You need an odd number of managers (3 or 5) for high availability. If a manager goes down, the remaining managers elect a new leader. If a worker goes down, the manager reschedules its containers on healthy workers.

Failover is automatic but not instant. It takes about 5-15 seconds for Swarm to detect a failed node and another 10-30 seconds to reschedule containers. Total recovery time is typically under a minute.

Nomad

Nomad also uses Raft for server consensus. Three server nodes provide high availability. Client failures are detected via heartbeats (default: 30 second TTL). When a client is marked as down, Nomad reschedules its allocations to healthy clients.

Nomad’s failover is comparable to Swarm’s in speed but offers more control. You can tune heartbeat intervals, set different rescheduling policies per job, and define affinities and constraints for placement.

Kubernetes (K3s)

K3s uses embedded etcd for multi-node HA (or an external database). Three control plane nodes provide high availability. Pod rescheduling after a node failure depends on the node-monitor-grace-period (default: 40 seconds) and pod-eviction-timeout (default: 5 minutes). This means Kubernetes is actually the slowest to recover from node failures in its default configuration.

You can tune these timers down significantly for homelab use. Setting node-monitor-grace-period to 20 seconds and pod-eviction-timeout to 30 seconds gives you recovery times comparable to Swarm and Nomad. K3s makes these adjustments easier than upstream Kubernetes.
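With K3s, controller-manager flags can be passed at install time; a minimal sketch assuming the default install script (exact flags for pod eviction timing vary by Kubernetes version, so check your release):

# Shorten node-failure detection on the control plane node
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --kube-controller-manager-arg=node-monitor-grace-period=20s" sh -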

Networking and Service Discovery

Docker Swarm

Swarm creates overlay networks that span all nodes using VXLAN. Services are reachable by name on the overlay network. The routing mesh ensures that any node in the swarm can accept traffic on a published port and route it to the correct container, even if the container runs on a different node.

This works well for simple setups but has known performance overhead. VXLAN encapsulation adds latency and reduces throughput by 10-20% compared to host networking. For homelab traffic volumes, this is not noticeable.

Nomad

Nomad provides basic service discovery through its built-in service catalog (since v1.3). Services register themselves and can be queried by other tasks in the same Nomad cluster. For more advanced use cases like service mesh, traffic routing, and health-check-aware load balancing, you need Consul.

Nomad does not create overlay networks. Containers get ports mapped on the host, and service discovery provides the host IP and port. This is simpler and has no encapsulation overhead, but it means you need to think about port conflicts.

Kubernetes

Kubernetes assigns each pod its own IP address from a cluster CIDR range. Pods can communicate directly with each other without NAT. Services provide stable IP addresses and DNS names that load-balance across pods. This model is the most sophisticated and the most complex.

K3s uses Flannel as the default CNI (Container Network Interface) plugin, which uses VXLAN similar to Swarm. You can replace it with Cilium for eBPF-based networking that provides better performance and observability, though this adds complexity.

Storage and Persistent Data

This is the Achilles' heel of all container orchestrators in a homelab. When an orchestrator reschedules a container from node A to node B, the data needs to follow. Each platform handles this differently.

Docker Swarm

Swarm volumes are local by default. If a container moves to a different node, its data does not follow. You can use NFS-backed volumes or a volume plugin like REX-Ray, but the ecosystem for Swarm storage plugins has largely dried up. Most homelab Swarm users pin stateful services to specific nodes using placement constraints.
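Pinning works by labeling the node that holds the data and constraining the service to it; a sketch with illustrative node and service names:

# Label the storage node, then constrain the service to it
docker node update --label-add media=true node1
docker service update --constraint-add 'node.labels.media == true' homelab_jellyfin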

Nomad

Nomad supports host volumes (local to the node) and CSI (Container Storage Interface) volumes. For homelab use, host volumes with placement constraints are the most common approach. Nomad also supports preemption and migration strategies that give you more control over when and how stateful workloads move.

Kubernetes

Kubernetes has the richest storage ecosystem. PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) abstract storage from workloads. For homelab use, popular options include:

  • Longhorn by SUSE: distributed block storage that replicates data across nodes. Easy to install, works well with K3s.
  • NFS via the NFS CSI driver: simple, widely understood, works with any NAS.
  • Local-path-provisioner (default in K3s): local storage with no replication. Simple but data does not move with pods.
  • Rook-Ceph: enterprise-grade distributed storage. Powerful but resource-hungry and complex. Overkill for most homelabs.

Longhorn is the sweet spot for homelab Kubernetes. It provides data replication across nodes with automatic failover and uses about 500 MB of RAM per node.
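Installing Longhorn is a standard Helm deployment; a sketch using the official chart repository:

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace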

Ecosystem and Community

Kubernetes

The Kubernetes ecosystem is vast. As of 2026, the CNCF landscape includes over 1,000 projects that integrate with Kubernetes. Helm has tens of thousands of charts. ArtifactHub lists charts for nearly every self-hosted application. Operators (custom controllers that automate application management) exist for databases, message queues, monitoring stacks, and more.

Community support is unmatched. Every problem you encounter has been encountered and solved by someone else. Reddit’s r/kubernetes and r/homelab, the Kubernetes Slack, and Stack Overflow have active communities.

Docker Swarm

Swarm’s ecosystem peaked around 2018-2019 and has been declining since. Docker Inc refocused on Docker Desktop and Docker Hub, leaving Swarm in maintenance mode. Portainer is the primary third-party tool, providing a web GUI for managing Swarm clusters.

There are no package managers, no operator frameworks, and limited community tooling. You will write your own Compose files for everything. This is not necessarily bad — it keeps things simple — but it means more work when deploying complex applications.

Nomad

Nomad’s ecosystem is smaller than Kubernetes but growing. Nomad Pack provides a package manager with community-contributed packs for common applications. The HashiCorp ecosystem (Consul for service discovery, Vault for secrets, Waypoint for deployments) integrates tightly.

Community activity has increased in 2025-2026 as users look for Kubernetes alternatives that are less complex. The Nomad section of the HashiCorp forums is active, and several YouTube channels now cover Nomad homelab setups.

Minimal Setup: Getting Each One Running

Here is the absolute minimal path from zero to a working cluster for each platform. Each assumes you have three Ubuntu 24.04 machines with Docker installed.

Docker Swarm: 2 Minutes

# Node 1 (manager)
docker swarm init --advertise-addr 192.168.1.10

# Copy the join token from the output, then on Nodes 2 and 3:
docker swarm join --token SWMTKN-1-xxx 192.168.1.10:2377

# Back on Node 1, verify:
docker node ls

Done. Deploy services with docker stack deploy.

Nomad: 15 Minutes

# All nodes: install Nomad
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install nomad

# Node 1: create server config and start
# (see server.hcl example above)
sudo systemctl start nomad

# Nodes 2 and 3: create client config pointing to Node 1 and start
sudo systemctl start nomad

# Verify from any node
nomad server members
nomad node status

Access the Nomad UI at http://192.168.1.10:4646. Deploy jobs with nomad job run.

Kubernetes (K3s): 10 Minutes

# Node 1 (control plane)
curl -sfL https://get.k3s.io | sh -

# Get the join token
sudo cat /var/lib/rancher/k3s/server/node-token

# Nodes 2 and 3 (workers)
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=xxx sh -

# Back on Node 1, verify:
sudo kubectl get nodes

K3s includes Traefik as an ingress controller and local-path-provisioner for storage out of the box. Deploy applications with kubectl apply.

When to Pick Each One

Pick Docker Swarm When

  • You have 2-5 nodes and fewer than 15 services.
  • You want the fastest possible setup with the least learning.
  • Your services are all Docker containers with simple networking needs.
  • You are comfortable with a platform that will not get new features.
  • You want to reuse your existing Docker Compose files with minimal changes.
  • You are running resource-constrained hardware like Raspberry Pis.

Pick Nomad When

  • You want more features than Swarm without the complexity of Kubernetes.
  • You need to run non-container workloads (raw binaries, VMs).
  • You value operational simplicity and easy upgrades.
  • You want canary deployments and automatic rollbacks.
  • You are already in the HashiCorp ecosystem (Terraform, Vault, Consul).
  • You have 3-10 nodes and a moderate number of services.

Pick Kubernetes (K3s) When

  • You want to learn the industry standard for career development.
  • You plan to run 20+ services and want access to Helm charts.
  • You want GitOps with Flux or ArgoCD.
  • You want the richest ecosystem for monitoring, storage, and networking.
  • You have at least 8 GB of RAM per node (16 GB preferred).
  • You are willing to invest weeks of learning upfront for long-term payoff.

FAQ

Can I run Kubernetes on a Raspberry Pi?

Yes, K3s runs well on Raspberry Pi 4 (4 GB or 8 GB models). Pi 3 boards are too limited. Expect to run 5-10 lightweight services on a 3-node Pi cluster. For media-heavy workloads or anything that needs transcoding, use x86 hardware.

Is Docker Swarm dead?

Not dead, but on life support. It still works, it still receives security updates, and it still solves real problems. But Docker Inc is not adding features, the community has largely moved to Kubernetes, and new tools rarely support Swarm. If you start a new homelab in 2026, Swarm is still a valid choice for simple setups, but you should be aware that the ecosystem will continue to shrink.

Does Nomad require Consul and Vault?

No. Nomad’s built-in service discovery (since v1.3) handles basic service registration and discovery without Consul. Nomad also has basic secret management through its Variables feature. You only need Consul if you want a service mesh or advanced health checking, and Vault if you need dynamic secrets or complex secret management workflows.
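Variables are managed through the nomad var subcommands in recent Nomad releases (1.4+); the path and key here are illustrative:

# Store a value and read it back
nomad var put nomad/jobs/jellyfin api_key=changeme
nomad var get nomad/jobs/jellyfin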

Can I migrate from one orchestrator to another?

Yes, but it is manual work. Your application containers are the same regardless of orchestrator. What changes is the configuration format and the deployment tooling. Moving from Swarm to K3s means rewriting your Compose files as Kubernetes manifests. Moving from Nomad to K3s means rewriting HCL job files as YAML manifests. The applications themselves do not change.

What about Podman and its pod management?

Podman is a container runtime, not an orchestrator. It can run pods (groups of containers) on a single host, but it does not schedule across multiple nodes, handle failover, or provide service discovery. Podman is an excellent Docker replacement for single-host setups, but it does not compete in the orchestration space.

Which one is best for a single-node homelab?

None of them. If you have a single machine, use Docker Compose. Container orchestration solves multi-node problems. Adding an orchestrator to a single node adds complexity without benefit. Wait until you have at least two machines before considering orchestration.

How do I handle persistent storage across nodes?

All three orchestrators struggle with this. The simplest solution for homelabs is NFS: run an NFS server on your NAS or one dedicated machine, and mount NFS volumes on each node. For Kubernetes specifically, Longhorn provides distributed block storage that replicates data across nodes automatically. For Swarm and Nomad, NFS or pinning stateful services to specific nodes are the most common approaches.
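With Docker (and therefore Swarm), an NFS export can be mounted as a named volume using the built-in local driver; a sketch, assuming an NFS server at 192.168.1.5 exporting /export/media:

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.5,rw \
  --opt device=:/export/media \
  media_nfs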

Can I run all three side by side to test them?

Yes. Swarm uses ports 2377, 7946, and 4789. Nomad uses 4646-4648. K3s uses 6443 and a range of high ports. They do not conflict. You can run all three on the same set of machines for testing, though you should only use one in production to avoid confusion about which system manages which container.

Conclusion

There is no universally correct choice. The right orchestrator depends on your goals, your hardware, and how much time you want to spend learning infrastructure.

Docker Swarm gets you multi-node container orchestration in two minutes with tools you already know. It is the pragmatic choice for small setups where simplicity is the priority. Its declining ecosystem is a real concern for long-term use, but for a homelab that runs a dozen services, it may be all you ever need.

Nomad occupies a genuinely useful middle ground. It is more capable than Swarm, simpler than Kubernetes, and uniquely flexible in what it can orchestrate. If you want professional-grade deployment strategies without the Kubernetes learning curve, Nomad is worth your time. The HashiCorp ecosystem integration is a bonus if you already use Terraform or Vault.

Kubernetes (K3s) is the long-term bet. The ecosystem is unmatched, the skills transfer directly to professional environments, and the tooling keeps improving. K3s makes it feasible on homelab hardware, and Helm charts make it practical for deploying applications. The learning investment is significant but the payoff is real.

If you are starting your first homelab cluster in 2026 and are not sure which to pick: start with Docker Swarm to understand what orchestration does for you. If you outgrow it or want more features, move to K3s. If Kubernetes feels like too much and you want a capable middle option, try Nomad.

The containers are the same regardless. The images are the same. The applications are the same. Only the management layer changes. Pick one, deploy your services, and iterate.