Kubernetes in Docker (KinD): Setting Up a k8s Cluster in Under a Minute
Looking back on my early Kubernetes days, I still recall the pain of creating clusters ‘the hard way’, spending almost a week just to get a proof of concept running. Every step, from configuring worker nodes to setting up the control-plane components, was manual, error-prone and time-consuming: generating and distributing TLS certificates, standing up a highly available etcd, and so on. Whenever I had to test a configuration change or reproduce a problem, the real frustration was having to walk through those same tiresome steps all over again. This made fast iteration on both Kubernetes components and application development almost impossible.
Although managed services such as EKS and GKE clearly addressed the operational overhead issue, and we could create a cluster in minutes instead of days, we were now limited to cloud infrastructure for even basic testing purposes. More importantly, the control plane became a complete black box — great for production workloads, but troublesome when you need to know how Kubernetes behaves under particular conditions or when debugging difficult cluster-level problems. This is precisely where Kubernetes in Docker (KinD) becomes invaluable for practitioners like us.
KinD bridges this gap by allowing you to run complete, multi-node Kubernetes clusters locally using Docker containers as nodes. What makes it particularly appealing from an SRE perspective is that you get full visibility into all cluster components while maintaining the speed and reproducibility that managed services offer. Rather than having to wait for cloud resources to provision or deal with the complexity of virtual machine orchestration, you can spin up a realistic Kubernetes environment in under a minute on your laptop. This means you can quickly test configuration changes, reproduce production issues in isolation, validate deployment strategies and even experiment with cluster failure scenarios — all without touching your production infrastructure or racking up cloud costs.
Let’s explore how to use KinD for various scenarios, from basic single-node setups to more complex configurations.
Setting up a Single-Node Cluster
A single-node cluster is the simplest KinD configuration — perfect for quick development and testing. In this setup, a single container acts as both control plane and worker node.
Prerequisites
Before getting started, you’ll need:
- Docker installed and running on your system
- KinD installed:
- macOS: brew install kind
- Linux (x86-64): [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64, then chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
- kubectl installed to interact with your cluster:
- Linux (x86-64): curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl", then chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
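Before moving on, a quick sanity check confirms that everything is installed and on your PATH (the version numbers will differ on your machine):
# docker version --format '{{.Server.Version}}'
# kind version
# kubectl version --client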
Creating a Basic Single-Node Cluster in Under a Minute!
The simplest way to create a KinD cluster is with a single command:
# time kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.32.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
real 0m19.472s
user 0m0.887s
sys 0m0.783s
This command performs several operations:
- Creates a Docker network for the cluster
- Launches a container with the Kubernetes control-plane components
- Configures kubectl to communicate with the new cluster
- Sets up cluster networking and CoreDNS
To verify if your cluster is running correctly:
# List your KinD clusters:
# kind get clusters
kind
# Switch kubectl to the new cluster's context (kind sets this automatically on creation):
# kubectl config use-context kind-kind
# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:45071
CoreDNS is running at https://127.0.0.1:45071/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# View the single node in your cluster:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 4h38m v1.32.2
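As a final check that the cluster can actually schedule workloads, a minimal smoke test looks like this (it assumes your machine can pull the public nginx image):
# kubectl create deployment hello --image=nginx
# kubectl wait deployment/hello --for=condition=Available --timeout=120s
# kubectl get pods -o wide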
Creating a Customized Single-Node Cluster
For more control, you can define your cluster using a YAML configuration file:
# single-node-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: custom-single-node
nodes:
- role: control-plane
  # Add custom port mappings to access services easily
  extraPortMappings:
  - containerPort: 30000
    hostPort: 8080
Create this cluster with:
kind create cluster --config single-node-config.yaml
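Once this cluster is up, the port mapping can be exercised end to end. The deployment and service names below are just placeholders, and the example assumes the public nginx image:
# kubectl create deployment web --image=nginx
# kubectl create service nodeport web --tcp=80:80 --node-port=30000
# curl http://localhost:8080     # host port 8080 -> node port 30000 -> pod port 80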
The configuration approach gives you much more flexibility, allowing you to specify:
- Custom node configurations
- Port mappings between host and containers
- Volume mounts for persistent data
- Container resource limits
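As a sketch of that flexibility (the host path below is just an example), the earlier config can be extended with a volume mount, so hostPath volumes inside the cluster can be backed by a directory on your laptop:
# single-node-extended.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: custom-single-node
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 8080
  # Mount a host directory into the node container
  extraMounts:
  - hostPath: /tmp/kind-data
    containerPath: /data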
Creating a Multi-Node Cluster
While a single-node cluster works well for basic testing, multi-node clusters better simulate real-world environments. They allow you to test scenarios involving node failures, workload distribution and control-plane high availability.
Basic Multi-Node Configuration
Here’s a simple multi-node configuration with one control plane and two workers:
# multi-node-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: multi-node
nodes:
- role: control-plane
- role: worker
- role: worker
Create this cluster with:
# time kind create cluster --config multi-node-config.yaml
Creating cluster "multi-node" ...
✓ Ensuring node image (kindest/node:v1.32.2) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-multi-node"
You can now use your cluster with:
kubectl cluster-info --context kind-multi-node
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
real 0m31.976s
user 0m1.582s
sys 0m1.554s
After creation, you’ll see three nodes when running kubectl get nodes:
- One node labeled with the control-plane role
- Two nodes labeled as workers
This separation allows you to:
- Test node affinity and anti-affinity rules
- Simulate node failures by deleting worker containers
- Observe how Kubernetes reschedules pods when nodes are unavailable
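For example, with the multi-node cluster above you can "fail" a worker simply by stopping its container. KinD typically names node containers after the cluster, so check docker ps if the names below do not match your setup:
# docker ps --format '{{.Names}}'     # list the node containers
# docker stop multi-node-worker       # simulate a worker failure
# kubectl get nodes                   # the stopped node goes NotReady after a short delay
# kubectl get pods -A -o wide         # after the eviction timeout, pods are recreated on the remaining nodes
# docker start multi-node-worker      # bring the node back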
Networking Between Nodes
KinD automatically configures networking between nodes, simulating a real cluster environment. You can test service discovery and pod-to-pod communication across nodes just as you would in a production environment.
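A quick way to verify this is to spread a deployment across the workers and hit its service from a throwaway pod. nginx and busybox are used here purely as convenient public images:
# kubectl create deployment echo --image=nginx --replicas=3
# kubectl expose deployment echo --port=80
# kubectl get pods -o wide            # replicas typically land on different workers
# kubectl run probe --rm -it --image=busybox --restart=Never -- wget -qO- http://echo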
Building Custom Node Images
One of KinD’s powerful features is the ability to use custom node images. This allows you to test with specific Kubernetes versions or include additional tools inside your nodes.
Understanding KinD Node Images
KinD nodes are Docker images that contain:
- A Linux distribution (typically Ubuntu)
- Containerized systemd for running system services
- Kubernetes components (kubelet, kubeadm, etc.)
- Container runtime (containerd)
The official images are named kindest/node and tagged by Kubernetes version (e.g., kindest/node:v1.27.1).
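Because each node is an ordinary container, you can inspect these components directly. The container name below assumes the default cluster name kind:
# docker exec kind-control-plane systemctl status kubelet --no-pager   # kubelet runs under systemd inside the node
# docker exec kind-control-plane crictl ps                             # workload containers managed by containerd
# docker exec -it kind-control-plane bash                              # or open an interactive shell to explore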
Building a Custom Node Image
To build a custom node image, you’ll need to:
- Clone the KinD repository:
git clone https://github.com/kubernetes-sigs/kind
cd kind
- Build the KinD tool if needed:
go build
- Build a node image for a specific Kubernetes version:
# Build an image for Kubernetes v1.26.0
# kind build node-image v1.26.0
Detected build type: "release"
Building using release "v1.26.0" artifacts
Starting to build Kubernetes
Downloading "https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz"
WARNING: Using fallback version detection due to missing version file (This command works best with Kubernetes v1.31+)
Finished building Kubernetes
Building node image …
Building in container: kind-build-1746315440-1964057063
registry.k8s.io/kube-scheduler-amd64:v1.26.0 saved
application/vnd.docker.distribution.manifest.v2+json sha256:2b6c03ce8078e35779e7901530c88689ec11765deb76f4605e5947a14c9be10b
Importing elapsed: 4.3 s total: 0.0 B (0.0 B/s)
registry.k8s.io/kube-proxy-amd64:v1.26.0 saved
application/vnd.docker.distribution.manifest.v2+json sha256:548c5fa0b925b6c96eb67f51321956c0a1b924cde47e4733837b0c0072c4894a
Importing elapsed: 5.0 s total: 0.0 B (0.0 B/s)
registry.k8s.io/kube-controller-manager-amd64:v1.26.0 saved
application/vnd.docker.distribution.manifest.v2+json sha256:8d7f2b0c25f2f1ea956455ee86ff49f213d8c4adebc5b8d85816147dce1f3e79
Importing elapsed: 7.4 s total: 0.0 B (0.0 B/s)
registry.k8s.io/kube-apiserver-amd64:v1.26.0 saved
application/vnd.docker.distribution.manifest.v2+json sha256:02610b70a7258f575f6ce0c48aca67dc723878f5956936b4d43ddc7a70203ed2
Importing elapsed: 7.5 s total: 0.0 B (0.0 B/s)
Image "kindest/node:v1.26.0" build completed.
The build process:
- Downloads Kubernetes binaries for the specified version
- Creates a base container image
- Installs necessary components
- Configures systemd and containerd
- Tags the resulting image as kindest/node:v1.26.0
Using Custom Images
Once you’ve built your custom image, specify it in your cluster configuration:
# custom-image-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.26.0
- role: worker
  image: kindest/node:v1.26.0
This approach allows you to:
- Test applications against specific Kubernetes versions
- Include custom tools or configurations in your node images
- Validate upgrade paths by using different versions for different nodes
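Creating a cluster from this config and checking the reported node versions confirms the custom image is actually in use:
# kind create cluster --config custom-image-config.yaml
# kubectl get nodes -o wide     # the VERSION column should show v1.26.0 on every node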
Exporting and Analyzing Logs
Log access is crucial when testing Kubernetes applications or troubleshooting issues. KinD provides a simple command for extracting logs from every component in your cluster.
Exporting All Cluster Logs
To export all logs from a KinD cluster:
# Specify a particular cluster if you have multiple
kind export logs ./my-cluster-logs --name multi-node
This command collects:
- Container logs for all Kubernetes components
- Systemd journals from all nodes
- kubelet logs
- Container runtime logs
- Control-plane component logs (API server, scheduler, etc.)
Understanding the Log Structure
The exported logs are organized in a directory structure:
my-cluster-logs/
├── docker-info.txt # Docker system information
├── kind-control-plane/ # Logs from control-plane node
│ ├── containers/ # Individual container logs
│ ├── journal.log # systemd journal
│ └── kubernetes-components/ # K8s component logs
└── kind-worker/ # Logs from worker nodes
├── containers/
├── journal.log
└── kubernetes-components/
This structured approach makes it easy to:
- Trace issues across components
- Debug node-specific issues
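Because everything lands as plain files, ordinary command-line tools are enough for a first pass; the paths below follow the directory layout shown above:
# grep -Ri "error" my-cluster-logs/kind-control-plane/ | less
# grep -RiE "oomkilled|evicted" my-cluster-logs/     # look for resource pressure across all nodes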
Conclusion
Because KinD is so lightweight, it is a good fit for scenarios that go well beyond simple local development. Looking at how established companies have folded KinD into their delivery pipelines shows its real potential.
CI/CD Pipeline Integration at Scale
Perhaps the most convincing real-world example is Linkerd’s presentation at KubeCon EU 2020, which described the project’s migration from a single, persistent GKE cluster driven by Travis CI to a setup running eight parallel KinD clusters via GitHub Actions. The old CI system needed roughly 45 minutes for serialized tests, followed by multi-hour backups, whereas the new KinD-based system finishes in under 10 minutes with full parallelization. This change illustrates how KinD can address the time and scalability problems that many SRE teams face.
Numerous organizations have adopted similar practices; Codefresh, for example, has documented how it builds ephemeral Kubernetes clusters for CI testing that spin up in about two minutes and are deleted almost immediately afterwards.
Testing Against Multiple Kubernetes Versions
Testing applications against several Kubernetes versions at once has emerged as a particularly powerful use case. As explained in Eficode’s extensive guide, teams can target specific versions with commands like kind create cluster --image "kindest/node:v1.16.4" to test compatibility as part of their CI pipeline. This addresses a problem many of us run into when production clusters are upgraded and applications unexpectedly break because of deprecated API versions or behavioral changes.
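A minimal sketch of that pattern in a CI job might simply loop over the versions you care about; the image tags below are examples, and run-my-tests.sh is a placeholder for your own test entry point:
for version in v1.30.0 v1.31.0 v1.32.2; do
  kind create cluster --name "test-${version//./-}" --image "kindest/node:${version}"
  kubectl cluster-info --context "kind-test-${version//./-}"
  ./run-my-tests.sh                                 # placeholder: your test suite here
  kind delete cluster --name "test-${version//./-}"
done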
