
Kubernetes Cluster From Scratch

A production-ready Kubernetes cluster deployment with one control plane node and two worker nodes

📅 Duration: 3-4 hours
🔧 Tools: kubeadm, Calico, containerd
💻 Platform: Ubuntu 22.04 LTS

Overview

This project demonstrates the deployment of a production-ready Kubernetes cluster built from scratch using kubeadm. The cluster consists of one control plane node and two worker nodes, configured with Calico CNI for networking, and includes security hardening, monitoring setup, and sample application deployments.

This hands-on implementation showcases deep understanding of Kubernetes architecture, container networking, and cluster administration - essential skills for managing containerized workloads at scale.

Architecture

┌──────────────────────────────────────────────────────────┐
│                    Kubernetes Cluster                    │
│                                                          │
│  ┌──────────────────────────────────────────────┐        │
│  │         Control Plane Node (Master)          │        │
│  │                                              │        │
│  │  • kube-apiserver                            │        │
│  │  • kube-controller-manager                   │        │
│  │  • kube-scheduler                            │        │
│  │  • etcd (cluster state)                      │        │
│  │  • kubelet                                   │        │
│  │  • kube-proxy                                │        │
│  │                                              │        │
│  │  IP: 192.168.1.10                            │        │
│  └──────────────────────────────────────────────┘        │
│                            │                             │
│                            │ Pod Network (Calico CNI)    │
│                            │                             │
│         ┌──────────────────┴───────────┐                 │
│         │                              │                 │
│  ┌──────▼───────────┐         ┌────────▼─────────┐       │
│  │     Worker-1     │         │     Worker-2     │       │
│  │                  │         │                  │       │
│  │  • kubelet       │         │  • kubelet       │       │
│  │  • kube-proxy    │         │  • kube-proxy    │       │
│  │  • containerd    │         │  • containerd    │       │
│  │                  │         │                  │       │
│  │  IP: 192.168.1.11│         │  IP: 192.168.1.12│       │
│  └──────────────────┘         └──────────────────┘       │
│                                                          │
└──────────────────────────────────────────────────────────┘

Prerequisites

• Three Ubuntu 22.04 LTS machines (VMs or bare metal): one control plane node and two worker nodes
• At least 2 vCPUs and 2 GB RAM per node (the kubeadm minimums for the control plane)
• Static IPs on a shared network (192.168.1.10-12 in this setup) with full connectivity between nodes
• A unique hostname, MAC address, and product_uuid on each node
• sudo access on every node and outbound internet access for package downloads

Step-by-Step Implementation

1. Initial System Setup (All Nodes)

First, prepare all nodes by disabling swap, configuring kernel modules, and setting up networking parameters.

# Disable swap (Kubernetes requirement)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl parameters for Kubernetes networking
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
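A quick sanity check that the modules and sysctl settings actually took effect is cheap insurance before moving on (optional):

# Confirm the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Confirm the sysctl values are applied (all three should print 1)
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward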

2. Install Container Runtime (containerd)

Install and configure containerd as the container runtime. Kubernetes requires a container runtime to run containers in pods.

# Install containerd
sudo apt-get update
sudo apt-get install -y containerd

# Generate the default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Enable the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
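To confirm the runtime is healthy and the cgroup change was picked up, a couple of optional checks:

# containerd should report "active"
sudo systemctl is-active containerd

# Client and server versions should both print without errors
sudo ctr version

# Double-check the cgroup driver setting
grep SystemdCgroup /etc/containerd/config.toml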

3. Install Kubernetes Components

Install kubeadm, kubelet, and kubectl on all nodes. These are the core Kubernetes components needed for cluster operation.

# Add Kubernetes apt repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

# Ensure the keyrings directory exists (usually already present on Ubuntu 22.04)
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install Kubernetes components
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet service
sudo systemctl enable kubelet
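Before initializing anything, it is worth confirming that all three components report the same minor version on every node:

# All three should report v1.28.x
kubeadm version -o short
kubectl version --client
kubelet --version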

4. Initialize Control Plane

On the control plane node, initialize the Kubernetes cluster using kubeadm. This sets up all the control plane components.

# Initialize the cluster (run on control plane node only)
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=<CONTROL_PLANE_IP>

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify control plane components
kubectl get nodes
kubectl get pods -n kube-system
📝 Note: Save the kubeadm join command printed by kubeadm init. You'll need it to join worker nodes to the cluster.
⚠️ Important: Calico's default pod CIDR, 192.168.0.0/16, overlaps the node IPs used in this architecture (192.168.1.x). If your nodes sit inside that range, pass a non-overlapping CIDR such as 10.244.0.0/16 to kubeadm init and update the cidr field in Calico's custom-resources.yaml (step 5) to match.

5. Install Calico CNI Network Plugin

Deploy Calico as the Container Network Interface (CNI) to enable pod-to-pod networking across the cluster.

# Install Calico CNI (run on control plane)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

# Wait for Calico pods to be ready
kubectl get pods -n calico-system --watch

# Verify networking
kubectl get nodes
# All nodes should show "Ready" status once Calico is running
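As an alternative to watching the pod list by hand, kubectl can block until every node reports Ready; this is a convenience, not a requirement:

# Block until all nodes report Ready (gives up after 5 minutes)
kubectl wait --for=condition=Ready node --all --timeout=300s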

6. Join Worker Nodes

Use the join command from step 4 to add worker nodes to the cluster. Run this on each worker node.

# Run on each worker node (use the actual token from kubeadm init output)
sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Verify from the control plane
kubectl get nodes
kubectl get pods --all-namespaces
⚠️ Important: If the join token expires (valid for 24 hours), generate a new one on the control plane using: kubeadm token create --print-join-command
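Freshly joined workers show ROLES as <none> in kubectl get nodes. Labeling them is purely cosmetic but makes output easier to read; this sketch assumes the node names worker-1 and worker-2 from the architecture diagram:

# Optional: label workers so ROLES shows "worker"
kubectl label node worker-1 node-role.kubernetes.io/worker=
kubectl label node worker-2 node-role.kubernetes.io/worker=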

7. Security Hardening

Implement security best practices to harden the cluster against potential threats.

# Create a dedicated namespace for applications
kubectl create namespace production

# Set up RBAC - create a read-only role for pods
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF

# Enable Pod Security Standards
kubectl label namespace production pod-security.kubernetes.io/enforce=baseline

# Configure Network Policies (example: deny all ingress by default)
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
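Note that a Role grants nothing on its own; it must be attached to a subject with a RoleBinding. A minimal sketch, assuming a hypothetical user named jane:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane   # hypothetical user for illustration
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

# Verify the binding without logging in as the user
kubectl auth can-i list pods --as=jane -n production    # should print "yes"
kubectl auth can-i delete pods --as=jane -n production  # should print "no"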

8. Deploy Sample Application

Deploy a sample NGINX application with resource limits and a NodePort service to demonstrate cluster functionality.

# Create a deployment with 3 replicas
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: production
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
EOF

# Verify deployment
kubectl get deployments -n production
kubectl get pods -n production
kubectl get svc -n production
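Because the Service is of type NodePort, the application should be reachable on any node's IP at the port Kubernetes assigns from the 30000-32767 range. For example:

# Look up the assigned NodePort
NODE_PORT=$(kubectl get svc nginx-service -n production -o jsonpath='{.spec.ports[0].nodePort}')

# Any node IP works; using Worker-1 from the architecture above
curl http://192.168.1.11:${NODE_PORT}
# Expect the NGINX welcome page HTML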

Verification and Testing

Verify that the cluster is functioning correctly by running several diagnostic commands:

# Check cluster health
kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl cluster-info

# Test DNS resolution from inside the cluster
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default

# Verify pod-to-pod networking across nodes
kubectl exec -it <nginx-pod-name> -n production -- curl <another-pod-ip>

# Check resource usage (requires metrics-server; see below)
kubectl top nodes
kubectl top pods -n production
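kubectl top only works once metrics-server is running, which kubeadm does not install by default. One way to add it is shown below; note that on kubeadm clusters with self-signed kubelet certificates you typically also have to add the --kubelet-insecure-tls flag to the metrics-server container args:

# Install metrics-server (official manifest)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml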

Key Achievements

• Bootstrapped a three-node cluster (one control plane, two workers) from bare Ubuntu 22.04 installs using kubeadm
• Configured containerd with the systemd cgroup driver as the container runtime
• Deployed Calico CNI for cross-node pod networking
• Hardened the cluster with RBAC, Pod Security Standards, and a default-deny network policy
• Ran a replicated NGINX workload with resource limits behind a NodePort service

Common Troubleshooting

Issue: Nodes stuck in "NotReady" state

Solution: Check CNI plugin installation and ensure all pods in kube-system namespace are running.

kubectl get pods -n kube-system
kubectl describe node <node-name>
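If everything in kube-system looks healthy, the kubelet logs on the affected node usually point at the real problem:

# Run on the NotReady node itself
sudo journalctl -u kubelet --no-pager --since "10 minutes ago"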

Issue: Pods can't communicate across nodes

Solution: Verify Calico installation and check firewall rules.

kubectl get pods -n calico-system
calicoctl node status   # if calicoctl is installed
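Since Calico was installed through the Tigera operator in step 5, the operator also publishes a cluster-wide status summary worth checking. On the firewall side, confirm inter-node rules permit Calico's data plane: BGP on TCP 179, plus IP-in-IP (protocol 4) or VXLAN (UDP 4789), depending on the encapsulation in use.

# Each component should report AVAILABLE=True
kubectl get tigerastatus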

Issue: kubeadm join fails

Solution: Ensure the control plane ports are open: 6443 (API server), 2379-2380 (etcd), 10250 (kubelet), 10257 (kube-controller-manager), and 10259 (kube-scheduler). Also confirm the join token hasn't expired (see step 6).
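If a host firewall is active, the control plane ports can be opened explicitly. A sketch assuming ufw is the firewall in use (adjust for firewalld or raw iptables):

sudo ufw allow 6443/tcp       # Kubernetes API server
sudo ufw allow 2379:2380/tcp  # etcd client and peer traffic
sudo ufw allow 10250/tcp      # kubelet API
sudo ufw allow 10257/tcp      # kube-controller-manager
sudo ufw allow 10259/tcp      # kube-scheduler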

Technologies Used

Kubernetes · kubeadm · Calico CNI · containerd · Ubuntu Linux · RBAC · Network Policies · kubectl

Conclusion

This project demonstrates comprehensive knowledge of Kubernetes cluster deployment, configuration, and management. Building a cluster from scratch provides deep insights into Kubernetes architecture, networking, security, and troubleshooting - skills essential for managing production Kubernetes environments.

The hands-on experience gained through this implementation translates directly to managing enterprise Kubernetes deployments, whether on-premises or in cloud environments like EKS, AKS, or GKE.
