Kubernetes Multi-Master Cluster On Ubuntu: A Step-by-Step Guide


Let's dive into the process of setting up a Kubernetes multi-master cluster on Ubuntu. This setup is crucial for ensuring high availability and fault tolerance for your containerized applications. We'll walk through each step, making it easy to follow along and get your cluster up and running.

Prerequisites

Before we get started, make sure you have the following:

  • Ubuntu Servers: You'll need at least three Ubuntu servers. One will be the primary master, one will be the secondary master, and one or more will serve as worker nodes. Ensure these servers can communicate with each other over the network. Keep in mind that etcd needs a majority of members to stay available, so two masters alone do not tolerate the loss of one; for production, use three (or another odd number of) master nodes.
  • Root or Sudo Access: You need root or sudo privileges on all the servers to install and configure the necessary components.
  • Internet Connection: An active internet connection is required to download packages and dependencies.
  • Basic Linux Knowledge: Familiarity with basic Linux commands and concepts will be helpful.
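A little host preparation on every node saves trouble later. As a sketch of the standard kubeadm prerequisites: the kubelet refuses to start while swap is enabled, and pod networking needs the overlay and br_netfilter kernel modules plus a few sysctls:

```shell
# Disable swap now and on future boots (the kubelet requires swap off by default)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load the kernel modules needed for container networking, now and at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```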

Step 1: Install Container Runtime (Docker)

First, we need to install a container runtime on all nodes. Docker is a popular choice, and we'll use it for this guide. Note that since Kubernetes 1.24 removed the dockershim, the kubelet no longer talks to Docker directly: you can either rely on the containerd runtime that is installed alongside Docker, or install the cri-dockerd shim. Let's get Docker installed on all your servers. Here's how:

Update Package Index

Start by updating the package index:

sudo apt update

Install Required Packages

Next, install packages that allow apt to use a repository over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Add Docker's GPG Key

Add Docker's official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Set Up the Stable Docker Repository

Set up the stable repository:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

Now, install Docker Engine, containerd, and Docker Compose:

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin

Verify Docker Installation

Verify that Docker is installed correctly by running the hello-world image:

sudo docker run hello-world

If everything is set up correctly, you should see a message confirming the installation. Make sure you perform these steps on all your servers: the primary master, secondary master, and worker nodes.
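Because the kubelet will talk to containerd (installed alongside Docker) rather than Docker itself, make sure containerd's CRI plugin is enabled. The containerd.io package ships a config that disables CRI, so one common approach, sketched here, is to regenerate the default config and switch it to the systemd cgroup driver:

```shell
# Replace the stock config (which disables the CRI plugin) with containerd's full defaults
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
# Match the systemd cgroup driver that the kubelet expects
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```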

Step 2: Install kubeadm, kubelet, and kubectl

Now, let's install the Kubernetes tools: kubeadm, kubelet, and kubectl. These tools are essential for managing your Kubernetes cluster. Again, perform these steps on all nodes.

Add Kubernetes APT Repository

Add the Kubernetes APT repository. The legacy apt.kubernetes.io repository has been deprecated and frozen, so use the community-owned pkgs.k8s.io repository instead (replace v1.30 with the minor version you want to track):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install Kubernetes Tools

Install kubeadm, kubelet, and kubectl:

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The apt-mark hold command prevents these packages from being updated automatically, which can cause compatibility issues. This is a good practice to ensure your cluster remains stable.
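If you want every node on exactly the same Kubernetes version, you can pin the package version at install time. A sketch (the version string 1.30.2-1.1 is only an example; check what your repository actually offers first):

```shell
# List the versions available in the configured repository
apt-cache madison kubeadm
# Install one specific, matching version of all three tools, then hold them
sudo apt install -y kubelet=1.30.2-1.1 kubeadm=1.30.2-1.1 kubectl=1.30.2-1.1
sudo apt-mark hold kubelet kubeadm kubectl
```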

Configure cgroup driver

Ensure that the cgroup driver used by the kubelet is the same as the one used by Docker. Older Docker releases default to cgroupfs (newer releases on cgroup v2 hosts already default to systemd), while Kubernetes recommends systemd. To set Docker's cgroup driver explicitly, edit the /etc/docker/daemon.json file. If the file doesn't exist, create it:

sudo nano /etc/docker/daemon.json

Add the following configuration:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

Save the file and restart Docker:

sudo systemctl restart docker

Verify the cgroup driver:

docker info | grep -i cgroup

You should see Cgroup Driver: systemd in the output. Restart the kubelet service to apply the changes:

sudo systemctl restart kubelet

Step 3: Initialize the Primary Master Node

Now, let's initialize the Kubernetes cluster on the primary master node. This is where the control plane will reside.

Initialize the Cluster

Use kubeadm to initialize the cluster. Replace <advertise-address> with the IP address of your primary master node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint="<advertise-address>:6443" --upload-certs

The --pod-network-cidr flag specifies the IP address range for pods in the cluster. The --control-plane-endpoint flag specifies the address that other nodes will use to reach the control plane; for real high availability, point it at a load balancer or virtual IP in front of the masters rather than a single master's address. If you pass --upload-certs to kubeadm init (recommended for multi-master setups), the control plane certificates are uploaded to the cluster, encrypted with a certificate key that is printed in the output. Make sure you note down both kubeadm join commands printed at the end of initialization: one (with --control-plane) for joining additional master nodes, and one for joining worker nodes.

Configure kubectl

To use kubectl, you need to configure it to connect to your cluster. Run the following commands:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod Network Add-on

A pod network add-on is required to enable communication between pods. We'll use Calico, a popular choice. Apply the Calico manifest (replace v3.27.0 with a current Calico release; the older docs.projectcalico.org manifest URL is no longer maintained):

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

Wait a few minutes for the pods to start. You can check their status with:

kubectl get pods -n kube-system
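Rather than polling by hand, you can block until the Calico pods report Ready. A sketch, assuming the standard k8s-app=calico-node label that the Calico manifest applies:

```shell
# Wait up to five minutes for every calico-node pod to become Ready
kubectl wait --namespace kube-system \
  --for=condition=Ready pod \
  -l k8s-app=calico-node \
  --timeout=300s
```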

Step 4: Join the Secondary Master Node

To set up a highly available control plane, join the secondary master node to the cluster. Use the kubeadm join command that was outputted when you initialized the primary master node, but with a slight modification. You need to specify the --control-plane flag to indicate that this node will also be a control plane node.

Join the Secondary Master

sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>

Replace <control-plane-endpoint>, <token>, <hash>, and <certificate-key> with the values from the kubeadm init output (passing --upload-certs to kubeadm init gets the certificate key printed for you). The certificate key decrypts the control plane certificates that kubeadm uploaded to the cluster. The key expires after two hours by default, so if it no longer works, generate and upload a fresh one on the primary master node:

sudo kubeadm init phase upload-certs --upload-certs

Copy the key printed by this command and use it in the kubeadm join command on the secondary master node.

Check the Secondary Master Node

On the primary master node, check the status of the nodes:

kubectl get nodes

The secondary master may show as NotReady at first. This usually means the pod network add-on is still starting on that node; kubeadm's bootstrap tokens auto-approve kubelet certificate requests, so manual approval is normally not needed. If a certificate signing request does get stuck in the Pending state, list the CSRs:

kubectl get csr

Find the CSR for the secondary master node and approve it:

kubectl certificate approve <csr-name>

After a few minutes, the secondary master node should be in the Ready state.

Configure kubectl on the Secondary Master

To use kubectl on the secondary master node, note that kubeadm join --control-plane already wrote /etc/kubernetes/admin.conf on that node, so you can configure kubectl from the local file:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 5: Join the Worker Nodes

Now, let's add the worker nodes to the cluster. Use the kubeadm join command that you noted down earlier. This command is the same for all worker nodes.

Join the Worker Nodes

sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Replace <control-plane-endpoint>, <token>, and <hash> with the values from the kubeadm init output.
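Bootstrap tokens expire after 24 hours by default, so the join command from the init output may eventually stop working. You can print a fresh worker join command at any time on the primary master node:

```shell
sudo kubeadm token create --print-join-command
```

(To join another master instead, combine this output with the --control-plane and --certificate-key flags.)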

Check the Worker Nodes

On the primary master node, check the status of the nodes:

kubectl get nodes

Newly joined workers may show as NotReady until the pod network add-on is running on them; kubelet certificate requests are normally auto-approved by kubeadm's bootstrap tokens, so no manual step is required. If a certificate signing request does get stuck in the Pending state, list the CSRs:

kubectl get csr

Find the CSR for each worker node and approve it:

kubectl certificate approve <csr-name>

After a few minutes, the worker nodes should be in the Ready state.
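kubeadm leaves worker nodes without a role label, so kubectl get nodes shows their ROLES column as <none>. If you'd like them labeled (purely cosmetic; <worker-node-name> is a placeholder for each worker's node name), you can run:

```shell
kubectl label node <worker-node-name> node-role.kubernetes.io/worker=worker
```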

Step 6: Verify the Cluster

To verify that your cluster is set up correctly, run the following command on the primary master node:

kubectl get nodes

You should see all the nodes (primary master, secondary master, and worker nodes) listed in the output, with their status as Ready.

Deploy a Test Application

Let's deploy a simple Nginx application to test the cluster. Create a deployment and a service:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

Check the status of the deployment:

kubectl get deployments
kubectl get services

Find the NodePort for the Nginx service and access it from your browser using the IP address of one of the worker nodes. If everything is set up correctly, you should see the Nginx welcome page.
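You can also test the service from the command line instead of a browser. A sketch, with <worker-node-ip> as a placeholder for one of your worker nodes' addresses:

```shell
# Look up the port that NodePort mapped to the service's port 80
NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
# Fetch the Nginx welcome page through that port
curl http://<worker-node-ip>:${NODE_PORT}
```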

Conclusion

Setting up a Kubernetes multi-master cluster on Ubuntu might seem complex, but by following these steps, you can create a highly available and fault-tolerant environment for your containerized applications. Remember to pay close attention to the details and double-check each step to avoid common pitfalls. Now you're ready to deploy and manage your applications with confidence!

Kubernetes multi-master clusters are a game-changer for ensuring your applications remain available and responsive, even in the face of unexpected outages. By distributing the control plane across multiple master nodes, you eliminate the single point of failure that can bring down your entire cluster. This redundancy is crucial for production environments where downtime is simply not an option. You've now got a setup that can handle the pressure, ensuring your services stay online no matter what. High availability isn't just a nice-to-have; it's a necessity, and you've just taken a major step toward achieving it.

This setup is also scalable. As your application grows, you can easily add more worker nodes to the cluster, increasing its capacity to handle more traffic and more complex workloads. Kubernetes makes scaling a breeze, allowing you to adapt to changing demands without missing a beat. With your multi-master cluster, you're well-prepared for the future, ready to handle whatever growth and challenges come your way.

The advantages of using Ubuntu as the base operating system for your Kubernetes cluster are numerous. Ubuntu is known for its stability, security, and ease of use, making it an excellent choice for both beginners and experienced users. The extensive community support and comprehensive documentation mean you're never alone when facing challenges. Plus, Ubuntu's apt package manager simplifies the installation and management of software, saving you time and effort. In addition to Ubuntu's inherent benefits, its compatibility with a wide range of hardware and software makes it a versatile choice for any environment. Whether you're running your cluster on-premises, in the cloud, or in a hybrid setup, Ubuntu can handle it all. This flexibility ensures that your Kubernetes cluster is adaptable and can evolve with your changing needs. The robust security features of Ubuntu, including regular security updates and a strong focus on protecting against vulnerabilities, provide an extra layer of protection for your applications and data. You can rest easy knowing that your cluster is built on a solid foundation of security and reliability.

The role of container runtimes like Docker in a Kubernetes cluster is fundamental to how applications are deployed and managed. Docker provides a standardized way to package applications and their dependencies into containers, ensuring that they run consistently across different environments. This consistency eliminates the