Configuring A Multi-Node Kubernetes Cluster On AWS Cloud Using Ansible

Shubham Mehta
10 min read · Feb 18, 2021

Configuring a multi-node Kubernetes cluster by hand is one of the more painful tasks out there. In this article, I am going to show you how to configure a multi-node Kubernetes cluster on the AWS cloud using Ansible.

And performing the entire task using Ansible is one of the easiest things to do, as the whole setup ends up just one command away from you.

But before automating anything with Ansible, it is a must to know how to set up the desired environment manually. So let us start by configuring the Kubernetes (k8s) cluster on AWS by hand. After that, I will show you how to repeat the same process using Ansible.

Also before starting, let’s have a quick overview of the technologies used in this article:

Kubernetes:

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Know More About Kubernetes: https://kubernetes.io/

Ansible:

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems and can configure both Unix-like systems as well as Microsoft Windows.

Know More About Ansible: https://www.ansible.com/

AWS:

Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.

Know More About AWS: https://aws.amazon.com/

So, let’s start the configuration part.

  • The very first step is to launch EC2 instances on the AWS cloud on which the cluster is going to be configured. Here, I am going to launch 2 instances: one for the master node and one for the slave node.

I am going to use the Amazon Linux 2 AMI to launch the instances. The instance type will be t2.medium, but you can go for t2.micro as well. t2.medium provides 2 vCPUs and 4 GiB of memory, which makes it easier to launch the cluster.

If you go for t2.micro, you might face some conflicts. Later in this article, I will also show you how to resolve those conflicts.

For ease, I have allowed all traffic on the security group of the instances.

And the instances have been launched with the above configurations.

Let’s start configuring the master node first:

Let’s log in to the master node. I have used SSH from my Windows Command Prompt to log in; you can use other tools such as PuTTY as well.

  • Since I am going to use Docker as my container engine, the first step is:

Installing Docker:

The repo for Docker is pre-configured on the Amazon Linux 2 AMI, so to install Docker, use the following command:

yum install docker -y


Starting the Docker service:
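To start the service and enable it so that it survives a reboot:

systemctl start docker
systemctl enable docker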

  • In the next step, I am going to configure the yum repo for kubeadm, kubectl, and kubelet, as shown below.
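A repo definition along the lines of the one in the official Kubernetes docs of that time works here:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

The exclude line is what the --disableexcludes=kubernetes flag in the install command below works around.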

Installing Kubeadm, Kubectl, Kubelet:

Use the following command to do so:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Starting the Kubelet Service:
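The service can be enabled and started in one go:

systemctl enable --now kubelet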

As of now, the status is ‘activating’, but it will start properly soon as we proceed with configuring the cluster.

  • Kubernetes runs multiple programs behind the scenes, and it launches all of them on top of Docker containers. To launch the containers for these programs, it needs the respective images. Thus, the next step is to pull the required images.

As of now, we do not have any Docker images on the node.

To pull the images:
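kubeadm provides a dedicated subcommand for this:

kubeadm config images pull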

And the images have been pulled.

  • The next step is to make changes to the iptables settings.

Use the following commands to do so:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

And finally, use the following command to apply the changes:

sysctl --system

Docker by default uses ‘cgroupfs’ as its cgroup driver, but Kubernetes does not have proper support for this driver.

  • So, the next step is to change the cgroup driver from ‘cgroupfs’ to ‘systemd’.

This can be done by creating a ‘daemon.json’ file in ‘/etc/docker/’ with the following content:
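At minimum, the standard snippet for switching the driver is:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}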

  • Next step: restarting the Docker service. Use the following command to do so:

systemctl restart docker

We also need the ‘iproute-tc’ package for traffic control; use the following command to install it:

yum install iproute-tc -y

Most Important Step: Initializing the Kubernetes cluster using the kubeadm init command.

Generally, we use the following command to initialize the cluster:

kubeadm init --pod-network-cidr=[Network CIDR]

Kubernetes requires a minimum of 2 CPUs and 2 GiB of RAM to initialize the cluster. If you are using the t2.micro instance type (with 1 vCPU and 1 GiB of RAM), it might throw an error.

To get rid of the error, you can use the following command:

kubeadm init --pod-network-cidr=[Network CIDR] --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

For the Network CIDR, I am going to use 10.244.0.0/16; you can use any other range.
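With that range filled in, the command on a t2.medium instance becomes:

kubeadm init --pod-network-cidr=10.244.0.0/16

(On t2.micro, append the two --ignore-preflight-errors flags shown above.)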


The kubeadm init command prints a join command with a token. We have to use this token in order to connect the slave node to the master node. You can also recreate this token on the master node using the following command:

kubeadm token create --print-join-command

And once the cluster has been initialized, we can also see that the kubelet service has started.

  • The next step is to set up the kubeconfig file that lets the kubectl client command connect to the cluster.

Use the following commands to do so:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We can see that we have only one node, and it is not ready as of now. In the final step, we have to deploy the kube-flannel network add-on.

Use the following command to set up Flannel for the cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

And hence, we can see that the cluster has been configured properly, with only one node as of now.

The next part is to configure the slave node.

These are the operations that are the same for the master as well as the slave node:

  • Installing Docker
  • Starting and Enabling Docker Service
  • Yum Repo for Kubeadm, Kubectl, and Kubelet
  • Installing Kubeadm, Kubectl, and Kubelet
  • Starting Kubelet service
  • Changing Cgroup driver and restarting docker service
  • Changing IPtables settings
  • Installing iproute-tc

After completing all these steps, the final step for the slave node is to run the join command with the token provided by the master node:
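The exact command is printed by kubeadm init (or by kubeadm token create --print-join-command) on the master node; it follows this general shape, with the placeholders filled in from that output:

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>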

And the slave has joined the cluster successfully.

And finally, run the ‘kubectl get nodes’ command on the master node to check the nodes.

And hence, a multi-node Kubernetes cluster with one master and one slave node has been configured.

Now, let us configure the same Kubernetes cluster again, with only one difference: this time, we will create the cluster using Ansible, i.e., through automation.

  • In the very first step, I am going to launch 3 instances: 1 for the master node and 2 for the slave nodes.

I have created a basic playbook to launch the instances:
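As a minimal sketch, assuming the classic ec2 module and placeholder values for the key pair, AMI, region, security group, and subnet, the playbook could look like this:

- hosts: localhost
  tasks:
    - name: Launch EC2 instances for the cluster (1 master + 2 slaves)
      ec2:
        key_name: mykey                # placeholder key pair name
        instance_type: t2.medium
        image: ami-xxxxxxxx            # placeholder Amazon Linux 2 AMI ID
        region: ap-south-1             # placeholder region
        group: allow-all               # placeholder security group allowing all traffic
        vpc_subnet_id: subnet-xxxxxxxx # placeholder subnet
        assign_public_ip: yes
        wait: yes
        count: 3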

  • This playbook will be run on my localhost.

And the instances have been created. I have manually given the tag ‘km’ to the master node and ‘ks’ to the slave nodes for better management. The tags can be given while launching the instances as well.

Now, to get the IPs of these instances in Ansible, I have used a dynamic inventory for EC2.
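One way to set this up is with the aws_ec2 inventory plugin; a sketch of its config file (the file name must end in aws_ec2.yml, and the region is a placeholder) could be:

plugin: aws_ec2
regions:
  - ap-south-1          # placeholder region
keyed_groups:
  - key: tags           # builds groups such as tag_Name_km and tag_Name_ks from instance tags
    prefix: tag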

Here is the configuration file of ansible:
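A sketch of an ansible.cfg for this setup, assuming placeholder paths for the inventory and the private key, would be:

[defaults]
inventory = /etc/ansible/inventory       # placeholder: directory containing the aws_ec2.yml file
remote_user = ec2-user
private_key_file = /root/mykey.pem       # placeholder private key path
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root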

In order to provide the credentials for my AWS account, I have used the following environment variables:

export AWS_ACCESS_KEY_ID=09WETVRNVEUNBERUY
export AWS_SECRET_ACCESS_KEY=Kdhh8fycaervcq7r9v

All the hosts have been listed.
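You can check this with the inventory itself:

ansible all --list-hosts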

I have created two Ansible roles, one for the Kubernetes master node configuration and one for the Kubernetes slave node configuration.

Use the following commands to initialize the roles:

ansible-galaxy init kube_master
ansible-galaxy init kube_slave

List of tasks for the master node configuration:

  • Tasks for installing the required packages, creating the yum repo for kubelet, kubectl, and kubeadm and installing them, and applying the iptables settings (see the sketch after this list)
  • Tasks for copying the daemon.json file, starting the docker and kubelet services, initializing the cluster, and setting up the kubeconfig file (also covered in the sketch below)
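As a rough sketch of these tasks, assuming standard modules and the file layout described here (the variable name pod_cidr is my own placeholder; see the vars note further down):

# roles/kube_master/tasks/main.yml (sketch)
- name: Install Docker and iproute-tc
  package:
    name:
      - docker
      - iproute-tc
    state: present

- name: Configure the yum repo for kubeadm, kubectl, and kubelet
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey:
      - https://packages.cloud.google.com/yum/doc/yum-key.gpg
      - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install kubeadm, kubectl, and kubelet
  package:
    name:
      - kubeadm
      - kubectl
      - kubelet
    state: present

- name: Apply the iptables settings
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Reload sysctl settings
  command: sysctl --system

- name: Copy daemon.json to switch Docker to the systemd cgroup driver
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: Start and enable the Docker and kubelet services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Pull the required images
  command: kubeadm config images pull

- name: Initialize the cluster
  command: "kubeadm init --pod-network-cidr={{ pod_cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"

- name: Set up the kubeconfig file
  shell: |
    mkdir -p $HOME/.kube
    cp /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config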

Here, I have created a local copy of the daemon.json file in the ‘files’ folder of my role and then copied this file to the managed node.

  • Next step: deploying the Flannel plugin and storing the join token on my local system. I will transfer this token to the slave nodes while configuring them. A sketch of these tasks follows below.
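Again as a sketch, where the local file name join_command.sh is a placeholder of my choosing:

- name: Deploy the Flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command
  command: kubeadm token create --print-join-command
  register: join_command

- name: Save the join command on the local system
  copy:
    content: "{{ join_command.stdout }}"
    dest: ./join_command.sh            # placeholder local path
  delegate_to: localhost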

I have used a variable for Pod’s CIDR:
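For example, in the role’s vars file (again, pod_cidr is a placeholder name):

# roles/kube_master/vars/main.yml
pod_cidr: 10.244.0.0/16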

These were the tasks for the master node. When we talk about the slave nodes, the tasks from ‘Installing Required Packages’ up to ‘Starting Kubelet Service’ are the same for them as well.

  • Here are the extra tasks for the slave nodes, i.e., copying the join token to the slave nodes and joining the cluster (sketched below):
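A sketch of these two tasks, reusing the placeholder join_command.sh path saved by the master role:

- name: Copy the join command to the slave node
  copy:
    src: ./join_command.sh             # placeholder local path from the master role
    dest: /tmp/join_command.sh
    mode: "0755"

- name: Join the cluster
  command: sh /tmp/join_command.sh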

I have created a local copy of the daemon.json file in the slave’s role as well; its content is the same as the one shown earlier for the master node.

  • After configuring both the roles, the next and last step is to create a playbook and run it (a sketch follows below).

This playbook will configure the instance carrying the tag ‘km’ as the master node, and it will configure the instances carrying the tag ‘ks’ as slave nodes.
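A sketch of such a playbook, assuming the tag-based group names generated by the dynamic inventory (tag_Name_km and tag_Name_ks follow from my keyed_groups assumption above):

- hosts: tag_Name_km
  roles:
    - kube_master

- hosts: tag_Name_ks
  roles:
    - kube_slave

It can then be run with:

ansible-playbook cluster.yml            # cluster.yml is a placeholder file name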

The output of running the playbook clearly showed that the cluster had been configured. Let us confirm the same by logging into the master node and running ‘kubectl get nodes’.

And hence, the multi-node Kubernetes cluster has been configured successfully on the AWS cloud using Ansible.

In this blog, I have tried my best to make you understand each and every aspect of configuring the cluster.

You can follow me on LinkedIn as well as on Medium for similar exciting blogs. Feel free to DM me on LinkedIn in case of any queries.


I have also uploaded both the roles to GitHub.

I hope you liked the above article.

Have a good day :)
