Now let's start building a real, albeit simple, Kubernetes cluster. We begin with a few definitions, useful to clarify the terminology:

  • Kubeadm: A tool for quickly installing Kubernetes and setting up a secure cluster. You can use kubeadm to install both the control plane and the worker node components.

  • Kubectl: a command line tool for communicating with a Kubernetes API server. You can use kubectl to create, inspect, update, and delete Kubernetes objects.

  • Kubelet: an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
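To make the PodSpec idea concrete, here is a minimal Pod manifest (the nginx image and the names used are only illustrative); once the cluster is up, it could be created and inspected with kubectl:

```shell
# Write a minimal Pod manifest to a file (nginx is just an example image)
cat <<EOF > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
EOF

# Once the cluster is running, you would apply and inspect it with:
# kubectl apply -f nginx-pod.yaml
# kubectl get pod nginx-example
```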

Preliminary steps

First, there are some minimum requirements to be met and steps to take before proceeding with the installation of Kubeadm:

  • CentOS 7 (minimum supported version);
  • at least 2 GB of RAM and 2 CPUs per machine;
  • open ports on the control plane (6443, 2379-2380, 10250-10252) and on the workers (10250, 30000-32767);
  • uniqueness of MAC address and product_uuid for each node;
  • complete connectivity between the cluster nodes;
  • swap disabled on nodes.
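The RAM/CPU requirement in the list above is easy to verify from a shell on each node:

```shell
# Number of CPUs (at least 2 are required)
nproc

# Total memory in MB (at least 2048 MB is required)
free -m | awk '/^Mem:/{print $2}'
```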

Note

It's instructive to know which standard ports k8s uses and on which nodes (master, worker or, as we will see later, etcd) they must be opened. If you are using VMs instantiated through OpenStack, it is not necessary to open these ports, because the machines communicate freely over their internal network.
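If you do want to verify that a given port is reachable from another node, a small helper like the following can probe it (pure bash, using the /dev/tcp pseudo-device; the IP in the example is a placeholder to replace with your own control plane's address):

```shell
# Probe a TCP port on a host using bash's /dev/tcp pseudo-device
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Example, from a worker towards the control plane (placeholder IP):
# check_port 10.0.0.10 6443
```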

Let's take a closer look at the last three points of the list. We start with the MAC address and product_uuid of the cluster nodes, making sure they differ from node to node

MAC address and product_uuid
# Equivalent commands to get the MAC address (format similar to "fa:16:3d:c9:ac:83")
$ ip link
$ ifconfig -a

# Command to get the product_uuid (format similar to "92DD146C-F404-4253-A518-49602F7C1B8F")
$ sudo cat /sys/class/dmi/id/product_uuid

Regarding full connectivity, make sure the br_netfilter module is loaded

br_netfilter module
# Verify that the br_netfilter module is present (you should get output like the following)
$ lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

# If it is not present, load it
$ sudo modprobe br_netfilter

# To have the module loaded automatically at every boot
$ echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf

In order for a Linux node's iptables to correctly see bridged traffic, verify that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config

iptables
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the settings without rebooting
$ sudo sysctl --system

Swap is probably already inactive on our machines. We can check by looking at the output of the command

Swap
$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.7G        1.2G        223M         26M        2.3G        2.2G
Swap:            0B          0B          0B

The swap values should all be 0 bytes. If not, comment out the swap line in the /etc/fstab file and reboot. This disables swap permanently.
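Swap can also be turned off without a reboot; a sketch of the two steps (the sed backup suffix is just a precaution, and the pattern assumes the standard fstab layout):

```shell
# Turn swap off immediately...
sudo swapoff -a

# ...and comment out any swap entries in /etc/fstab so it stays off after reboot
# (-i.bak keeps a backup copy of the original file)
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```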

Installation

Installing CRI

By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your chosen container runtime. So, you need to install a container runtime on each node in the cluster, so that Pods can run there. Common runtimes used with Kubernetes on Linux are containerd, CRI-O and Docker. We will focus on the latter. First, therefore, install Docker on each of your nodes (install Docker on CentOS).
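On CentOS 7, the Docker installation itself boils down to a few yum commands (this follows Docker's official repository setup; package versions on your system may differ):

```shell
# Install the repo helper, add Docker's CE repository, then install Docker
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Start Docker now and at every boot
sudo systemctl enable --now docker
```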

After installation, create the following .json file at the path shown below to configure the Docker daemon

daemon.json
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF

Finally, create the docker.service.d folder and restart Docker

Restart Docker
$ sudo mkdir -p /etc/systemd/system/docker.service.d

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

# If you want the docker service to start on boot, run the following command
$ sudo systemctl enable docker
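After the restart, it is worth checking that Docker is running and actually picked up the systemd cgroup driver configured in daemon.json:

```shell
# Should print "active" and "systemd" respectively
sudo systemctl is-active docker
sudo docker info --format '{{.CgroupDriver}}'
```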

Installing kubeadm, kubelet and kubectl

At this point we are ready to install Kubeadm, Kubectl and Kubelet on all the VMs of the cluster (the procedure is valid not only for CentOS but also for RHEL and Fedora)

Kubeadm, Kubectl and Kubelet
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
$ sudo systemctl enable --now kubelet
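As a quick sanity check that the three tools ended up on the PATH (the exact output will vary with the installed version):

```shell
kubeadm version -o short
kubectl version --client
kubelet --version
```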