After the installation phase, we can set up our cluster (official guide). To do this, simply run the following command as root on the control plane:
```shell
# The following command accepts several arguments, but for building
# a small test cluster let's run it without them
$ kubeadm init <args>
. . .
# At the end of the procedure, an output similar to this will appear
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
Initializing your control-plane node
We are building a simple cluster, consisting of one control plane with n workers, so we run the kubeadm init
command with no arguments. If you plan to upgrade this single control-plane kubeadm
cluster to high availability later, you should specify the --control-plane-endpoint flag
to set the shared endpoint for all control-plane nodes. Such an endpoint can be either a DNS name or the IP address of a load balancer.
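For instance, an init command prepared for a future high-availability setup might look like the following sketch (the endpoint name and port here are purely hypothetical; adjust them to your environment):

```shell
# Hypothetical DNS name pointing at a load balancer in front of the
# control-plane nodes
$ kubeadm init --control-plane-endpoint "cluster-endpoint.example.com:6443"
```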
The output shown above gives us three pieces of information:
- it shows the commands that let us use the cluster without administrator privileges;
- it warns us that we will have to deploy a Pod network;
- it gives us the join command (with its token), which we will have to run on the worker nodes to add them to the cluster; it is advisable to save it somewhere.
To use the cluster without administrator privileges, simply follow the steps in the output above. Repeat the same commands as root if you also want to use the cluster in administrator mode.
As for the second point in the list, there are multiple CNI (Container Network Interface) plugins that can provide our Pod network. Here we use Calico, but there are valid alternatives listed at the address shown in the output above (we report the link here). So let's run the command (checking that the manifest version matches our cluster):
```shell
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Flannel is an alternative
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```
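Before joining the workers, it is worth waiting for the network pods to come up. Assuming the default manifest install above, the Calico pods land in the kube-system namespace:

```shell
# Watch until the calico-node and calico-kube-controllers pods are Running
$ kubectl get pods -n kube-system -w
```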
Now let's finally expand our cluster by adding the worker nodes. We connect to each node via SSH, acquire administrator privileges, and paste the command returned by kubeadm init, which we saved earlier:
```shell
$ kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
. . .
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
To retrieve the join command (and its token) later, type on the control plane:

```shell
$ kubeadm token create --print-join-command
```
Finally, to verify that the workers have joined, we return to the control plane and run
```shell
$ kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
mycentos-0.novalocal   Ready    master   30h   v1.21.1
mycentos-1.novalocal   Ready    <none>   25h   v1.21.1
mycentos-2.novalocal   Ready    <none>   24h   v1.21.1
```
The output should list the nodes that are part of the cluster.
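As a purely cosmetic touch, the empty ROLES column of the workers can be filled by applying the conventional role label (the node names here are the ones from the output above):

```shell
# Optional: label the workers so 'kubectl get nodes' shows a role
$ kubectl label node mycentos-1.novalocal node-role.kubernetes.io/worker=worker
$ kubectl label node mycentos-2.novalocal node-role.kubernetes.io/worker=worker
```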
Upgrading kubeadm clusters
To upgrade the cluster, follow the instructions in the official guide, which explains how to upgrade a Kubernetes cluster created with kubeadm. At a high level, the upgrade workflow is the following:
- upgrade the primary control plane node;
- upgrade additional control plane nodes, if any;
- upgrade worker nodes.
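On each node, the steps above boil down to something like the following sketch (the target version is only an example, and the kubeadm/kubelet packages must be upgraded with your distribution's package manager as described in the official guide):

```shell
# On the control-plane node, after upgrading the kubeadm package itself
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.21.2

# On each worker node (drain it first from the control plane)
$ kubeadm upgrade node
```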
1 Comment
Anonymous
Oct 29, 2020
I've been having some trouble getting networking between pods up and running. I think this page should mention the need for updating security rules when talking about Calico, or any other CNI. See here: https://docs.projectcalico.org/getting-started/kubernetes/requirements#network-requirements
I was luckier, however, with both Weave Net and flannel, whose documentation is less fragmented than Calico's and, in my view, a bit clearer: