Configuring Ansible

Certain settings in Ansible are adjustable via a configuration file (ansible.cfg). To make it faster and easier to run the playbooks from the command line, you can, for example, apply the following changes to the configuration file:

ansible.cfg
[defaults]
inventory = ./inventory/mycluster/hosts.yaml
private_key_file = /home/centos/.ssh/id_rsa
[privilege_escalation]
become = true
become_method = sudo
become_user = root

With these changes in place in the ansible.cfg file, you can launch the playbooks encountered in the previous pages with the simple command ansible-playbook <playbook.yaml>. A complete list of useful configuration parameters can be found in the official Ansible documentation.
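For comparison, here is roughly what an equivalent invocation looks like without these defaults; the playbook name is just an example, and the paths match the configuration above:

Without ansible.cfg (for comparison)
$ ansible-playbook -i ./inventory/mycluster/hosts.yaml \
      --private-key /home/centos/.ssh/id_rsa \
      --become --become-user=root \
      upgrade-cluster.yml

# With the ansible.cfg above, the same run shortens to:
$ ansible-playbook upgrade-cluster.yml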

How the cluster is upgraded

It can be instructive to observe what happens in the cluster during the upgrade. To do so, run the upgrade command from the SA and, in another terminal connected to a cluster node, watch live what happens inside it. Run the command

Update in progress...
$ watch -x kubectl get pod,node -o wide -A

NAMESPACE       NAME                                              READY   STATUS    RESTARTS         AGE     IP                NODE      NOMINATED NODE   READINESS GATES
ingress-nginx   pod/ingress-nginx-controller-d6vp7                1/1     Running   0                12d     10.233.97.26      master1   <none>           <none>
kube-system     pod/calico-kube-controllers-744ccf69c7-rjwk2      1/1     Running   0                5d21h   10.233.110.40     worker1   <none>           <none>
kube-system     pod/calico-node-4f8n7                             1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/calico-node-6wkqb                             1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/calico-node-9ptsg                             1/1     Running   0                12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/calico-node-b64qh                             1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/calico-node-lh7p2                             1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/coredns-645b46f4b6-w4v57                      1/1     Running   0                5d21h   10.233.98.24      master2   <none>           <none>
kube-system     pod/coredns-645b46f4b6-zmqwk                      1/1     Running   0                5d21h   10.233.97.30      master1   <none>           <none>
kube-system     pod/dns-autoscaler-7f7b458498-kv2k9               1/1     Running   0                5d21h   10.233.97.28      master1   <none>           <none>
kube-system     pod/kube-apiserver-master1                        1/1     Running   15               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-apiserver-master2                        1/1     Running   11 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-controller-manager-master1               1/1     Running   17               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-controller-manager-master2               1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-dtsjz                              1/1     Running   0                5d21h   192.168.100.94    worker3   <none>           <none>
kube-system     pod/kube-proxy-ft984                              1/1     Running   0                5d21h   192.168.100.206   worker1   <none>           <none>
kube-system     pod/kube-proxy-hht9g                              1/1     Running   0                5d21h   192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-proxy-nqbw5                              1/1     Running   0                5d21h   192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-z6mzs                              1/1     Running   0                5d21h   192.168.100.190   worker2   <none>           <none>
kube-system     pod/kube-scheduler-master1                        1/1     Running   12 (4d12h ago)   12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-scheduler-master2                        1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kubernetes-dashboard-5c5f5d4547-mxgmp         1/1     Running   0                5d21h   10.233.103.30     worker2   <none>           <none>
kube-system     pod/kubernetes-metrics-scraper-756f68fffd-dqcmk   1/1     Running   0                5d21h   10.233.110.39     worker1   <none>           <none>
kube-system     pod/metrics-server-69d9447b96-td7z7               1/1     Running   0                5d21h   10.233.103.31     worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker1                           1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nginx-proxy-worker2                           1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker3                           1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-64678                            1/1     Running   6 (9d ago)       12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nodelocaldns-d6c5r                            1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-l6ms9                            1/1     Running   2 (9d ago)       12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/nodelocaldns-q2dm9                            1/1     Running   1 (12d ago)      12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nodelocaldns-r9m9q                            1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>

NAMESPACE   NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                 CONTAINER-RUNTIME
            node/master1   Ready    control-plane,master   252d   v1.26.5   192.168.100.24    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/master2   Ready    control-plane,master   252d   v1.26.5   192.168.100.102   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker1   Ready    worker                 252d   v1.26.5   192.168.100.206   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker2   Ready    worker                 252d   v1.26.5   192.168.100.190   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker3   Ready    worker                 252d   v1.26.5   192.168.100.94    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1

The nodes are not upgraded all at the same time, but one at a time. The node currently being upgraded changes its STATUS to Ready,SchedulingDisabled. While it remains in this state, you will notice that all the Pods running on it are evicted and rescheduled on the other available nodes (i.e. the node is drained). Once the upgrade is finished, the node returns to Ready and the process moves on to the next node.
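The sequence each node goes through is roughly equivalent to the following manual commands (shown here only for illustration; Kubespray performs these steps automatically, and the node name is an example):

Rolling upgrade of a node (manual equivalent)
$ kubectl cordon worker1        # STATUS becomes Ready,SchedulingDisabled
$ kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
                                # running Pods are evicted and rescheduled
# ... node components (kubelet, container runtime, etc.) are upgraded ...
$ kubectl uncordon worker1      # STATUS returns to Ready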

Ansible tags

There is a quick way to update only a single aspect of the cluster. Thanks to tags, we can launch the cluster.yml playbook so that it updates only a specific part of the configuration. Suppose, for example, we want to change the ingress configuration defined in the addons.yml file: we make our modification and then, instead of running the upgrade-cluster.yml playbook, we use the command

Tags (example 1)
$ ansible-playbook cluster.yml --tags ingress-controller

With the --skip-tags flag, on the other hand, it is possible to skip tasks. The following example filters and applies only the DNS configuration tasks, skipping everything else related to host OS configuration and to downloading container images:

Tags (example 2)
$ ansible-playbook cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os

This significantly reduces processing times. The complete list of tags defined in the playbooks can be found in the official Kubespray documentation.
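You can also ask ansible-playbook itself which tags a playbook defines, without executing anything, via the standard --list-tags flag:

Listing available tags
$ ansible-playbook cluster.yml --list-tags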

Note

Use --tags and --skip-tags wisely, and only if you are 100% sure of what you are doing.
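One way to gain that certainty is to preview which tasks a given tag selection would run, using the standard --list-tasks flag (nothing is executed; the tag values below are taken from the example above):

Previewing affected tasks
$ ansible-playbook cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os --list-tasks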

