Configuring Ansible
Certain settings in Ansible are adjustable via a configuration file (ansible.cfg). To make it faster and easier to use the playbooks from the command line, you can, for example, apply the following changes to the configuration file:
```
[defaults]
inventory = ./inventory/mycluster/hosts.yaml
private_key_file = /home/centos/.ssh/id_rsa
[privilege_escalation]
become = true
become_method = sudo
become_user = root
```
By introducing these changes in the ansible.cfg file, you can launch the playbooks encountered in the previous pages with the simple command ansible-playbook <playbook.yaml>. A complete list of useful configuration parameters can be found in the official Ansible documentation.
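As a quick sketch of what this buys you on the command line (cluster.yml here stands in for whichever playbook you are launching):

```
# Without ansible.cfg, every option must be passed explicitly
$ ansible-playbook -i ./inventory/mycluster/hosts.yaml \
    --private-key /home/centos/.ssh/id_rsa \
    --become --become-method sudo --become-user root \
    cluster.yml

# With the ansible.cfg above, the same defaults are picked up automatically
$ ansible-playbook cluster.yml
```

You can verify which non-default settings Ansible has actually picked up with ansible-config dump --only-changed.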
How the cluster is upgraded
It can be instructive to analyze what happens in the cluster during the upgrade. To do so, run the upgrade command from the SA and, in another terminal connected to a cluster node, watch live what happens inside the cluster. Run the command
```
$ watch -x kubectl get pod,node -o wide -A

# The following screen will appear, which updates periodically
Every 2.0s: kubectl get pod,node -o wide -A

NAMESPACE       NAME                                           READY   STATUS    RESTARTS         AGE     IP                NODE      NOMINATED NODE   READINESS GATES
ingress-nginx   pod/ingress-nginx-controller-d6vp7             1/1     Running   0                12d     10.233.97.26      master1   <none>           <none>
kube-system     pod/calico-kube-controllers-744ccf69c7-rjwk2   1/1     Running   0                5d21h   10.233.110.40     worker1   <none>           <none>
kube-system     pod/calico-node-4f8n7                          1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/calico-node-6wkqb                          1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/calico-node-9ptsg                          1/1     Running   0                12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/calico-node-b64qh                          1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/calico-node-lh7p2                          1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/coredns-645b46f4b6-w4v57                   1/1     Running   0                5d21h   10.233.98.24      master2   <none>           <none>
kube-system     pod/coredns-645b46f4b6-zmqwk                   1/1     Running   0                5d21h   10.233.97.30      master1   <none>           <none>
kube-system     pod/dns-autoscaler-7f7b458498-kv2k9            1/1     Running   0                5d21h   10.233.97.28      master1   <none>           <none>
kube-system     pod/kube-apiserver-master1                     1/1     Running   15               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-apiserver-master2                     1/1     Running   11 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-controller-manager-master1            1/1     Running   17               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-controller-manager-master2            1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-dtsjz                           1/1     Running   0                5d21h   192.168.100.94    worker3   <none>           <none>
kube-system     pod/kube-proxy-ft984                           1/1     Running   0                5d21h   192.168.100.206   worker1   <none>           <none>
kube-system     pod/kube-proxy-hht9g                           1/1     Running   0                5d21h   192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-proxy-nqbw5                           1/1     Running   0                5d21h   192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-z6mzs                           1/1     Running   0                5d21h   192.168.100.190   worker2   <none>           <none>
kube-system     pod/kube-scheduler-master1                     1/1     Running   12 (4d12h ago)   12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-scheduler-master2                     1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kubernetes-dashboard-5c5f5d4547-mxgmp      1/1     Running   0                5d21h   10.233.103.30     worker2   <none>           <none>
kube-system     pod/kubernetes-metrics-scraper-5756f68fffd-dqcmk 1/1   Running   0                5d21h   10.233.110.39     worker1   <none>           <none>
kube-system     pod/metrics-server-69d9447b96-td7z7            1/1     Running   0                5d21h   10.233.103.31     worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker1                        1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nginx-proxy-worker2                        1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker3                        1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-64678                         1/1     Running   6 (9d ago)       12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nodelocaldns-d6c5r                         1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-l6ms9                         1/1     Running   2 (9d ago)       12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/nodelocaldns-q2dm9                         1/1     Running   1 (12d ago)      12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nodelocaldns-r9m9q                         1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>

NAMESPACE   NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                 CONTAINER-RUNTIME
            node/master1   Ready    control-plane,master   252d   v1.26.5   192.168.100.24    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/master2   Ready    control-plane,master   252d   v1.26.5   192.168.100.102   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker1   Ready    worker                 252d   v1.26.5   192.168.100.206   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker2   Ready    worker                 252d   v1.26.5   192.168.100.190   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker3   Ready    worker                 252d   v1.26.5   192.168.100.94    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
```
The nodes are not upgraded all at the same time, but in turn. The STATUS of the node being upgraded changes to Ready,SchedulingDisabled. As long as it remains in this state, all the Pods running on it are evicted and moved to the other available nodes (i.e. the node is drained). Once the upgrade is finished, the node returns to Ready and the process moves on to the next node.
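This per-node cycle corresponds, roughly, to the standard cordon/drain/uncordon sequence that you could also perform by hand with kubectl; a minimal sketch (worker1 is just an illustrative node name):

```
# Mark the node unschedulable: its STATUS becomes Ready,SchedulingDisabled
$ kubectl cordon worker1

# Evict the Pods so they are rescheduled on the other available nodes
$ kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data

# ... the node components are upgraded ...

# Make the node schedulable again: its STATUS returns to Ready
$ kubectl uncordon worker1
```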
Ansible tags
There is a quick way to update only a single aspect of our cluster: thanks to tags, we can launch the cluster.yml playbook so that it updates only a specific part of the configuration. Suppose we want to change the configuration of the ingress, found in the addons.yml file. We make our modification and then, instead of running the upgrade-cluster.yml playbook, we use the command
```
$ ansible-playbook cluster.yml --tags ingress-controller
```
With the --skip-tags flag, instead, it is possible to skip tasks. The following example filters and applies only the DNS configuration tasks, skipping everything else related to host OS configuration and to downloading container images:
```
$ ansible-playbook cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
This significantly reduces processing time. The complete list of tags defined in the playbooks can be found in the Kubespray documentation.
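Before launching a tagged run against a live cluster, it can be worth previewing what would actually execute; ansible-playbook's standard listing flags do this without touching the hosts:

```
# List every tag defined in the play
$ ansible-playbook cluster.yml --list-tags

# Preview the tasks a given tag selection would run
$ ansible-playbook cluster.yml --tags ingress-controller --list-tasks
```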
Note: use --tags and --skip-tags with care, and only if you are sure of what you are doing.