Configuring Ansible
Certain settings in Ansible are adjustable via a configuration file (ansible.cfg). To make it faster and easier to launch the playbooks from the command line, you can, for example, apply the following changes to the configuration file:
[defaults]
inventory = ./inventory/mycluster/hosts.yaml
private_key_file = /home/centos/.ssh/id_rsa
[privilege_escalation]
become = true
become_method = sudo
become_user = root
By introducing these changes in the ansible.cfg file, you can launch the playbooks encountered in the previous pages with the simple command ansible-playbook <playbook.yaml>. A complete list of configuration parameters can be found in the official Ansible documentation.
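For example, with this configuration in place, a full invocation that would otherwise look roughly like
$ ansible-playbook -i ./inventory/mycluster/hosts.yaml --become --become-user=root --private-key=/home/centos/.ssh/id_rsa upgrade-cluster.yml
shortens to
$ ansible-playbook upgrade-cluster.yml
To verify which settings Ansible has actually picked up, you can run ansible-config dump --only-changed, which prints only the values that differ from the defaults.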
How the cluster is upgraded
It can be instructive to analyze what happens in the cluster during the upgrade. To do so, run the upgrade command from the SA and, in another terminal connected to a cluster node, watch live what happens inside it. Run the command
$ watch -x kubectl get pod,node -o wide -A
# The following screen will appear, which updates periodically
Every 2.0s: kubectl get pod,node -o wide -A node1: Tue Mar 9 17:18:01 2021
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default pod/netchecker-agent-hostnet-l6s5x 1/1 Running 0 25h 192.168.100.18 node1
default pod/netchecker-agent-hostnet-rf5jl 1/1 Running 0 25h 192.168.100.23 node2
default pod/netchecker-agent-hostnet-sc5h7 1/1 Running 0 25h 192.168.100.25 node3
default pod/netchecker-agent-kqsz7 1/1 Running 0 25h 10.233.90.3 node1
default pod/netchecker-agent-lp5pf 1/1 Running 0 25h 10.233.92.2 node3
default pod/netchecker-agent-z7vb5 1/1 Running 0 25h 10.233.96.2 node2
default pod/netchecker-server-f98789d55-xr6n9 1/1 Running 2 24h 10.233.96.8 node2
kube-system pod/calico-kube-controllers-596bd759d5-x2zqc 1/1 Running 0 24h 192.168.100.23 node2
kube-system pod/calico-node-772q2 1/1 Running 0 25h 192.168.100.23 node2
kube-system pod/calico-node-lnh5z 1/1 Running 0 25h 192.168.100.25 node3
kube-system pod/calico-node-zcqjh 1/1 Running 0 25h 192.168.100.18 node1
kube-system pod/coredns-657959df74-7289c 1/1 Running 0 24h 10.233.96.7 node2
kube-system pod/coredns-657959df74-rtl2d 1/1 Running 0 24h 10.233.90.4 node1
kube-system pod/dns-autoscaler-b5c786945-brq6n 1/1 Running 0 24h 10.233.90.5 node1
kube-system pod/kube-apiserver-node1 1/1 Running 0 25h 192.168.100.18 node1
kube-system pod/kube-controller-manager-node1 1/1 Running 0 25h 192.168.100.18 node1
kube-system pod/kube-proxy-67lvh 1/1 Running 0 24h 192.168.100.18 node1
kube-system pod/kube-proxy-whqwb 1/1 Running 0 24h 192.168.100.25 node3
kube-system pod/kube-proxy-zs6kf 1/1 Running 0 24h 192.168.100.23 node2
kube-system pod/kube-scheduler-node1 1/1 Running 0 25h 192.168.100.18 node1
kube-system pod/metrics-server-5cd75b7749-d2594 2/2 Running 0 24h 10.233.90.6 node1
kube-system pod/nginx-proxy-node2 1/1 Running 0 25h 192.168.100.23 node2
kube-system pod/nginx-proxy-node3 1/1 Running 0 25h 192.168.100.25 node3
kube-system pod/nodelocaldns-hj5t8 1/1 Running 0 25h 192.168.100.18 node1
kube-system pod/nodelocaldns-j7zvh 1/1 Running 0 25h 192.168.100.23 node2
kube-system pod/nodelocaldns-jqbx7 1/1 Running 0 25h 192.168.100.25 node3
NAMESPACE NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE
node/node1 Ready control-plane,master 25h v1.20.4 192.168.100.18 <none> CentOS Linux 8
node/node2 Ready <none> 25h v1.20.4 192.168.100.23 <none> CentOS Linux 8
node/node3 Ready <none> 25h v1.20.4 192.168.100.25 <none> CentOS Linux 8
The nodes are not upgraded all at the same time, but in turn. The node being upgraded changes its STATUS to Ready,SchedulingDisabled. As long as it remains in this state, you will notice that all the Pods running on it are evicted and rescheduled on the other available nodes (i.e. the node is drained). Once its upgrade is finished, the node returns to Ready and the process moves on to the next node.
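For illustration, while node2 is being upgraded, the node list would look roughly like this (output abbreviated):
node/node1 Ready control-plane,master 25h
node/node2 Ready,SchedulingDisabled <none> 25h
node/node3 Ready <none> 25h
Under the hood, this corresponds to the standard cordon/drain cycle that you could also perform manually (flags may vary with your kubectl version):
$ kubectl drain node2 --ignore-daemonsets --delete-emptydir-data
# ... node2 is upgraded while unschedulable ...
$ kubectl uncordon node2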
There is a quick way to update only a single aspect of our cluster. Thanks to tags, we can launch the cluster.yml playbook so that it updates only a specific part of the configuration. Suppose, for example, that we want to change the configuration of the ingress controller, defined in the addons.yml file. We make our modification and then, instead of running the upgrade-cluster.yml playbook, we use the command
$ ansible-playbook cluster.yml --tags ingress-controller
With the --skip-tags flag, on the other hand, it is possible to skip tasks. In this example, the command filters and applies only the DNS configuration tasks, skipping everything else related to host OS configuration and container image downloads:
$ ansible-playbook cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
This significantly reduces processing times. The complete list of tags defined in the playbooks can be found in the official Kubespray documentation.
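You can also list the tags defined in a playbook directly from the command line, without executing any tasks:
$ ansible-playbook cluster.yml --list-tags
# Similarly, --list-tasks shows which tasks would run for a given selection of tags
This is a quick way to check which tags are available before deciding what to apply or skip.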