...

Code Block
languagebash
titleUpdate in progress...
collapsetrue
$ watch -x kubectl get pod,node -o wide -A
# The following screen will appear, which updates periodically
Every 2.0s: kubectl get pod,node -o wide -A

NAMESPACE       NAME                                            READY   STATUS    RESTARTS         AGE     IP                NODE      NOMINATED NODE   READINESS GATES
ingress-nginx   pod/ingress-nginx-controller-d6vp7              1/1     Running   0                12d     10.233.97.26      master1   <none>           <none>
kube-system     pod/calico-kube-controllers-744ccf69c7-rjwk2    1/1     Running   0                5d21h   10.233.110.40     worker1   <none>           <none>
kube-system     pod/calico-node-4f8n7                           1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/calico-node-6wkqb                           1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/calico-node-9ptsg                           1/1     Running   0                12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/calico-node-b64qh                           1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/calico-node-lh7p2                           1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/coredns-645b46f4b6-w4v57                    1/1     Running   0                5d21h   10.233.98.24      master2   <none>           <none>
kube-system     pod/coredns-645b46f4b6-zmqwk                    1/1     Running   0                5d21h   10.233.97.30      master1   <none>           <none>
kube-system     pod/dns-autoscaler-7f7b458498-kv2k9             1/1     Running   0                5d21h   10.233.97.28      master1   <none>           <none>
kube-system     pod/kube-apiserver-master1                      1/1     Running   15               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-apiserver-master2                      1/1     Running   11 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-controller-manager-master1             1/1     Running   17               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-controller-manager-master2             1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-dtsjz                            1/1     Running   0                5d21h   192.168.100.94    worker3   <none>           <none>
kube-system     pod/kube-proxy-ft984                            1/1     Running   0                5d21h   192.168.100.206   worker1   <none>           <none>
kube-system     pod/kube-proxy-hht9g                            1/1     Running   0                5d21h   192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-proxy-nqbw5                            1/1     Running   0                5d21h   192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-z6mzs                            1/1     Running   0                5d21h   192.168.100.190   worker2   <none>           <none>
kube-system     pod/kube-scheduler-master1                      1/1     Running   12 (4d12h ago)   12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-scheduler-master2                      1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kubernetes-dashboard-5c5f5d4547-mxgmp       1/1     Running   0                5d21h   10.233.103.30     worker2   <none>           <none>

...

kube-system     pod/kubernetes-metrics-scraper-756f68fffd-dqcmk   1/1     Running   0                5d21h   10.233.110.39     worker1   <none>           <none>
kube-system     pod/metrics-server-69d9447b96-td7z7               1/1     Running   0                5d21h   10.233.103.31     worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker1                           1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nginx-proxy-worker2                           1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker3                           1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-64678                            1/1     Running   6 (9d ago)       12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nodelocaldns-d6c5r                            1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-l6ms9                            1/1     Running   2 (9d ago)       12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/nodelocaldns-q2dm9                            1/1     Running   1 (12d ago)      12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nodelocaldns-r9m9q                            1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>

NAMESPACE   NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                 CONTAINER-RUNTIME
            node/master1   Ready    control-plane,master   252d   v1.26.5   192.168.100.24    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/master2   Ready    control-plane,master   252d   v1.26.5   192.168.100.102   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker1   Ready    worker                 252d   v1.26.5   192.168.100.206   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker2   Ready    worker                 252d   v1.26.5   192.168.100.190   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker3   Ready    worker                 252d   v1.26.5   192.168.100.94    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1

The nodes are not upgraded all at the same time, but one at a time. The node currently being upgraded changes its STATUS to Ready,SchedulingDisabled. While it remains in this state, all the Pods running on it are evicted and rescheduled onto the other available nodes (i.e. the node is drained). Once its upgrade is finished, the node returns to Ready and the process moves on to the next one.
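
For reference, the per-node drain performed by the playbook corresponds roughly to what you would do manually with kubectl. The sketch below is illustrative only (the node name worker1 is just an example); you do not need to run it during the upgrade.

Code Block
languagebash
titleManual drain (illustrative)
# Cordon the node and evict its Pods; it then shows STATUS Ready,SchedulingDisabled
$ kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
# Once the node has been upgraded, make it schedulable again
$ kubectl uncordon worker1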

Ansible tags

There is a quicker way to update only a single aspect of the cluster. Thanks to Ansible tags, we can launch the cluster.yml playbook so that it applies only a specific part of the configuration. Suppose, for example, that we want to change the configuration of the ingress controller, defined in the addons.yml file: we make our modification and then, instead of running the upgrade-cluster.yml playbook, we use the command

Code Block
languagebash
titleTags (example 1)
$ ansible-playbook cluster.yml --tags ingress-controller

With the --skip-tags flag, on the other hand, it is possible to exclude tasks. In this example, the command applies only the DNS configuration tasks and skips everything else related to host OS configuration and to downloading container images

Code Block
languagebash
titleTags (example 2)
$ ansible-playbook cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os

This significantly reduces execution time. The complete list of tags defined in the playbooks can be found here.
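
If you want to check the available tags directly from the command line, ansible-playbook can also print them without executing anything; a quick sketch, using the same invocation style as the examples above:

Code Block
languagebash
titleTags (listing)
# Print every tag defined in the playbook, without running any task
$ ansible-playbook cluster.yml --list-tags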

Note
titleNote

Use --tags and --skip-tags wisely, and only if you are 100% sure of what you are doing.
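
A prudent way to verify what a tag filter will actually touch is to list the matching tasks first, without executing them. For example, reusing the ingress-controller tag from the first example (illustrative sketch):

Code Block
languagebash
titleTags (preview tasks)
# Show which tasks would run for the given tag, without executing them
$ ansible-playbook cluster.yml --tags ingress-controller --list-tasks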

...
