...

Code Block
languagebash
titleUpdate in progress...
collapsetrue
$ watch -x kubectl get pod,node -o wide -A
#
# The following screen will appear, which updates periodically
Every 2.0s: kubectl get pod,node -o wide -A

NAMESPACE       NAME                                              READY   STATUS    RESTARTS         AGE     IP                NODE      NOMINATED NODE   READINESS GATES
kube-system     pod/calico-kube-controllers-596bd759d5-x2zqc      1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/calico-node-4f8n7                             1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/calico-node-6wkqb                             1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/calico-node-9ptsg                             1/1     Running   0                12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/calico-node-b64qh                             1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/calico-node-lh7p2                             1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/coredns-645b46f4b6-w4v57                      1/1     Running   0                5d21h   10.233.98.24      master2   <none>           <none>
kube-system     pod/coredns-645b46f4b6-zmqwk                      1/1     Running   0                5d21h   10.233.97.30      master1   <none>           <none>
kube-system     pod/dns-autoscaler-7f7b458498-kv2k9               1/1     Running   0                5d21h   10.233.97.28      master1   <none>           <none>
kube-system     pod/kube-apiserver-master1                        1/1     Running   15               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-apiserver-master2                        1/1     Running   11 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-controller-manager-master1               1/1     Running   17               12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-controller-manager-master2               1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-dtsjz                              1/1     Running   0                5d21h   192.168.100.94    worker3   <none>           <none>
kube-system     pod/kube-proxy-ft984                              1/1     Running   0                5d21h   192.168.100.206   worker1   <none>           <none>
kube-system     pod/kube-proxy-hht9g                              1/1     Running   0                5d21h   192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-proxy-nqbw5                              1/1     Running   0                5d21h   192.168.100.102   master2   <none>           <none>
kube-system     pod/kube-proxy-z6mzs                              1/1     Running   0                5d21h   192.168.100.190   worker2   <none>           <none>
kube-system     pod/kube-scheduler-master1                        1/1     Running   12 (4d12h ago)   12d     192.168.100.24    master1   <none>           <none>
kube-system     pod/kube-scheduler-master2                        1/1     Running   14 (36h ago)     12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/kubernetes-dashboard-5c5f5d4547-mxgmp         1/1     Running   0                5d21h   10.233.103.30     worker2   <none>           <none>
kube-system     pod/kubernetes-metrics-scraper-756f68fffd-dqcmk   1/1     Running   0                5d21h   10.233.110.39     worker1   <none>           <none>
kube-system     pod/metrics-server-69d9447b96-td7z7               1/1     Running   0                5d21h   10.233.103.31     worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker1                           1/1     Running   0                12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nginx-proxy-worker2                           1/1     Running   0                12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nginx-proxy-worker3                           1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-64678                            1/1     Running   6 (9d ago)       12d     192.168.100.190   worker2   <none>           <none>
kube-system     pod/nodelocaldns-d6c5r                            1/1     Running   0                12d     192.168.100.94    worker3   <none>           <none>
kube-system     pod/nodelocaldns-l6ms9                            1/1     Running   2 (9d ago)       12d     192.168.100.102   master2   <none>           <none>
kube-system     pod/nodelocaldns-q2dm9                            1/1     Running   1 (12d ago)      12d     192.168.100.206   worker1   <none>           <none>
kube-system     pod/nodelocaldns-r9m9q                            1/1     Running   0                12d     192.168.100.24    master1   <none>           <none>

NAMESPACE   NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                 CONTAINER-RUNTIME
            node/master1   Ready    control-plane,master   252d   v1.26.5   192.168.100.24    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/master2   Ready    control-plane,master   252d   v1.26.5   192.168.100.102   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker1   Ready    worker                 252d   v1.26.5   192.168.100.206   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker2   Ready    worker                 252d   v1.26.5   192.168.100.190   <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1
            node/worker3   Ready    worker                 252d   v1.26.5   192.168.100.94    <none>        Rocky Linux 8.8 (Green Obsidian)   4.18.0-477.27.1.el8_8.x86_64   containerd://1.7.1

The nodes are not updated all at once, but one at a time. The node currently being upgraded changes its STATUS to Ready,SchedulingDisabled. While it remains in this state, you will notice that all the Pods running on it are evicted and rescheduled onto the other available nodes (i.e. the node is drained). Once its upgrade is finished, the node returns to Ready and the process moves on to the next node.
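To follow this rotation from a second terminal, a minimal sketch like the one below can help (illustrative only, not part of the Kubespray playbooks; worker1 is simply one of the node names from the listing above):

Code Block
languagebash
titleWatching the drain (example)
# Watch the STATUS column: the node currently being upgraded shows Ready,SchedulingDisabled
$ kubectl get node -w

# List the Pods still scheduled on the node being drained
# (replace worker1 with whichever node is shown as SchedulingDisabled)
$ kubectl get pod -A -o wide --field-selector spec.nodeName=worker1

# After the node's upgrade completes, confirm it accepts Pods again
$ kubectl describe node worker1 | grep -i unschedulable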

Ansible tags

There is a quicker way to update only a single aspect of the cluster. Thanks to Ansible tags, we can run the cluster.yml playbook so that it updates only a specific part of the configuration. Suppose, for example, that we want to change the ingress configuration defined in the addons.yml file. We make our modification and then, instead of running the upgrade-cluster.yml playbook, we use the command

...