
Code Block: Git commands (bash)
# Current repo tag
$ git describe --tags
v2.13.1-115-g68d18daf

# List of all tags
$ git tag
...
v2.13.1	# current version
v2.13.2
v2.13.3
v2.13.4
v2.14.0
v2.14.1
v2.14.2
v2.15.0	# final version

# Check out the tag
$ git checkout v2.13.2
Previous HEAD position was a923f4e7 Update kube_version_min_required and cleanup hashes for release (#7160)
HEAD is now at 3d6b9d6c Update hashes and set default to 1.17.7 (#6286)

Then update the cluster with the command from the beginning of the page, starting from the v2.13.2 tag and proceeding tag by tag up to v2.15.0. During the update phase it is inevitable that, from time to time, small manual interventions will be necessary.
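The tag-by-tag procedure can be sketched as a simple loop. This is a minimal sketch, not the exact command: the real update command is the one given at the beginning of the page (here it is only echoed as a placeholder), and the tag list is the one shown by `git tag` above.

```shell
# Sketch: walk the intermediate tags in order, checking out each one and
# then running the cluster update command from the beginning of the page.
# The update command below is only echoed as a placeholder.
for tag in v2.13.2 v2.13.3 v2.13.4 v2.14.0 v2.14.1 v2.14.2 v2.15.0; do
    echo "git checkout $tag   # then run the cluster update command"
done
```

Skipping intermediate tags is not recommended: each release only supports upgrading from the one immediately before it.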

Appendix

It can be instructive to analyze what happens in the cluster during the update. Run the update command from the SA and, in another terminal connected to a cluster node, watch live what happens on it. Run the command

Code Block: Update in progress... (bash)
$ watch -x kubectl get pod,node -o wide -A
# The following screen will appear, which updates periodically
Every 2.0s: kubectl get pod,node -o wide -A                                            node1: Tue Mar  9 17:18:01 2021
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE   IP               NODE
default       pod/netchecker-agent-hostnet-l6s5x             1/1     Running   0          25h   192.168.100.18   node1
default       pod/netchecker-agent-hostnet-rf5jl             1/1     Running   0          25h   192.168.100.23   node2
default       pod/netchecker-agent-hostnet-sc5h7             1/1     Running   0          25h   192.168.100.25   node3
default       pod/netchecker-agent-kqsz7                     1/1     Running   0          25h   10.233.90.3      node1
default       pod/netchecker-agent-lp5pf                     1/1     Running   0          25h   10.233.92.2      node3
default       pod/netchecker-agent-z7vb5                     1/1     Running   0          25h   10.233.96.2      node2
default       pod/netchecker-server-f98789d55-xr6n9          1/1     Running   2          24h   10.233.96.8      node2
kube-system   pod/calico-kube-controllers-596bd759d5-x2zqc   1/1     Running   0          24h   192.168.100.23   node2
kube-system   pod/calico-node-772q2                          1/1     Running   0          25h   192.168.100.23   node2
kube-system   pod/calico-node-lnh5z                          1/1     Running   0          25h   192.168.100.25   node3
kube-system   pod/calico-node-zcqjh                          1/1     Running   0          25h   192.168.100.18   node1
kube-system   pod/coredns-657959df74-7289c                   1/1     Running   0          24h   10.233.96.7      node2
kube-system   pod/coredns-657959df74-rtl2d                   1/1     Running   0          24h   10.233.90.4      node1
kube-system   pod/dns-autoscaler-b5c786945-brq6n             1/1     Running   0          24h   10.233.90.5      node1
kube-system   pod/kube-apiserver-node1                       1/1     Running   0          25h   192.168.100.18   node1
kube-system   pod/kube-controller-manager-node1              1/1     Running   0          25h   192.168.100.18   node1
kube-system   pod/kube-proxy-67lvh                           1/1     Running   0          24h   192.168.100.18   node1
kube-system   pod/kube-proxy-whqwb                           1/1     Running   0          24h   192.168.100.25   node3
kube-system   pod/kube-proxy-zs6kf                           1/1     Running   0          24h   192.168.100.23   node2
kube-system   pod/kube-scheduler-node1                       1/1     Running   0          25h   192.168.100.18   node1
kube-system   pod/metrics-server-5cd75b7749-d2594            2/2     Running   0          24h   10.233.90.6      node1
kube-system   pod/nginx-proxy-node2                          1/1     Running   0          25h   192.168.100.23   node2
kube-system   pod/nginx-proxy-node3                          1/1     Running   0          25h   192.168.100.25   node3
kube-system   pod/nodelocaldns-hj5t8                         1/1     Running   0          25h   192.168.100.18   node1
kube-system   pod/nodelocaldns-j7zvh                         1/1     Running   0          25h   192.168.100.23   node2
kube-system   pod/nodelocaldns-jqbx7                         1/1     Running   0          25h   192.168.100.25   node3

NAMESPACE   NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE
            node/node1   Ready    control-plane,master   25h   v1.20.4   192.168.100.18   <none>        CentOS Linux 8
            node/node2   Ready    <none>                 25h   v1.20.4   192.168.100.23   <none>        CentOS Linux 8
            node/node3   Ready    <none>                 25h   v1.20.4   192.168.100.25   <none>        CentOS Linux 8

The nodes are not updated all at the same time, but one at a time. The node currently being updated changes its STATUS to Ready,SchedulingDisabled. While it remains in this state, you will notice that all the Pods deployed on it are evicted and rescheduled on the other available nodes. Once its update is finished, it returns to Ready and the process moves on to the next node.
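The per-node behavior described above corresponds to a cordon/drain/uncordon cycle. A sketch of the equivalent manual kubectl commands follows; the node name node2 is just an example taken from the listing above, and the commands are only echoed here rather than run against a cluster:

```shell
# Sketch of the cordon/drain cycle performed on each node during the update.
# "node2" is an example node name from the listing above; commands are echoed
# as placeholders, not executed.
node=node2
echo "kubectl cordon $node     # STATUS becomes Ready,SchedulingDisabled"
echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data   # Pods are evicted"
echo "kubectl uncordon $node   # STATUS returns to Ready"
```

DaemonSet Pods (calico-node, kube-proxy, nodelocaldns in the listing above) are not evicted by a drain, which is why `--ignore-daemonsets` is needed.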