All Kubernetes objects are stored in etcd. Periodically backing up the etcd cluster data is important for recovering Kubernetes clusters in disaster scenarios, such as losing all control plane nodes. The snapshot file contains all the Kubernetes state and critical information. For more information, see the official guide.
Prerequisites
On nodes that act as dedicated etcd members, the executables are probably already present. If they are not, or if you want to use the client outside the etcd node/cluster, follow the steps below.
To be able to back up a Kubernetes cluster, we first need the etcdctl executable, downloadable from here (choose the appropriate release). The compressed archive also contains two other executables, etcd and etcdutl, which may come in handy later. Unpack the archive (this produces a directory containing the binaries) and add the executables to your path (e.g. /usr/local/bin)
# For example, let's download release 3.5.4
$ wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
$ tar xzvf etcd-v3.5.4-linux-amd64.tar.gz

# In addition to the etcdctl executable, we also take etcd and etcdutl
$ sudo cp etcd-v3.5.4-linux-amd64/etcd* /usr/local/bin/

# Check that everything is OK
$ etcdctl version
etcdctl version: 3.5.4
API version: 3.5

$ etcdutl version
etcdutl version: 3.5.4
API version: 3.5

$ etcd --version
etcd Version: 3.5.4
Git SHA: 08407ff76
Go Version: go1.16.15
Go OS/Arch: linux/amd64
Once we have the executables, we need the certificates to be able to communicate with the etcd node(s). If you don't know where the certificates are, you can grep for them inside the /etc/kubernetes folder on the master node (the default directory holding the certificates on an etcd node is /etc/ssl/etcd/ssl); a sketch of such a search follows.
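For example, a minimal sketch of such a search (the --etcd-* flags are standard kube-apiserver flags, but the manifest path and the certificate file names shown are only illustrative and depend on how your cluster was deployed):

# Look for the etcd client certificate flags in the kube-apiserver manifest
$ sudo grep "etcd-" /etc/kubernetes/manifests/kube-apiserver.yaml
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-master1.pem
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-master1-key.pem

Save the location of the certificates in the following environment variables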
# Insert the following lines inside the ".bashrc" file, then use "source .bashrc" to apply the changes
export ETCDCTL_CERT=/<path>/cert.pem
export ETCDCTL_CACERT=/<path>/ca.pem
export ETCDCTL_KEY=/<path>/key.pem
export ETCDCTL_ENDPOINTS=etcd1:2379,etcd2:2379,etcd3:2379
Let's try running some commands to check the status of the etcd cluster
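A minimal sketch, assuming the variables above are exported and the endpoint names resolve to your own etcd nodes:

# Check the health and status of the endpoints
$ etcdctl endpoint health
$ etcdctl endpoint status --write-out=table

# List the members of the cluster
$ etcdctl member list --write-out=table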
Save and Restore
If you have an etcd cluster, you must point the snapshot command at a single node; otherwise you get the error snapshot must be requested to one selected node, not multiple. So unset the ETCDCTL_ENDPOINTS environment variable, if present.
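For example, the variable set earlier in ".bashrc" can be cleared for the current shell like this:

# Clear the endpoints variable before taking the snapshot
$ unset ETCDCTL_ENDPOINTS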
Take a snapshot of the etcd datastore using the following command (official documentation), which generates the <snapshot> file
$ etcdctl snapshot save <path>/<snapshot> --endpoints=<endpoint>:<port>

# Instead of <endpoint> you can substitute a hostname or an IP
$ etcdctl snapshot save snapshot.db --endpoints=etcd1:2379
$ etcdctl snapshot save snapshot.db --endpoints=192.168.100.88:2379

# View that the snapshot was successful
$ etcdctl snapshot status snapshot.db --write-out=table
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| b89543b8 | 40881777 |      54340 |     106 MB |
+----------+----------+------------+------------+
To restore a cluster, all that is needed is a single snapshot file (snapshot.db). A cluster restore with etcdctl snapshot restore creates new etcd data directories; all members should restore using the same snapshot. Restoring overwrites some snapshot metadata (specifically, the member ID and cluster ID), so the member loses its former identity. Therefore, in order to start a cluster from a snapshot, the restore must start a new logical cluster.
Now we will use the snapshot backup to restore etcd, as shown below. If you want to use a specific data directory for the restore, you can specify it with the --data-dir flag; the destination directory must be empty and writable.
# Copy the snapshot.db file to all etcd nodes
$ scp snapshot.db etcd1:

# Repeat this command for all etcd members, to create the directory
$ etcdctl snapshot restore <path>/<snapshot> [--data-dir <data_dir>] \
    --name etcd1 \
    --initial-cluster etcd1=https://<IP1>:2380,etcd2=https://<IP2>:2380,etcd3=https://<IP3>:2380 \
    --initial-cluster-token <token> \
    --initial-advertise-peer-urls https://<IP1>:2380

# For instance, for the first node
$ etcdctl snapshot restore snapshot.db \
    --name etcd1 \
    --initial-cluster etcd1=https://etcd1:2380,etcd2=https://etcd2:2380,etcd3=https://etcd3:2380 \
    --initial-cluster-token k8s_etcd \
    --initial-advertise-peer-urls https://etcd1:2380
Before we continue, let's stop all the API server instances. Then stop the etcd service on the nodes.
# Let's go to the master(s) and temporarily move the "kube-apiserver.yaml" file
$ sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/

# Stop etcd service on etcd node(s)
$ sudo systemctl stop etcd.service
As mentioned, the restore command generates the member directory, which is then copied into the path where the etcd node data are stored (the default path is /var/lib/etcd/)
# Paste the snapshot into the path where the etcd node data are stored
$ sudo cp -r <path>/<restore> /var/lib/etcd/

# For each etcd node
$ sudo cp -r $HOME/etcd1.etcd/member/ /var/lib/etcd/
Finally, we restart the etcd service on the nodes and restore the API server
# Start etcd service on etcd node(s)
$ sudo systemctl start etcd.service

# Restore the API server from the master(s)
$ sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
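Before handing the cluster back, it's worth a quick sanity check that etcd and the API server respond again. A minimal sketch (adjust the endpoints to your own nodes):

# Verify that all etcd members are healthy
$ etcdctl endpoint health --endpoints=etcd1:2379,etcd2:2379,etcd3:2379

# Verify that the API server answers again
$ kubectl get nodes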
It's also recommended to restart any components (e.g. kube-scheduler, kube-controller-manager, kubelet) to ensure that they don't rely on stale data. Note that in practice the restore takes a bit of time; during the restoration, critical components lose their leader lock and restart themselves.
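One way to restart them, assuming a kubeadm-style layout where the scheduler and controller-manager run as static pods under /etc/kubernetes/manifests (paths and names may differ in your setup):

# Restart the kubelet itself
$ sudo systemctl restart kubelet.service

# Bounce the scheduler and controller-manager by briefly moving their manifests
# out of the static pod directory and back (give the kubelet a few seconds to
# notice each move)
$ sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
$ sudo mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
$ sudo mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
$ sudo mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/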