...
Once we have the executable file, we need the certificates to be able to communicate with the etcd node(s). If you don't know where the certificates are located, you can find them with the grep command under the /etc/kubernetes folder on the master node (the default directory that holds the certificates on the etcd node is /etc/ssl/etcd/ssl). Save the locations of the certificates in the following environment variables:
...
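As a point of reference, etcdctl (v3) reads its TLS settings from the `ETCDCTL_CACERT`, `ETCDCTL_CERT`, and `ETCDCTL_KEY` environment variables. A minimal sketch follows; the certificate file names are assumptions, so adjust them to whatever your grep search actually finds:

```shell
# Use the v3 API of etcdctl
export ETCDCTL_API=3

# Hypothetical certificate paths -- substitute the file names found with grep
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem          # CA certificate (assumed name)
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/client.pem        # client certificate (assumed name)
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/client-key.pem     # client key (assumed name)
```

With these variables exported, the etcdctl commands below do not need explicit `--cacert`, `--cert`, or `--key` flags.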
```shell
# Instead of <endpoint> you can substitute a hostname or an IP
$ etcdctl snapshot save <snapshot> --endpoints=<endpoint>:<port>
$ etcdctl snapshot save snapshot.db --endpoints=etcd1:2379
$ etcdctl snapshot save snapshot.db --endpoints=192.168.100.88:2379

# Verify that the snapshot was successful
$ etcdctl snapshot status snapshot.db --write-out=table
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| b89543b8 | 40881777 |      54340 |     106 MB |
+----------+----------+------------+------------+
```
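In practice, the snapshot command is often wrapped in a small script that timestamps each backup and immediately verifies it. A sketch, assuming the endpoint etcd1:2379 from the examples, a hypothetical backup directory, and that the certificate environment variables are already set; the script degrades gracefully when etcdctl is not on the PATH:

```shell
#!/bin/sh
# Sketch: timestamped etcd backup (BACKUP_DIR and the endpoint are assumptions)
BACKUP_DIR=/tmp/etcd-backups
SNAP="$BACKUP_DIR/snapshot-$(date +%Y%m%d-%H%M%S).db"
mkdir -p "$BACKUP_DIR"

# Only run when etcdctl is actually available (e.g. on the master node)
if command -v etcdctl >/dev/null 2>&1; then
    etcdctl snapshot save "$SNAP" --endpoints=etcd1:2379
    # Fail loudly if the snapshot is not readable
    etcdctl snapshot status "$SNAP" --write-out=table
else
    echo "etcdctl not found; would have run: etcdctl snapshot save $SNAP" >&2
fi
```

Keeping several timestamped snapshots instead of overwriting a single file makes it possible to roll back to an earlier cluster state.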
Now we will use the snapshot backup to restore etcd, as shown below (if you want to use a specific data directory for the restore, you can set its location with the --data-dir flag). The restore command generates a member directory, which must then be copied into the path where the etcd node data are stored (the default path is /var/lib/etcd/).
```shell
# The destination directory must be empty and writable
$ etcdctl [--data-dir <data_dir>] snapshot restore <snapshot>
$ etcdctl --data-dir /tmp/snap_dir snapshot restore snapshot.db

# Copy the restored member directory into the path where the etcd node data are stored
$ cp -r /tmp/snap_dir/member /var/lib/etcd/
```
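The requirement that the destination directory be empty can be enforced up front rather than discovered as a restore error. A sketch of the surrounding guards, using the same paths as the example above; the emptiness check is plain shell, and the etcdctl steps run only when the tool and the snapshot file are actually present:

```shell
#!/bin/sh
# Sketch: guarded restore -- refuse to overwrite a non-empty data directory.
# DATA_DIR and SNAP mirror the example values; adjust to your environment.
DATA_DIR=/tmp/snap_dir
SNAP=snapshot.db

# The restore destination must be empty (or absent) and writable
if [ -d "$DATA_DIR" ] && [ -n "$(ls -A "$DATA_DIR" 2>/dev/null)" ]; then
    echo "error: $DATA_DIR is not empty" >&2
    exit 1
fi
mkdir -p "$DATA_DIR"

# Only attempt the restore when etcdctl and the snapshot file exist
if command -v etcdctl >/dev/null 2>&1 && [ -f "$SNAP" ]; then
    etcdctl --data-dir "$DATA_DIR" snapshot restore "$SNAP"
    cp -r "$DATA_DIR/member" /var/lib/etcd/
fi
```

Aborting before the restore when the directory is non-empty avoids mixing stale member data with the restored snapshot.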