This page describes how to add or remove nodes in an existing cluster.
Remove nodes
You may want to remove master, worker, or etcd nodes from your existing cluster. This can be done by running the remove-node.yml playbook. First, all specified nodes are drained, then some Kubernetes services are stopped and some certificates are deleted, and finally kubectl is run to delete these nodes. Use the --extra-vars flag to select the node(s) you want to delete.
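For example, assuming an inventory file at inventory/mycluster/hosts.yaml and a node named node5 (both hypothetical names for illustration):

```shell
# Drain node5, stop its Kubernetes services, and delete it from the cluster.
# The "node" variable selects the node(s) to remove; use a comma-separated
# list to remove several nodes at once.
ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml \
  --extra-vars "node=node5"
```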
If the node you want to remove is not online, add reset_nodes=false and allow_ungraceful_removal=true to your extra vars. Once the deletion process is finished, remember to delete the node from the inventory file.
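A sketch of the offline case, again with a hypothetical inventory path and node name:

```shell
# node5 is unreachable: skip the per-node reset step and force removal anyway.
ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml \
  --extra-vars "node=node5 reset_nodes=false allow_ungraceful_removal=true"
```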
Cleanup
You can reset your nodes and wipe out all components installed with Kubespray via the reset.yml playbook. In general, it is recommended that you review the inventory file before launching any playbook: it may still contain nodes that are no longer part of the cluster, for example because they were previously deleted. In other words, the nodes in the inventory must match the nodes that actually belong to the cluster.
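A minimal invocation, assuming the same hypothetical inventory path as above:

```shell
# Wipe all Kubespray-installed components from every node in the inventory.
# This is destructive: double-check the inventory before running it.
ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml
```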
Adding nodes
You may want to add worker, master, or etcd nodes to your existing cluster. This can be done by re-running the cluster.yml playbook after adding the new node to the inventory file.
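For example (hypothetical inventory path):

```shell
# After adding the new node to the inventory, re-run the full cluster playbook.
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
```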
Add worker
As you may have noticed by now, running the full playbook takes a few minutes to complete its various operations. If you only need to add a worker node to the cluster, you can save time by using the scale.yml playbook for this task instead.
To save further processing time, you can use the --limit=<node_name> flag so Kubespray avoids disturbing the other nodes in the cluster. Before launching the playbook with this flag, it is advisable to run the facts.yml playbook to refresh the facts cache for all nodes.
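A sketch of both steps, assuming a hypothetical inventory path and a new worker named node6 (the exact location of facts.yml may differ between Kubespray versions):

```shell
# Refresh the facts cache for all nodes first...
ansible-playbook -i inventory/mycluster/hosts.yaml playbooks/facts.yml

# ...then add only the new worker, leaving the other nodes untouched.
ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml --limit=node6
```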
Add master
Append the new host to the inventory and run cluster.yml (you can NOT use scale.yml for that).
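For example, after appending the new master host to the (hypothetical) inventory file:

```shell
# Adding a master requires the full cluster playbook; scale.yml will not work.
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
```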