Kubespray (henceforth KS) is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS and Kubernetes cluster configuration management tasks. Kubespray provides:
- a highly available cluster;
- composable attributes;
- support for most popular Linux distributions (Ubuntu, CentOS, Fedora, etc.).
Creating a cluster
Before using KS, some preliminary steps are required. The first is to create on OpenStack the VMs that will be part of the cluster (you can automate this with Terraform), plus one additional VM, which we will call ServerAnsible (henceforth SA): it will have Ansible installed on it and is the machine from which KS will be run. SSH communication from the SA to the other machines must be allowed. For example, you can create a key pair with the `ssh-keygen` command, keeping the private part on the SA and depositing the public part on the cluster VMs. It is advisable to perform at least one login from the SA to each of the other VMs, both to verify connectivity and to automatically register the VMs in the `$HOME/.ssh/known_hosts` file.
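The key setup just described might look like the following sketch. The user name and IP address are illustrative; substitute the ones used by your OpenStack images.

```shell
# Generate a key pair on the SA (file name and empty passphrase are illustrative choices)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Copy the public part to each cluster VM (example user and IP)
ssh-copy-id ubuntu@10.0.0.11

# Test access; this also registers the VM in ~/.ssh/known_hosts
ssh ubuntu@10.0.0.11 hostname
```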
Now we are ready to clone the repository from GitHub to the SA
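A sketch of the clone step, using the official Kubespray repository URL; you may want to check out a specific release tag rather than the default branch.

```shell
# Clone the Kubespray repository onto the SA and enter it
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
```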
Now let's run the following commands:
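The commands below follow the standard Kubespray quick start, run from the root of the cloned repository; the IP addresses are examples, and exact paths may differ slightly between releases.

```shell
# Install Ansible and the other Python dependencies
sudo pip install -r requirements.txt

# Copy the sample inventory as a starting point for our cluster
cp -rfp inventory/sample inventory/mycluster

# Declare the IPs of the cluster VMs (example addresses)
declare -a IPS=(10.0.0.11 10.0.0.12 10.0.0.13)

# Generate inventory/mycluster/hosts.yaml from the IP list
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review the inventory and group_vars files, then launch the playbook
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```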
Before launching KS, it is recommended that you take a look at the files mentioned above, which contain various parameters to customize the cluster. However, we will talk about these files in more detail in the next sub-chapter.
Info
Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. The `become` keyword leverages existing privilege escalation tools like sudo. The `--become-user=root` flag can be omitted because the default is root.
Building your own inventory
As seen above, KS provides the Python script `inventory.py` to automatically create an inventory file. An Ansible inventory can be stored in three formats (YAML, JSON, or INI); in this case, a `hosts.yaml` file is generated. In the path `inventory/mycluster` you will already find an `inventory.ini` file, which you can manually edit and use later in the various playbooks. The generated file has a structure similar to the following:
It is divided into two parts: the first lists the IPs of the hosts, the second shows the role that each of them will assume within the k8s cluster. The second part, in turn, is composed of 3 groups:
- kube-node: list of Kubernetes nodes where the pods will run;
- kube-master: list of servers where the Kubernetes master components (apiserver, scheduler, controller) will run;
- etcd: list of servers that compose the etcd cluster (you should have at least 3 servers for failover purposes).
When `kube-node` contains `etcd`, you define your etcd cluster to be schedulable for Kubernetes workloads as well. If you want it standalone, make sure those groups do not intersect. If you want a server to act both as master and node, it must be defined in both the `kube-master` and `kube-node` groups. If you want a standalone and unschedulable master, the server must be defined only in `kube-master` and not in `kube-node`.
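As a hypothetical excerpt of the `children` section, the group membership rules above could look like this: node1 is a standalone, unschedulable master (listed in `kube-master` and `etcd` but not in `kube-node`), while node2 and node3 run the workloads.

```yaml
children:
  kube-master:
    hosts:
      node1:
  kube-node:
    hosts:
      node2:
      node3:
  etcd:
    hosts:
      node1:
      node2:
      node3:
```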