Kubespray (henceforth KS) is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS and Kubernetes cluster configuration management tasks. Kubespray provides:

  • a highly available cluster;
  • composable attributes;
  • support for most popular Linux distributions (Ubuntu, CentOS, Fedora, etc.).

Creating a cluster

Before using KS, some preliminary steps are required. First, create on OpenStack the VMs that will form the cluster (you can automate this with Terraform), plus one additional VM, which we will call ServerAnsible (henceforth SA): it will have Ansible installed on it, and the deployment will be run from it. SSH communication from the SA to the other machines must be allowed. For example, you can create a key pair with the ssh-keygen command, keeping the private part on the SA and installing the public part on the cluster VMs. It is advisable to perform at least one SSH login from the SA to each of the other VMs, both to test connectivity and to automatically register the VMs in the $HOME/.ssh/known_hosts file.
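The key setup can be sketched as follows (the key path and the "ubuntu" user name are assumptions; adapt them to your images):

```shell
# Create the .ssh directory if missing, then generate a dedicated key pair
# (ed25519, no passphrase, so Ansible can use it non-interactively)
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/kubespray_key" -N "" -q

# Install the public key on each cluster VM, then test the login
# (run once per IP; the "ubuntu" user is an assumption, use your image's default user):
# ssh-copy-id -i "$HOME/.ssh/kubespray_key.pub" ubuntu@<IP_VM1>
# ssh -i "$HOME/.ssh/kubespray_key" ubuntu@<IP_VM1> hostname
```

The test login also takes care of adding each VM to $HOME/.ssh/known_hosts, as mentioned above.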

Now we are ready to clone the repository from GitHub to the SA

Clone repo
$ git clone https://github.com/kubernetes-sigs/kubespray.git
# After the download, enter the following folder
# (the locations of the other files that will be presented in the guide are relative to it)
$ cd kubespray

Now let's run the following commands

Deploy cluster
# Install dependencies (any missing packages will be reported)
$ sudo pip3 install -r requirements.txt
# Copy the folder, so that you always have a default from which to start over
$ cp -rfp inventory/sample inventory/mycluster

# Create an array with the IPs of the cluster VMs
$ declare -a IPS=(<IP_VM1> <IP_VM2> <IP_VM3>)
# Automatically generates the possible cluster structure
# Run "python3 contrib/inventory_builder/inventory.py help" for more information (more details below)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters
$ cat inventory/mycluster/group_vars/all/all.yml
$ cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
$ cat inventory/mycluster/group_vars/k8s-cluster/addons.yml
# Deploy Kubespray with Ansible Playbook (this may take 15-20 minutes)
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
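As an aside on the Bash syntax used above: declare -a creates an array, and "${IPS[@]}" expands to one word per IP, which is how inventory.py receives the host list. A minimal illustration with placeholder addresses:

```shell
# An array of example IPs, as in the deploy steps above
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)

# "${IPS[@]}" expands to one argument per element
echo "${IPS[@]}"     # prints: 10.0.0.1 10.0.0.2 10.0.0.3

# "${#IPS[@]}" is the element count
echo "${#IPS[@]}"    # prints: 3
```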

Before launching KS, it is recommended that you take a look at the files mentioned above, which contain various parameters to customize the cluster. However, we will talk about these files in more detail in the next sub-chapter.

Info

Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user’s permissions. The become keyword leverages existing privilege escalation tools like sudo. The --become-user=root flag can be omitted because the default is root.
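If you prefer not to pass --become on every run, the same behaviour can be configured once in an ansible.cfg file (a sketch; Ansible reads this file from the current working directory, among other locations):

```ini
[privilege_escalation]
become = True
become_method = sudo
become_user = root
```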

Building your own inventory

As seen above, KS provides the Python script inventory.py to automatically create an inventory file. Ansible inventories can be stored in three formats (YAML, JSON, or INI); in this case the hosts.yaml file is generated. In the path inventory/mycluster you will also find an inventory.ini file, which you can edit manually and use later in the various playbooks. The generated file has a structure similar to the following

hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.100.1
      ip: 192.168.100.1
      access_ip: 192.168.100.1
    node2:
      ansible_host: 192.168.100.2
      ip: 192.168.100.2
      access_ip: 192.168.100.2
    node3:
      ansible_host: 192.168.100.3
      ip: 192.168.100.3
      access_ip: 192.168.100.3
    node4:
      ansible_host: 192.168.100.4
      ip: 192.168.100.4
      access_ip: 192.168.100.4
    node5:
      ansible_host: 192.168.100.5
      ip: 192.168.100.5
      access_ip: 192.168.100.5
    node6:
      ansible_host: 192.168.100.6
      ip: 192.168.100.6
      access_ip: 192.168.100.6
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node4:
        node5:
        node6:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

It is divided into two parts: the first lists the hosts with their IPs, the second assigns each of them a role within the k8s cluster. The second part, in turn, is composed of 3 groups:

  • kube-node: list of Kubernetes nodes where the pods will run;
  • kube-master: list of servers where the Kubernetes master components (apiserver, scheduler, controller) will run;
  • etcd: list of servers that compose the etcd cluster (you should have at least 3 servers for failover purposes).

When kube-node contains the etcd hosts, your etcd cluster is also schedulable for Kubernetes workloads; if you want a standalone etcd cluster, make sure those groups do not intersect. If you want a server to act both as master and node, it must be defined in both the kube-master and kube-node groups. If you want a standalone, unschedulable master, the server must be defined only in kube-master and not in kube-node.
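For example, to keep node1 as a standalone, unschedulable master, list it under kube-master only (a fragment of hosts.yaml in the same format as above):

```yaml
children:
  kube-master:
    hosts:
      node1:      # master only: absent from kube-node, so no workloads run here
  kube-node:
    hosts:
      node2:
      node3:
```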
