...
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
# Install dependencies (any missing packages will be reported)
$ sudo pip3 install -r requirements.txt

# Copy the sample folder, so that you always have a default from which to start over
$ cp -rfp inventory/sample inventory/mycluster

# Create an array with the IPs of the cluster VMs
$ declare -a IPS=(<IP_VM1> <IP_VM2> <IP_VM3>)

# Automatically generate a possible cluster structure
# Run python3 contrib/inventory_builder/inventory.py help for more information (more details below)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters
$ cat inventory/mycluster/group_vars/all/all.yml
$ cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
$ cat inventory/mycluster/group_vars/k8s-cluster/addons.yml

# Deploy Kubespray with the Ansible playbook
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml |
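The IPS variable passed to the inventory builder above is an ordinary bash array; a quick sketch of how it expands (with hypothetical addresses standing in for the `<IP_VM1> <IP_VM2> <IP_VM3>` placeholders) is:

```shell
# Hypothetical addresses standing in for <IP_VM1> <IP_VM2> <IP_VM3>
declare -a IPS=(192.168.100.1 192.168.100.2 192.168.100.3)

# "${IPS[@]}" expands to every element, space-separated: exactly the
# positional arguments that inventory.py receives in the step above
echo "${IPS[@]}"

# Number of nodes the builder would write into hosts.yaml
echo "${#IPS[@]}"
```

This is why the builder can be re-run with a longer or shorter array to regenerate the inventory for a different node count.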
...
| Info | ||
|---|---|---|
| ||
Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. The --become --become-user=root options passed to ansible-playbook above enable this escalation for the whole run. |
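Privilege escalation can also be declared once in the inventory instead of on every command line. The variable names below are standard Ansible, but placing them in group_vars/all/all.yml is an illustrative assumption, not Kubespray's default:

```yaml
# Illustrative fragment for inventory/mycluster/group_vars/all/all.yml
# Equivalent to passing --become --become-user=root on the command line
ansible_become: true
ansible_become_user: root
```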
Building your own inventory
As seen above, KS provides the Python script inventory.py to create an inventory file automatically. Ansible inventories can be stored in three formats (YAML, JSON or INI); in this case a hosts.yaml file is generated. In the path inventory/mycluster you will also find an inventory.ini file, which you can edit manually and use later in the various playbooks. The generated file has a structure similar to the following:
| Code Block | ||||||
|---|---|---|---|---|---|---|
| ||||||
all:
hosts:
node1:
ansible_host: 192.168.100.1
ip: 192.168.100.1
access_ip: 192.168.100.1
node2:
ansible_host: 192.168.100.2
ip: 192.168.100.2
access_ip: 192.168.100.2
node3:
ansible_host: 192.168.100.3
      ip: 192.168.100.3
      access_ip: 192.168.100.3
node4:
ansible_host: 192.168.100.4
ip: 192.168.100.4
access_ip: 192.168.100.4
node5:
ansible_host: 192.168.100.5
ip: 192.168.100.5
access_ip: 192.168.100.5
node6:
ansible_host: 192.168.100.6
ip: 192.168.100.6
access_ip: 192.168.100.6
children:
kube-master:
hosts:
node1:
kube-node:
hosts:
node1:
node2:
node3:
etcd:
hosts:
node4:
node5:
node6:
k8s-cluster:
children:
kube-master:
kube-node:
calico-rr:
hosts: {} |
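Since Ansible also accepts INI inventories, the same layout could be written along these lines. This is an illustrative sketch of the inventory.ini format mentioned above, not generated output:

```ini
# Hosts and their addresses
[all]
node1 ansible_host=192.168.100.1 ip=192.168.100.1
node2 ansible_host=192.168.100.2 ip=192.168.100.2
node3 ansible_host=192.168.100.3 ip=192.168.100.3
node4 ansible_host=192.168.100.4 ip=192.168.100.4
node5 ansible_host=192.168.100.5 ip=192.168.100.5
node6 ansible_host=192.168.100.6 ip=192.168.100.6

# Role groups
[kube-master]
node1

[kube-node]
node1
node2
node3

[etcd]
node4
node5
node6

# k8s-cluster is the union of masters and nodes
[k8s-cluster:children]
kube-master
kube-node
```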
It is divided into two parts: the first lists the hosts and their IPs; the second assigns the role that each of them will assume within the k8s cluster. The second part, in turn, is composed of three groups:
- kube-node: list of Kubernetes nodes where the pods will run;
- kube-master: list of servers where the Kubernetes master components (apiserver, scheduler, controller) will run;
- etcd: list of servers composing the etcd cluster (you should have at least 3 servers for failover purposes).
When kube-node contains the etcd hosts, the etcd cluster is also schedulable for Kubernetes workloads. If you want a standalone etcd cluster, make sure those groups do not intersect. If you want a server to act as both master and node, it must be listed in both the kube-master and kube-node groups. If you want a standalone, unschedulable master, list the server only in kube-master and not in kube-node.
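As an example, the last scenario (a standalone, unschedulable master) would look like this in the children section of hosts.yaml, keeping the node names from the inventory above:

```yaml
# node1 runs only master components: it appears under kube-master
# but is deliberately absent from kube-node, so no pods are scheduled on it
children:
  kube-master:
    hosts:
      node1:
  kube-node:
    hosts:
      node2:
      node3:
```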