Terraform (henceforth TF) is an open-source infrastructure-as-code tool that provides a consistent CLI workflow to manage hundreds of cloud services, codifying cloud APIs into declarative configuration files. For this page we refer to the Kubespray GitHub project.

Requirements

All the files and folders used here come from the same GitHub repository cloned in the KubeSpray chapter. The necessary prerequisites are as follows:

  • install TF;
  • install Ansible;
  • a floating IP pool already created;
  • security groups enabled;
  • a key pair generated that can be used to secure access to the new hosts.
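As a quick sanity check, a short shell helper (an illustrative sketch, not part of Kubespray) can verify that the required command-line tools are on the PATH before going any further:

```shell
# Hypothetical helper: report which of the required CLI tools are installed.
check_tools() {
  local status=0 tool
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "OK: $tool"
    else
      echo "MISSING: $tool"
      status=1
    fi
  done
  return $status
}

check_tools terraform ansible || echo "Install the missing tools before proceeding"
```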

Configuration

Inventory files

Create a directory for your cluster, for instance mycluster, by copying the existing sample-inventory and linking the hosts script, which builds the inventory from the TF state (this directory will be the base for subsequent Terraform commands).

Inventory files
# The following commands must be launched from inside the "kubespray" folder.
$ cp -LRp contrib/terraform/openstack/sample-inventory inventory/mycluster
$ cd inventory/mycluster
$ ln -s ../../contrib/terraform/openstack/hosts
$ ln -s ../../contrib
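If the commands above succeeded, the new inventory directory contains two symlinks; a small sketch (assuming it is run from inside inventory/mycluster) can confirm they point where expected:

```shell
# Hypothetical helper: verify that the expected symlinks exist and show their targets.
check_links() {
  local status=0 link
  for link in "$@"; do
    if [ -L "$link" ]; then
      echo "OK: $link -> $(readlink "$link")"
    else
      echo "MISSING: $link"
      status=1
    fi
  done
  return $status
}

# Run from inside inventory/mycluster:
check_links hosts contrib || echo "symlinks missing; re-run the ln -s commands above"
```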

OpenStack access and credentials

TF supports various authentication methods for OpenStack. The recommended one is to describe the credentials in a clouds.yaml file, which can be stored in the current directory (i.e. mycluster).

clouds.yaml
# Find the "auth_url" and "project_id" parameters in the OpenStack "Project/API Access" tab
clouds:
  openstack:
    auth:
      auth_url: https://cloud-api-pub.cr.cnaf.infn.it:5000/v3/
      username: "yourUser"
      project_name: "yourProject"
      project_id: d2b42ee4145849819b41a9d8794a111d
      user_domain_id: "default"
      password: "yourP4ssw0rd"
    region_name: "sdds"
    interface: "public"
    identity_api_version: 3

If you have multiple clouds defined in your clouds.yaml file, you can choose the one you want to use with the environment variable OS_CLOUD.

environment variable OS_CLOUD
# Insert this line in the .bashrc file for variable persistence
export OS_CLOUD=openstack
# To apply the changes
$ source ~/.bashrc

Cluster variables

The construction of the cluster is driven by values found in ../../contrib/terraform/openstack/variables.tf. You can consult this file to find out which variables are available for configuration, each accompanied by a brief description, the values it accepts and its default. For your cluster, edit cluster.tfvars. Let's take a look at some parameters in this file, which can serve as an example:

cluster.tfvars
# list of availability zones available in your OpenStack cluster
az_list = ["nova"]

# SSH key to use for access to nodes
public_key_path = "/home/centos/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "centos-8-CNAF-x86_64"

# standalone etcds
number_of_etcd = 0
#flavor_etcd = "23e53bd6-be2f-4802-8126-3d7b367468f0"		#m1.training

# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 0
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "23e53bd6-be2f-4802-8126-3d7b367468f0"	#m1.training
#flavor_k8s_master = "3d022f94-02a0-46d9-a41a-0ee2aa95d5f1" #m1.medium
#flavor_k8s_master = "5037b00e-2917-449e-b40b-4e41c8dfea07"	#m1.large
#flavor_k8s_master = "f322e78e-5ba7-4a00-ba31-fc025518a782"	#m1.xlarge

# nodes
number_of_k8s_nodes = 3
number_of_k8s_nodes_no_floating_ip = 0
flavor_k8s_node = "23e53bd6-be2f-4802-8126-3d7b367468f0"	#m1.training

# networking
router_id = "dec41a75-f1e1-4ec2-8c6a-bd87eb283bcc"		# It is possible to use an existing router instead of creating one
network_name = "network-02"
external_net = "ac57f9a8-4349-4185-8d66-341d1b30a1bd"
subnet_cidr = "192.168.102.0/24"
floatingip_pool = "public"
master_allowed_remote_ips = ["192.168.0.0/16"]
k8s_allowed_remote_ips = ["192.168.0.0/16"]

Note that the Ansible script will reject the configuration if you wind up with an even number of etcd instances, since an even-sized etcd cluster cannot maintain quorum. This count includes both the etcd replicas running on master nodes and any standalone etcd nodes deployed alongside them. For example, if you have three master nodes with etcd replicas and three standalone etcd nodes, the script will fail since there are six total etcd replicas.
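The parity rule above boils down to simple arithmetic; the following is only an illustrative sketch, not part of Kubespray (the two arguments stand for the master-with-etcd count and the standalone etcd count from cluster.tfvars):

```shell
# Illustrative check: the total number of etcd replicas must be odd for quorum.
check_etcd_quorum() {
  local masters_with_etcd=$1 standalone_etcd=$2
  local total=$((masters_with_etcd + standalone_etcd))
  if [ $((total % 2)) -eq 0 ]; then
    echo "INVALID: $total etcd replicas (even counts are rejected)"
    return 1
  fi
  echo "OK: $total etcd replicas"
}

check_etcd_quorum 1 0         # the cluster.tfvars above: 1 master with etcd, 0 standalone
check_etcd_quorum 3 3 || true # the failing example from the text: 6 replicas in total
```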

Provisioning VMs

Initialization

Before TF can operate on your cluster, you need to install the required plugins. This is accomplished as follows and should finish fairly quickly, reporting that TF has successfully initialized and loaded the necessary modules.

Initialization
# Launch from the path inventory/mycluster
$ terraform init ../../contrib/terraform/openstack

Provisioning cluster

You can apply the Terraform configuration to your cluster with the following command issued from the usual path. The same command can be used to apply changes to an existing cluster after modifying the cluster.tfvars configuration file.

Provisioning cluster
$ terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack

Bastion host

If you chose to create a bastion host, this script will create ../../contrib/terraform/openstack/k8s-cluster.yml containing an ssh command that lets Ansible reach your machines by tunneling through the bastion's IP address. If you want to handle the ssh tunneling to these machines manually, delete or move that file; otherwise just leave it there and Ansible will pick it up automatically.

Destroying cluster

You can destroy your new cluster with the following command

Destroying cluster
$ terraform destroy -var-file=cluster.tfvars ../../contrib/terraform/openstack

Debugging

You can enable debugging output from Terraform by setting OS_DEBUG to 1 and TF_LOG to DEBUG before running the Terraform command.
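For example (TF_LOG_PATH is optional and simply redirects the rather verbose Terraform log to a file; the log file name is only an illustration):

```shell
# Enable verbose logging for the OpenStack provider and for Terraform itself.
export OS_DEBUG=1
export TF_LOG=DEBUG
# Optional: write the Terraform log to a file instead of stderr.
export TF_LOG_PATH=./terraform-debug.log
# Then re-run the failing command, e.g.:
#   terraform apply -var-file=cluster.tfvars ../../contrib/terraform/openstack
echo "debugging enabled: OS_DEBUG=$OS_DEBUG TF_LOG=$TF_LOG"
```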
