...
We will then explain how to create a connection point between these two components. The joining point consists of a file containing the access credentials (username, password) and the coordinates (project ID, region) of the OpenStack tenant that we want to link. The purpose, in fact, is to automatically create the LB and its components (Listener, Pool, Policy, etc.) starting from the Kubernetes cluster. Finally, note that this page is based on a GitHub guide, which you can reach from here.
Requirements
A few prerequisites are needed before starting the deployment:
...
```
$ kubectl config set-context --current --namespace=<namespace>

# Validate it
$ kubectl config view --minify | grep namespace:
```
Deploy octavia-ingress-controller in the Kubernetes cluster
Create service account and grant permissions
For testing purposes, we grant the cluster-admin role to the service account we create. Save the file and proceed with the apply.
```yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: octavia-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: octavia-ingress-controller
    namespace: kube-system
```
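A usage sketch for the manifest above, assuming it was saved as `serviceaccount.yaml` (the file name is arbitrary):

```
$ kubectl apply -f serviceaccount.yaml

# Check that the service account exists
$ kubectl -n kube-system get serviceaccount octavia-ingress-controller
```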
Prepare octavia-ingress-controller configuration
As mentioned in the introduction, the octavia-ingress-controller needs to communicate with the OpenStack cloud to create the resources corresponding to the Kubernetes Ingress Resource, so the credentials of an OpenStack user (it doesn't need to be the admin user) must be provided in the openstack section. Additionally, in order to differentiate the Ingresses between Kubernetes clusters, cluster-name needs to be unique. Once you have filled in the fields appropriately (see below how), run the apply here too.
...
- domain-name, domain-id, username, user-id, password: all this information (except the password) can be found in the Identity/Users tab, by selecting the desired user.
- project-id, auth-url: go to the Project/API Access tab and click the button View Credentials.
- region: the region is present at the top left, next to the OpenStack logo.
- subnet-id, floating-network-id: go to the Project/Network/Networks tab. Retrieve the ID of the public network and of the sub-network (be careful not to confuse it with the network ID); a CLI alternative is sketched after this list.
- manage-security-groups: for the moment we set it to false (the default value). Later we will explain what this key is for.
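If you prefer the OpenStack CLI to the dashboard, the same values can be retrieved with a few commands. A sketch, assuming the `openstack` client is installed and the tenant's RC file has been sourced:

```
# Project ID of the current tenant
$ openstack token issue -c project_id -f value

# IDs of the floating (public) network and of the cluster sub-network
$ openstack network list
$ openstack subnet list
```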
Deploy octavia-ingress-controller
Info: StatefulSet is the workload API object used to manage stateful applications. Like a Deployment (preferred for stateless applications), a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling. If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.
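For illustration only, a minimal StatefulSet sketch (not the octavia-ingress-controller manifest; all names are placeholders): its Pods get ordinal, stable names such as web-0 and web-1 that survive rescheduling.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web        # headless Service governing the Pods' network identity
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```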
...
```
$ kubectl describe pod/octavia-ingress-controller-0
$ kubectl logs pod/octavia-ingress-controller-0
```
Setting up HTTP Load Balancing with Ingress
Create a backend service
Create simple web services (of type NodePort), analogous to those encountered in the previous chapter. When you create a Service of type NodePort, Kubernetes makes your Service available on a randomly selected high port number (in the range 30000-32767) on all the nodes in your cluster.
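As a reminder, a minimal sketch of what such a NodePort Service could look like for the coffee backend (the actual manifests come from the previous chapter; the selector label and target port here are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  type: NodePort
  selector:
    app: coffee        # assumed label on the coffee Pods
  ports:
    - port: 80
      targetPort: 8080 # container port, as suggested by the curl output below
```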
...
```
$ kubectl get svc
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
coffee-svc   NodePort   10.110.66.194   <none>        80:31156/TCP   3d1h
tea-svc      NodePort   10.96.32.111    <none>        80:30458/TCP   3d1h

# Verify that the service is working
$ curl 10.110.66.194
Server address: 172.16.231.221:8080
Server name: coffee-6f4b79b975-v7cv2
Date: 30/Oct/2020:16:33:20 +0000
URI: /
Request ID: 8d870888961431bf04dd2305d614004f
```
Create an Ingress Resource
Now we create an Ingress Resource to make your HTTP web server application publicly accessible. The following lines define an Ingress Resource that forwards traffic requesting http://webserver-bar.com to the webserver
...
```
$ curl webserver-bar.com/coffee
Server address: 172.16.94.81:8080
Server name: coffee-6f4b79b975-25jrn
Date: 30/Oct/2020:17:34:39 +0000
URI: /coffee
Request ID: 448271f01b708f4bb1d92b31600be368

$ curl webserver-bar.com/tea
Server address: 172.16.141.44:8080
Server name: tea-6fb46d899f-47rtv
Date: 30/Oct/2020:17:34:46 +0000
URI: /tea
Request ID: 47f120d482f1236c4f5351b114d389a5
```
Let's move to OpenStack
Go to OpenStack and check that the LB has been created in the Project/Network/LoadBalancer tab. Navigate into the LB to verify that its internal components were created and, in particular, how they were created. Let's analyze, as an example, the L7 Rule that was automatically generated to reach the webserver-bar.com/tea address. Here we can get a taste of the convenience of this approach: besides building and automatically configuring the components that make up the LB, the controller also generates the Policies and Rules on the basis of what is written in the configuration file of the Ingress Resource. This help becomes more appreciable the more complex the structure of your web server is.
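If you prefer the command line to the dashboard, the same objects can be inspected with the Octavia CLI (a sketch, assuming the `python-octaviaclient` plugin is installed; the policy ID is a placeholder):

```
$ openstack loadbalancer list
$ openstack loadbalancer listener list
$ openstack loadbalancer l7policy list

# Show the rules of a specific policy, e.g. the one routing /tea
$ openstack loadbalancer l7rule list <l7policy-id>
```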
One topic is still pending: the manage-security-groups key of the ConfigMap. If set to true, a new group would appear in the Project/Network/SecurityGroups tab, containing the NodePorts of the deployed services (in this case, ports 31156 and 30458 of the coffee and tea services). In this way, the group can be associated with the cluster instances. Why did I recommend setting the value to false? Because it would be useless, since the ports are already open. In fact, when installing Kubeadm (see "Preliminary steps" in chap. 2), it is recommended to open the port range 30000-32767 on the WorkerNodes, which corresponds to the range in which a NodePort can be generated.
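As a reminder, a sketch of opening that range on a worker node, assuming firewalld is the firewall in use (adapt it to your distribution's firewall, or to an OpenStack security group rule, otherwise):

```
$ sudo firewall-cmd --permanent --add-port=30000-32767/tcp
$ sudo firewall-cmd --reload
```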