In this chapter we will integrate the Ingress with the Load Balancer (henceforth LB). The first, as we saw in the previous chapter, is an entity that lives within the Kubernetes cluster and comes in the form of a Pod. The second, presented a few pages ago, is an entity that lives outside the cluster and is hosted by the Cloud Provider in use (OpenStack in our case). We strongly recommend reviewing these two objects before proceeding, either through the pages linked above or through other sources.
We will then explain how to create a connection point between these two components. The joining point consists of a file containing the access credentials (username, password) and the coordinates (projectID, region) of the OpenStack tenant we want to link. The goal, in fact, is to create the LB and its components (Listener, Pool, Policy, etc.) automatically, starting from the Kubernetes cluster. Finally, note that this page is based on a GitHub guide, which you can reach from here.
Deploy octavia-ingress-controller in the Kubernetes cluster
First, let's create and move into the following folder, which will contain the files we will use in this guide. We will create the various components under the kube-system namespace, but you are of course free to use another one.
Code Block |
---|
language | bash |
---|
title | Create directory |
---|
|
$ mkdir -p /etc/kubernetes/octavia-ingress-controller
$ cd /etc/kubernetes/octavia-ingress-controller |
Create service account and grant permissions
For testing purposes, we grant the cluster-admin role to the service account we are creating. Save the file and proceed with the apply.
Code Block |
---|
language | yml |
---|
title | Grant permissions |
---|
collapse | true |
---|
|
kind: ServiceAccount
apiVersion: v1
metadata:
name: octavia-ingress-controller
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: octavia-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: octavia-ingress-controller
namespace: kube-system |
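For example, assuming the manifest above was saved as serviceaccount.yaml (the file name is arbitrary), it can be applied and checked with:
Code Block |
---|
language | bash |
---|
title | Apply service account |
---|
|
$ kubectl apply -f serviceaccount.yaml
$ kubectl -n kube-system get serviceaccount octavia-ingress-controller |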
Prepare octavia-ingress-controller configuration
As anticipated in the introduction, the octavia-ingress-controller needs to communicate with the OpenStack cloud in order to create the resources corresponding to the Kubernetes Ingress resource, so the credentials of an OpenStack user (it does not need to be the admin user) must be provided in the openstack section. Additionally, in order to differentiate the Ingresses of different Kubernetes clusters, cluster-name needs to be unique.
Code Block |
---|
language | yml |
---|
title | Configuration |
---|
collapse | true |
---|
|
kind: ConfigMap
apiVersion: v1
metadata:
name: octavia-ingress-controller-config
namespace: kube-system
data:
config: |
cluster-name: <cluster_name>
openstack:
# domain-name: <domain_name> # Choose between domain-name or domain-id (do not use together)
domain-id: <domain_id>
username: <username>
# user-id: <user_id> # Choose between user-id or username (do not use together)
password: <password>
project-id: <project_id>
auth-url: <auth_url>
region: <region>
octavia:
subnet-id: <subnet_id>
floating-network-id: <public_net_id>
manage-security-groups: <boolean_value> # If true, the required security group is created automatically |
Info |
---|
|
It's advisable to create a service account associated with your project, if the project is shared with other users, and use the credentials of this account. To get a service account you need to ask the Cloud@CNAF administrators. However, for testing purposes, for the moment you can use your personal credentials (username/password). |
Let's see how to fill in the placeholders of the configuration file, retrieving the information from the Horizon dashboard:
- domain-name, domain-id, username, user-id, password: all this information (except the password) can be found in the Identity/Users tab, by selecting the desired user.
- project-id, auth-url: go to the Project/API Access tab and click the button View Credentials.
- region: the region is present at the top left, next to the OpenStack logo.
- subnet-id, floating-network-id: go to the Project/Network/Networks tab. Retrieve the ID of the public network and of the subnet (be careful not to confuse it with the network ID).
- manage-security-groups: for the moment, set it to false (the default value). Later we will explain what this key is for.
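Once the placeholders are filled in, the ConfigMap can be applied like any other manifest. A minimal sketch, assuming the file was saved as config.yaml:
Code Block |
---|
language | bash |
---|
title | Apply configuration |
---|
|
$ kubectl apply -f config.yaml
$ kubectl -n kube-system get configmap octavia-ingress-controller-config |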
Deploy octavia-ingress-controller
Info |
---|
title | Info: StatefulSet vs Deployment |
---|
|
StatefulSet is the workload API object used to manage stateful applications. Like a Deployment (preferred for stateless applications), a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling. If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed. |
We will deploy octavia-ingress-controller as a StatefulSet (with only one pod), due to the presence of shared volumes. Apply the .yaml file and wait until the Pod is up and running.
Code Block |
---|
language | yml |
---|
title | Deploy Controller |
---|
collapse | true |
---|
|
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: octavia-ingress-controller
namespace: kube-system
labels:
k8s-app: octavia-ingress-controller
spec:
replicas: 1
selector:
matchLabels:
k8s-app: octavia-ingress-controller
serviceName: octavia-ingress-controller
template:
metadata:
labels:
k8s-app: octavia-ingress-controller
spec:
serviceAccountName: octavia-ingress-controller
tolerations:
- effect: NoSchedule # Make sure the pod can be scheduled on master kubelet.
operator: Exists
- key: CriticalAddonsOnly # Mark the pod as a critical add-on for rescheduling.
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: octavia-ingress-controller
image: docker.io/k8scloudprovider/octavia-ingress-controller:latest
imagePullPolicy: IfNotPresent
args:
- /bin/octavia-ingress-controller
- --config=/etc/config/octavia-ingress-controller-config.yaml
volumeMounts:
- mountPath: /etc/kubernetes
name: kubernetes-config
readOnly: true
- name: ingress-config
mountPath: /etc/config
hostNetwork: true
volumes:
- name: kubernetes-config
hostPath:
path: /etc/kubernetes
type: Directory
- name: ingress-config
configMap:
name: octavia-ingress-controller-config
items:
- key: config
path: octavia-ingress-controller-config.yaml |
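As a sketch, assuming the manifest above was saved as deployment.yaml, apply it and watch the Pod until it reaches the Running state:
Code Block |
---|
language | bash |
---|
title | Apply the controller |
---|
|
$ kubectl apply -f deployment.yaml
$ kubectl -n kube-system get pod octavia-ingress-controller-0 --watch |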
If the Pod does not reach the desired state, investigate the problem with the following commands (according to the StatefulSet naming convention, the Pods are named <name>-0, <name>-1, <name>-2, etc., depending on the number of replicas):
Code Block |
---|
language | bash |
---|
title | Get more details |
---|
|
$ kubectl -n kube-system describe pod/octavia-ingress-controller-0
$ kubectl -n kube-system logs pod/octavia-ingress-controller-0 |
Setting up HTTP Load Balancing with Ingress
Create a backend service
Create simple web services, analogous to those encountered in the previous chapter but of type NodePort, each exposing an HTTP server on port 80. When you create a Service of type NodePort, Kubernetes makes your Service available on a randomly selected high port number (in the range 30000-32767) on all the nodes in your cluster.
Code Block |
---|
language | yml |
---|
title | Example deployment |
---|
collapse | true |
---|
|
apiVersion: apps/v1
kind: Deployment
metadata:
name: coffee
spec:
replicas: 2
selector:
matchLabels:
app: coffee
template:
metadata:
labels:
app: coffee
spec:
containers:
- name: coffee
image: nginxdemos/nginx-hello:plain-text
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: coffee-svc
labels:
app: coffee
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: coffee
type: NodePort # <--- Pay attention
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tea
spec:
replicas: 3
selector:
matchLabels:
app: tea
template:
metadata:
labels:
app: tea
spec:
containers:
- name: tea
image: nginxdemos/nginx-hello:plain-text
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: tea-svc
labels:
app: tea
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: tea
type: NodePort # <--- Pay attention |
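Assuming the manifests above were saved in a single file, for instance backend.yaml, create the Deployments and Services with:
Code Block |
---|
language | bash |
---|
title | Create the backend services |
---|
|
$ kubectl apply -f backend.yaml
$ kubectl get deployments,services |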
We can verify the services from the Kubernetes master node, using their CLUSTER-IP
Code Block |
---|
language | bash |
---|
title | Verify Service |
---|
|
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coffee-svc NodePort 10.110.66.194 <none> 80:31156/TCP 3d1h
tea-svc NodePort 10.96.32.111 <none> 80:30458/TCP 3d1h
# Verify that the service is working
$ curl 10.110.66.194
Server address: 172.16.231.221:8080
Server name: coffee-6f4b79b975-v7cv2
Date: 30/Oct/2020:16:33:20 +0000
URI: /
Request ID: 8d870888961431bf04dd2305d614004f |
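Since the Services are of type NodePort, the same backends are also reachable on any node's IP at the automatically assigned high port (31156 for coffee-svc in the output above). A hypothetical example, assuming a worker node with IP 192.168.100.10:
Code Block |
---|
language | bash |
---|
title | Verify NodePort (optional) |
---|
|
# The node IP below is hypothetical; the port comes from the "kubectl get svc" output above
$ curl 192.168.100.10:31156 |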
Create an Ingress resource
Now we create an Ingress resource to make the HTTP web server applications publicly accessible. The following manifest defines an Ingress resource that forwards traffic requesting http://webserver-bar.com to the web servers.
Code Block |
---|
language | yml |
---|
title | Configure Ingress resource |
---|
collapse | true |
---|
|
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: lb
annotations:
kubernetes.io/ingress.class: "openstack"
octavia.ingress.kubernetes.io/internal: "false" # Set to "true" if you don't want your Ingress to be accessible from the public internet
spec:
rules:
- host: webserver-bar.com
http:
paths:
- path: /tea
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
- path: /coffee
pathType: Prefix
backend:
service:
name: coffee-svc
port:
number: 80 |
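Assuming the resource above was saved as ingress.yaml, create it and, optionally, follow the work of the controller in its log:
Code Block |
---|
language | bash |
---|
title | Create the Ingress |
---|
|
$ kubectl apply -f ingress.yaml
$ kubectl -n kube-system logs -f octavia-ingress-controller-0 |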
Verify that the Ingress resource has been created. Note that the IP address will not be assigned right away (wait for the ADDRESS field to be populated). You can follow the creation of the LB step by step, from the creation of its components (Listener, Pool, Policy) to the assignment of the FIP, in the log of the Ingress Controller Pod (the whole operation can take a few minutes).
Code Block |
---|
language | bash |
---|
title | Ingress Resource |
---|
|
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
lb <none> webserver-bar.com 131.154.97.200 80 3d1h |
Go to OpenStack and check that the LB has been created under the Project/Network/LoadBalancer tab. Using a browser or the curl command, you should now be able to reach the backend services by sending HTTP requests to the domain name specified in the Ingress resource (remember to map the hostname to the FIP in the /etc/hosts of the machine from which the requests originate).
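For example, with the FIP assigned above, the /etc/hosts entry would look like this:
Code Block |
---|
language | bash |
---|
title | /etc/hosts entry |
---|
|
# Map the Ingress hostname to the floating IP assigned by Octavia
131.154.97.200   webserver-bar.com |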
Code Block |
---|
language | bash |
---|
title | Connect to the service |
---|
|
$ curl webserver-bar.com/coffee
Server address: 172.16.94.81:8080
Server name: coffee-6f4b79b975-25jrn
Date: 30/Oct/2020:17:34:39 +0000
URI: /coffee
Request ID: 448271f01b708f4bb1d92b31600be368
$ curl webserver-bar.com/tea
Server address: 172.16.141.44:8080
Server name: tea-6fb46d899f-47rtv
Date: 30/Oct/2020:17:34:46 +0000
URI: /tea
Request ID: 47f120d482f1236c4f5351b114d389a5 |