In this chapter we will integrate the Ingress with the Load Balancer (henceforth LB). The former, as we saw in the previous chapter, was presented as an entity that lives within the Kubernetes cluster and comes in the form of a Pod. The latter, presented a few pages ago, is an entity that lives outside the cluster and is hosted by the Cloud Provider in use (OpenStack in our case). We strongly recommend reviewing these two objects before proceeding, either through the pages linked above or through other sources.
We will then explain how to create a connection point between these two components. The junction point consists of a file containing the access credentials (username, password) and the coordinates (projectID, region) of the OpenStack tenant that we want to link. The purpose is to create the LB and its components (Listener, Pool, Policy, etc.) automatically, starting from the Kubernetes cluster. Finally, we point out that this page is based on a GitHub guide, which you can reach from here.
Requirements
Some premises are needed before starting the deployment:
...
Let's create and move into the following folder, which will contain the files we will use in this guide. We will create the various components under the kube-system namespace, but you are of course free to use another one.
Code Block |
---|
language | bash |
---|
title | Create directory |
---|
|
$ mkdir -p /etc/kubernetes/octavia-ingress-controller
$ cd /etc/kubernetes/octavia-ingress-controller |
To set a default namespace, avoiding having to type it in the CLI every time, use the following command
Code Block |
---|
language | bash |
---|
title | Set default namespace |
---|
|
$ kubectl config set-context --current --namespace=<namespace>
# Verify the change
$ kubectl config view --minify | grep namespace: |
Deploy octavia-ingress-controller in the Kubernetes cluster
Create service account and grant permissions
For testing purposes, we grant the cluster-admin role to the service account we create. Save the file and proceed with the apply.
Code Block |
---|
language | yml |
---|
title | Grant permissions |
---|
collapse | true |
---|
|
kind: ServiceAccount
apiVersion: v1
metadata:
name: octavia-ingress-controller
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: octavia-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: octavia-ingress-controller
namespace: kube-system |
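As a minimal sketch, assuming the manifest above was saved as serviceaccount.yaml (the file name is arbitrary), the apply and a quick check look like this:
Code Block |
---|
language | bash |
---|
title | Apply service account |
---|
|
# Apply the ServiceAccount and the ClusterRoleBinding (file name is just an example)
$ kubectl apply -f serviceaccount.yaml
# Verify that the service account exists in the kube-system namespace
$ kubectl get serviceaccount octavia-ingress-controller -n kube-system |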
Prepare octavia-ingress-controller configuration
As announced in the introduction, the octavia-ingress-controller needs to communicate with the OpenStack cloud to create resources corresponding to the Kubernetes Ingress resource, so the credentials of an OpenStack user (it doesn't need to be the admin user) must be provided in the openstack section. Additionally, in order to differentiate the Ingresses of different Kubernetes clusters, cluster-name needs to be unique. Once you have filled in the fields appropriately (see below how), run the apply here too.
Code Block |
---|
language | yml |
---|
title | config.yaml |
---|
collapse | true |
---|
|
kind: ConfigMap
apiVersion: v1
metadata:
name: octavia-ingress-controller-config
namespace: kube-system
data:
config: |
cluster-name: <cluster_name>
openstack:
# domain-name: <domain_name> # Choose between domain-name or domain-id (do not use together)
domain-id: <domain_id>
username: <username>
# user-id: <user_id> # Choose between user-id or username (do not use together)
password: <password>
project-id: <project_id>
auth-url: <auth_url>
region: <region>
octavia:
subnet-id: <subnet_id>
floating-network-id: <public_net_id>
manage-security-groups: <boolean_value> # If true, creates automatically SecurityGroup |
Info |
---|
|
It's advisable to create a service account associated with your project, if the project is shared with other users, and to use the credentials of this account. To get a service account you need to ask the Cloud@CNAF administrators. However, for testing purposes, you can use your personal credentials (username/password) for the moment. |
Let's see how to fill the placeholders in the configuration file, retrieving the information from the Horizon dashboard:
- domain-name, domain-id, username, user-id, password: all this information (except the password) can be found in the Identity/Users tab, by selecting the desired user.
- project-id, auth-url: go to the Project/API Access tab and click the button View Credentials.
- region: the region is present at the top left, next to the OpenStack logo.
- subnet-id, floating-network-id: go to the Project/Network/Networks tab. Retrieve the ID of the public network and the sub-network (be careful not to get confused with the network ID).
- manage-security-groups: for the moment we set it to false (default value). Later we will explain what this key is for.
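If you prefer the command line to Horizon, the same values can be retrieved with the OpenStack CLI. This is a sketch, assuming the python-openstackclient is installed and an RC file for your tenant has been sourced; once the placeholders are filled in, apply the ConfigMap as anticipated above.
Code Block |
---|
language | bash |
---|
title | Retrieve values via OpenStack CLI (optional) |
---|
|
# Project ID and user ID of the current credentials
$ openstack token issue -c project_id -c user_id
# Identity (auth) endpoints and their regions
$ openstack catalog show identity
# Public (external) network and its ID
$ openstack network list --external
# Subnets of the private network the cluster is attached to
$ openstack subnet list
# Apply the ConfigMap once the placeholders are filled in (file name is just an example)
$ kubectl apply -f config.yaml |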
Deploy octavia-ingress-controller
Info |
---|
title | Info: StatefulSet vs Deployment |
---|
|
StatefulSet is the workload API object used to manage stateful applications. Like a Deployment (preferred in stateless applications), a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling. If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed. |
We will deploy octavia-ingress-controller as a StatefulSet (with only one pod), due to the presence of shared volumes. Apply the .yaml
file and wait until the Pod is up and running.
Code Block |
---|
language | yml |
---|
title | deployment.yaml |
---|
collapse | true |
---|
|
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: octavia-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: octavia-ingress-controller
  serviceName: octavia-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: octavia-ingress-controller
    spec:
      serviceAccountName: octavia-ingress-controller
      tolerations:
        - effect: NoSchedule # Make sure the pod can be scheduled on master kubelet.
          operator: Exists
        - key: CriticalAddonsOnly # Mark the pod as a critical add-on for rescheduling.
          operator: Exists
        - effect: NoExecute
          operator: Exists
      containers:
        - name: octavia-ingress-controller
          image: docker.io/k8scloudprovider/octavia-ingress-controller:latest
          imagePullPolicy: IfNotPresent
          args:
            - /bin/octavia-ingress-controller
            - --config=/etc/config/octavia-ingress-controller-config.yaml
          volumeMounts:
            - mountPath: /etc/kubernetes
              name: kubernetes-config
              readOnly: true
            - name: ingress-config
              mountPath: /etc/config
      hostNetwork: true
      volumes:
        - name: kubernetes-config
          hostPath:
            path: /etc/kubernetes
            type: Directory
        - name: ingress-config
          configMap:
            name: octavia-ingress-controller-config
            items:
              - key: config
                path: octavia-ingress-controller-config.yaml |
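A short sketch of the apply and of the wait for the Pod, using the file name and namespace adopted above:
Code Block |
---|
language | bash |
---|
title | Apply and wait |
---|
|
# Create the StatefulSet (file name is just an example)
$ kubectl apply -f deployment.yaml
# Wait until the single replica is up and running
$ kubectl get pods -n kube-system -l k8s-app=octavia-ingress-controller -w |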
If the Pod does not assume the desired state, investigate the problem with the following commands (according to the StatefulSet naming convention, the Pods will be named <name>-0, <name>-1, <name>-2, etc., depending on the number of replicas).
Code Block |
---|
language | bash |
---|
title | Get more details |
---|
|
$ kubectl describe pod/octavia-ingress-controller-0
$ kubectl logs pod/octavia-ingress-controller-0 |
Setting up HTTP Load Balancing with Ingress
Create a backend service
Create a couple of simple web services, analogous to those encountered in the previous chapter but of type NodePort, listening on an HTTP server on port 80. When you create a Service of type NodePort, Kubernetes makes your Service available on a randomly selected high port number (in the range 30000-32767) on all the nodes in your cluster.
Code Block |
---|
language | yml |
---|
title | Example deployment |
---|
collapse | true |
---|
|
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: coffee
  type: NodePort # <--- Pay attention
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  labels:
    app: tea
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: tea
  type: NodePort # <--- Pay attention |
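As a sketch, assuming the manifests above were saved in a single file (for example backend.yaml), they can be applied and checked like this:
Code Block |
---|
language | bash |
---|
title | Apply backend services |
---|
|
# Create the two Deployments and the two NodePort Services (file name is just an example)
$ kubectl apply -f backend.yaml
# Check that the coffee and tea Pods are running
$ kubectl get pods -l 'app in (coffee,tea)' |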
For a quick check, we can contact the services, for example with curl, at the address shown in the CLUSTER-IP column (from the Kubernetes master node). Note the system-generated values for the NodePorts of the two services (in this case 31156 and 30458).
Code Block |
---|
language | bash |
---|
title | Verify Service |
---|
|
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coffee-svc NodePort 10.110.66.194 <none> 80:31156/TCP 3d1h
tea-svc NodePort 10.96.32.111 <none> 80:30458/TCP 3d1h
# Verify that the service is working
$ curl 10.110.66.194
Server address: 172.16.231.221:8080
Server name: coffee-6f4b79b975-v7cv2
Date: 30/Oct/2020:16:33:20 +0000
URI: /
Request ID: 8d870888961431bf04dd2305d614004f |
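Since the services are of type NodePort, the same check can also be done against the IP of any cluster node on the generated high ports (a sketch; the node IP is a placeholder, the ports are those shown above):
Code Block |
---|
language | bash |
---|
title | Verify via NodePort |
---|
|
# Any node IP works, because the NodePort is opened on all nodes
$ curl http://<node_ip>:31156   # coffee-svc
$ curl http://<node_ip>:30458   # tea-svc |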
Create an Ingress Resource
Now we create an Ingress resource, to make your HTTP web server application publicly accessible. The following lines define an Ingress resource that forwards the traffic addressed to http://webserver-bar.com to the coffee and tea services created above.
Code Block |
---|
language | yml |
---|
title | Configure Ingress resource |
---|
collapse | true |
---|
|
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lb
  annotations:
    kubernetes.io/ingress.class: "openstack"
    octavia.ingress.kubernetes.io/internal: "false" # Set true, if you don't want your Ingress to be accessible from the public internet
spec:
  rules:
  - host: webserver-bar.com
    http:
      paths:
      - path: /tea # Use webserver-bar.com/tea to target "tea" services
        pathType: Prefix # This field is mandatory. Other values are "ImplementationSpecific" and "Exact"
        backend:
          service:
            name: tea-svc # Enter the service name
            port:
              number: 80 # Enter the port number on which the service is listening
      - path: /coffee # Use webserver-bar.com/coffee to target "coffee" services
        pathType: Prefix # This field is mandatory. Other values are "ImplementationSpecific" and "Exact"
        backend:
          service:
            name: coffee-svc # Enter the service name
            port:
              number: 80 # Enter the port number on which the service is listening |
Warning |
---|
title | Ingress and dashboard namespace |
---|
|
The application and the Ingress Resource must belong to the same namespace, otherwise the latter will not be able to "see" the backend service. If, on the other hand, you are feeling brave, you can experiment by implementing services of type ExternalName (reference). |
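For reference, a minimal sketch of the ExternalName experiment mentioned in the warning: a Service created in the Ingress namespace that points, via DNS, to the real backend Service living in another namespace. The namespace names are placeholders and, as the warning says, this approach is experimental here.
Code Block |
---|
language | yml |
---|
title | ExternalName sketch |
---|
collapse | true |
---|
|
apiVersion: v1
kind: Service
metadata:
  name: tea-svc                  # Name referenced by the Ingress backend
  namespace: <ingress_namespace> # Namespace where the Ingress lives
spec:
  type: ExternalName
  # DNS name of the real Service in its own namespace
  externalName: tea-svc.<app_namespace>.svc.cluster.local |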
Apply the Ingress manifest and verify that the Ingress resource has been created. Please note that the IP address will not be defined right away (wait for the ADDRESS field to get populated). It is possible to follow the implementation of the LB step by step, from the creation of its components (Listener, Pool, Policy) to the assignment of the FIP, through the log of the Pod of the Ingress Controller (the whole operation can take a few minutes).
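As a sketch, assuming the Ingress manifest above was saved as ingress.yaml:
Code Block |
---|
language | bash |
---|
title | Apply the Ingress |
---|
|
# Create the Ingress resource (file name is just an example)
$ kubectl apply -f ingress.yaml
# Follow the LB creation from the controller logs
$ kubectl logs -f pod/octavia-ingress-controller-0 -n kube-system
# Watch the ADDRESS field until it gets populated
$ kubectl get ing -w |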
Code Block |
---|
language | bash |
---|
title | Ingress Resource |
---|
|
$ kubectl get ing
NAME   CLASS    HOSTS               ADDRESS   PORTS   AGE
lb     <none>   webserver-bar.com             80      ... |
...
After a few minutes the ADDRESS field gets populated with the floating IP assigned to the LB:
Code Block |
---|
language | bash |
---|
title | Ingress Resource |
---|
|
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
lb <none> webserver-bar.com 131.154.97.200 80 3d1h |
Using a browser or the curl command, you should be able to access the backend service by sending an HTTP request to the domain name specified in the Ingress resource (remember to map the hostname to the FIP in the /etc/hosts of the machine from which the request to the service starts).
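For example, on the client machine the mapping can be a single line in /etc/hosts, using the FIP shown in the ADDRESS column above:
Code Block |
---|
language | bash |
---|
title | /etc/hosts entry |
---|
|
# Append the hostname-to-FIP mapping on the client machine
$ echo "131.154.97.200 webserver-bar.com" | sudo tee -a /etc/hosts |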
...
language | bash |
---|
title | Connect to the service |
---|
...
|
$ curl webserver-bar.com/coffee
Server address: 172.16.94.81:8080
Server name: coffee-6f4b79b975-25jrn
Date: 30/Oct/2020:17:34:39 +0000
URI: /coffee
Request ID: 448271f01b708f4bb1d92b31600be368
$ curl webserver-bar.com/tea
Server address: 172.16.141.44:8080
Server name: tea-6fb46d899f-47rtv
Date: 30/Oct/2020:17:34:46 +0000
URI: /tea
Request ID: 47f120d482f1236c4f5351b114d389a5 |
Let's move to OpenStack
Go to OpenStack and check that the LB has been created in the Project/Network/LoadBalancer tab. Navigate the LB to verify that its internal components were created and, in particular, how they were created. Let's analyze, by way of example, the L7 Rule that was automatically generated to reach the webserver-bar.com/tea address. Here we get a taste of the convenience of this approach: in addition to building and configuring the components that make up the LB, it also generates the Policies and Rules on the basis of what is written in the Ingress resource. This help becomes more appreciable as the structure of your web server grows more complex.
(Image: the L7 Rule automatically generated for the /tea path)
One topic is still pending: the manage-security-groups key of the ConfigMap. If set to true, a new group appears in the Project/Network/SecurityGroups tab, containing the NodePorts of the deployed services (in this case ports 31156 and 30458 of the coffee and tea services). The group can then be associated with the cluster instances. Why did we recommend setting the value to false? Because it would be redundant, since the ports are already open. In fact, when installing Kubeadm (see "Preliminary steps" in chap. 2), it is recommended to open the port range 30000-32767 on the WorkerNodes, which corresponds to the range in which a NodePort can be generated.
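If, instead, the NodePort range were not already open, a rule equivalent to the one manage-security-groups would create can be added by hand; a sketch with the OpenStack CLI, where the security group name is a placeholder:
Code Block |
---|
language | bash |
---|
title | Open the NodePort range (optional) |
---|
|
# Allow the whole NodePort range towards the worker nodes
$ openstack security group rule create --protocol tcp --dst-port 30000:32767 <worker_security_group> |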
...