In this chapter we will integrate the Ingress with the Load Balancer (henceforth LB). The former, as we have just seen in the previous chapter, was presented as an entity that lives within the Kubernetes cluster and comes in the form of a Pod. The latter, presented a few pages ago, is an entity that lives outside the cluster and is hosted by the Cloud Provider in use (OpenStack, in our case). We strongly recommend reviewing these two objects before proceeding, either through the pages linked above or through other sources.

We will then explain how to create a connection point between these two components. The junction consists of a file containing the access credentials (username, password) and the coordinates (project ID, region) of the OpenStack tenant we want to link. The purpose is to create the LB and its components (Listener, Pool, Policy, etc.) automatically, starting from the Kubernetes cluster. Finally, note that this page is based on a GitHub guide, which you can reach from here.

Deploy octavia-ingress-controller in the Kubernetes cluster

First, let's create, and move into, the following folder, which will hold the files used throughout this guide. We will create the various components under the kube-system namespace, but you are of course free to use another one.

Create directory
$ mkdir -p /etc/kubernetes/octavia-ingress-controller
$ cd /etc/kubernetes/octavia-ingress-controller

Create service account and grant permissions

For testing purposes, we grant the cluster-admin role to the ServiceAccount we are about to create. Save the file and proceed with the apply, as shown after the manifest.

Grant permissions
kind: ServiceAccount
apiVersion: v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: octavia-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: octavia-ingress-controller
    namespace: kube-system
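
A possible apply step, assuming the manifest above was saved as serviceaccount.yaml in the folder created earlier (the filename is arbitrary):

Apply the manifest
$ kubectl apply -f serviceaccount.yaml
$ kubectl -n kube-system get serviceaccount octavia-ingress-controller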

Prepare octavia-ingress-controller configuration

As announced in the introduction, the octavia-ingress-controller needs to communicate with the OpenStack cloud to create the resources corresponding to the Kubernetes Ingress resource, so the credentials of an OpenStack user (it doesn't need to be the admin user) must be provided in the openstack section. Additionally, in order to differentiate the Ingresses of different Kubernetes clusters, cluster-name must be unique.

Configuration
kind: ConfigMap
apiVersion: v1
metadata:
  name: octavia-ingress-controller-config
  namespace: kube-system
data:
  config: |
    cluster-name: <cluster_name>
    openstack:
      # domain-name: <domain_name>  # Use either domain-name or domain-id (not both)
      domain-id: <domain_id>
      username: <username>
      # user-id: <user_id>          # Use either user-id or username (not both)
      password: <password>
      project-id: <project_id>
      auth-url: <auth_url>
      region: <region>
    octavia:
      subnet-id: <subnet_id>
      floating-network-id: <public_net_id>
      manage-security-groups: <boolean_value> # If true, security groups are created automatically

Advice

It's advisable to create a service account associated with your project, if the project is shared with other users, and to use the credentials of that account. To obtain a service account, ask the Cloud@CNAF administrators. For testing purposes, however, you can use your personal credentials (username/password) for the moment.

Let's see how to fill in the placeholders in the configuration file, retrieving the information from the Horizon dashboard:

  • domain-name, domain-id, username, user-id, password: all this information (except the password) can be found in the Identity/Users tab, by selecting the desired user.
  • project-id, auth-url: go to the Project/API Access tab and click the button View Credentials.
  • region: the region is present at the top left, next to the OpenStack logo.
  • subnet-id, floating-network-id: go to the Project/Network/Networks tab. Retrieve the ID of the public network and the sub-network (be careful not to get confused with the network ID).
  • manage-security-groups: for the moment, insert false (the default value). Later we will explain what this key is for.
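
If you prefer the command line to Horizon, the same information can be retrieved with the openstack CLI. This is only a sketch, assuming the client is installed and your OpenStack RC file has been sourced; the <...> placeholders are yours to fill in.

Retrieve the IDs from the CLI
$ openstack user show <username>                  # user-id and domain-id
$ openstack project show <project_name>           # project-id
$ openstack network list --external               # floating-network-id
$ openstack subnet list --network <network_name>  # subnet-id
$ echo $OS_AUTH_URL $OS_REGION_NAME               # auth-url and region

Once all the placeholders are filled in, save the manifest (here as config.yaml, the name is arbitrary) and apply it:

Apply the ConfigMap
$ kubectl apply -f config.yaml
$ kubectl -n kube-system get configmap octavia-ingress-controller-config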

Deploy octavia-ingress-controller

Info: StatefulSet vs Deployment

StatefulSet is the workload API object used to manage stateful applications. Like a Deployment (preferred for stateless applications), a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling. If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.

We will deploy the octavia-ingress-controller as a StatefulSet (with a single Pod), due to the presence of shared volumes. Apply the .yaml file, as shown after the manifest, and wait until the Pod is up and running.

Deploy Controller
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: octavia-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: octavia-ingress-controller
  serviceName: octavia-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: octavia-ingress-controller
    spec:
      serviceAccountName: octavia-ingress-controller
      tolerations:
        - effect: NoSchedule # Make sure the pod can be scheduled on master kubelet.
          operator: Exists
        - key: CriticalAddonsOnly # Mark the pod as a critical add-on for rescheduling.
          operator: Exists
        - effect: NoExecute # Tolerate NoExecute taints (e.g. node not-ready/unreachable) without eviction.
          operator: Exists
      containers:
        - name: octavia-ingress-controller
          image: docker.io/k8scloudprovider/octavia-ingress-controller:latest
          imagePullPolicy: IfNotPresent
          args:
            - /bin/octavia-ingress-controller
            - --config=/etc/config/octavia-ingress-controller-config.yaml
          volumeMounts:
            - mountPath: /etc/kubernetes
              name: kubernetes-config
              readOnly: true
            - name: ingress-config
              mountPath: /etc/config
      hostNetwork: true
      volumes:
        - name: kubernetes-config
          hostPath:
            path: /etc/kubernetes
            type: Directory
        - name: ingress-config
          configMap:
            name: octavia-ingress-controller-config
            items:
              - key: config
                path: octavia-ingress-controller-config.yaml
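
Assuming the manifest above was saved as deployment.yaml (again, the name is arbitrary), apply it and watch the Pod come up:

Deploy and verify
$ kubectl apply -f deployment.yaml
$ kubectl -n kube-system get pods -l k8s-app=octavia-ingress-controller -w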

If the Pod does not reach the desired state, investigate the problem with the commands below. According to the StatefulSet naming convention, the Pods are named <name>-0, <name>-1, <name>-2, and so on, depending on the number of replicas.

Get more details
$ kubectl -n kube-system describe pod/octavia-ingress-controller-0
$ kubectl -n kube-system logs pod/octavia-ingress-controller-0

Setting up HTTP Load Balancing with Ingress
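
As a minimal preview of this step, here is a sketch of an Ingress resource this controller would pick up, adapted from the upstream GitHub guide this page follows (it assumes a recent Kubernetes version; the hostname, Service name, and port are purely illustrative):

Sketch of an Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-octavia-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "openstack"  # Marks the resource as handled by octavia-ingress-controller
spec:
  rules:
    - host: foo.bar.com                       # Illustrative hostname
      http:
        paths:
          - path: /ping
            pathType: Exact
            backend:
              service:
                name: webserver               # Illustrative backend Service
                port:
                  number: 8080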