...

We will then explain how to create a connection point between these two components. This joining point consists of a file containing the access credentials (username, password) and the coordinates (projectID, region) of the OpenStack tenant we want to link. The goal, in fact, is to automatically create the LB and its components (Listener, Pool, Policy, etc.) starting from the Kubernetes cluster. Finally, note that this page is based on a GitHub guide, which you can reach from here.

Deploy octavia-ingress-controller in the Kubernetes cluster

First, let's create a folder to hold the files used in this guide and move into it. We will create the various components under the kube-system namespace, but you are of course free to use another one.
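The folder-creation step might look like the following sketch (the directory name octavia-ingress-controller is an assumption, not mandated by the guide):

```shell
# Create a working directory for the manifests used in this guide
# (the directory name is just an example)
mkdir -p octavia-ingress-controller
cd octavia-ingress-controller
```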

...

Code Block
languageyml
titleGrant permissions
kind: ServiceAccount
apiVersion: v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: octavia-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: octavia-ingress-controller
    namespace: kube-system
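Assuming the manifest above is saved as serviceaccount.yaml (the file name is an assumption), it can be applied and checked as follows:

```shell
# Create the ServiceAccount and bind it to the cluster-admin ClusterRole
kubectl apply -f serviceaccount.yaml

# Verify that the ServiceAccount exists in the kube-system namespace
kubectl -n kube-system get serviceaccount octavia-ingress-controller
```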

Prepare octavia-ingress-controller configuration

As mentioned in the introduction, the octavia-ingress-controller needs to communicate with the OpenStack cloud to create the resources corresponding to the Kubernetes Ingress resource, so the credentials of an OpenStack user (it doesn't need to be the admin user) must be provided in the openstack section. Additionally, in order to differentiate the Ingresses between Kubernetes clusters, cluster-name needs to be unique.

Code Block
languageyml
titleConfiguration
kind: ConfigMap
apiVersion: v1
metadata:
  name: octavia-ingress-controller-config
  namespace: kube-system
data:
  config: |
    cluster-name: <cluster_name>
    openstack:
      # domain-name: <domain_name>  # Choose either domain-name or domain-id (do not use both)
      domain-id: <domain_id>
      username: <username>
      # user-id: <user_id>          # Choose either user-id or username (do not use both)
      password: <password>
      project-id: <project_id>
      auth-url: <auth_url>
      region: <region>
    octavia:
      subnet-id: <subnet_id>
      floating-network-id: <public_net_id>
      manage-security-groups: <true/false> # If true, automatically creates a SecurityGroup

Let's see how to fill in the placeholders in the configuration file, retrieving the information from the Horizon dashboard:

  • domain-name, domain-id, username, user-id, password: all this information (except the password) can be found in the Identity/Users tab, by selecting the desired user.
Info
titleAdvice

It's advisable to create a service account associated with your project, if the project is shared with other users, and use the credentials of that account. To obtain a service account, ask the Cloud@CNAF administrators. However, for testing purposes, you can use your personal credentials (username/password) for the moment.


  • project-id, auth-url: go to the Project/API Access tab and click the button View Credentials.
  • region: the region is present at the top left, next to the OpenStack logo.
  • subnet-id, floating-network-id: go to the Project/Network/Networks tab. Retrieve the ID of the public network and of its subnet (be careful not to confuse the subnet ID with the network ID).
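Putting it all together, a filled-in configuration might look like the sketch below. All values are fictitious examples, shown only to illustrate the expected formats (UUIDs for the network IDs, the Keystone v3 endpoint for auth-url); substitute your own:

```yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: octavia-ingress-controller-config
  namespace: kube-system
data:
  config: |
    cluster-name: my-k8s-cluster
    openstack:
      domain-id: default
      username: myuser
      password: mypassword
      project-id: 0123456789abcdef0123456789abcdef
      auth-url: https://keystone.example.org:5000/v3
      region: RegionOne
    octavia:
      subnet-id: 11111111-2222-3333-4444-555555555555
      floating-network-id: 66666666-7777-8888-9999-000000000000
      manage-security-groups: true
```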