...
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: coffee
  type: NodePort # <--- Pay attention
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  labels:
    app: tea
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: tea
  type: NodePort # <--- Pay attention
```
For a quick check, we can contact the services from a Kubernetes master node, for example with `curl`, at the address shown in the CLUSTER-IP column. Note the system-generated values for the NodePorts of the two services (in this case 31156 and 30458).
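A minimal sketch of this check (substitute the ClusterIP values reported by your cluster; the service names are those defined in the manifests above):

```console
# Show the services, their ClusterIPs and the generated NodePorts
kubectl get svc coffee-svc tea-svc

# From a master node, contact each service on its ClusterIP (port 80)
curl http://<COFFEE-CLUSTER-IP>
curl http://<TEA-CLUSTER-IP>
```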
...
Apply it and verify that the Ingress Resource has been created. Note that the IP address will not be defined right away (wait for the ADDRESS field to be populated). You can follow the implementation of the LB step by step, from the creation of its components (Listener, Pool, Policy) to the assignment of the FIP, in the log of the Ingress Controller Pod (the whole operation can take a few minutes).
```console
$ kubectl get ing
NAME   CLASS    HOSTS               ADDRESS          PORTS   AGE
lb     <none>   webserver-bar.com   131.154.97.200   80      3d1h
```
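As a sketch, a possible sequence of commands for the steps above (the manifest file name, namespace and Pod name of the Ingress Controller are assumptions; adapt them to your deployment):

```console
# Apply the Ingress Resource and watch the ADDRESS field get populated
kubectl apply -f ingress.yaml
kubectl get ing -w

# Follow the creation of the LB components (Listener, Pool, Policy)
# and the FIP assignment from the Ingress Controller Pod log
kubectl -n kube-system logs -f <octavia-ingress-controller-pod>
```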
Using a browser or the `curl` command, you should be able to access the backend services by sending HTTP requests to the domain name specified in the Ingress Resource (remember to map the hostname to the FIP in the `/etc/hosts` file of the machine from which the requests originate).
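A minimal sketch, using the FIP and hostname from the example above (the `/coffee` and `/tea` paths are assumed to be the ones defined in your Ingress Resource):

```console
# Map the hostname to the FIP on the client machine
echo "131.154.97.200 webserver-bar.com" | sudo tee -a /etc/hosts

# Reach the backend services through the LB
curl http://webserver-bar.com/coffee
curl http://webserver-bar.com/tea
```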
...
Go to OpenStack and check that the LB has been created in the Project/Network/LoadBalancer tab. Navigate the LB to verify that its internal components were created and, in particular, how they were created. Let's analyze, as an example, the L7 Rule that was automatically generated to reach the webserver-bar.com/tea address. Here we get a taste of the convenience of this approach: in addition to building and automatically configuring the components that make up the LB, it also generates the Policies and Rules based on what is written in the configuration file of the Ingress Resource. This help becomes more valuable the more complex the structure of your web server is.
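The same inspection can be done from the command line with the Octavia client, roughly as follows (the listener and policy IDs are placeholders; the actual objects are created and named by the Ingress Controller):

```console
# List the load balancers and their listeners
openstack loadbalancer list
openstack loadbalancer listener list

# Show the L7 policies and, for a given policy, its rules
# (e.g. the rule matching the /tea path)
openstack loadbalancer l7policy list
openstack loadbalancer l7rule list <l7policy-id>
```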
One topic is still pending: the manage-security-groups
key of the ConfigMap. If set to true, a new group would appear in the Project/Network/SecurityGroups tab, containing the NodePorts of the deployed services (in this case ports 31156 and 30458 of the coffee and tea services), and the group could then be associated with the cluster instances. Why did I recommend setting the value to false? Because it would be redundant, since the ports are already open. In fact, when installing Kubeadm (see "Preliminary steps" in chap. 2), it is recommended to open the port range 30000-32767 on the WorkerNodes, which corresponds to the range in which a NodePort can be generated.
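For reference, a plausible fragment of the Ingress Controller configuration (a sketch only; the exact nesting of keys depends on how the octavia-ingress-controller ConfigMap is structured in your deployment):

```yaml
# Excerpt of the octavia-ingress-controller configuration
octavia:
  # Leave security group management off: the NodePort range
  # 30000-32767 is already open on the WorkerNodes
  manage-security-groups: false
```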