...
Finally, we can connect to the address http://<FIP>:<port>. In our case the address, reachable only through the VPN, is http://131.154.97.128:91. If the Kibana dashboard appears, the procedure carried out so far is correct. However, we are not yet able to view the logs generated by our cluster: in the next paragraph we will create a connection between the cluster and the newly instantiated log collection service.
Info: The choice of the port is not random: for security reasons, an endpoint accessible only via VPN or from the CNAF network has been chosen. Here we have opted for ports 91 and 92, but the range of ports that meet this security requirement is much wider: all ports, with some exceptions, in the 0-1023 range.
To temporarily interrupt the execution of the containers, or to permanently delete them, use respectively:
...
Let's now move on to the cluster, to direct its logs to the newly created data collection service. Download the .yaml file from the link (check the version of the file in the link):
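As an illustration only (the authoritative file is the one downloaded from the link above; the host and port below are assumptions standing in for our floating IP and the port chosen earlier), the output section of a typical filebeat-kubernetes.yaml manifest points Filebeat at the collection endpoint:

```yaml
# Hypothetical excerpt of the Filebeat ConfigMap inside filebeat-kubernetes.yaml.
# Replace <FIP> and <port> with the floating IP and port of your
# log collection service before applying the manifest.
output.elasticsearch:
  hosts: ["http://<FIP>:<port>"]
```

Once the file has been edited to match your deployment, it can be applied with kubectl apply -f filebeat-kubernetes.yaml.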
...
```
$ kubectl delete daemonset <ds> --namespace=<ns>
daemonset.apps "filebeat" deleted
```
We have therefore created a DaemonSet according to the configuration present in the .yaml file. A DaemonSet generates a Pod on each VM that makes up the cluster (3 in our case). Each Pod has the task of inspecting and collecting the logs of the node on which it runs and of sending them to the destination we configured. Below is an example screenshot of the service in operation. Of course, the volume of data collected can be reduced through queries, by selecting fields or time ranges.
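For instance, in Kibana's Discover view the collected logs can be narrowed with a query bar filter. A sketch of such a filter, assuming the standard metadata fields that Filebeat's Kubernetes processor adds (the field names and values here are illustrative, not taken from our deployment):

```
kubernetes.namespace : "kube-system" and message : *error*
```

Combined with the time picker in the top-right corner of the dashboard, this restricts the results to a single namespace and a chosen time range.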