...
Logs are essential for understanding what happens within the cluster, from debugging to monitoring. The infrastructure where the applications run, however, does not always preserve logs: if a container, pod, or node crashes or is deleted, its logs are lost with it. It is therefore advisable to store logs separately, inside or outside the cluster, so that they remain available for troubleshooting and future analysis. This is called cluster-level logging. Kubernetes does not implement this paradigm natively, so you have to rely on external software; here we will use Elasticsearch & Kibana (ESK). Kibana is an open-source data visualization dashboard for Elasticsearch: it provides visualization capabilities on top of the content indexed in an Elasticsearch cluster, letting users create bar, line, and scatter plots, pie charts, and maps over large volumes of data.
The procedure described in this section is performed on a VM that does not belong to the cluster. The Elasticsearch & Kibana (ESK) service will receive the logs from the monitored cluster, which must be configured to point correctly at the target VM that collects its data.
Installation with Docker-compose
To install Elasticsearch & Kibana (ESK) we will use Docker Compose (more info here). Check that your Docker Compose version is up to date, and it is recommended that you create a dedicated folder and place the docker-compose.yaml file in it.
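As a starting point, a minimal single-node docker-compose.yaml might look like the sketch below. The image tags are an assumption (pick the version you actually need), while the image names, the `discovery.type=single-node` setting, and the default ports 9200/5601 come from the official Elastic Docker images:

```yaml
# Sketch of a minimal single-node ESK stack.
# Image tags (7.17.0) are an example; align Elasticsearch and Kibana versions.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      # Run as a single node instead of forming a cluster
      - discovery.type=single-node
    ports:
      - "9200:9200"          # REST API, where the cluster will send its logs
    volumes:
      - esdata:/usr/share/elasticsearch/data   # persist indexed data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      # Kibana reaches Elasticsearch through the Compose network
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"          # Kibana web UI
    depends_on:
      - elasticsearch
volumes:
  esdata:
```

Port 9200 must be reachable from the cluster nodes, since that is where the log shipper will send its data; 5601 only needs to be reachable from wherever you browse the Kibana dashboard.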
...
```
# The command stops execution. To restart it use docker-compose start
$ docker-compose stop

# Stop and remove containers, networks, volumes and images created by "up"
$ docker-compose down [options]
```
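Before stopping or removing the stack you need to have started it. A typical session, run from the folder containing docker-compose.yaml (the curl check assumes Elasticsearch's default HTTP port 9200 is published on the host), might be:

```shell
# Start the services in the background
$ docker-compose up -d

# Check that the containers are up
$ docker-compose ps

# Verify that Elasticsearch responds on the host
$ curl http://localhost:9200
```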
...
Run FileBeat on K8s
Let's move on to the cluster now, to direct its logs to the newly created data collection service. Download the .yaml file from the link below (more info here).
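Once downloaded, the manifest must be edited so that Filebeat ships logs to the external ESK VM rather than to an in-cluster Elasticsearch. In Elastic's standard filebeat-kubernetes.yaml this is controlled by environment variables on the Filebeat DaemonSet; a sketch of the relevant fragment follows, where the IP address is a placeholder for your ESK VM:

```yaml
# Fragment of the Filebeat DaemonSet container spec: point the agent
# at the external Elasticsearch instance on the ESK VM.
# 192.0.2.10 is a placeholder; replace it with the IP of your VM.
env:
  - name: ELASTICSEARCH_HOST
    value: "192.0.2.10"
  - name: ELASTICSEARCH_PORT
    value: "9200"
```

Apply the edited manifest with `kubectl apply -f filebeat-kubernetes.yaml`: Filebeat then runs as a DaemonSet, one pod per node, and forwards the container logs of that node to the configured host.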
...