
The importance of logs

Logs are very useful for understanding what happens within the cluster, from debugging to monitoring activity. Sometimes, however, the infrastructure where the applications run does not offer native tools that provide exhaustive logs. For example, if a container, Pod, or node stops working or is deleted, its logs are lost with it. It is therefore advisable to store the logs separately, inside or outside the cluster, so that they can be used to trace the source of a problem or reused in future analyses. This is called cluster-level logging. Kubernetes does not implement this paradigm natively, so you often have to rely on external software. Here we will use ElasticSearch & Kibana. Kibana is an open source data visualization dashboard for Elasticsearch: it provides visualization capabilities on top of the content indexed in an Elasticsearch cluster, and users can create bar, line and scatter plots, pie charts and maps on top of large volumes of data.

The procedure described in this paragraph is performed on a VM that does not belong to the cluster. The ElasticSearch & Kibana service will receive the logs from the cluster being monitored, which in turn must be configured to point at the target VM that receives its data.

Service installation

For the installation of ElasticSearch and Kibana we will use Docker-Compose (check that your Docker-Compose version is up to date). It is recommended to create a folder and place the following docker-compose.yml file in it.

docker-compose.yml
version: '3.3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0		# <--- get the latest version
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 92:9200		# <--- change host port. Here we used 92
    networks:
      - elastic
  k01:
    container_name: k01
    image: docker.elastic.co/kibana/kibana:7.9.0	# <--- get the latest version
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_HOSTS: http://es01:9200
    ports:
      - 91:5601		# <--- change host port. Here we used 91
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
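
A note on running Elasticsearch in Docker: if the es01 container exits shortly after startup, a common cause is a vm.max_map_count kernel setting on the host VM that is too low, since Elasticsearch requires at least 262144. This check is not part of the original procedure, but it can be run on the host as a quick sanity check.

Check vm.max_map_count (optional)
# Elasticsearch requires vm.max_map_count >= 262144 on the host
$ sysctl vm.max_map_count
# If the value is lower, raise it (add the same line to /etc/sysctl.conf to make it persistent)
$ sudo sysctl -w vm.max_map_count=262144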

Open the ports indicated in the file on OpenStack, then launch the following command inside the folder just created (if the file name differs from docker-compose.yml, specify it after the -f option)

Start service
$ docker-compose [-f <file_name>] up -d
Starting es01 ... done
Starting k01  ... done

The command starts the service in the background (it may take a few seconds to come up). We then check that the containers are running using

Verify service
$ docker-compose ps
Name              Command               State                Ports
--------------------------------------------------------------------------------
es01   /tini -- /usr/local/bin/do ...   Up      0.0.0.0:92->9200/tcp, 9300/tcp
k01    /usr/local/bin/dumb-init - ...   Up      0.0.0.0:91->5601/tcp

or equally with the command

Verify service (alternative method)
$ docker ps | grep elastic
CONTAINER ID  IMAGE                                                COMMAND                 CREATED  STATUS  PORTS                           NAMES
105b1538d0e3  docker.elastic.co/kibana/kibana:7.9.0                "/usr/local/bin/dumb…"  2h ago   Up 2h   0.0.0.0:91->5601/tcp            k01
436366264e1e  docker.elastic.co/elasticsearch/elasticsearch:7.9.0  "/tini -- /usr/local…"  2h ago   Up 2h   9300/tcp, 0.0.0.0:92->9200/tcp  es01

Finally, we can connect to the address http://<FIP>:<port>. In our case the address, which requires the VPN, is http://131.154.97.128:91. If we can see the Kibana dashboard, the procedure carried out so far is correct (it may take a couple of minutes from startup to service activation). However, we are not yet able to view the logs generated by our cluster. In the next paragraph we will create a connection between the cluster and the newly instantiated log collection service.
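
Before (or besides) opening Kibana, you can also verify that Elasticsearch itself answers on the port chosen above. A minimal check, assuming the FIP and port 92 used in this example:

Verify Elasticsearch (optional)
# A JSON reply with the cluster name and version information means Elasticsearch is up
$ curl http://131.154.97.128:92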

Port

The choice of the port is not random: for security reasons, an access point reachable only via VPN or from the CNAF network has been chosen. Here we have opted for ports 91 and 92, but the range of ports that meets these security requirements is much wider: all ports, with some exceptions, in the 0-1023 range.
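
The ports can be opened on OpenStack either from the dashboard or with the CLI. A sketch with the openstack client, assuming a security group named k8s-logging and a source range <CIDR> restricted to the VPN/CNAF network (both placeholders to adapt):

Open ports on OpenStack (example)
# Allow access to Kibana (91) and Elasticsearch (92) only from the chosen source range
$ openstack security group rule create --protocol tcp --dst-port 91 --remote-ip <CIDR> k8s-logging
$ openstack security group rule create --protocol tcp --dst-port 92 --remote-ip <CIDR> k8s-logging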

To temporarily stop the containers or to delete them permanently, use respectively

Stop or remove the service
# Stops the containers. To restart them use docker-compose start
$ docker-compose stop
# Stop and remove containers, networks, volumes and images created by "up"
$ docker-compose down [options]

Remember to run the docker-compose command inside the folder where the docker-compose.yml file is located.
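
Among the options accepted by docker-compose down, -v is worth noting: it also removes the named volume data01 and, with it, the indices stored by Elasticsearch. Use it only if you really want to discard the collected logs.

Remove service and data (example)
# Also removes the data01 volume, permanently deleting the stored indices
$ docker-compose down -v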

Log deployment with FileBeat

Let's now move to the cluster, to direct its logs to the newly created data collection service. Download the .yaml file with the following command (check that the version in the URL matches the one you want to use)

Download filebeat
$ curl -LO https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml

and modify the lines highlighted by the comments in the following extract (to also allow the creation of Pods on the master nodes, add the lines shown at the bottom)

Changes
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:92}']	# <--- Enter the desired port
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
-----------------------------------------------------
env:
  - name: ELASTICSEARCH_HOST
    value: 131.154.97.128		# <--- Enter the host FIP with elasticsearch
  - name: ELASTICSEARCH_PORT
    value: "92"				# <--- Enter the port (like above)
  - name: ELASTICSEARCH_USERNAME
    value: elastic
  - name: ELASTICSEARCH_PASSWORD
    value: changeme
  - name: ELASTIC_CLOUD_ID
    value:
  - name: ELASTIC_CLOUD_AUTH
    value:
  - name: NODE_NAME
-----------------------------------------------------
# this toleration is to have the daemonset runnable on master nodes. Remove it if your masters can't run pods
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
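
Before applying the manifest, it can be useful to check that the Elasticsearch endpoint configured above is actually reachable from the cluster. A minimal check, run from any cluster node and assuming the FIP and port used in this example:

Check connectivity from the cluster (optional)
# A JSON reply confirms that Filebeat will be able to reach Elasticsearch
$ curl http://131.154.97.128:92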

Finally, launch FileBeat as a DaemonSet, which ensures that there is an agent on each node of the Kubernetes cluster

Launch filebeat
$ kubectl create -f filebeat-kubernetes.yaml
configmap/filebeat-config created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created

To consult the cluster DaemonSets, run the command

Check DaemonSet
$ kubectl get daemonset --all-namespaces
NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node   3         3         3       3            3           kubernetes.io/os=linux   13d
kube-system   filebeat      3         3         3       3            3           <none>                   8m26s
kube-system   kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   13d
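
To check that the Filebeat agents are actually running and shipping data, you can also inspect their Pods and logs. A sketch, assuming the k8s-app=filebeat label used by the upstream manifest:

Check filebeat Pods (optional)
$ kubectl get pods -n kube-system -l k8s-app=filebeat
# In the logs, look for a message reporting that the connection to Elasticsearch was established
$ kubectl logs -n kube-system -l k8s-app=filebeat --tail=20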

To remove the DaemonSet, use the following command (or the Kubernetes dashboard)

Remove DaemonSet
$ kubectl delete daemonset <ds> -n <namespace>
daemonset.apps "filebeat" deleted

We have therefore created a DaemonSet according to the configuration present in the .yaml file. A DaemonSet generates a Pod on each VM that makes up the cluster (3 in our case). Each Pod collects the logs of the node it runs on and sends them to the destination we configured. Below is an example screen of the service in operation. Of course, the mass of data collected can be narrowed down through queries, by selecting fields or time frames.

Kibana Dashboard
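
The same filtering can also be done outside Kibana, directly against the Elasticsearch REST API. A minimal sketch, assuming the default filebeat-* index pattern and the kubernetes.namespace field added by Filebeat's add_kubernetes_metadata processor:

Query the collected logs (example)
# Returns one matching document from the kube-system namespace, pretty-printed
$ curl 'http://131.154.97.128:92/filebeat-*/_search?q=kubernetes.namespace:kube-system&size=1&pretty'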
