...

Code Block
language: yml
title: docker-compose.yaml
collapse: true
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.1  # <--- get the latest version
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  k01:
    image: docker.elastic.co/kibana/kibana:7.16.1  # <--- get the latest version
    container_name: k01 
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_HOSTS: http://es01:9200
    ports:
      - 5601:5601
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge

...

Code Block
language: bash
title: Verify service (alternative method)
$ docker ps | grep elastic
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS   PORTS                                                 NAMES
119eb7e8b36a   kibana:7.16.1          "/bin/tini -- /usr/l…"   2 minutes ago   Up       0.0.0.0:5601->5601/tcp, :::5601->5601/tcp             k01
e902ffce0b93   elasticsearch:7.16.1   "/bin/tini -- /usr/l…"   2 minutes ago   Up       9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp   es01
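
As a complementary check (a sketch, assuming the port mapping above and that no security is enabled), Elasticsearch can also be queried directly and should reply with a short JSON banner:

Code Block
language: bash
title: Query Elasticsearch directly (sketch)
# Replace <FIP> with the floating IP of the VM (or use localhost on the VM itself)
$ curl http://<FIP>:9200
{
  "name" : "es01",
  "cluster_name" : "es-docker-cluster",
  ...
}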

Finally, we can connect to the address http://<FIP>:<port> (port 5601 in the configuration above). If the Kibana dashboard appears, the procedure carried out so far is correct (it may take a couple of minutes after startup for the service to become available). However, we are not yet able to view the logs generated by our cluster; in the next section we will create a connection between the cluster and the newly created log collection service. Conversely, to temporarily stop the containers or to remove them permanently, use, respectively, the commands sketched below (remember to run them from the folder that contains the .yaml file).
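
A minimal sketch of the standard Docker Compose commands (if you use the newer Compose v2 plugin, replace docker-compose with docker compose):

Code Block
language: bash
title: Stop or remove the containers (sketch)
# Temporarily stop the containers (restart them later with "docker-compose start")
$ docker-compose stop

# Permanently remove the containers and the network (add -v to also delete the data01 volume)
$ docker-compose down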

...

We have thus created a DaemonSet from the configuration in the .yaml file. A DaemonSet schedules one Pod on each node (VM) of the cluster (3 in our case). Each Pod collects the logs of the node it runs on and ships them to the destination we configured; a quick way to verify this is sketched below. An example screenshot of the service in operation (Analytics/Discover) follows. Of course, the large amount of collected data can be narrowed down with queries, by selecting specific fields or time ranges.
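
As a sanity check (a sketch using generic kubectl commands; the DaemonSet name and namespace depend on the .yaml file used), we can confirm that one Pod is running on each node:

Code Block
language: bash
title: Check the DaemonSet (sketch)
# DESIRED/READY should match the number of nodes (3 in our case)
$ kubectl get daemonsets --all-namespaces

# Show each Pod together with the node it is running on
$ kubectl get pods --all-namespaces -o wide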

Kibana Dashboard (screenshot)