The procedure illustrated in this section is carried out on a VM outside the cluster. The ElasticSearch and Kibana services will receive the logs from the cluster being monitored, which must therefore be configured to point correctly to the target VM that receives its data.
For the installation of ElasticSearch and Kibana we will use Docker-Compose (it is advisable to check first that the installed version of Docker-Compose is reasonably up to date; a quick check is sketched below). It is recommended to create a dedicated folder and place the docker-compose.yml file that follows in it.
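As a quick, optional sanity check (not part of the original procedure), the installed version can be printed with:
Code Block |
---|
language | bash |
---|
title | Check the Docker-Compose version |
---|
|
# Print the installed Docker-Compose version
$ docker-compose --version |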
No Format |
---|
version: '3.3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0   # <--- get the latest version
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 92:9200   # <--- change host port. Here we used 92
    networks:
      - elastic
  k01:
    container_name: k01
    image: docker.elastic.co/kibana/kibana:7.8.0   # <--- get the latest version
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_HOSTS: http://es01:9200
    ports:
      - 91:5601   # <--- change host port. Here we used 91
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge |
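Note that, depending on how the VM is configured, Elasticsearch may refuse to start if the kernel parameter vm.max_map_count is too low; Elastic's documentation requires at least 262144. A possible check and fix, run on the VM itself (an optional step, not part of the original procedure), is:
Code Block |
---|
language | bash |
---|
title | Host prerequisite (vm.max_map_count) |
---|
|
# Show the current value
$ sysctl vm.max_map_count
# Raise it to the minimum required by Elasticsearch (not persistent across reboots)
$ sudo sysctl -w vm.max_map_count=262144 |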
Open the ports indicated in the file on OpenStack (a possible way to do this from the OpenStack CLI is sketched below), then launch, inside the folder just created, the command in the "Start service" block (if the file name is different from docker-compose.yml, specify it after the "-f" option).
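The following is only a minimal sketch: it assumes the openstack client is configured for the project and that the VM belongs to a security group hypothetically named monitoring-sg; adapt the group name, ports and source range to your setup.
Code Block |
---|
language | bash |
---|
title | Open the ports on OpenStack (sketch) |
---|
|
# Allow inbound TCP traffic on the host ports chosen in docker-compose.yml (91 and 92 here).
# "monitoring-sg" is a hypothetical security group name; restrict --remote-ip as appropriate.
$ openstack security group rule create --protocol tcp --dst-port 91 --remote-ip 0.0.0.0/0 monitoring-sg
$ openstack security group rule create --protocol tcp --dst-port 92 --remote-ip 0.0.0.0/0 monitoring-sg |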
Code Block |
---|
language | bash |
---|
title | Start service |
---|
|
$ docker-compose [-f <file_name>] up -d
Starting es01 ... done
Starting k01 ... done |
The command starts the services in the background (it may take a few seconds for them to come up). We then check that the containers are running using
Code Block |
---|
language | bash |
---|
title | Verify service |
---|
|
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------
es01 /tini -- /usr/local/bin/do ... Up 0.0.0.0:92->9200/tcp, 9300/tcp
k01 /usr/local/bin/dumb-init - ... Up 0.0.0.0:91->5601/tcp |
or, equivalently, with the command
Code Block |
---|
language | bash |
---|
title | Verify service (alternative method) |
---|
|
$ docker ps | grep elastic
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
105b1538d0e3 docker.elastic.co/kibana/kibana:7.8.0 "/usr/local/bin/dumb…" 2h ago Up 2h 0.0.0.0:91->5601/tcp k01
436366264e1e docker.elastic.co/elasticsearch/elasticsearch:7.8.0 "/tini -- /usr/local…" 2h ago Up 2h 9300/tcp, 0.0.0.0:92->9200/tcp es01 |
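In addition to checking the containers, it can be useful to verify that Elasticsearch actually answers on the mapped host port; a quick check from the VM itself might be:
Code Block |
---|
language | bash |
---|
title | Query Elasticsearch directly (optional check) |
---|
|
# Ask the cluster for its health status on the host port mapped in docker-compose.yml (92 here)
$ curl -s "http://localhost:92/_cluster/health?pretty" |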
Finally, we can connect to the address http://<FIP>:<port>. In our case the address, which requires the VPN, is http://131.154.97.128:91. The choice of the ports is not random: for security reasons, an endpoint reachable only via VPN or from the CNAF network was chosen. Here we opted for ports 91 and 92, but the range of ports that meet these security requirements is much wider: all ports, with a few exceptions, in the 0-1023 range.
To temporarily stop the containers or to remove them permanently, use respectively
Code Block |
---|
language | bash |
---|
title | Stop or remove the service |
---|
|
# Stops execution. To restart it, use docker-compose start
$ docker-compose stop
# Stop and remove containers, networks, volumes and images created by "up"
$ docker-compose down [options] |
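A note on the [options] above: adding -v to docker-compose down also removes the named volume data01, i.e. all the Elasticsearch data, so use it only when the data is no longer needed. For example:
Code Block |
---|
language | bash |
---|
title | Remove the service together with its data volume |
---|
|
# WARNING: -v also deletes the "data01" volume, i.e. all Elasticsearch data
$ docker-compose down -v |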
Remember to run the docker-compose commands inside the folder where the docker-compose.yml file is located.
Log Deployment with Filebeat
Let's now move on to the cluster, to direct its logs to the newly created data collection service. Download the .yaml file from the link below (check the Filebeat version referenced in the URL and adjust it if needed)
Code Block |
---|
language | bash |
---|
title | Filebeat |
---|
|
$ curl -LO https://raw.githubusercontent.com/elastic/beats/7.8/deploy/kubernetes/filebeat-kubernetes.yaml |
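Before deploying the manifest, its Elasticsearch output must be pointed at the VM set up above. In the 7.8 manifest this is typically controlled by the ELASTICSEARCH_HOST and ELASTICSEARCH_PORT environment variables of the filebeat DaemonSet (verify this against the file actually downloaded); a quick way to locate them is:
Code Block |
---|
language | bash |
---|
title | Locate the output settings in the manifest (sketch) |
---|
|
# Find the variables that define the Elasticsearch target; they should then be set to the
# VM address and port used in this guide (e.g. 131.154.97.128 and 92)
$ grep -n "ELASTICSEARCH_HOST\|ELASTICSEARCH_PORT" filebeat-kubernetes.yaml |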