The procedure illustrated in this section is carried out on a VM outside the cluster. The ElasticSearch & Kibana service will receive the logs from the monitored cluster, which in turn must be configured to point to the VM that collects its data.
For the installation of ElasticSearch and Kibana we will use Docker-Compose; first check that the installed version of Docker-Compose is reasonably up to date.
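A quick way to verify this is

$ docker-compose --version

It is recommended that you create a folder and place the following docker-compose.yml file in it.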
version: '3.3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0  # <--- get the latest version
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 92:9200  # <--- change host port. Here we used 92
    networks:
      - elastic
  k01:
    container_name: k01
    image: docker.elastic.co/kibana/kibana:7.8.0  # <--- get the latest version
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_HOSTS: http://es01:9200
    ports:
      - 91:5601  # <--- change host port. Here we used 91
    networks:
      - elastic

volumes:
  data01:
    driver: local

networks:
  elastic:
    driver: bridge
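Before opening the ports and starting the services, the file can be given a quick syntax check with Compose's built-in validator (it only parses the file, without pulling any image):

$ docker-compose config --quiet

If the file is valid the command prints nothing; otherwise it reports the parsing errors.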
Open the ports indicated in the file on OpenStack, either from the web dashboard or from the command line, as sketched below.
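A minimal sketch of the corresponding security group rules, assuming the VM belongs to a security group named default (adapt the group name, and optionally restrict --remote-ip to the VPN subnet):

$ openstack security group rule create --protocol tcp --dst-port 91 default
$ openstack security group rule create --protocol tcp --dst-port 92 default

Then launch, inside the folder just created, the command (if the file name is different from docker-compose.yml, specify it after the "-f" option)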
$ docker-compose [-f <file_name>] up -d
Starting es01 ... done
Starting k01  ... done
The command starts the services in the background (allow a few seconds for them to come up). We can then check that the containers are running with
$ docker-compose ps
Name   Command                          State   Ports
--------------------------------------------------------------------------------
es01   /tini -- /usr/local/bin/do ...   Up      0.0.0.0:92->9200/tcp, 9300/tcp
k01    /usr/local/bin/dumb-init - ...   Up      0.0.0.0:91->5601/tcp
or, equivalently, with the command
$ docker ps | grep elastic
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED   STATUS   PORTS                            NAMES
105b1538d0e3   docker.elastic.co/kibana/kibana:7.8.0                 "/usr/local/bin/dumb…"   2h ago    Up 2h    0.0.0.0:91->5601/tcp             k01
436366264e1e   docker.elastic.co/elasticsearch/elasticsearch:7.8.0   "/tini -- /usr/local…"   2h ago    Up 2h    9300/tcp, 0.0.0.0:92->9200/tcp   es01
Finally, we can connect to the address http://<FIP>:<port>. In our case the address, which requires the VPN, is http://131.154.97.128:91. The choice of ports is not accidental: for security reasons, we chose an endpoint reachable only via VPN or from the CNAF network. Here we opted for ports 91 and 92, but the range of ports that satisfies this security requirement is much wider: with a few exceptions, all ports in the 0-1023 range.
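Besides opening Kibana in the browser, it is worth checking that ElasticSearch itself is reachable on the port chosen above; its root endpoint answers with a short JSON document reporting the cluster name and version:

$ curl http://131.154.97.128:92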
To temporarily stop the containers, or to remove them permanently, use respectively
# The command stops execution. To restart it, use "docker-compose start"
$ docker-compose stop

# Stop and remove containers, networks, volumes and images created by "up"
$ docker-compose down [options]
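To inspect the output of a service without stopping it, for instance while troubleshooting the ElasticSearch startup, you can also tail its logs (here for the es01 service defined in the compose file):

$ docker-compose logs -f es01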
Remember to run the docker-compose commands inside the folder where the docker-compose.yml file is located.
Log Deployment with FileBeat
Let's now move to the cluster, to direct its logs to the newly created data collection service. Download the .yaml manifest from the link below (make sure the version in the URL matches the version of the Elastic stack you deployed)
$ curl -LO https://raw.githubusercontent.com/elastic/beats/7.8/deploy/kubernetes/filebeat-kubernetes.yaml
and modify the lines highlighted by the comments in the following extract (to allow the creation of Pods on the master as well, add the lines shown at the bottom)
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:92}']  # <--- Enter the desired port
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
-----------------------------------------------------
env:
  - name: ELASTICSEARCH_HOST
    value: 131.154.97.128  # <--- Enter the FIP of the host with elasticsearch
  - name: ELASTICSEARCH_PORT
    value: "92"  # <--- Enter the port (same as above)
  - name: ELASTICSEARCH_USERNAME
    value: elastic
  - name: ELASTICSEARCH_PASSWORD
    value: changeme
  - name: ELASTIC_CLOUD_ID
    value:
  - name: ELASTIC_CLOUD_AUTH
    value:
  - name: NODE_NAME
-----------------------------------------------------
# this toleration is to have the daemonset runnable on master nodes. Remove it if your masters can't run pods
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
Finally, launch FileBeat as a DaemonSet, which ensures that there is an agent on each node of the Kubernetes cluster
$ kubectl create -f filebeat-kubernetes.yaml
configmap/filebeat-config created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
To list the DaemonSets in the cluster, run the command
$ kubectl get daemonset --all-namespaces
NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   calico-node   3         3         3       3            3           kubernetes.io/os=linux   13d
kube-system   filebeat      3         3         3       3            3           <none>                   8m26s
kube-system   kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   13d
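Once the filebeat DaemonSet is up, you can also verify that logs are actually reaching ElasticSearch by listing its indices (using the FIP and port chosen above); a filebeat-* index should appear after a few minutes:

$ curl 'http://131.154.97.128:92/_cat/indices?v'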
To remove the DaemonSet, use the following command (or the Kubernetes dashboard)
$ kubectl delete daemonset <ds> --namespace=<ns>
daemonset.apps "filebeat" deleted