Whether our application is made up of a multitude of microservices or a single monolith, one of our main needs is the ability to determine its actual load and verify that it is in good working order. In other words, as soon as we put our application into production, we need to be able to continuously answer questions such as:

  • is our application working?
  • how many requests per second is it serving?
  • what are the response times?
  • how much network traffic is it producing?
  • how heavily loaded are the servers hosting our application?
  • which HTTP request consistently takes a long time to respond?
  • is the database unable to respond quickly enough, or is there a bottleneck somewhere else?

The open-source monitoring solution Prometheus can answer these and many other questions, and it addresses these problems together with its excellent travel companion Grafana. Grafana is a web application that renders graphs arranged in panels, with data coming from a variety of different sources, such as OpenTSDB, InfluxDB, Elasticsearch and Prometheus itself.

Installation

We present a procedure that sets up the service using Docker Compose, which must therefore be present on the system (if it is not, install docker-compose first).

We create a folder (e.g. with mkdir prometheus) in which we place the following docker-compose.yml file:

docker-compose.yml
version: '3'
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - 90:9090               # expose the web UI on host port 90 (9090 inside the container)
    restart: always
    user: '1000'              # run as an unprivileged user, which must own the data folder
    volumes:
      - "$PWD/promdata:/prometheus"          # time-series storage
      - "$PWD/promconf:/etc/prometheus:ro"   # configuration, mounted read-only
    command: "--config.file=/etc/prometheus/prometheus.yml --storage.tsdb.retention=90d"   # keep 90 days of data
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"

Still inside the prometheus folder, we create two more folders, called promconf and promdata, which will hold, respectively, our configuration (the prometheus.yml file, which among other things sets Prometheus up to monitor itself) and the storage.
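As a minimal sketch (assuming the host user IDs line up with the user: '1000' set in docker-compose.yml above), the folders can be created and prepared like this:

Create the folders
$ cd prometheus
$ mkdir promconf promdata
# give ownership of the data folder to UID 1000, the user the container runs as
$ sudo chown 1000:1000 promdata

The configuration file just mentioned is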

prometheus.yml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

Prometheus collects metrics from monitored targets by scraping their HTTP endpoints. Since Prometheus itself exposes its internal metrics through the same mechanism, it can scrape and monitor its own health as well.
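Each target exposes its metrics on such an endpoint in a simple text format. An illustrative excerpt (metric names and values vary with the Prometheus version) looks like this:

Exposition format (example)
# HELP prometheus_http_requests_total Counter of HTTP requests.
# TYPE prometheus_http_requests_total counter
prometheus_http_requests_total{code="200",handler="/metrics"} 42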

Now let's launch the service in the background with the command

Launch Prometheus
$ docker-compose up -d
# To check the logs
$ docker-compose logs

At this point, we open the browser at the address <master_FIP>:<port>, in our case http://131.154.97.163:90. The service is exposed by default on port 9090, but we have opted for port 90, which must be opened on OpenStack. For security reasons, the port is accessible only from the network of the CNAF headquarters or via VPN if you are away.
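To verify that scraping is working, we can query the built-in up metric, either in the web UI's expression browser or via the HTTP API (a sketch using the address above; a value of 1 means the target is being scraped successfully):

Check the scrape status
$ curl -s 'http://131.154.97.163:90/api/v1/query?query=up'
# abridged response:
# {"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up", ... },"value":[<timestamp>,"1"]}]}}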

In general, if we want to launch Prometheus with a custom configuration, we can further modify the prometheus.yml file. For example, it is possible to change the global configuration of the Prometheus server, specify the location of additional .yaml files containing rules that we want to load into the server, or define which resources should be monitored. An extensive overview of the possible configurations is available here.
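For example (a sketch assuming a node_exporter already running on a hypothetical host myhost, on its default port 9100), monitoring an additional resource only requires another entry under scrape_configs:

Additional scrape job (sketch)
scrape_configs:
  # ... the existing 'prometheus' job ...
  - job_name: 'node'
    scrape_interval: 10s
    static_configs:
      - targets: ['myhost:9100']   # hypothetical node_exporter instance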
