...
```
$ kubectl logs -n monitoring pod/mon-kube-prometheus-stack-operator-bcf97f54f-w8s7j
level=info ts=2021-08-20T10:44:28.791312171Z caller=operator.go:1221 component=prometheusoperator key=monitoring/mon-kube-prometheus-stack-prometheus msg="sync prometheus"
level=info ts=2021-08-20T10:44:28.875362927Z caller=operator.go:742 component=alertmanageroperator key=monitoring/mon-kube-prometheus-stack-alertmanager msg="sync alertmanager"

$ kubectl logs -n monitoring pod/alertmanager-mon-kube-prometheus-stack-alertmanager-0
level=info ts=2021-08-20T10:34:03.964Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config/alertmanager.yaml
level=info ts=2021-08-20T10:34:03.964Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config/alertmanager.yaml
```
From the logs above you can read the path, inside the Alertmanager Pod (`alertmanager-mon-kube-prometheus-stack-alertmanager-0`), where the newly added configuration is saved. The same information can also be found in the Alertmanager dashboard. To reach it, simply expose the service (via ingress or NodePort); by default it is of type ClusterIP:
```
$ kubectl get -n monitoring svc mon-kube-prometheus-stack-alertmanager
NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
mon-kube-prometheus-stack-alertmanager   ClusterIP   10.233.21.103   <none>        9093/TCP   9d
```
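If you only need a quick look at the dashboard, a temporary `kubectl port-forward` is a lightweight alternative to creating an Ingress or switching the service to NodePort. The sketch below also shows how you might double-check the configuration file path reported in the logs; the container name `alertmanager` is an assumption based on the StatefulSet that the operator generates by default.

```
# Forward the Alertmanager UI to localhost (Ctrl+C to stop),
# then open http://localhost:9093 in a browser
kubectl port-forward -n monitoring svc/mon-kube-prometheus-stack-alertmanager 9093:9093

# Optional sanity check: read the configuration file mentioned in the logs
# (assumes the container inside the Pod is named "alertmanager")
kubectl exec -n monitoring alertmanager-mon-kube-prometheus-stack-alertmanager-0 \
  -c alertmanager -- cat /etc/alertmanager/config/alertmanager.yaml
```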
Within the dashboard you will find the same alerts shown in the Prometheus UI; in addition, you can silence them for a defined period of time, filter them when there are many, and much more. Finally, try to generate some alerts, if you do not want to wait for an error to occur spontaneously, and verify that the e-mail arrives at the address indicated in the configuration of the AlertmanagerConfig component. You can, for example, re-enable the rateGraph or rateAlerts rules already used previously, as in the screenshot below.
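Alternatively, if you prefer a self-contained test that does not depend on those rules, a minimal always-firing PrometheusRule is usually enough to trigger a notification. The manifest below is only a sketch: the rule name and alert name are hypothetical, and the `release: mon` label is an assumption that must match the `ruleSelector` of your Prometheus resource (kube-prometheus-stack selects rules by the Helm release label by default).

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: test-alert-rule            # hypothetical name
  namespace: monitoring
  labels:
    release: mon                   # assumption: must match your Prometheus ruleSelector
spec:
  groups:
    - name: test.rules
      rules:
        - alert: AlwaysFiringTestAlert
          expr: vector(1) > 0      # always true, so the alert fires shortly after loading
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: Test alert to verify the Alertmanager e-mail route
```

Apply it with `kubectl apply -f test-alert-rule.yaml`, wait for the alert to appear in the Prometheus and Alertmanager UIs, confirm the e-mail is delivered, and then delete the rule to stop the notifications.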