After installing Prometheus, if you connect to its dashboard you will notice a list of services already being monitored: these are either components of the Prometheus stack itself (e.g. Alertmanager, Grafana) or components found in every K8s cluster (apiserver, controller-manager, scheduler). But how do you monitor an arbitrary application? As an example, let's try to monitor Longhorn, a distributed block storage system for Kubernetes.
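If you are curious about how those default targets are defined, you can list the ServiceMonitor resources that ship with the monitoring stack (a quick check, assuming it was installed in the monitoring namespace as in the rest of this post; adjust the namespace to your setup):
# The output includes entries for alertmanager, grafana, apiserver, and so on
$ kubectl get servicemonitors -n monitoring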
ServiceMonitor
The heart of everything is the ServiceMonitor, an intermediate layer between the application to be monitored and the Prometheus operator. In this resource you specify the "coordinates" of the K8s Service associated with the application. In our case we are interested in the longhorn-backend service, which points to the set of Longhorn manager pods. Let's print some information about it:
$ kubectl get svc -n longhorn-system --show-labels
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE   LABELS
csi-attacher        ClusterIP   10.233.13.82    <none>        12345/TCP   24h   app=csi-attacher,longhorn.io/managed-by=longhorn-manager
csi-provisioner     ClusterIP   10.233.62.177   <none>        12345/TCP   24h   app=csi-provisioner,longhorn.io/managed-by=longhorn-manager
csi-resizer         ClusterIP   10.233.40.134   <none>        12345/TCP   24h   app=csi-resizer,longhorn.io/managed-by=longhorn-manager
csi-snapshotter     ClusterIP   10.233.19.24    <none>        12345/TCP   24h   app=csi-snapshotter,longhorn.io/managed-by=longhorn-manager
longhorn-backend    ClusterIP   10.233.19.4     <none>        9500/TCP    24h   app.kubernetes.io/instance=longhorn,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=longhorn,app.kubernetes.io/version=v1.1.2,app=longhorn-manager,helm.sh/chart=longhorn-1.1.2
longhorn-frontend   ClusterIP   10.233.62.6     <none>        80/TCP      24h   app.kubernetes.io/instance=longhorn,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=longhorn,app.kubernetes.io/version=v1.1.2,app=longhorn-ui,helm.sh/chart=longhorn-1.1.2
$ kubectl get -n longhorn-system svc longhorn-backend -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: longhorn
    meta.helm.sh/release-namespace: longhorn-system
  labels:
    app: longhorn-manager
    app.kubernetes.io/instance: longhorn
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: longhorn
    app.kubernetes.io/version: v1.1.2
    helm.sh/chart: longhorn-1.1.2
  name: longhorn-backend
  namespace: longhorn-system
spec:
  clusterIP: 10.233.19.4
  clusterIPs:
  - 10.233.19.4
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: manager
    port: 9500
    protocol: TCP
    targetPort: manager
  selector:
    app: longhorn-manager
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
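As an optional sanity check (not strictly required), we can verify that filtering the services by the app=longhorn-manager label returns only the backend Service:
$ kubectl get svc -n longhorn-system -l app=longhorn-manager
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
longhorn-backend   ClusterIP   10.233.19.4   <none>        9500/TCP   24h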
In other words, the app=longhorn-manager label lets us isolate the longhorn-backend service. So we can write a ServiceMonitor like this:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: longhorn-prometheus-servicemonitor
  namespace: monitoring
  labels:
    release: mon
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  namespaceSelector:
    matchNames:
    - longhorn-system
  endpoints:
  - port: manager
The spec contains all the information the ServiceMonitor needs to point at the longhorn-backend service. As mentioned before, the ServiceMonitor is an intermediate layer, which must in turn be linked to the Prometheus operator. The latter uses its serviceMonitorSelector to pick up the ServiceMonitors. In our case we have
# This is just an excerpt from the output of the "describe" command
$ kubectl describe -n monitoring prometheus
Spec:
  Service Monitor Selector:
    Match Labels:
      Release: mon
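Since our ServiceMonitor carries the release: mon label, it matches this selector. If you have not applied the manifest yet, it can be done with kubectl (the file name below is just an example):
$ kubectl apply -f longhorn-servicemonitor.yaml
$ kubectl get servicemonitor -n monitoring longhorn-prometheus-servicemonitor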
After a few seconds, if you connect to the Targets list of the Prometheus dashboard, you will notice a new entry. From now on you will have the Longhorn Metrics for Monitoring available to keep your application under control at all times.
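If the dashboard is not exposed outside the cluster, a quick way to reach it is a port-forward to the prometheus-operated Service that the operator creates alongside the Prometheus pods (a minimal sketch; adjust the namespace if yours differs):
$ kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
# then open http://localhost:9090/targets in a browser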
[Image: the Longhorn ServiceMonitor listed among the Prometheus targets]
[Image: Longhorn metrics in Prometheus]
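As a first test, you can query two of the volume metrics listed in the Longhorn metrics reference, for example to see how full each volume is (a minimal example; metric names as documented by Longhorn):
# Fraction of the declared capacity actually used by each Longhorn volume
longhorn_volume_actual_size_bytes / longhorn_volume_capacity_bytes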