Adding Monitoring to UCF Applications

Prerequisites

  • Adding monitoring to UCF applications requires that the Prometheus Operator be installed before deploying the application.

    • To do this, install the Kube Prometheus Stack.

    • The --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false argument can be passed to the helm install command to monitor all PodMonitor resources (see the example after this list).

  • Microservices used in the application must be instrumented and must implement a Prometheus metrics endpoint.

  • Microservices used in the application must have details added to their metrics section. Refer to Microservice Metrics.
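
For reference, installing the Kube Prometheus Stack typically looks like the following sketch. The release name prometheus and the namespace monitoring are illustrative; adjust them for your cluster:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/kube-prometheus-stack \
    --namespace monitoring --create-namespace \
    --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false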

PodMonitor resources

UCF Application building tools automatically generate PodMonitor Custom Resources based on the information in the metrics section of each microservice and add them to the output application Helm chart.

These PodMonitor Custom Resources configure the Prometheus server to scrape the microservice pods. For more information, refer to the Prometheus Operator API Reference.

To view the generated PodMonitor resources, run the following command:

$ helm install --dry-run test <application-helm-chart>
...
---
# Source: myservice-test-app/templates/podmonitors.yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: 'myservice-test-app-myservice-metrics-podmonitor'
  labels:
    app.kubernetes.io/instance: 'test'
    app.kubernetes.io/name: 'myservice-test-app'
spec:
  selector:
    matchLabels:
      app: myservice-myservice-deployment
      app.kubernetes.io/name: podmonitor-test-ms1
      app.kubernetes.io/version: 0.0.1
  podMetricsEndpoints:
  - port: metrics
    path: /metrics

These PodMonitor resources are installed automatically along with the application when the Helm chart is deployed.

Custom labels for the PodMonitor resource

Custom labels can also be set on the PodMonitor resources, for example when the Prometheus Operator is configured to select only PodMonitors that carry specific labels.

To do this, set the podMonitorLabels value during helm install:

$ helm install <release-name> <application-helm-chart> --set "podMonitorLabels.<label>=<value>"
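
For example, if podMonitorSelectorNilUsesHelmValues was not set to false when the Kube Prometheus Stack was installed, the stack by default selects only PodMonitors that carry the release label of its own Helm release. Assuming a Prometheus release named prometheus and an illustrative application release named test:

$ helm install test <application-helm-chart> --set "podMonitorLabels.release=prometheus"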

Verify the pods are being scraped

If the application and the monitoring stack are running correctly, the pod targets should be visible in the Prometheus web UI. To access it, forward the Prometheus service port:

$ kubectl get svc -l "app=kube-prometheus-stack-prometheus"
NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
prometheus-kube-prometheus-prometheus   ClusterIP   10.152.183.234   <none>        9090/TCP   2m

$ kubectl port-forward svc/prometheus-kube-prometheus-prometheus --address 0.0.0.0 9090:9090
Forwarding from 0.0.0.0:9090 -> 9090

Open http://<NODE-IP>:9090/targets in a browser, where <NODE-IP> is the IP address of the machine where the kubectl port-forward command is running. The microservice pods should be visible in the targets list.
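
The scrape targets can also be checked from the command line through the Prometheus HTTP API. This sketch assumes jq is installed and that the kubectl port-forward command above is still running:

$ curl -s "http://localhost:9090/api/v1/targets?state=active" | jq '.data.activeTargets[].labels.pod'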

Application Information - Metrics

Building an application generates an application information file in the output directory. This file contains a list of the metrics exported by the microservices in the application. For more information, refer to Application Info.