
Add PodMonitor or ServiceMonitor outside of kube-prometheus-stack helm values


Using the kube-prometheus-stack Helm chart, version 35.2.0. So far, I add my custom PrometheusRules, PodMonitors and ServiceMonitors via custom Helm values:

helm install my-kubpromstack prometheus-community/kube-prometheus-stack -n monitoring \
  -f my-AlertRules.yaml \
  -f my-PodMonitor.yaml

When the PrometheusRules or PodMonitors change, I use helm upgrade. The custom values follow the structure of kube-prometheus-stack/values.yaml: I define prometheus.additionalPodMonitors and additionalPrometheusRulesMap in separate YAML files.

helm upgrade my-kubpromstack -n monitoring \
  --reuse-values \
  -f my-AlertRules.yaml \
  -f my-PodMonitor.yaml
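For reference, a values file like my-PodMonitor.yaml might look like the sketch below. The monitor name, labels, and namespace are illustrative; the structure follows the chart's prometheus.additionalPodMonitors values key:

```yaml
# my-PodMonitor.yaml -- illustrative custom values for kube-prometheus-stack.
# Entries under additionalPodMonitors are rendered into PodMonitor resources
# by the chart, so they carry the release label Prometheus selects on.
prometheus:
  additionalPodMonitors:
    - name: my-app-metrics          # hypothetical monitor name
      selector:
        matchLabels:
          app: my-app               # hypothetical pod label
      namespaceSelector:
        matchNames:
          - my-namespace            # hypothetical namespace
      podMetricsEndpoints:
        - path: /metrics
          port: http
```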

QUESTION: how can the Prometheus server from kube-prometheus-stack be made aware of PrometheusRules, PodMonitors and ServiceMonitors created outside of the Helm values?

For example, the PodMonitor definition below is NOT picked up by Prometheus (i.e. it doesn't appear under Targets in the Prometheus UI).

kubectl apply -f - << EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cluster-operator-metrics
  labels:
    app: strimzi
spec:
  selector:
    matchLabels:
      strimzi.io/kind: cluster-operator
  namespaceSelector:
    matchNames:
      - my-strimzi
  podMetricsEndpoints:
  - path: /metrics
    port: http
EOF

The pod to monitor has the label strimzi.io/kind: cluster-operator and lives in the my-strimzi namespace. I would expect the PodMonitor above to be picked up by Prometheus automatically, because the default podMonitorSelector: {} in kube-prometheus-stack/values.yaml carries a comment that says:

    ## PodMonitors to be selected for target discovery.
    ## If {}, select all PodMonitors
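That comment is misleading: when podMonitorSelectorNilUsesHelmValues is true (the chart default), the chart replaces the empty selector with a release-label selector. For a release named my-kubpromstack, the rendered Prometheus object ends up with something like the fragment below (illustrative; the label value is the Helm release name):

```yaml
# Effective selector rendered by the chart when the nil selector is
# overridden -- only PodMonitors carrying this label are discovered.
podMonitorSelector:
  matchLabels:
    release: my-kubpromstack
```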

EDIT: This question seems to be useful to quite a few people. The simplest solution is what Aris Chow suggested below: set the custom Helm values as follows:

prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    probeSelectorNilUsesHelmValues: false
    ruleSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false

Solution

  • If you set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues to false (in values.yaml it defaults to true), you achieve your goal. While the value is true, the chart sets a release-label selector for matching PodMonitors, and your own definition does not carry that label.

    Or, you could leave it as true and set prometheus.prometheusSpec.podMonitorSelector as:

    matchLabels:
      prometheus: "true"
    

    And add the label prometheus: "true" to your podmonitor.yaml.

    Check the chart's Prometheus template source if you are interested in the details.

    Please note that the chart version behind that link is 15.4.4; switch to the version you are using in case anything has changed since.
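Putting the second option together: with podMonitorSelector set to matchLabels: prometheus: "true", the Strimzi PodMonitor from the question only needs that one extra label:

```yaml
# PodMonitor from the question, with the extra label that the custom
# podMonitorSelector (matchLabels: prometheus: "true") matches on.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cluster-operator-metrics
  labels:
    app: strimzi
    prometheus: "true"   # label selected by the custom podMonitorSelector
spec:
  selector:
    matchLabels:
      strimzi.io/kind: cluster-operator
  namespaceSelector:
    matchNames:
      - my-strimzi
  podMetricsEndpoints:
    - path: /metrics
      port: http
```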