Tags: service, terraform, prometheus, kubernetes-helm, monitor

Cannot see the target added to service monitor for Prometheus Operator


I am trying to add a target to my service monitor for Prometheus Operator (inside my Terraform setup, which uses a Helm chart to deploy Prometheus, the Prometheus Operator, the service monitor, and a bunch of other things). After successfully deploying the service monitor, I cannot see the new target app.kubernetes.io/instance: jobs-manager in Prometheus. I am not sure what I did wrong in my configuration. I have also been checking this document to see what is missing, but cannot figure it out yet. Here are the relevant configuration files:

  1. /helm/charts/prometheus-abcd/templates/service_monitor.tpl
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: jobs-manager-servicemonitor
  # Change this to the namespace the Prometheus instance is running in
  namespace: prometheus
  labels:
    app: jobs-manager
    release: prometheus
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: jobs-manager # Targets jobs-manager service
  endpoints:
  - port: http
    interval: 15s

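Note that for the ServiceMonitor above to discover anything, the Service it targets must carry the label the selector matches and expose a port named http. A minimal sketch of such a Service (the name, namespace, and port number here are assumptions for illustration, not taken from the question):

```yaml
# Hypothetical Service that the ServiceMonitor's selector would match.
apiVersion: v1
kind: Service
metadata:
  name: jobs-manager
  namespace: default                            # assumed namespace
  labels:
    app.kubernetes.io/instance: jobs-manager    # must match spec.selector.matchLabels in the ServiceMonitor
spec:
  selector:
    app: jobs-manager                           # assumed pod label
  ports:
  - name: http                                  # must match the endpoint port name in the ServiceMonitor
    port: 8080
    targetPort: 8080
```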
  2. /helm/charts/prometheus-abcd/Chart.yaml
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#

apiVersion: v1
appVersion: "1.0.0"
description: Prometheus Service monitor, customized for abcd
name: prometheus-abcd
version: 1.0.0

  3. /terraform/kubernetes/helm_values/prometheus.yaml
prometheus:
  podMetadata:
    annotations:
      container.apparmor.security.beta.kubernetes.io/prometheus-operator: runtime/default
      seccomp.security.alpha.kubernetes.io/pod: runtime/default

nodeAffinityPreset:
  ## Node affinity type
  ## Allowed values: soft, hard
  ##
  type: "hard"
  ## Node label key to match
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: "cloud.google.com/gke-nodepool"
  ## Node label values to match
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: [
    "abcd-primary-pool"
  ]

prometheus:
  configMaps:
    - prometheus-config

## ServiceMonitors to be selected for target discovery.
## If {}, select all ServiceMonitors
##
serviceMonitorSelector: {
  jobs-manager-servicemonitor
}
# matchLabels:
#   foo: bar

## Namespaces to be selected for ServiceMonitor discovery.
## See https://github.com/prometheus-operator/prometheus-operator/blob/master/
## Documentation/api.md#namespaceselector for usage
##
serviceMonitorNamespaceSelector: {
  matchNames: prometheus
}

When running this command: kubectl get -n prometheus prometheuses.monitoring.coreos.com prometheus-kube-prometheus-prometheus I can see that the service monitor was successfully deployed:

(screenshot: the service monitor is listed as deployed)

But when I run this command: kubectl describe -n prometheus prometheuses.monitoring.coreos.com prometheus-kube-prometheus-prometheus, I see that many parameters still have missing values, such as serviceMonitorSelector:

Name:         prometheus-kube-prometheus-prometheus
Namespace:    prometheus
Labels:       app.kubernetes.io/component=prometheus
              app.kubernetes.io/instance=prometheus
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kube-prometheus
              helm.sh/chart=kube-prometheus-3.4.0
Annotations:  meta.helm.sh/release-name: prometheus
              meta.helm.sh/release-namespace: prometheus
API Version:  monitoring.coreos.com/v1
Kind:         Prometheus
Metadata:
  Creation Timestamp:  2021-05-26T15:19:42Z
  Generation:          1
  Managed Fields:
    API Version:  monitoring.coreos.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
        f:labels:
          .:
          f:app.kubernetes.io/component:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
          f:helm.sh/chart:
      f:spec:
        .:
        f:affinity:
          .:
          f:podAntiAffinity:
            .:
            f:preferredDuringSchedulingIgnoredDuringExecution:
        f:alerting:
          .:
          f:alertmanagers:
        f:configMaps:
        f:enableAdminAPI:
        f:externalUrl:
        f:image:
        f:listenLocal:
        f:logFormat:
        f:logLevel:
        f:paused:
        f:podMetadata:
          .:
          f:labels:
            .:
            f:app.kubernetes.io/component:
            f:app.kubernetes.io/instance:
            f:app.kubernetes.io/name:
        f:podMonitorNamespaceSelector:
        f:podMonitorSelector:
        f:probeNamespaceSelector:
        f:probeSelector:
        f:replicas:
        f:retention:
        f:routePrefix:
        f:ruleNamespaceSelector:
        f:ruleSelector:
        f:securityContext:
          .:
          f:fsGroup:
          f:runAsUser:
        f:serviceAccountName:
        f:serviceMonitorNamespaceSelector:
        f:serviceMonitorSelector:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2021-05-26T15:19:42Z
  Resource Version:  11485229
  Self Link:         /apis/monitoring.coreos.com/v1/namespaces/prometheus/prometheuses/prometheus-kube-prometheus-prometheus
  UID:               xxxxxxxxxxxxxxxxxxxx
Spec:
  Affinity:
    Pod Anti Affinity:
      Preferred During Scheduling Ignored During Execution:
        Pod Affinity Term:
          Label Selector:
            Match Labels:
              app.kubernetes.io/component:  prometheus
              app.kubernetes.io/instance:   prometheus
              app.kubernetes.io/name:       kube-prometheus
          Namespaces:
            prometheus
          Topology Key:  kubernetes.io/hostname
        Weight:          1
  Alerting:
    Alertmanagers:
      Name:         prometheus-kube-prometheus-alertmanager
      Namespace:    prometheus
      Path Prefix:  /
      Port:         http
  Config Maps:
    prometheus-config
  Enable Admin API:  false
  External URL:      http://prometheus-kube-prometheus-prometheus.prometheus:9090/
  Image:             docker.io/bitnami/prometheus:2.24.0-debian-10-r1
  Listen Local:      false
  Log Format:        logfmt
  Log Level:         info
  Paused:            false
  Pod Metadata:
    Labels:
      app.kubernetes.io/component:  prometheus
      app.kubernetes.io/instance:   prometheus
      app.kubernetes.io/name:       kube-prometheus
  Pod Monitor Namespace Selector:
  Pod Monitor Selector:
  Probe Namespace Selector:
  Probe Selector:
  Replicas:      1
  Retention:     10d
  Route Prefix:  /
  Rule Namespace Selector:
  Rule Selector:
  Security Context:
    Fs Group:            1001
    Run As User:         1001
  Service Account Name:  prometheus-kube-prometheus-prometheus
  Service Monitor Namespace Selector:
  Service Monitor Selector:
Events:  <none>

This is why I checked this document to get the templates for serviceMonitorSelector and serviceMonitorNamespaceSelector and added them to the prometheus.yaml file above, but I am not sure whether they are correctly added.

If anyone has experience setting up a service monitor with Helm and Terraform, could you please help me check what I did wrong? Thank you in advance.


Solution

  • The way you have passed the values in prometheus.yaml is wrong:

    serviceMonitorNamespaceSelector: {
      matchNames: prometheus
    }                                  # this is the wrong way
    

    You should set the values like this instead:

    serviceMonitorNamespaceSelector:
      matchLabels:
        prometheus: somelabel
    

    The same applies to

    serviceMonitorSelector: {
      jobs-manager-servicemonitor
    }
    

    which is also not set the proper way.
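    Putting both corrections together, the values fragment might look like the sketch below. The release: prometheus label is an assumption here; it must be whatever label the ServiceMonitor object actually carries. Also note that the namespace selector matches labels on the Namespace object, not its name (on newer clusters every namespace automatically carries the kubernetes.io/metadata.name label):

    ```yaml
    # Sketch of corrected selectors for prometheus.yaml (label values are assumptions)
    serviceMonitorSelector:
      matchLabels:
        release: prometheus                       # must match a label set on the ServiceMonitor
    serviceMonitorNamespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: prometheus   # matches a label on the Namespace object
    ```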

    For reference, please check: https://github.com/prometheus-community/helm-charts/blob/83aa113f52e5f45fd04b4dd909172a6da1826592/charts/kube-prometheus-stack/values.yaml#L2034

    Check out this nice example: https://rtfm.co.ua/en/kubernetes-a-clusters-monitoring-with-the-prometheus-operator/

    Prometheus Operator with Terraform & Helm: https://github.com/OpenQAI/terraform-helm-release-prometheus-operator
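    For completeness, here is a minimal sketch of how such values are typically fed to the chart from Terraform via the Helm provider. The release name, repository, and file path are assumptions based on the question's layout (the image in the describe output suggests Bitnami's kube-prometheus chart):

    ```hcl
    # Hypothetical helm_release wiring the values file from the question.
    resource "helm_release" "prometheus" {
      name       = "prometheus"
      namespace  = "prometheus"
      repository = "https://charts.bitnami.com/bitnami"   # assumed chart source
      chart      = "kube-prometheus"

      # Merged into the chart's default values; selectors above go in this file.
      values = [
        file("${path.module}/helm_values/prometheus.yaml")
      ]
    }
    ```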