Tags: kubernetes, prometheus, kubernetes-helm

Can't see Redis exporter in Prometheus


On my local KinD cluster, I deployed the kube-prometheus-stack with the default values file. Prometheus is configured inside my prometheus namespace.
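
For reference, the install looked roughly like the commands below (a sketch: the release name prom-stack is taken from the solution at the end, and the chart comes from the standard prometheus-community Helm repository):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prom-stack prometheus-community/kube-prometheus-stack \
  --namespace prometheus --create-namespace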

In another namespace redis, I installed redis-ha using the following values file:

image:
  repository: redis/redis-stack-server
  tag: 7.2.0-v6
  pullPolicy: IfNotPresent

replicas: 1

redis:
  config:
    protected-mode: "no"
    min-replicas-to-write: 0
    loadmodule: /opt/redis-stack/lib/redisbloom.so

  disableCommands:
    - FLUSHALL

exporter:
  enabled: true
  image: oliver006/redis_exporter
  tag: v1.57.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  portName: exporter-port
  scrapePath: /metrics

  # Address/Host for the Redis instance. Default: localhost
  # Exists to circumvent issues with IPv6 DNS resolution that occur in certain environments
  ##
  address: localhost

  ## Set this to true if you want to connect to redis tls port
  # sslEnabled: true

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

  serviceMonitor:
    # When set true then use a ServiceMonitor to configure scraping
    enabled: true
    # Set the namespace the ServiceMonitor should be deployed
    namespace: "prometheus"
    # Set how frequently Prometheus should scrape
    interval: 15s
    # Set path to redis-exporter telemetry-path
    # telemetryPath: /metrics
    # Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
    labels:
      app: redis-ha
    # Set timeout for scrape
    # timeout: 10s
    # Set additional properties for the ServiceMonitor endpoints such as relabeling, scrapeTimeout, tlsConfig, and more.
    endpointAdditionalProperties: {}

  # prometheus exporter SCANS redis db which can take some time
  # allow different probe settings to not let container crashloop
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 3
    periodSeconds: 15

  readinessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 3
    periodSeconds: 15
    successThreshold: 2
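
For completeness, the chart install itself would look roughly like this (assumptions: redis-ha 4.26.x comes from the DandyDeveloper charts repository, and the values above are saved as redis-values.yaml):

helm repo add dandydev https://dandydeveloper.github.io/charts
helm repo update
helm install redis-ha dandydev/redis-ha \
  --namespace redis --create-namespace \
  -f redis-values.yaml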

The above values file created the following ServiceMonitor:

Name:         redis-ha
Namespace:    prometheus
Labels:       app=redis-ha
              app.kubernetes.io/managed-by=Helm
              chart=redis-ha-4.26.1
              heritage=Helm
              release=redis-ha
Annotations:  meta.helm.sh/release-name: redis-ha
              meta.helm.sh/release-namespace: redis
API Version:  monitoring.coreos.com/v1
Kind:         ServiceMonitor
Metadata:
  Creation Timestamp:  2024-02-27T08:45:59Z
  Generation:          2
  Resource Version:    2036
  UID:                 44d3e5c7-2ca1-4434-adaa-e29c1e3cb4da
Spec:
  Endpoints:
    Interval:     15s
    Path:         /metrics
    Target Port:  9121
  Job Label:      redis-ha
  Namespace Selector:
    Match Names:
      redis
  Selector:
    Match Labels:
      App:       redis-ha
      Exporter:  enabled
      Release:   redis-ha
Events:          <none>

In Prometheus, I can see redis-related metrics thanks to auto-complete when I type redis, but when I go to "Targets" or "Service Discovery" I don't see my redis exporter. I checked the exporter logs but didn't find any errors; the redis-exporter is up and running, and the labels seem to match.

I don't understand why I can see the metrics but not the target.
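
A quick way to double-check which objects a kube-prometheus-stack Prometheus will actually select is to read the selectors off the Prometheus custom resource (assuming, as above, that it lives in the prometheus namespace):

kubectl get prometheus -n prometheus \
  -o jsonpath='{.items[0].spec.serviceMonitorSelector}{"\n"}{.items[0].spec.serviceMonitorNamespaceSelector}{"\n"}'

If serviceMonitorSelector prints a matchLabels entry rather than {}, a ServiceMonitor must carry that label to be discovered.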


Solution

  • I ended up changing serviceMonitorSelector and serviceMonitorNamespaceSelector in my Prometheus settings to match the namespace and labels of the redis app. I had assumed that leaving them set to {} would match everything, so at first it looked like a permissions issue.

    Edit: it wasn't a permissions issue. I looked at the kube-prometheus-stack values file and saw that serviceMonitorSelector defaults to matching the Helm release name via a release label. So I simply added release: prom-stack (my release name) to the ServiceMonitor's labels, as sketched below, and now it works :)
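
    For anyone hitting the same issue, the fix translates to a small change in the redis-ha values (a sketch; it assumes the kube-prometheus-stack release is named prom-stack and that its serviceMonitorSelectorNilUsesHelmValues setting is left at the default of true, which makes the operator select only ServiceMonitors labelled with the release name):

    exporter:
      serviceMonitor:
        enabled: true
        namespace: "prometheus"
        interval: 15s
        labels:
          app: redis-ha
          release: prom-stack   # must match the kube-prometheus-stack release name

    After a helm upgrade with these values, the redis exporter should show up under "Targets". Alternatively, the stack can be told to select every ServiceMonitor regardless of labels by setting prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues: false in the kube-prometheus-stack values.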