I have installed the kube-prometheus stack on k8s via helm:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring -f alertmanager-config.yaml
where the alertmanager-config.yaml looks as follows:
alertmanager:
  config:
    global:
      resolve_timeout: 5m
    route:
      group_wait: 20s
      group_interval: 4m
      repeat_interval: 4h
      receiver: 'null'
      routes:
        - receiver: 'slack-k8s-admin'
    receivers:
      - name: 'null'
      - name: 'slack-k8s-admin'
        slack_configs:
          - api_url: 'https://hooks.slack.com/services/...'
            channel: '#k8s-monitoring'
            send_resolved: true
            icon_url: https://avatars3.githubusercontent.com/u/3380462
            title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
            text: >-
              {{ range .Alerts }}
                *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
                *Description:* {{ .Annotations.description }}
                *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
                *Details:*
                {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
                {{ end }}
              {{ end }}
I can receive the alerts after configuring them; however, they never show which cluster they come from.
Since I have several clusters that should send to the same channel, it would be good to get notified which cluster has issues.
Do you know how I have to adapt the config?
You can read the cluster name from Prometheus' external labels, as described in the other answer.
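For completeness, with the kube-prometheus-stack chart that approach amounts to attaching an external label to Prometheus and reading it back in the Slack template. A minimal sketch; the label name `cluster` and its value are assumptions you would choose yourself:

prometheus:
  prometheusSpec:
    externalLabels:
      cluster: production-eu   # attached to every alert this Prometheus fires

alertmanager:
  config:
    receivers:
      - name: 'slack-k8s-admin'
        slack_configs:
          - channel: '#k8s-monitoring'
            # the external label is identical across the whole alert group,
            # so it is available through .CommonLabels
            title: '[{{ .Status | toUpper }}] {{ .CommonLabels.cluster }} Monitoring Event Notification'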
Another option is to define a template for the cluster name:
alertmanager:
  config:
    receivers:
      - name: 'slack-notifications'
        slack_configs:
          - channel: '#lolcats'
            send_resolved: true
            text: '{{ template "slack.default-alert.text" . }}'
      - name: 'null'
    templates:
      - '/etc/alertmanager/config/*.tmpl'
  templateFiles:
    template_1.tmpl: |-
      {{ define "cluster" }}static-cluster-name{{ end }}
      {{ define "slack.default-alert.text" }}
      {{- $root := . -}}
      {{ range .Alerts }}
      ... your template ...
      {{ template "cluster" $root }}
      {{ end }}
      {{ end }}
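With that in place, you can also drop the cluster name into the title from your question; a sketch of just the changed line:

title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ template "cluster" . }} Monitoring Event Notification'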
In place of the `static-cluster-name` text, you can derive the name at notification time, for example from the Alertmanager external URL:

{{ define "cluster" }}{{ .ExternalURL | reReplaceAll ".*alertmanager\\.(.*)" "$1" }}{{ end }}

Here `.ExternalURL` is not a Helm value but template data that Alertmanager fills in from the external URL configured for the instance; with the kube-prometheus-stack helm chart, that URL comes from the values.yaml. It's of course also possible to use a placeholder from whatever tool you're using to generate these manifests.
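For the regex above to return something useful, the external URL has to carry the cluster name. With kube-prometheus-stack it can be set in values.yaml; the hostname here is an assumed example:

alertmanager:
  alertmanagerSpec:
    # Alertmanager exposes this value to templates as .ExternalURL
    externalUrl: https://alertmanager.production-eu.example.com

With that URL, the reReplaceAll above yields production-eu.example.com.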