I have a ready-made Kubernetes cluster with Grafana + Prometheus (Operator) monitoring already configured. I added the following labels to the pods running my app:
prometheus.io/scrape: "true"
prometheus.io/path: "/my/app/metrics"
prometheus.io/port: "80"
But the metrics don't make it into Prometheus, even though Prometheus does have all the default Kubernetes metrics.
What is the problem?
The Prometheus Operator does not pick up scrape targets from the prometheus.io/* annotations; it builds the scrape configuration from its own custom resources. You should create ServiceMonitor or PodMonitor objects.

ServiceMonitor describes the set of targets to be monitored by Prometheus. The Operator automatically generates the Prometheus scrape configuration based on this definition, and the targets will be the IPs of all the pods behind the selected Service.
Example:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
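Note that the ServiceMonitor selects a Service by its labels (app: example-app above) and scrapes a named port of that Service, so there has to be a matching Service in front of your pods. A minimal sketch of such a Service follows; the names and port numbers here are assumptions, adjust them to your app:

# Hypothetical Service matching the ServiceMonitor above
apiVersion: v1
kind: Service
metadata:
  name: example-app
  labels:
    app: example-app        # matched by spec.selector of the ServiceMonitor
spec:
  selector:
    app: example-app        # must match your pod labels
  ports:
  - name: web               # the named port referenced by "port: web"
    port: 80
    targetPort: 80

Since your metrics are not served on the default /metrics path, also add path: /my/app/metrics to the endpoint entry of the ServiceMonitor.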
PodMonitor declaratively specifies how groups of pods should be monitored, without requiring a Service in front of them. The Operator automatically generates the Prometheus scrape configuration based on this definition.
Example:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
  - port: web
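Also make sure the Prometheus resource deployed by the Operator actually selects your ServiceMonitor/PodMonitor: it only picks up monitors whose labels match its serviceMonitorSelector / podMonitorSelector. Below is a sketch, assuming the selectors match on the team: frontend label used above; check the selectors of your own Prometheus object (for example with kubectl get prometheus -o yaml) and label your monitors accordingly:

# Sketch of a Prometheus resource; names and selectors are assumptions
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend        # must match the labels on your ServiceMonitor
  podMonitorSelector:
    matchLabels:
      team: frontend        # must match the labels on your PodMonitor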