We have a service with 2 replicas deployed in OpenShift. The service uses Dropwizard metrics and exposes them through Spring Boot Actuator on the /actuator/prometheus endpoint. There is an exposed route for Actuator's port 8082, and Prometheus is configured to scrape from {exposed-service-route}/actuator/prometheus.
The problem I'm trying to solve: because the exposed route sits behind a load balancer, each scrape returns metrics from whichever pod the request happens to hit. The metric names are identical across pods (it's the same service), so the values displayed in Grafana are wrong. E.g. "processed.messages.count" is 40 when the response comes from the first pod, then jumps to 150 when it comes from the second. How can I distinguish these responses and display them correctly? Let's say that for now adding tags to the metrics is not an option.
You don't. The standard approach is to bypass the load balancer and scrape every pod directly; Prometheus then attaches a distinct `instance` label to each replica's series, so they never get mixed up. Since you used the openshift tag, I recommend taking a look at Prometheus Kubernetes service discovery (`kubernetes_sd_configs`).
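A minimal sketch of such a scrape config, assuming Prometheus has API access to the cluster; the job name, namespace, and the `app` pod-label value are placeholders you'd adapt to your deployment:

```yaml
scrape_configs:
  - job_name: 'my-service-pods'        # hypothetical job name
    metrics_path: /actuator/prometheus
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - my-namespace             # hypothetical namespace
    relabel_configs:
      # Keep only pods belonging to this service (label name/value are assumptions)
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: my-service
        action: keep
      # Point the scrape at the actuator port on each pod IP
      - source_labels: [__address__]
        regex: '([^:]+)(?::\d+)?'
        replacement: '$1:8082'
        target_label: __address__
      # Surface the pod name as a label so Grafana can split series per replica
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```

With this, each replica becomes its own target, and "processed.messages.count" shows up once per pod instead of flip-flopping between 40 and 150 on the same series.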