My first question has been answered. Now I am trying to interpret the result returned by the query below.
METRIC ACQUISITION:
import io.prometheus.client.SimpleTimer;
import io.prometheus.client.Summary;

// done globally, once at startup
private static final Summary responseTime = Summary.build()
    .name("http_response_time")
    .labelNames("method", "handler", "status")
    .help("Request completed")
    .register();

// done BEFORE every request
final long start = System.nanoTime();
// "start" is saved as a request attribute and later on read back from the request

// done AFTER every request
final double latencyInSeconds =
    SimpleTimer.elapsedSecondsFromNanos(start, System.nanoTime());
responseTime.labels(
        request.getMethod(),
        handlerLabel,
        String.valueOf(response.getStatus()))
    .observe(latencyInSeconds);
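For context, this is roughly how the two snippets fit together end to end in my setup, shown as a simplified servlet-filter sketch. The filter class, the REQUEST_START_NANOS attribute key and the way handlerLabel is derived are placeholders for this example, not the real code:

import io.prometheus.client.SimpleTimer;
import io.prometheus.client.Summary;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Sketch: one filter stores the start timestamp as a request attribute
// before the request and observes the latency after the request.
public class ResponseTimeFilter implements Filter {

    // placeholder attribute key, not the real one
    private static final String REQUEST_START_NANOS = "REQUEST_START_NANOS";

    // same Summary as registered above, repeated here so the sketch compiles on its own
    private static final Summary responseTime = Summary.build()
        .name("http_response_time")
        .labelNames("method", "handler", "status")
        .help("Request completed")
        .register();

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // BEFORE the request: remember the start timestamp on the request itself
        request.setAttribute(REQUEST_START_NANOS, System.nanoTime());
        try {
            chain.doFilter(req, res);
        } finally {
            // AFTER the request: read the start timestamp back and observe the latency
            long start = (Long) request.getAttribute(REQUEST_START_NANOS);
            double latencyInSeconds =
                SimpleTimer.elapsedSecondsFromNanos(start, System.nanoTime());
            // placeholder: derive the handler label however fits the routing
            String handlerLabel = request.getRequestURI();
            responseTime.labels(
                    request.getMethod(),
                    handlerLabel,
                    String.valueOf(response.getStatus()))
                .observe(latencyInSeconds);
        }
    }

    @Override
    public void destroy() {
    }
}

Observing in the finally block makes sure that error responses are measured as well.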
QUERY:
rate(http_response_time_sum{application="myapp",handler="myHandler", status="200"}[1m])
/
rate(http_response_time_count{application="myapp",handler="myHandler", status="200"}[1m])
RESULT:
0.0020312920780360694
So, what is this? The latency is measured in nanoseconds and converted to seconds before it is observed by the Summary.
As far as I interpret it, this tells me that the successful requests of the last minute had an average latency of about 0.002 seconds (roughly 2 ms).
Is that correct?
I will post my results here: the measured/calculated/interpreted value does seem to be correct.
Still, I would prefer more detailed, mathematical documentation of the Prometheus functions.
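For reference, this is how I currently read the query mathematically. It is my own sketch based on the documentation of rate() and of Summary metrics; rate()'s extrapolation at the window boundaries is ignored here:

\[
\frac{\operatorname{rate}(\mathtt{\_sum}[1\mathrm{m}])}{\operatorname{rate}(\mathtt{\_count}[1\mathrm{m}])}
\approx \frac{\Delta\mathtt{\_sum}\,/\,60\,\mathrm{s}}{\Delta\mathtt{\_count}\,/\,60\,\mathrm{s}}
= \frac{\Delta\mathtt{\_sum}}{\Delta\mathtt{\_count}}
= \frac{\text{seconds spent in matching requests during the window}}{\text{number of matching requests during the window}}
\]

i.e. the average observed latency in seconds over the last minute, which for the result above is about 0.002 s, or roughly 2 ms per request.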