Is there a way to show the latency statistics (and hopefully graphs) of the internal computations of a service in the locust web interface during the test?
I have a service that internally performs several computations. I need to run a load test and benchmark the time taken by each of these internal computations. Something like:
/compute:
However, in Locust I can only see the overall time statistics for the endpoint (/compute in this case).
Right now I am returning the latencies of each of the computations in the response. I have checked the docs, but I have not found a way to show the statistics of those numbers in the Locust web interface during the test.
The only workaround I have found is saving the responses to a file and computing the statistics separately.
Is there any way to do it? Or any other better solution?
Thank you so much in advance
A quick workaround might be firing your own request event for each computation, so that they show up as separate endpoints in the web UI.
request_meta = {
    "request_type": "my-custom-type",
    "name": "my-custom-name",
    "response_time": <calculated-time>,  # in milliseconds
    "response_length": 0,
    "exception": None,
    "context": None,
    "response": None,
}
env.events.request.fire(**request_meta)
Writing a wrapper might result in cleaner code, too.
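For example, such a wrapper could be a context manager that times a block of code and fires a request event for it. This is only a sketch: the name `timed_step` and the default `request_type` value are made up here, and the event fields follow Locust's `request` event signature (Locust 2.x).

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_step(environment, name, request_type="compute-step"):
    """Time the wrapped block and report it to Locust as its own "request"."""
    start = time.perf_counter()
    exception = None
    try:
        yield
    except Exception as e:
        exception = e
        raise
    finally:
        # Fire a request event so the step shows up in the web UI stats.
        environment.events.request.fire(
            request_type=request_type,
            name=name,
            response_time=(time.perf_counter() - start) * 1000,  # Locust expects ms
            response_length=0,
            exception=exception,
            context=None,
            response=None,
        )
```

Inside a task you would then write something like `with timed_step(self.environment, "preprocessing"): ...` around each computation (or around the code that extracts its latency from the response), and each name gets its own row and chart in the web UI.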