Tags: jmeter, performance-testing, load-testing, kpi

JMeter Performance Metrics for a Load Test


What are the performance metrics or KPIs (such as response time, throughput, hits per second, etc.) that need to be considered when running a load test against a commercial web application with Apache JMeter, in order to prove that the application under test is stable or unstable under a given load of users or transactions?


Solution

  • The KPIs which JMeter measures are listed and described under JMeter Glossary.

    The main ones are:

    Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter execute any client-side code, for example JavaScript.

    Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.

    Connect Time. JMeter measures the time it took to establish the connection, including the SSL handshake. Note that connect time is not automatically subtracted from latency. If a connection error occurs, the metric equals the time it took to encounter the error; for example, in the case of a timeout it should equal the connection timeout.
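    These three per-sample timings end up as columns in JMeter's CSV results file (the .jtl). As a sketch, the averages can be pulled out with a few lines of Python; the column names below are JMeter's defaults, while the sample values themselves are made up for illustration:

    ```python
    import csv
    import io

    # A tiny inline sample in JMeter's default CSV (.jtl) layout.
    # Column names (timeStamp, elapsed, Latency, Connect, ...) are
    # JMeter's standard ones; the row values are invented.
    jtl = io.StringIO(
        "timeStamp,elapsed,label,success,Latency,Connect\n"
        "1700000000000,250,Home,true,120,40\n"
        "1700000001000,310,Home,true,150,35\n"
    )

    rows = list(csv.DictReader(jtl))

    def avg(field):
        # Average a numeric column across all samples.
        return sum(int(r[field]) for r in rows) / len(rows)

    print(f"avg elapsed: {avg('elapsed')} ms")  # 280.0 ms
    print(f"avg latency: {avg('Latency')} ms")  # 135.0 ms
    print(f"avg connect: {avg('Connect')} ms")  # 37.5 ms
    ```

    Against a real test you would point `csv.DictReader` at the .jtl file instead of the inline string.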

    Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.

    The formula is: Throughput = (number of requests) / (total time).
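    The formula can be sketched directly from per-sample timestamps. JMeter records start times in epoch milliseconds; the sample data below is made up, and the end of the last sample is its start time plus its elapsed time:

    ```python
    # Throughput = requests / (end of last sample - start of first sample).
    # Timestamps are epoch milliseconds; the three samples are invented.
    samples = [
        {"start": 1_700_000_000_000, "elapsed": 200},
        {"start": 1_700_000_000_500, "elapsed": 300},
        {"start": 1_700_000_004_000, "elapsed": 1000},
    ]

    first_start = min(s["start"] for s in samples)
    last_end = max(s["start"] + s["elapsed"] for s in samples)
    total_seconds = (last_end - first_start) / 1000.0  # 5.0 s here

    throughput = len(samples) / total_seconds
    print(f"{throughput:.2f} requests/second")  # 0.60 requests/second
    ```

    Note that the 3.5-second gap between the second and third samples is counted in the total time, which is why idle intervals pull the throughput figure down.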

    An important metric which is not listed there is the error rate: whether each request succeeded or failed.
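    Since JMeter records a success flag per sample, the overall error rate is simply the share of failed samples. A minimal sketch, with made-up flags:

    ```python
    # One boolean per sample, as recorded in the "success" column
    # of the results file; these ten flags are invented.
    success_flags = [True, True, False, True, True,
                     True, False, True, True, True]

    error_rate = success_flags.count(False) / len(success_flags) * 100
    print(f"error rate: {error_rate:.1f}%")  # error rate: 20.0%
    ```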

    If you correlate them with the number of active threads (virtual users), you will see the impact of the increasing load on the other metrics.

    For example, suppose you executed the test with the anticipated number of users and generated the HTML Reporting Dashboard. Ideally, response time should remain the same and the number of transactions per second should grow by the same factor as the number of users. That indicates the system under test is "stable" (whatever that means in your context).

    At some point, most probably, you will see that despite the increasing load the throughput no longer grows, while response times start rising or errors start occurring. This means the system under test is no longer "stable" and you have just passed the saturation point.
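    One rough way to locate that saturation point is to compare throughput across ramp-up steps and flag the first step where adding users no longer buys a proportional gain. The (users, throughput) pairs and the 5% growth threshold below are both assumptions for illustration:

    ```python
    # Made-up (virtual users, throughput in req/s) pairs, one per
    # ramp-up step of a hypothetical stepped load test.
    steps = [(10, 50.0), (20, 99.0), (30, 148.0), (40, 151.0), (50, 149.0)]

    # Flag the first step where throughput grows by less than 5%
    # despite more users (threshold chosen arbitrarily).
    saturation = None
    for (u_prev, t_prev), (u, t) in zip(steps, steps[1:]):
        if t < t_prev * 1.05:
            saturation = u_prev
            break

    print(f"saturation around {saturation} users")  # saturation around 30 users
    ```

    In practice you would also watch response times and error rates at each step rather than throughput alone.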

    More information: How to Do Load Testing