I have JMeter-based performance tests, i.e. concurrent-user load test results. At the end of a test, JMeter produces an Aggregate Report where we can see the average response time, throughput, etc. These are fine.
I want to validate these results and gain confidence that the test I performed is correct in terms of the configuration I used: the number of users, ramp-up time, and so on. (From the application side, I can confirm that the transactions are actually working, and I trust the JMeter assertions, so that is not what I am looking for here.)
I saw a great article about applying Little's Law to validate the results. But I believe it assumes a stable system, where the number of users hitting the server follows a steady pattern and the same load is maintained throughout the test (please correct me if I am wrong here).
In general, though, concurrent-user tests are designed so that the load changes in a stepping pattern, as seen in the picture below.
In this situation, is Little's Law still applicable? Or is there a better mechanism to validate the results and gain confidence that the test was performed correctly and that the results are not due to bottlenecks imposed by the testing apparatus?
Thanks
In the vast majority of cases, Little's Law is applied to Load Testing, where you need to come up with a workload pattern that represents the anticipated system usage.
For other testing types, like Stress Testing or Spike Testing, sticking to Little's Law doesn't make much sense, as the workload differs by design.
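For a steady-state window of a Load Test (including each plateau of a stepped profile, measured separately), you can sanity-check the JMeter numbers against Little's Law: N = X × (R + Z), where N is the number of concurrent users, X the throughput, R the response time, and Z the think time. A minimal sketch below; the function name and all figures are illustrative, not from a real test:

```python
# Sketch: cross-checking a JMeter steady-state window with Little's Law.
# Little's Law: N = X * (R + Z)
#   N - concurrent users, X - throughput (req/s),
#   R - average response time (s), Z - think time between requests (s).

def expected_users(throughput_per_sec, avg_response_sec, think_time_sec=0.0):
    """Return the concurrency implied by Little's Law for a steady-state window."""
    return throughput_per_sec * (avg_response_sec + think_time_sec)

# Illustrative values: the Aggregate Report shows 50 req/s throughput and a
# 0.2 s average response time; the test plan uses a 1.8 s think time (timer).
implied = expected_users(50, 0.2, 1.8)
print(implied)  # 100.0

# Compare against the thread count configured for that step of the test.
configured_threads = 100
deviation = abs(implied - configured_threads) / configured_threads
print(f"deviation: {deviation:.1%}")
```

If the implied concurrency is well below the configured thread count, threads are spending time somewhere you did not account for (often a load-generator bottleneck or missing think-time in the calculation); if it roughly matches, the configured load actually reached the server.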
With regards to results validation, my expectation is that the business is mainly interested in how the system behaves under the anticipated load, rather than in the test mechanics themselves.
Check out the Why 'Normal' Load Testing Isn't Enough article for more information on the different performance testing subtypes you might want to apply to your application.