I was running load tests and the final statistics showed this number of requests:
http_reqs..................: 77
when I executed the test with 100 VUs and 2 iterations. 77
requests is roughly the amount a single user is supposed to make, so I thought the summary showed statistics for a single VU only.
Then I switched to 1000 VUs and 100 iterations and got this result:
http_reqs..................: 3803
That feels rather small for 1000 virtual users and 100 iterations, and it invalidates my assumption that the summary shows the number of requests per virtual user.
Each test run inserts records into the database, so I expected at least 1000 new records, but only 100 are added per run.
It also prints this message:
All iterations (100 in this test run) are shared between all VUs, so some of the 1000 VUs will not execute even a single iteration!
So I presume something is not working?
All the statistics at the end of the test are for the whole duration of the test, across all VUs.
As the message says, the iterations
are shared between the VUs. Put another way, the iterations setting
is how many iterations the whole test will do, not how many each individual VU will do. The VUs just pick up and execute iterations.
So in your case the 77 requests were generated by 2 iterations in total, and the 3803 requests by 100 iterations. How many VUs were used to execute them is not really relevant. This would also explain the database numbers: 100 iterations, each presumably inserting one record, gives the 100 new records you see per run.
I am guessing (because 100 iterations is 50 times 2 iterations, yet 77 * 50 != 3803) that you have some logic that makes certain requests conditional, so different iterations perform different numbers of requests?
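A quick sanity check on those numbers: dividing total requests by total iterations gives roughly the same per-iteration average in both runs, which supports the idea that iterations, not VUs, drive the totals.

```javascript
// Requests per iteration, computed from the two runs above.
// Note the VU counts (100 and 1000) never enter the calculation.
const perIterationRun1 = 77 / 2;      // first run: 2 total iterations
const perIterationRun2 = 3803 / 100;  // second run: 100 total iterations
console.log(perIterationRun1, perIterationRun2); // → 38.5 38.03
```

Both runs average about 38 requests per iteration, which is consistent with some per-iteration variation in how many requests get made.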
As I mentioned earlier, iterations are for the whole test, so if you want 100 VUs to make 2 iterations each, you actually need to ask for 200 iterations. Even that isn't guaranteed, because VUs pick up iterations as fast as they can. So there is a real possibility that, in this example, one VU ends up doing 3 iterations and another only 1, or some other split. In general this is not a problem, and in a month or two there will (hopefully) be a way around this :D
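For reference, a minimal sketch of a k6 script aiming at "100 VUs, roughly 2 iterations each" under the shared-iterations behavior described above (the target URL is just a placeholder):

```javascript
import http from 'k6/http';

export let options = {
    vus: 100,
    iterations: 200, // total for the whole test, NOT per VU
};

export default function () {
    // Each iteration one request here; your real script may make
    // a varying number of requests per iteration, as discussed above.
    http.get('https://test.k6.io/'); // placeholder URL
}
```

With these options, the 200 iterations are distributed among the 100 VUs as they become free, so most VUs do about 2 iterations, but the exact per-VU split can vary.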