I am new to using https://k6.io for load testing and was wondering about the behavior of the sleep function.
Right now I have built a load test with multiple stages. In my default function, a number of requests (GET and POST) get executed. I noticed that if I just execute these in sequence for a few minutes, my app, which is running on Elastic Beanstalk, begins to slow down and eventually throws 500 errors.
However, if I add a sleep statement after each request like so:
const getMe = http.get(`${appEndpoint}/me`, params)
check(getMe, {
  'me: status was 200': r => r.status == 200,
  'me: response time OK (under 500ms)': r => r.timings.duration < maxResponseTimeMs,
});
sleep(Math.floor(Math.random() * 4) + 1)
// next request would follow below
then I can easily increase the number of virtual users tenfold without any issues.
So my question is this: does sleep cause k6 to pause ALL requests for all virtual users for that amount of time, or does it just pause the requests from one virtual user at a time?
As I understand it, virtual users are essentially just parallel executions of the default function that runs your load test, so does the entire function pause for all users, or does it pause on a per-user basis?
I could not find any info about this in the docs, so any pointers would be appreciated!
Thanks
Does sleep cause k6 to pause ALL requests for all virtual users for that amount of time, or does it just pause the requests from one virtual user at a time?
Your intuition is correct: since VUs execute the default function in parallel and in isolation from one another, a sleep() call will pause execution for only that VU.
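To make the per-VU behavior concrete, here is a minimal Node.js sketch (not k6 itself, and not using the k6 runtime) that models each VU as an independent async loop. The VU count, iteration count, and log array are illustrative stand-ins; the point is that awaiting a sleep in one loop does not block the other loop.

```javascript
// Sketch: each "VU" is its own loop, so sleeping pauses only that loop.
const sleep = (s) => new Promise((resolve) => setTimeout(resolve, s * 1000));

async function vu(id, iterations, log) {
  for (let i = 0; i < iterations; i++) {
    log.push(`VU ${id} iteration ${i}`); // stands in for http.get/check
    await sleep(0.01);                   // pauses only this VU's loop
  }
}

async function main() {
  const log = [];
  // Both "VUs" run concurrently, mirroring how k6 runs its default function
  // once per VU in parallel.
  await Promise.all([vu(1, 3, log), vu(2, 3, log)]);
  return log;
}

main().then((log) => console.log(log.join('\n')));
```

The printed log interleaves entries from both VUs, showing that neither one waits for the other's sleep to finish.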
Since you're sleeping for a random 1-4s between requests, your server is likely able to cope with the reduced, randomized traffic, whereas sending requests as fast as your test machine can dispatch them results in slowdowns and 500 errors. During testing you will discover the right balance for your system.
This sleep()-ing technique is useful precisely for this reason: it lets you control the rate of requests sent. Also take a look at the --rps option.
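For example, a run capped with --rps might look like this (the script filename and the limit of 100 are placeholders; adjust to your own test):

```shell
# Cap the total request rate across all VUs at roughly 100 requests/second:
k6 run --rps 100 loadtest.js
```

Unlike per-VU sleep() calls, --rps throttles the aggregate rate of the whole test, so the two approaches complement each other.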