I want to measure the page load time of a web application with a maximum of 10 concurrent virtual users performing actions on different pages of the site. I am concerned that the pages' rendering time will be considerably longer than what JMeter alone measures.
I am thinking of making a JMeter script using:
HTTP Request Defaults with the "Retrieve All Embedded Resources" and parallel downloads checkboxes enabled - to fetch embedded resources (scripts, stylesheets, images, etc.) from the pages in parallel, the way a browser would
HTTP Cookie Manager - for representing browser cookies, enabling cookie-based authentication and maintaining sessions.
HTTP Cache Manager - for simulating resources being served from the browser's cache on subsequent requests by the same logged-in virtual user
HTTP Header Manager - to represent browser headers like User-Agent, Content-Type, encoding, etc.
However, I think the closest I can get to the real page load time using pure JMeter is to group the requests for each page under a Transaction Controller and use the transaction's elapsed time as the page load time.
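Put together, a test plan along these lines might look as follows (the page names, request paths, and the parallel pool size of 6 are illustrative assumptions, not part of the original question):

```
Test Plan
├── HTTP Request Defaults   (Retrieve All Embedded Resources ✓, Parallel downloads: 6)
├── HTTP Cookie Manager
├── HTTP Cache Manager
├── HTTP Header Manager     (User-Agent, Accept-Encoding, ...)
└── Thread Group (10 users)
    ├── Transaction Controller "Open home page"   (Generate parent sample ✓)
    │   └── HTTP Request  GET /
    └── Transaction Controller "Login"            (Generate parent sample ✓)
        ├── HTTP Request  GET /login
        └── HTTP Request  POST /login
```

Ticking "Generate parent sample" on each Transaction Controller makes listeners report one aggregated elapsed time per page instead of one result per individual request.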
As I understand the problem, using only JMeter, a Page Load would be the sum of:
But, when simulating a browser, a page load would be the sum of:
Rendering time would include things such as:
To get a feel for this rendering time, I would like to know from someone with more experience: would it be good practice to configure 9 concurrent users doing scripted navigations and interactions with the website, and then have 1 more virtual user in a separate Thread Group do the same scripted navigations through the Selenium WebDriver plugin for JMeter?
Or, since the user count is small, should I run all 10 users through the Selenium WebDriver plugin and spawn 10 browsers on the same load generator? (With this approach I am more concerned about UI-test flakiness and the load placed on the load generator machine.)
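For the WebDriver side, the plugin's sampler executes a small script (JavaScript by default) against a `WDS` object. A minimal sketch, assuming the jp@gc WebDriver Sampler and an illustrative URL, could combine the sampler's own timer with the browser's Navigation Timing API:

```javascript
// jp@gc - WebDriver Sampler script (JavaScript); the URL is an assumption
WDS.sampleResult.sampleStart()                  // start JMeter's timer
WDS.browser.get('https://example.com/')         // real browser navigation
// Ask the browser itself how long the full load (including rendering) took
var loadMs = WDS.browser.executeScript(
    'return performance.timing.loadEventEnd - performance.timing.navigationStart')
WDS.log.info('Browser-reported load time: ' + loadMs + ' ms')
WDS.sampleResult.sampleEnd()                    // stop JMeter's timer
```

Comparing this sampler's elapsed time with the Transaction Controller's elapsed time for the same page would give a rough measure of the rendering overhead that plain HTTP samplers miss.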
What do you think? Any opinion is greatly appreciated.
There are 2 separate actions: