I have an existing set of frontend automated tests that run in Python and use pytest. I would like to add an automated test that, when run against a Chromium-based browser, collects performance metrics. Assume that browser is a custom wrapper around RemoteWebDriver and that I am not able to create a Chrome driver inside these tests.
import pytest

@pytest.mark.chrome_only
def test_lighthouse_performance(browser):
    session = browser.bidi_connection()
    # What do I do now?
The example at SeleniumHQ mentions async, but that isn't compatible with my current test framework, and those methods don't exist on the RemoteWebDriver class. In addition, there isn't any documentation on using the bidi_connection() method in the remote webdriver. The bidi_api doc page is similarly unhelpful.
I eventually figured this out, but it took a lot of trial and error because the CDP documentation is not very user-friendly on the Python side of Selenium.
This answer breaks down the resources I used to reach a working solution and the code examples I pulled from, and ends with an annotated snippet that encapsulates the solution.
Documentation
Useful code resources
Code snippet
import pytest

# These imports are not strictly necessary, but they help with
# autocompletion/typing in VSCode
from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver
from selenium.webdriver.common.bidi.cdp import CdpSession

# The devtools module is versioned; you can find the latest version here:
# https://github.com/SeleniumHQ/selenium/tree/trunk/common/devtools/chromium
import selenium.webdriver.common.devtools.v113 as v113Devtools

import trio  # Selenium uses trio instead of asyncio


@pytest.mark.trio  # Requires the pytest-trio plugin so pytest can run the test under trio
async def test_cdp(browser: RemoteWebDriver):  # See imports above for typing
    # This is a page object I made up, you can use your own (or not)
    page = SomePageObject(browser)
    async with browser.bidi_connection() as connection:
        session: CdpSession = connection.session
        devtools: v113Devtools = connection.devtools
        data = {"traceEvents": []}
        # See documentation links for info on this class
        trace_config = devtools.tracing.TraceConfig()
        # "ReportEvents" is the default transfer mode, but I'm including it here for clarity.
        # There is also a "ReturnAsStream" mode, but unpacking the stream handle it returns
        # is beyond the scope of this example (and I don't know how to do it yet).
        await session.execute(
            devtools.tracing.start(
                transfer_mode="ReportEvents", trace_config=trace_config
            )
        )
        # It's important to start the tracing and THEN navigate to the page
        page.navigate()
        # Open the listener BEFORE ending the trace so no events are dropped,
        # then send tracing.end() -- this is the only way to stop the tracing
        # and start the event processing.
        listener = session.listen(
            devtools.tracing.DataCollected, devtools.tracing.TracingComplete
        )
        await session.execute(devtools.tracing.end())
        async for dc_event in listener:
            # DataCollected is sent for each batch of events, and
            # TracingComplete is sent when the tracing is done
            if isinstance(dc_event, devtools.tracing.DataCollected):
                # Collect the events and do whatever you want with them
                data["traceEvents"] += dc_event.value
            if isinstance(dc_event, devtools.tracing.TracingComplete):
                break  # This is how you stop the event loop
        # Do whatever you want with the data at this point
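To illustrate that last step, here is one way you might post-process the collected data. This is a sketch under the assumption that the events follow the standard Chrome trace-event format (each event dict carries a comma-separated "cat" category field); the sample events and helper names are my own, not part of Selenium. If you dump the whole dict to a JSON file, it can also be loaded into the DevTools Performance panel.

```python
import json
from collections import Counter

def summarize_trace(data):
    """Count trace events per category; 'data' is the dict built in the test above."""
    counts = Counter()
    for event in data["traceEvents"]:
        # Chrome trace events carry a "cat" field with comma-separated categories
        for category in event.get("cat", "unknown").split(","):
            counts[category] += 1
    return counts

def save_trace(data, path):
    """Write the trace as JSON so the DevTools Performance panel can load it."""
    with open(path, "w") as f:
        json.dump(data, f)

# Hand-made events standing in for real trace data
sample = {"traceEvents": [
    {"cat": "devtools.timeline", "ph": "X", "name": "Paint"},
    {"cat": "devtools.timeline,rail", "ph": "X", "name": "Layout"},
]}
print(summarize_trace(sample))  # Counter({'devtools.timeline': 2, 'rail': 1})
```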
Notes
You can use the CDP protocol monitor to see how DevTools configures all of these calls, and to find out which events you should listen for if you need different ones than I've shown here.
If you're willing/able to use a Node CLI as part of your testing process, Lighthouse (the Chrome performance benchmarking tool, which uses many of the same CDP metrics) is available as a Node CLI.
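If you go that route, the CLI can be driven from the same Python test suite. Below is a minimal sketch, assuming lighthouse is installed globally via npm and that the URL, output path, and debugging port are yours to choose; the helper names are my own:

```python
import json
import subprocess

def lighthouse_command(url, output_path, port=None):
    """Build a Lighthouse CLI invocation as an argument list."""
    cmd = [
        "lighthouse", url,
        "--output=json",
        f"--output-path={output_path}",
        "--chrome-flags=--headless",
    ]
    if port is not None:
        # Attach to an already-running Chrome instance instead of launching one
        cmd.append(f"--port={port}")
    return cmd

def run_lighthouse(url, output_path):
    """Run the CLI and return the overall performance score (0-1 scale)."""
    subprocess.run(lighthouse_command(url, output_path), check=True)
    with open(output_path) as f:
        report = json.load(f)
    return report["categories"]["performance"]["score"]

print(lighthouse_command("https://example.com", "report.json", port=9222))
```

Pointing --port at the remote-debugging port of the browser your tests already control lets Lighthouse reuse that session rather than spawning its own Chrome.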