
How to use a pytest function to test different sites (e.g. staging/production) with a different set of test data for each site


I have a set of pytest functions that test APIs, and the test data is in a JSON file loaded via pytest.mark.parametrize. Staging, production, and pre-production have different but similar data, so I want to save each site's test data in a different folder under the same file name, to keep the Python functions clean. The site is a new pytest command-line option. However, this doesn't work: pytest.mark.parametrize can't determine the right folder to collect the test data from.

This is in conftest.py:

import pytest

@pytest.fixture(autouse=True)
def setup(request, site):
    request.cls.site = site
    yield

def pytest_addoption(parser):
    parser.addoption("--site", action="store", default="staging")

@pytest.fixture(scope="session", autouse=True)
def site(request):
    return request.config.getoption("--site")

This is in the test file:

import pytest
import requests

@pytest.mark.usefixtures("setup")
class TestAAA:

    @pytest.fixture(autouse=True)
    def class_setup(self):
        self.endpoint = read_data_from_file("endpoint.json")["AAA"][self.site]

        if self.site == "production":
            self.test_data_folder = "SourcesV2/production/"
        else:  # staging
            self.test_data_folder = "SourcesV2/"

        testdata.set_data_folder(self.test_data_folder)

    @pytest.mark.parametrize("test_data", testdata.read_data_from_json_file(r"get_source_information.json"))
    def test_get_source_information(self, test_data):
        request_url = self.endpoint + f"/AAA/sources/{test_data['sourceID']}"
        response = requests.get(request_url)
        print(response)

I can use pytest.skip to skip test data that is not for the current site:

if test_data["site"] != self.site:
    pytest.skip(f"this test case is for {test_data['site']}, skipping...")

But that would require putting all the test data for staging/production/pre-production in one file, and there would be a lot of skipped tests in the report, which is not what I want.

Do you have any idea how to solve this? How can I pass a different file name to parametrize according to the site? Or, at least, how can I keep the skipped tests from cluttering the report? Thanks


Solution

  • The parametrize decorator is evaluated at collection (import) time, not at run time, so you will not be able to use it directly for this. You need to do the parametrization at runtime instead. This can be done using the pytest_generate_tests hook:

    import json
    import os

    def pytest_generate_tests(metafunc):
        if "test_data" in metafunc.fixturenames:
            site = metafunc.config.getoption("--site")
            if site == "production":
                test_data_folder = "SourcesV2/production"
            else:
                test_data_folder = "SourcesV2"
            # this is just for illustration, your test data may be loaded differently
            with open(os.path.join(test_data_folder, "test_data.json")) as f:
                test_data = json.load(f)
            metafunc.parametrize("test_data", test_data)
    
    class TestAAA:
    
        def test_get_source_information(self, test_data):
            ...
    

    If loading the test data is expensive, you could also cache it to avoid reading it for each test.
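
    For example, the loading could be wrapped in functools.lru_cache so each JSON file is parsed only once per session. This is just a sketch; load_test_data is a hypothetical helper name, not part of your code:

    ```python
    import functools
    import json
    import os

    # Hypothetical helper: lru_cache memoizes the parsed JSON per
    # (folder, filename), so pytest_generate_tests can call it for every
    # collected test function without re-reading the file each time.
    @functools.lru_cache(maxsize=None)
    def load_test_data(folder, filename):
        with open(os.path.join(folder, filename)) as f:
            return json.load(f)
    ```

    Note that the cached value is returned by reference, so tests should treat the loaded data as read-only.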