
pytest stack parametrize decorators with dynamic parameters


I'm new to pytest, so please bear with me.
I'm trying to stack parametrize decorators to test multiple combination permutations, but the question is how I can use values from the other parametrize decorators in the stack.

I found the following, but it is not exactly what I'm looking for:
stacked parametrize
Using fixtures in pytest.mark.parametrize

This is what I'm trying to achieve:

@pytest.mark.parametrize("environment", ["main", "develop", "ci"])
@pytest.mark.parametrize("model", get_models())
@pytest.mark.parametrize("id", get_ids(environment, model))  # here I tried to use the environment and model values from the decorators above
def test_ids(environment, model, id):
    another_method(environment, model, id)
    # some logic here

get_ids() returns a list of ids based on a given environment and model.
This solution doesn't work: it raises an unresolved reference error for environment and model, because those names don't exist yet when the decorator expression is evaluated (decorator arguments are evaluated at import time, before pytest ever calls the test function).

The reason I want to use the parametrize decorator is that I need to test all permutations of environments, models, and ids, and I want pytest to generate a separate test for each combination.

My current solution is:

@pytest.mark.parametrize("environment", ["main", "develop", "ci"])
@pytest.mark.parametrize("model", get_models())
def test_ids(environment, model):
    ids = get_ids(environment, model)
    for id in ids:
        another_method(environment, model, id)
        # some logic here

This solution works, but each test is very long since it loops over a long list of ids. I'd prefer running many small tests rather than a few very long ones; the long tests make it harder to understand what happens in them.
Any suggestions?


Solution

  • There is one way I can think of doing this that involves hooks: the pytest_generate_tests hook, since it allows us to parametrize tests dynamically at collection time.

    I have set up the test script in the following way.

    def get_models():
        return [1, 2]
    
    class TestEnvModelIds:
        envs = ["main", "develop", "ci"]
        model = get_models()
    
        def test_ids(self, environment, model, id):
            pass
    

    Take note that we have placed the test inside a class; this is important, since we want to access those class attributes from our hook later on.

    The actual magic happens inside the following function, which we place inside our conftest.py in the root of our test directory. I have created toy examples for both get_models and get_ids to illustrate that this approach works. Your actual use case might differ slightly, in that you might need to import these functions from the project you are actually testing.

    def get_ids(env, model):
        data = {
            "main": {
                1: ["a", "b"],
                2: ["c", "d"]
            },
            "develop": {
                1: ["e", "f"],
                2: ["g", "h"]
            },
            "ci": {
                1: ["i", "j"],
                2: ["k", "l"]
            }
        }
    
        return data[env][model]
    
    def pytest_generate_tests(metafunc):
        # metafunc.cls is None for module-level tests, so guard against that
        # before comparing the class name.
        if metafunc.cls is not None and metafunc.cls.__qualname__ == "TestEnvModelIds":
            envs = metafunc.cls.envs
            models = metafunc.cls.model
            # Name the parameters explicitly rather than relying on
            # metafunc.fixturenames, which may contain other fixtures.
            argnames = ["environment", "model", "id"]
            argvalues = []
            argvalues = []
            
            for env in envs:
                for model in models:
                    ids = get_ids(env, model)
                    for id_ in ids:
                        argvalues.append((env, model, id_))
    
            metafunc.parametrize(argnames, argvalues, scope="class")
    

    What happens in pytest_generate_tests is that we iterate over the environments, then the models, and lastly the ids. We build a list of these (environment, model, id) triplets and finally parametrize our test with them.

    When we run the test suite with verbosity, we can see that we have successfully generated every possible combination of tests, as desired.

    ====================================== test session starts =======================================
    platform darwin -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /Users/DmitryPolonskiy/Desktop/so/bin/python3.9
    cachedir: .pytest_cache
    rootdir: ***
    collected 12 items                                                                               
    
    tests/test_models.py::TestEnvModelIds::test_ids[main-1-a] PASSED                           [  8%]
    tests/test_models.py::TestEnvModelIds::test_ids[main-1-b] PASSED                           [ 16%]
    tests/test_models.py::TestEnvModelIds::test_ids[main-2-c] PASSED                           [ 25%]
    tests/test_models.py::TestEnvModelIds::test_ids[main-2-d] PASSED                           [ 33%]
    tests/test_models.py::TestEnvModelIds::test_ids[develop-1-e] PASSED                        [ 41%]
    tests/test_models.py::TestEnvModelIds::test_ids[develop-1-f] PASSED                        [ 50%]
    tests/test_models.py::TestEnvModelIds::test_ids[develop-2-g] PASSED                        [ 58%]
    tests/test_models.py::TestEnvModelIds::test_ids[develop-2-h] PASSED                        [ 66%]
    tests/test_models.py::TestEnvModelIds::test_ids[ci-1-i] PASSED                             [ 75%]
    tests/test_models.py::TestEnvModelIds::test_ids[ci-1-j] PASSED                             [ 83%]
    tests/test_models.py::TestEnvModelIds::test_ids[ci-2-k] PASSED                             [ 91%]
    tests/test_models.py::TestEnvModelIds::test_ids[ci-2-l] PASSED                             [100%]
    
    ======================================= 12 passed in 0.04s =======================================
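    As an aside, if the hook feels heavyweight for your case, the same test matrix can be built up front and fed to a single parametrize call, since get_ids only needs values that are already known at collection time. A minimal sketch, reusing the toy get_models/get_ids from above (in a real project you would import these instead):

    ```python
    import itertools

    import pytest

    ENVS = ["main", "develop", "ci"]

    def get_models():
        return [1, 2]

    def get_ids(env, model):
        # Toy lookup table mirroring the example above.
        data = {
            "main":    {1: ["a", "b"], 2: ["c", "d"]},
            "develop": {1: ["e", "f"], 2: ["g", "h"]},
            "ci":      {1: ["i", "j"], 2: ["k", "l"]},
        }
        return data[env][model]

    # Build every (environment, model, id) triplet once, at import time.
    PARAMS = [
        (env, model, id_)
        for env, model in itertools.product(ENVS, get_models())
        for id_ in get_ids(env, model)
    ]

    @pytest.mark.parametrize("environment,model,id", PARAMS)
    def test_ids(environment, model, id):
        pass  # some logic here
    ```

    This produces the same 12 test items without a conftest.py hook; the trade-off is that PARAMS is evaluated at import time, so it is only suitable when get_ids is cheap and side-effect free.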