I have a pytest fixture that takes in some parameters to fetch test data stored in a test directory, which is used to create a Foo object. fixture1 and fixture2 are fixtures defined in conftest.py. What I would like to do is test the many components of Foo for each test case defined in the parametrize decorator, e.g. assert foo.do_this() == 1, assert foo.do_something_else() == 2, but I don't really want to cram all of these individual asserts into this one test_foo_components function, as that does not make sense.
@pytest.mark.parametrize("cid, env, id, r, fixture1, fixture2",
[
(1, 2, 3, 4, None, None),
(2, 3, 4, 5, None, None),
(3, 4, 5, 6, None, None)
],
ids=['test1',
'test2',
'test3'],
indirect=['fixture1', 'fixture2'])
def test_foo_components(cid, env, id, r, fixture1, fixture2):
p = pd.read_parquet(...)
t = pd.read_parquet(...)
q = pd.read_parquet(...)
c = pd.read_parquet(...)
pm = pd.read_parquet(...)
with open(...) as f:
cs = pickle.load(f)
timestamp = 0
foo = Foo(p, t, q, c, pm, r, timestamp, cs, fixture1)
Ideally, I would like to 'fixturize' this foo object (see code block below), but since this foo object is different in each test case (i.e. it depends on the parametrize inputs), I am not sure how to write this in a neat way.
def test_foo_do_something(foo_fixture):
    assert foo_fixture.do_something() == 1
Update:
Fixtures 1 and 2 are the same for each test case, and there are potentially ~50 different instances of Foo, each of which has components that need to be tested. This foo_fixture will be an instantiation of the Foo object, and this instantiation will be unique for each test case, i.e. the majority of their components will be different (test case 1's foo.do_something() will differ from test case 2's foo.do_something()). However, the assert in the code block above only allows for testing a static, fixed result; it doesn't work for instance methods whose results vary per test case. With this updated info in mind, how would I test instance methods in a similar manner?
You could add an additional set of parameters to the test function, each representing a method of Foo that you want to test and the result you expect. Pytest would then create and execute a matrix of all possible combinations.
@pytest.mark.parametrize(
    "cid, env, id, r, fixture1, fixture2",
    [(1, 2, 3, 4, None, None), (2, 3, 4, 5, None, None), (3, 4, 5, 6, None, None)],
    ids=["test1", "test2", "test3"],
    indirect=["fixture1", "fixture2"],
)
@pytest.mark.parametrize("method, expected", (("do_something", 1),))
def test_foo_components(cid, env, id, r, fixture1, fixture2, method, expected):
    ...
    foo = Foo(p, t, q, c, pm, r, timestamp, cs, fixture1)
    assert getattr(foo, method)() == expected
Fixtures can also be parametrized, so any test that uses them is run N times, depending on the number of parameters the fixture has. However, in your example you are using indirect fixtures, which slightly complicates things, but it is still possible to use them in a different manner if you rewrite them to work as callables instead of returning data directly.
@pytest.fixture()
def fixture1():
    """
    Instead of returning a result, the fixture returns a function
    that returns the result when called.
    """
    def _(arg):
        return ...
    return _


@pytest.fixture(
    params=[
        (1, 2, 3, 4, None, None),
        (2, 3, 4, 5, None, None),
        (3, 4, 5, 6, None, None),
    ]
)
def foo(request, fixture1, fixture2):
    cid, env, _id, r, f1, f2 = request.param
    f1 = fixture1(f1)
    f2 = fixture2(f2)
    ...
    return Foo(...)


def test_foo_do_something(foo):
    assert foo.do_something() == 1
Note: I personally wouldn't recommend option 1. I think you should strive to reorganise your code and go with option 2 if possible. It's cleaner and it would be easier to further parametrize tests for each method/"component".
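For instance, building on the parametrized foo fixture from option 2, each component could get its own small test. This is only a sketch; do_something_else and the expected values are placeholders taken from your question, and each test runs once per Foo variation because the fixture itself is parametrized:

def test_foo_do_something(foo):
    assert foo.do_something() == 1


def test_foo_do_something_else(foo):
    # Hypothetical second component; the expected value is a placeholder.
    assert foo.do_something_else() == 2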
If all those 50 different instances of Foo need to have a certain result defined for calling Foo's component, then it gets tricky, because you somehow need to match the instances with the results. The following example is a simple demonstration where N variations of Foo need to be matched by N expected results when calling Foo.do_something().

- params in @pytest.fixture define each instance of Foo and thus control the number of variations for each component's test function.
- @pytest.mark.parametrize contains only a single parameter of N results for each type of Foo.
- The component fixture is what mixes the variations of Foo with the expected result. The index of the params variation is used to do the matching.

@pytest.fixture(params=[(1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 6), ...])
def foo(request):
    cid, env, id, r = request.param
    ...
    return Foo(...), request.param_index


@pytest.fixture()
def component(request, foo):
    return foo[0], request.param[foo[1]]


@pytest.mark.parametrize("component", ([1, 2, 3, ...],), indirect=True)
def test_foo_do_something(component):
    assert component[0].do_something() == component[1]
Note: This can easily get messy and lead to test failures or incorrect assertions. Adding a new set to params would require adding a new expected result to each component's test function, and re-ordering params would wreak havoc in your tests. So I think you would need to come up with a different way of matching Foo instances with results.
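For example, one way to avoid relying on parameter order would be to key the expected results by an explicit case id instead of by index. This is only an illustrative sketch; EXPECTED_DO_SOMETHING and the case ids are hypothetical names, and the Foo construction is elided:

import pytest

# Expected results keyed by a stable case id rather than by position,
# so re-ordering or adding cases cannot silently mismatch results.
EXPECTED_DO_SOMETHING = {"case1": 1, "case2": 2, "case3": 3}


@pytest.fixture(params=["case1", "case2", "case3"])
def foo_case(request):
    ...  # build the Foo instance that corresponds to request.param
    return Foo(...), request.param


def test_foo_do_something(foo_case):
    foo, case_id = foo_case
    assert foo.do_something() == EXPECTED_DO_SOMETHING[case_id]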