
pytest: full cleanup between tests


In a module, I have two tests:

import pytest

@pytest.fixture
def myfixture(request):
    prepare_stuff()
    yield 1
    clean_stuff()
    # time.sleep(10)  # in doubt, I tried that; it did not help

def test_1(myfixture):
    a = somecode()
    assert a == 1

def test_2(myfixture):
    b = somecode()
    assert b == 1

case 1

When these two tests are executed individually, all is ok, i.e. both

pytest ./test_module.py::test_1

and immediately after:

pytest ./test_module.py::test_2

run to completion and pass.

case 2

But:

pytest ./test_module.py -k "test_1 or test_2"

reports:

collected 2 items
test_module.py .

and hangs forever (after investigation: test_1 completed successfully, but the second call to prepare_stuff hangs).

question

In my specific setup, prepare_stuff, clean_stuff and somecode are quite involved, i.e. they create and delete some shared memory segments, which, when done wrong, can result in hanging. So an issue there is possible.

But my question is: does anything happen between two separate invocations of pytest (case 1) that does not happen between test_1 and test_2 when they run in the same pytest process (case 2), and that could explain why case 1 works while case 2 hangs between test_1 and test_2? If so, is there a way to force the same "cleanup" to occur between test_1 and test_2 in case 2?

Note: I already tried setting the scope of "myfixture" explicitly to "function", and also double-checked that "clean_stuff" is called after "test_1", even in case 2.
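
A simple way to double-check that ordering is to put prints around the setup and teardown and run pytest with -s (a sketch with the same hypothetical helpers as above):

@pytest.fixture
def myfixture(request):
    print("prepare_stuff: begin")
    prepare_stuff()
    yield 1
    clean_stuff()
    print("clean_stuff: done")

With -s, the prints interleave with the test names, so one can see that clean_stuff of test_1 finishes before prepare_stuff of test_2 starts.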


Solution

  • Most likely something is happening in your prepare_stuff and/or clean_stuff functions. When you run the tests as:

    pytest ./test_module.py -k "test_1 or test_2"

    they run in the same execution context: same process, same interpreter state. So if, for example, clean_stuff doesn't do a proper cleanup, the setup of the next test can fail or hang. When you run the tests as:

    pytest ./test_module.py::test_1
    pytest ./test_module.py::test_2
    

    they run in different execution contexts, i.e. each starts in an absolutely clean environment. Unless you're modifying some external resources, you could even remove clean_stuff entirely in this case and both tests would still pass.
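
    As a concrete illustration (a hypothetical sketch using Python's multiprocessing.shared_memory, not your actual code), a segment created under a fixed name and never unlinked survives into the next test of the same process:

        from multiprocessing import shared_memory

        SEGMENT_NAME = "my_segment"  # hypothetical fixed name shared by all tests

        def prepare_stuff():
            # Raises FileExistsError if a previous test left the segment behind;
            # lower-level IPC primitives may block here instead of raising.
            return shared_memory.SharedMemory(name=SEGMENT_NAME, create=True, size=1024)

        def clean_stuff(shm):
            shm.close()
            shm.unlink()  # skipping unlink() is exactly the kind of leak to look for

    When each test runs in its own pytest process (case 1), such a leak is often masked, because Python's resource tracker and the operating system reclaim leaked resources at process exit.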

    To rule out a pytest issue, just try running the same sequence as a plain Python script:

        prepare_stuff()
        a = somecode()
        assert a == 1
        clean_stuff()

        prepare_stuff()
        b = somecode()
        assert b == 1
        clean_stuff()
    

    I'm pretty sure you'll hit the same problem, which would confirm that the issue is in your code, not in pytest.
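
    If the resource genuinely cannot be reset from within a single process, one possible workaround (an assumption on my side; your setup may not need it) is the pytest-forked plugin, which runs each test in its own forked subprocess on POSIX systems and thereby approximates case 1:

        pip install pytest-forked
        pytest ./test_module.py -k "test_1 or test_2" --forked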