I have test cases like the ones below:
class TestSomething():
    def test_a(self):
        ...
    def test_b(self):
        ...
This was working well, but now I have to change an environment setting, reboot the machine for it to take effect, and then run these cases again.
All I can think of is:
class TestSomething():
    def test_a(self):
        ...
    def test_b(self):
        ...

    # change the env var on the target machine and reboot it
    def test_change_env_and_reboot(self):
        some_env_var = get_env_var()
        if not some_env_var:
            set_env_var()
            reboot()

    def test_a1(self):
        self.test_a()

    def test_b1(self):
        self.test_b()
It doesn't look good. Is there a better way to achieve the same result without changing the existing test cases?
Update:
The purpose is to detect some_env_var; if it is not True, set it, reboot the machine, and after that re-run test_a and test_b. On the other hand, if some_env_var is already set, do not re-run these test cases.
You can just derive from your first test class and add a setup method (I used setup_class, which is called once per class; if needed you could use setup_method, which is called before each test; a sketch of that variant is shown at the end of this answer). If you run pytest with this file:
import os

import pytest


class TestSomething:
    def test_a(self):
        pass

    def test_b(self):
        pass


class TestSomethingElse(TestSomething):
    @classmethod
    def setup_class(cls):
        # skip the inherited tests if the environment is already configured
        if os.getenv('VAR') == '1':
            pytest.skip("Test not needed")
        else:
            print('Changing configuration...')
You get:
collecting ... collected 4 items
test_setup.py::TestSomething::test_a PASSED [ 25%]
test_setup.py::TestSomething::test_b PASSED [ 50%]
test_setup.py::TestSomethingElse::test_a Changing configuration...
PASSED [ 75%]
test_setup.py::TestSomethingElse::test_b PASSED [100%]
========================== 4 passed in 0.07 seconds ===========================
As you can see, each test runs again after the new setup code has been executed.
If you don't need to re-run the tests (detected here by the value of an environment variable), you can skip them directly in the setup. In that case the output would be:
collecting ... collected 4 items
test_setup.py::TestSomething::test_a PASSED [ 25%]
test_setup.py::TestSomething::test_b PASSED [ 50%]
test_setup.py::TestSomethingElse::test_a SKIPPED [ 75%]
Skipped: Test not needed
test_setup.py::TestSomethingElse::test_b SKIPPED [100%]
Skipped: Test not needed
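If you want the check to run before every single test instead of once per class, the same idea works with setup_method. This is only a minimal sketch under the same assumptions as above (the variable name VAR and the printed configuration step are placeholders):

import os

import pytest


class TestSomething:
    def test_a(self):
        pass

    def test_b(self):
        pass


class TestSomethingElse(TestSomething):
    def setup_method(self, method):
        # runs before each inherited test; skip it if the
        # environment is already configured (VAR is a placeholder)
        if os.getenv('VAR') == '1':
            pytest.skip("Test not needed")
        print('Changing configuration...')

With this variant, "Changing configuration..." (or the skip) appears before both TestSomethingElse::test_a and TestSomethingElse::test_b rather than only once for the class.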