I am using pytest and Visual Studio Code for test-driven development of a Python library.
Many functions have been drafted as stubs but not yet implemented, so they raise a NotImplementedError. Likewise, development starts from failing tests.
Now I would like a better overview of why tests are failing: a NotImplementedError from a stub, or an "actual" exception. Can this be done?
It's possible to mark tests as expected to fail, so something like this will help distinguish NotImplementedErrors from tests that actually fail:
import pytest

def func(x):
    return x + 1

def func_var(x):
    raise NotImplementedError

@pytest.mark.xfail(raises=NotImplementedError)
def test_answer():
    assert func(3) == 5

@pytest.mark.xfail(raises=NotImplementedError)
def test_answer_var():
    assert func_var(3) == 4
I cannot tell you how it looks in VS Code, but the console output is already promising:
$> pytest
============================= test session starts =============================
platform win32 -- Python 3.9.6, pytest-7.1.1, pluggy-1.0.0
rootdir: c:\Projects\Eigene\pytest
plugins: Faker-13.3.5
collected 2 items
test_first.py Fx [100%]
================================== FAILURES ===================================
_________________________________ test_answer _________________________________
    @pytest.mark.xfail(raises=NotImplementedError)
    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_first.py:11: AssertionError
=========================== short test summary info ===========================
FAILED test_first.py::test_answer - assert 4 == 5
======================== 1 failed, 1 xfailed in 0.14s =========================
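Because raises=NotImplementedError was passed to the marker, the AssertionError in test_answer is still reported as a real failure, while the stub in test_answer_var shows up as xfailed. If every test in a module exercises unimplemented stubs, the marker can also be applied module-wide through pytest's pytestmark variable instead of decorating each test individually (a sketch; parse is a hypothetical stub name, not from your code):

```python
import pytest

# Applies xfail(raises=NotImplementedError) to every test in this module,
# so stub-driven failures are reported as xfailed, not failed.
pytestmark = pytest.mark.xfail(raises=NotImplementedError)

def parse(text):
    # Hypothetical stub, not yet implemented.
    raise NotImplementedError

def test_parse_empty():
    assert parse("") == []
```

Any test in the module that fails with a different exception (for example a plain AssertionError) will still be reported as FAILED, exactly as in the console output above.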