Tags: unit-testing, code-coverage

Pitfalls of code coverage


I'm looking for real world examples of some bad side effects of code coverage.

I noticed this happening at work recently because of a policy to achieve 100% code coverage. Code quality has certainly been improving, but conversely the testers seem to be writing more lax test plans because 'well, the code is fully unit tested'. Some logical bugs managed to slip through as a result, and they were a REALLY BIG PAIN to debug because, again, 'well, the code is fully unit tested'.

I think that was partly because our tool measured statement coverage only. Even so, the time could have been better spent.
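
To illustrate the statement-vs-branch gap, here's a minimal Python sketch (the function and numbers are hypothetical, not from our codebase):

    def apply_discount(price, is_member):
        if is_member:
            price = price * 0.9
        return price

    # This single test executes every statement: 100% statement coverage.
    assert apply_discount(100, True) == 90.0

    # But the branch where is_member is False is never taken, so the
    # no-discount path ships untested. A branch-coverage tool would flag
    # this; a statement-coverage tool reports "100%, all done".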

If anyone has seen other negative side effects of a code coverage policy, please share. I'd like to know what other 'problems' are happening out there in the real world.

Thanks in advance.

EDIT: Thanks for all the really good responses. There are a few I would mark as the answer, but unfortunately I can only pick one.


Solution

  • In a sentence: Code coverage tells you what you definitely haven't tested, not what you have.

    Part of building a valuable unit test suite is finding the most important, high-risk code and asking hard questions of it. You want to make sure the tough stuff works as a priority. Coverage figures have no notion of the 'importance' of code, nor the quality of tests.

    In my experience, many of the most important tests you will ever write are the tests that barely add any coverage at all (edge cases that add a few extra % here and there, but find loads of bugs).
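
    A hypothetical Python sketch of that effect (function and values invented):

        def mean(values):
            return sum(values) / len(values)

        # A single happy-path test already yields 100% statement coverage:
        assert mean([2, 4, 6]) == 4.0

        # This edge case adds zero new coverage -- every line above has
        # already run -- yet it immediately finds a genuine bug:
        try:
            mean([])
        except ZeroDivisionError:
            print("bug: mean([]) blows up on empty input")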

    The problem with setting hard (and potentially counter-productive) coverage targets is that developers may start bending over backwards to test their code. There's making code testable, and then there's just torture. If you hit 100% coverage with great tests, that's fantastic, but in most situations the extra effort just isn't worth it.
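
    For example (a contrived Python/pytest-style sketch; load_config and the test are invented), fully covering a defensive branch can force contortions like patching built-ins:

        from unittest import mock

        def load_config(path):
            try:
                with open(path) as f:
                    return f.read()
            except OSError:
                return None  # defensive fallback, rarely hit in practice

        def test_load_config_survives_unreadable_file():
            # This test exists mainly to colour the except-branch green;
            # we patch the built-in open() just to make the failure occur.
            with mock.patch("builtins.open", side_effect=OSError):
                assert load_config("any.cfg") is None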

    Furthermore, people start obsessing over (and fiddling with) the numbers rather than focusing on the quality of the tests. I've seen badly written tests with 90+% coverage, just as I've seen excellent tests with only 60-70% coverage.
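
    As a sketch of that contrast (hypothetical Python/pytest code, names invented):

        import pytest

        class EmptyOrderError(Exception):
            pass

        def submit_order(items):
            if not items:
                raise EmptyOrderError("order has no items")
            return {"status": "accepted", "count": len(items)}

        # Coverage-chasing test: executes most of the code but asserts
        # nothing, so it can never fail on a wrong result.
        def test_submit_order_runs():
            submit_order(["book"])

        # Valuable test: adds little extra coverage, but pins down real
        # behaviour at the boundary.
        def test_empty_order_is_rejected():
            with pytest.raises(EmptyOrderError):
                submit_order([])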

    Again, I tend to look at coverage as an indicator of what definitely hasn't been tested.