Tags: unit-testing, sonarqube, surefire

SonarQube does not correctly display unit test success


First of all: this is not about SonarQube failing to display unit test success at all. There are plenty of questions and answers on that topic, but they weren't helpful here.

My setup: SonarQube 5.2, triggered by Maven 3.x on Jenkins 1.644, with default Surefire tests. This is the result in SonarQube:

SonarQube output

As you can see, it displays the unit test success for all projects correctly. Selecting a project brings us to the package test success view, and at this point it gets strange. Take the GUI package, for example: it says 8.3 % project test success but 0.0 % package test success. How is that possible?

I've looked for the cause in the directory <project dir>/target/surefire-reports, on which the SonarQube analysis is based. There you can see three generated .xml files, one per test class:

Surefire report files

Some extracts from the files show:

Test set: xxx.gui.main.AppConfigurationTest
Tests run: 4, Failures: 0, Errors: 4, Skipped: 0

Test set: xxx.gui.main.ConfigurationHolderTest
Tests run: 7, Failures: 0, Errors: 7, Skipped: 0

Test set: xxx.gui.servlet.XxxHttpServletTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

Doing the maths gives 4 + 7 + 1 = 12 total tests, and 1 / 12 (successful / total) ≈ 8.3 %. So that's where the project test success figure comes from.
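That arithmetic can be sketched in a few lines (class names shortened, counts taken from the report extracts above; this is just an illustration of the aggregation, not SonarQube's actual code):

```python
# Surefire-style counts per test class: (tests_run, failures, errors),
# taken from the report extracts above.
reports = {
    "AppConfigurationTest":    (4, 0, 4),
    "ConfigurationHolderTest": (7, 0, 7),
    "XxxHttpServletTest":      (1, 0, 0),
}

# A test counts as successful if it neither failed nor errored.
total = sum(run for run, _, _ in reports.values())
passed = sum(run - fail - err for run, fail, err in reports.values())
success = 100.0 * passed / total

print(f"{passed}/{total} tests passed -> {success:.1f} % project test success")
# prints: 1/12 tests passed -> 8.3 % project test success
```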

The question is: why isn't the class xxx.gui.servlet.XxxHttpServletTest displayed in SonarQube (with 100 % test success, of course)? Do you have any idea?


Solution

  • The question is: why isn't the class xxx.gui.servlet.XxxHttpServletTest displayed in SonarQube (with 100 % test success, of course)? Do you have any idea?

    The purpose of this view is to find test classes and packages that have tests in error or failure and need to be reviewed and corrected. Therefore, only test classes with a success rate below 100 % are displayed.

    Displaying all packages and test classes with 100 % success would create visual noise. Is this a good or bad approach? Well, once you understand it, you know how to interpret the information.

    IMHO, the real question is: what's the point of displaying a percentage of test success at all? The number of failed tests per test class or package is far more useful information than a percentage, because you can use it to plan actions.

    Indeed, I don't see why knowing that a test class with 60 tests is "50 % broken" (30 tests to fix) is better than knowing that another one with 10 tests is "60 % broken" (6 tests to fix). To me, both are broken and need to be fixed, period. The figures should only help me estimate the effort required to fix the problem (i.e. "36 tests to fix" is crystal clear; neither 50 % nor 60 % is).

    Side note: the test success percentage is an "all or nothing" metric. Having 99.99 % test success is not an option (your integration build should fail, and your code should not go into production at all).
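The point about absolute counts can be illustrated with the hypothetical figures from the answer (a 60-test class that is "50 % broken" and a 10-test class that is "60 % broken"):

```python
# Hypothetical test classes from the argument above: the percentages differ,
# but the actionable number is the count of tests to fix.
classes = {
    "BigSuiteTest":   {"tests": 60, "broken_pct": 50},  # "50 % broken"
    "SmallSuiteTest": {"tests": 10, "broken_pct": 60},  # "60 % broken"
}

# Convert each percentage back into the number of failing tests.
to_fix = {name: c["tests"] * c["broken_pct"] // 100 for name, c in classes.items()}

print(to_fix)                                         # {'BigSuiteTest': 30, 'SmallSuiteTest': 6}
print(sum(to_fix.values()), "tests to fix in total")  # 36 tests to fix in total
```

Either way you have to convert the percentage back into a test count before you can plan any work, which is the answer's argument for reporting the count directly.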