Tags: unit-testing, testing, googletest, unittest++, catch-unit-test

Dealing with optional tests


The absence of a way to skip a test in CATCH, Google Test and other frameworks (at least in the traditional sense, where you specify the reason for skipping and see it in the output) made me wonder whether I need it at all (I've been using UnitTest++ in my past projects).

Normally, yeah, there shouldn't be any reason to skip anything in a desktop app - you either test it or not. But when it comes to hardware, some things can't be guaranteed.

For example, I have two devices: one comes with an embedded beeper, but the other - without. In UnitTest++ I would query the system, find out that the beeper is not available, and would just skip the tests that depend on it. In CATCH, of course, I can do something similar: query the system during initialization, and then just exclude all tests with the tag "beeper" (tags being a special feature in CATCH).
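
For reference, a tagged test in CATCH looks roughly like this (beeperPresent() and the test body are placeholders for my own hardware query and beeper code, not part of CATCH):

    TEST_CASE( "Beeper emits a tone", "[beeper]" ) {
        REQUIRE( beeperPresent() );     // placeholder: ask the OS/driver whether a beeper exists
        // ... drive the beeper and check the result ...
    }

Running the binary with the test spec ~[beeper] then excludes every test carrying that tag, which is the exclusion mechanism I mean above.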

However, there's a slight difference: a tester (someone other than me) would read the output and not find those optional tests mentioned at all (whereas in UnitTest++ they'd be marked as skipped, with the reason provided as part of the output). His first thoughts would be:

  • This must be some old version of the testing app.
  • Maybe I forgot to enable suite X.
  • Something is probably broken, I should ask the developer.
  • Wait, maybe they were just skipped. But why? I'll ask the developer anyway.

Moreover, he could simply fail to notice that those tests were skipped when in fact they shouldn't have been (e.g. the OS returns "false" regardless of whether the beeper is present, which would be a major bug). One option would be to mark "skipped" tests as passed, but that feels like an unnecessary workaround.

Is there some clever technique I'm not aware of (say, separating the optional tests into a standalone program altogether)? If not, should I stick with UnitTest++? It does the job, but I really like CATCH's SECTIONs and tags, which help avoid code repetition.
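
For what it's worth, the SECTIONs mentioned above work roughly like this - the enclosing test case's setup is re-run for each SECTION, which is what saves the repetition (Device, openDevice() and the member calls are made-up placeholders):

    TEST_CASE( "Device I/O", "[device]" ) {
        Device dev = openDevice();          // shared setup, executed once per SECTION

        SECTION( "reading returns data" ) {
            REQUIRE( !dev.read().empty() );
        }
        SECTION( "writing succeeds" ) {
            REQUIRE( dev.write( "ping" ) );
        }
    }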


Solution

  • If you're detecting the availability of the beeper programmatically then you have a place to also print out the tests you're skipping.

    You can get the set of tests that match a given test spec with something like the following:

      // collect the test cases matching testSpec into matchedTestCases
      std::vector<TestCase> matchedTestCases;
      getRegistryHub().getTestCaseRegistry().getFilteredTests( testSpec, config, matchedTestCases );
    

    testSpec is an instance of TestSpec. You can get the current one from config.testSpec() - or you can create it on the fly (which you may need to do if you're programmatically filtering tests). This isn't really documented at the moment, as I had wanted to go back over the whole test spec handling and rework it. As it happens, I did that last week. Hopefully it should be fairly stable now - but I'm letting it settle in before committing documentation for it.

    You should be able to work it out if you search for "class TestSpec" in the code - although you may find it easier to parse one out of a string using parseTestSpec().

    You can get the config object with getCurrentContext().getConfig().
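
    Putting those pieces together, a rough sketch that prints the names of the tests being skipped might look like the following. It leans on CATCH internals, so exact types and signatures may vary between versions; the "[beeper]" tag, the function name and the message text are just examples:

      #include "catch.hpp"
      #include <iostream>
      #include <vector>

      void reportSkippedBeeperTests() {
          using namespace Catch;

          // the tests we're about to exclude because no beeper is present
          TestSpec testSpec = parseTestSpec( "[beeper]" );

          // getConfig() hands back a pointer-like handle in the versions I've seen,
          // hence the dereference when passing it on
          Ptr<IConfig const> config = getCurrentContext().getConfig();

          std::vector<TestCase> matchedTestCases;
          getRegistryHub().getTestCaseRegistry().getFilteredTests( testSpec, *config, matchedTestCases );

          for( std::size_t i = 0; i < matchedTestCases.size(); ++i )
              std::cout << "Skipped (no beeper present): "
                        << matchedTestCases[i].getTestCaseInfo().name   // name access may differ slightly by version
                        << "\n";
      }

    Calling something like this from your own main() (with CATCH_CONFIG_RUNNER) before the run would at least get the names of the skipped tests into the output, which addresses the tester-confusion problem from the question.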