
Failed automated tests: how to distinguish known and newly introduced bugs?


Use case: Fitnesse is used for automated testing of the web site.

The SUT (system under test) contains a known bug. Say we expect the web page to contain the string "Changes saved successfully", but the string is missing because of the bug. So in Fitnesse this test case is marked red.

Suppose that in another test case we expect the web page to contain the string "A user created successfully". It worked just fine until the last test execution, so now this test case is also marked red.

So now we have red lights for two test cases: a well-known bug and a newly found bug. The problem is that both are marked red, so when I look at the test results I can't tell which failures are known and which are new.

Of course, I can compare the test history and see the difference between the two runs (with and without the newly introduced bug).

Or I can simply not execute the test case with the known bug.

Or I can tweak the test case so that it stays green, and change it back when the bug is fixed.
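For what it's worth, the history-comparison approach can be automated: given the set of failing tests from a baseline run and from the current run, a set difference separates known failures from new ones. A minimal sketch (the test names and result sets here are hypothetical):

```python
# Classify failures by diffing two test runs.
# Hypothetical data: names of failing tests from the previous
# (baseline) run and from the current run.
previous_failures = {"SaveChangesTest"}                      # known bug
current_failures = {"SaveChangesTest", "CreateUserTest"}     # latest run

known_bugs = current_failures & previous_failures   # failed in both runs
new_bugs = current_failures - previous_failures     # regressions since baseline
fixed = previous_failures - current_failures        # failures that went away

print("known:", sorted(known_bugs))
print("new:", sorted(new_bugs))
print("fixed:", sorted(fixed))
```

The downside, as noted above, is that this has to be run (or eyeballed) after every execution, rather than being visible directly in the test results.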

But all of this is very inconvenient. What I want is to distinguish the two kinds of bugs (well-known bugs and new bugs) so that:

  1. By looking at the test results, I can easily say which bugs are new and which are old. For example: no bugs - green, already known bugs - yellow, new bugs - red.

  2. It is easy to update the test case when the bug is fixed.

What are the best strategies for acceptance tests in general, and for Fitnesse in particular?


Solution

  • There's a subtle distinction here: you're talking about tracking test state, not just whether a failure is a known bug. Good CI systems can track a test's state via its history and notify you when it changes state (passed yesterday, fails today). Good CI systems can also mark false failures as resolved so they don't muddy up your history. (I'm thinking specifically of TeamCity, where I've done this.)

    Having a bug filed against a failing test is another issue. Naming conventions can help, as Barry mentioned. I've also used test framework metadata to identify existing bugs, marking descriptions in test attributes or properties.
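    As an illustration of the metadata approach: pytest (not Fitnesse, but the same idea applies to any framework that supports test attributes) has an `xfail` marker that records the bug against the test, so known failures report as "xfail" instead of plain failures, giving you exactly the green/yellow/red split. The bug ID and page text below are placeholders:

```python
# Sketch: tagging a known bug with test metadata so it reports
# differently from a fresh regression. BUG-123 is a hypothetical ID.
import pytest

@pytest.mark.xfail(reason="BUG-123: success message missing", strict=False)
def test_changes_saved_message():
    page_text = "..."  # placeholder for fetching the real page
    assert "Changes saved successfully" in page_text

def test_user_created_message():
    page_text = "A user created successfully"  # placeholder
    assert "A user created successfully" in page_text
```

    A run then shows the first test as an expected failure (yellow, effectively) and only genuine regressions as red. When BUG-123 is fixed, the test reports "xpass", which is the prompt to delete the marker, so the cleanup step is cheap too.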