Tags: tdd, nunit, requirements, organization

How to treat future requirements in terms of TDD


While attempting to adopt more TDD practices on a project lately, I've run into a situation regarding tests that cover future requirements, and I'm curious how others are solving this problem.

Say, for example, I'm developing an application called SuperUberReporting and the current release is 1.4. While developing features for SuperUberReporting 1.5, I write a test for a new export feature that will allow exporting report results to a CSV file. While writing that test, it occurs to me that support for exporting to some other formats is slated for later versions 1.6, 1.7, and 1.9, all documented in an issue tracker. The question I'm faced with is whether I should write tests for those other formats now or wait until I actually implement those features. This hits at something a bit more fundamental about TDD, which I would like to ask more broadly.

Can/should tests be written up front as soon as requirements are known, or should the stability of the requirements determine whether a test gets written yet?

More generally, how far in advance should tests be written? Is it OK to write a test that will fail for two years, until the feature it covers is slated to be implemented? If so, how would one organize the tests to separate those that are required to pass from those that are not yet required to pass? I'm currently using NUnit on a .NET project, so I don't mind specifics, since they may better demonstrate how to accomplish this kind of organization.
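To make that concrete, here is roughly the kind of separation I have in mind (the class and test names are made up for illustration, and PDF is just a stand-in for one of the later formats), using NUnit's Category and Ignore attributes to park tests that aren't yet required to pass:

```csharp
using NUnit.Framework;

[TestFixture]
public class ReportExportTests
{
    // Covers the 1.5 CSV export I'm implementing now; must pass before check-in.
    [Test]
    public void ExportToCsv_WritesCommaSeparatedValues()
    {
        // arrange / act / assert against the CSV exporter would go here
    }

    // A parked test for a requirement slated for a later release. The category
    // would let a runner exclude it, e.g. with the NUnit 3 console:
    //   nunit3-console SuperUberReporting.Tests.dll --where "cat != FutureRequirement"
    [Test]
    [Category("FutureRequirement")]
    [Ignore("Slated for a later release and not implemented yet")]
    public void ExportToPdf_WritesDocumentTitle()
    {
        // nothing here yet -- the feature doesn't exist
    }
}
```

But I'm not sure whether parking tests like this is a good idea at all, which is really what I'm asking.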


Solution

  • If you're doing TDD properly, you will have a continuous integration server (something like Cruise Control or TeamCity or TFS) that builds your code and runs all your tests every time you check in. If any tests fail, the build fails.

    So no, you don't go writing tests in advance. You write tests for what you're working on today, and you check in when they pass (there's a short sketch of what that looks like at the end of this answer).

    Failing tests are noise. If you have tests that you already know fail, it becomes much harder to notice when another (legitimate) failure sneaks in. If you strive to always have all your tests pass, then even one failing test is a big warning sign -- it tells you it's time to drop everything and fix that bug. But if you always say "oh, it's fine, we always have a few hundred failing tests", then when real bugs slip in, you don't notice. You're negating the primary benefit of having tests.

    Besides, it's silly to write tests now for something you won't work on for years. You're delaying the stuff you should be working on now, and you're wasting work if those future features get cut.
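Here's that sketch: a single NUnit test driving the 1.5 CSV export that's in progress right now. CsvReportExporter, its Export signature, and the expected output are placeholder assumptions, not anything from your codebase -- the point is that this test starts out red and you write the code to make it green, then check in.

```csharp
using NUnit.Framework;

[TestFixture]
public class CsvExportTests
{
    // Drives the 1.5 CSV export being implemented today.
    // CsvReportExporter is a placeholder name: it won't compile until
    // you create the class, which is exactly the work the test drives.
    [Test]
    public void Export_SingleRow_ProducesHeaderAndDataLine()
    {
        var exporter = new CsvReportExporter();

        string csv = exporter.Export(
            new[] { "Region", "Total" },            // column headers
            new[] { new[] { "West", "42" } });      // one data row

        Assert.AreEqual("Region,Total\r\nWest,42\r\n", csv);
    }
}
```

When 1.6 rolls around and the next export format actually lands on your plate, you write its test the same way -- not before.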