unit-testing, tdd, agile, sprint

Given a short (2-week) sprint, is it ever acceptable to forgo TDD to "get things done"?


Given a short sprint, is it ever acceptable to forgo TDD to "get things done" within the sprint?

For example, a given piece of work might need, say, a third of the sprint to design the object model around an existing implementation. Under this scenario you might well end up with implemented code halfway through the sprint without any tests (writing unit tests during this "design" stage would add significant effort, and the tests would likely be thrown away several times until the final design is settled upon).

You might then spend a day or two in the second week adding unit/integration tests after the fact.

Is this acceptable?


Solution

  • A 2-week iteration isn't short for a lot of people. Many of us are doing one-week iterations. Kent Beck is even trying to encourage daily deployments - and there are advantages in cleaning up the dev process so it can be that responsive.

    NEVER reduce TDD quality to get stuff out - it's so much harder to clean up later, and you just end up teaching the customer that they can pressure you into quick, dirty, hacked releases. They don't see the crap code that gets produced as a result, and they aren't the ones who have to maintain it. If somebody tried to get me to do that I'd quit... and I have refused to work in places that "don't have time to test properly". That's not an excuse that works.

    NOTE: When I write about TDD, I'm including functional tests. These are important because they should exercise scenarios that make sense to the customer in terms of recognizable user-stories. I normally start work on a story with the functional test because it's the most important test - "test customer gets what they described..." All the other tests might be up for negotiation, but when I'm team leading I expect at least one functional test per story or it's (as scrum people say) "not done!" ;-)
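
    A minimal sketch of what one of those functional tests can look like (pytest, with hypothetical names - the story itself is invented for illustration):

        from decimal import Decimal

        # Hypothetical code under test for the story:
        # "As a shopper, I get free shipping on orders over 50."
        def shipping_cost(order_total: Decimal) -> Decimal:
            return Decimal("0") if order_total > Decimal("50") else Decimal("5")

        def test_customer_gets_free_shipping_over_fifty():
            # "Test customer gets what they described..."
            assert shipping_cost(Decimal("50.01")) == Decimal("0")

        def test_customer_pays_shipping_at_or_below_fifty():
            assert shipping_cost(Decimal("50")) == Decimal("5")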

    Don't think you can go in and add tests later - it's so much more difficult to do that. (I have tried it both ways - believe me.) It really is cheaper to put tests in as you go - even if you have to refactor and rewrite, or throw some away as the system evolves.
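
    To make "tests as you go" concrete, here's a sketch of the test-first rhythm (hypothetical names): the test is written before the code and fails, the minimal implementation makes it pass, and later refactors are cheap because the test pins behaviour rather than structure:

        # Step 1: write the failing test first (red).
        def test_parse_price_strips_currency_symbol():
            assert parse_price("$19.99") == 19.99

        # Step 2: write the minimal implementation to pass it (green).
        def parse_price(text: str) -> float:
            return float(text.lstrip("$"))

        # Step 3: refactor freely - the test keeps the behaviour pinned.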

    You can't get quality code without having decent code coverage ALL the time.

    Test coverage is the key phrase here: cover the stuff that could break, not zillions of meaningless tests - write the critical tests that cover the things you actually need to worry about.
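
    As a sketch of "covering stuff that could break" (a hypothetical example), the empty-input branch is the one that will bite in production, so it gets its own test; a tool like coverage.py can then confirm which branches the suite actually exercises:

        import pytest

        def average(values: list) -> float:
            if not values:
                raise ValueError("average() of an empty list")
            return sum(values) / len(values)

        def test_average_happy_path():
            assert average([2.0, 4.0]) == 3.0

        def test_average_rejects_empty_list():
            # The failure mode worth worrying about, not just the happy path.
            with pytest.raises(ValueError):
                average([])

        # Measure what the suite exercises, e.g.:
        #   coverage run -m pytest && coverage report -m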

    If you can't get it out in time and that's a problem, you need to ask why:

    • Is it a planning/release scheduling problem?
    • Is there a code maintenance/refactoring problem that holds things up?
    • Are there people putting unreasonable pressure on the team? (get the CTO to beat them up...)