javascript, unit-testing, continuous-integration, tdd, integration-testing

How to use TDD right


So I have a new university assignment involving lots of people collaborating. We want to use continuous integration (we're thinking of CircleCI), and we want to take a TDD approach.

My biggest question is: how do you correctly use TDD? I might have the wrong idea, but from what I understand you write all your tests first and make them fail, because you don't have any code yet. But how can I write all my tests if I don't even know yet all the units I will have or need?

In this case, since we're using CircleCI, and assuming it won't let me merge code that doesn't pass the tests, how can this work? There will be tests written but no code for those tests yet. Or am I wrong, and you write the tests as you go along while developing the features? This is a subject I'm really having a hard time grasping, but I would love to understand it properly, as I believe it will really help in the future.


Solution

  • My biggest question is: how do you correctly use TDD? I might have the wrong idea, but from what I understand you write all your tests first and make them fail, because you don't have any code yet. But how can I write all my tests if I don't even know yet all the units I will have or need?

    Not quite the right idea.

    You might start by thinking about the problem, and creating a checklist of tests that you expect to implement before you are done.
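
    For example (a minimal sketch, assuming a JavaScript project with Jest as the test runner -- any framework with a similar "pending test" feature works the same way), the checklist can live right in the test file as named but unimplemented tests:

    ```javascript
    // bowling.test.js -- a hypothetical checklist for a bowling-score module.
    // Jest's test.todo() registers a named, unimplemented test; it shows up in
    // the report as "todo" rather than as a failure, so the checklist itself
    // never breaks the build.
    test.todo('a gutter game scores 0');
    test.todo('a game of all single pins scores 20');
    test.todo('a spare scores the next roll as a bonus');
    test.todo('a strike scores the next two rolls as a bonus');
    test.todo('a perfect game scores 300');
    ```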

    But the actual implementation cycle is incremental. We work on one test at a time, starting from the first. We make that test pass, and clean up all of the code, before we introduce a second test.

    The idea here is that we'll be learning as we go -- we may think of more tests, which get added to the checklist, or we may decide that tests we thought were important aren't after all, so they get crossed off the checklist.

    At any given point in time, we expect that either (a) all of the implemented tests are passing, or (b) exactly one implemented test is failing, and it is the one we are currently working on. Any time we discover that some other condition holds, we back up, reverting to a previously well-understood state, and then proceed forwards again.

    We don't normally push/publish/share code when it has broken tests. Instead, the test and a working implementation are shared together. We don't share the broken intermediate stages, or known mistakes; instead, we share progress.
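
    Since the question mentions CircleCI: here is a minimal sketch of what that gate might look like, assuming a Node.js project whose suite runs with `npm test` (the image tag and step names are assumptions, not a prescription):

    ```yaml
    # .circleci/config.yml -- minimal sketch; adapt the image and steps to your project.
    version: 2.1
    jobs:
      test:
        docker:
          - image: cimg/node:18.17   # assumed Node convenience image; use whatever your team standardizes on
        steps:
          - checkout
          - run: npm ci              # install exact dependencies from the lockfile
          - run: npm test            # the whole suite must pass for this job to go green
    workflows:
      build-and-test:
        jobs:
          - test
    ```

    Combined with a branch-protection rule that requires this job to pass, that is what keeps red code from being merged.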

    A review of the slides in the Bowling Game Kata may help to clarify what the rhythm of the work looks like.

    It is completely normal to feel like the first test is hard -- you are writing a test against code that doesn't exist yet. We tend to employ imagination here: suppose the production code you need already exists. How would you invoke it? What data would you pass to it? What data would you get back? You write the test as though the perfect interface for what you want to do already exists. Then you create production code that matches that interface; then you give that production code the correct behavior; then you give it a design that will make it easy to change later.
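
    Continuing with a hypothetical bowling-score module (the module name and the `score` function are assumptions for illustration), the first test, replacing its test.todo entry on the checklist, and the simplest production code that makes it pass might look like this:

    ```javascript
    // bowling.test.js -- the first test, written against an interface we only imagined.
    const { score } = require('./bowling');

    test('a gutter game scores 0', () => {
      const rolls = Array(20).fill(0);   // 10 frames, 2 rolls each, all misses
      expect(score(rolls)).toBe(0);
    });
    ```

    ```javascript
    // bowling.js -- deliberately naive: just enough production code to pass the first test.
    function score(rolls) {
      return 0;
    }

    module.exports = { score };
    ```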

    And when you are happy with all of that, you introduce the second test, which usually looks like the first test with slightly different data, and a different expected result. So the second test fails, and then you go to the easy-to-change code you wrote before, and adapt it so that the second test also passes. And then you again clean up the design so that the code is easily changed.
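
    Sketching the same hypothetical module one step further: the second test has the same shape with different data, and it forces the hard-coded answer to become a real computation.

    ```javascript
    // bowling.test.js -- the second test: same shape, different data, different expectation.
    test('a game of all single pins scores 20', () => {
      const rolls = Array(20).fill(1);
      expect(score(rolls)).toBe(20);
    });
    ```

    ```javascript
    // bowling.js -- adapted so that both tests pass; spares and strikes will come later.
    function score(rolls) {
      return rolls.reduce((total, pins) => total + pins, 0);
    }

    module.exports = { score };
    ```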

    And so it goes, until you reach the end of your checklist.