Tags: unit-testing, dependency-injection, mocking, tdd, solid-principles

Should you know your dependencies in advance for each test?


I am exploring TDD and SOLID principles. Say I have a service that I create a failing test for before writing the implementation.

public interface IService {
    bool DoSomething();
}

public class Service : IService {
    public bool DoSomething() {
        ...
    }
}

[TestMethod]
public void ServiceImplementationTest()
{
    var implementation = new Service();
    bool result = implementation.DoSomething();
    Assert.IsTrue(result);
}

When I first write the test, I'm unaware of the dependencies of this service, so the constructor takes no arguments as I don't need to inject any dependencies.

However, as I write the implementation, I realise I need a certain dependency, and so add a reference to that dependency in the constructor. To keep the test code compiling and failing, I then have to go back to the test, and modify it to create a fake implementation.

public class Service : IService {
    private readonly IDependency _dependency;

    public Service(IDependency dependency) {
        _dependency = dependency;
    }

    public bool DoSomething() {
        ... use _dependency ...
        return result;
    }
}

[TestMethod]
public void ServiceImplementationTest()
{
    var implementation = new Service(new MockDependency());
    bool result = implementation.DoSomething();
    Assert.IsTrue(result);
}

Is this just a fact of life? Should I know all of the dependencies before writing the test? And what happens when I want to write a new implementation with different dependencies? Is it necessary to write a new test for each implementation, even though what counts as a correct implementation hasn't changed?


Solution

  • Should you know your dependencies in advance for each test?

    Not necessarily, no.

    When you design "test first", you are exploring a possible API. So the following code

    var implementation = new Service();
    bool result = implementation.DoSomething();
    Assert.IsTrue(result);
    

    says, among other things, that the public API should allow you to create an instance of Service without knowing anything about its dependencies.

    You wrote: "However, as I write the implementation, I realise I need a certain dependency, and so add a reference to that dependency in the constructor. To keep the test code compiling and failing, I then have to go back to the test, and modify it to create a fake implementation."

    So notice two things here:

    1. This isn't a backwards-compatible change...
    2. ...which means it isn't a refactoring.

    So part of your problem is that you are introducing the changes you want in a backwards-incompatible way. If you were refactoring, you would have a step where your constructors would look something like this:

    // the existing no-argument constructor forwards to the new one;
    // DefaultDependency is a placeholder name for whatever the production
    // default implementation of IDependency should be
    public Service() : this(new DefaultDependency()) {
    }

    // not yet exposed as part of the public API
    Service(IDependency dependency) {
        _dependency = dependency;
    }
    

    At this point, you have two independent decisions:

    • Should Service(IDependency) be part of the public API?

    If so, then you start writing tests that force you to expose that constructor (there is a sketch of such a test after this list).

    • Should Service() be deprecated?

    If it should, then you plan to delete the tests that depend on it when you remove Service() from the public API.
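
    If you do decide that Service(IDependency) belongs in the public API, a test along these lines is what forces the constructor to stay exposed. This is only a sketch: it reuses the MockDependency fake from the question, and the test name is illustrative.

    [TestMethod]
    public void ServiceCanBeConstructedWithAnInjectedDependency()
    {
        // this only compiles while Service(IDependency) is publicly reachable,
        // so it pins that constructor as part of the API
        var implementation = new Service(new MockDependency());
        Assert.IsTrue(implementation.DoSomething());
    }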

    Note: that is a lot of unnecessary steps if you haven't shared/released the public API yet. Unless you are deliberately adopting the discipline that calibrated tests are immutable, it is usually more practical to hack the test to reflect the most recent draft of your API.

    Don't forget to re-calibrate the test after you change it, though; any change to the test should necessarily trigger a refresh of the Red/Green cycle to ensure that the revised test is still measuring what you expect. You should never be publishing a test that you haven't calibrated.
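
    For example, one practical way to re-calibrate is to temporarily stub the production method so the revised test has to fail, check that the failure message reads well, and then restore the real body and watch the test go Green again. A minimal sketch, assuming you are willing to touch the implementation for a moment:

    public class Service : IService {
        private readonly IDependency _dependency;

        public Service(IDependency dependency) {
            _dependency = dependency;
        }

        public bool DoSomething() {
            return false; // temporary calibration stub: the revised test should now go Red
        }
    }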

    A pattern that is sometimes useful is to separate the composition of the system under test from the automated checks

    public void ServiceImplementationTest()
    {
        var implementation = new Service();
        check(implementation);
    }
    
    void check(Service implementation) {
        bool result = implementation.DoSomething();
        Assert.IsTrue(result);
    }
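
    For instance, if the constructor later grows a dependency, only the composition changes and the check is reused untouched. A sketch, reusing the MockDependency fake from the question:

    [TestMethod]
    public void ServiceImplementationTest()
    {
        var implementation = new Service(new MockDependency()); // only the composition changed
        check(implementation);                                   // the check stays exactly as it was
    }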
    

    Another alternative is to write a spike: a draft of a possible implementation, written without tests, whose purpose is to explore the problem and better understand its constraints, and which is thrown away when the learning exercise is complete.

    Could you quickly describe what you mean by test calibration? Google isn't helping much. Is it making sure that tests fail when they are supposed to?

    Yes: making sure that they fail when they are supposed to, making sure that when they do fail the messages reported are suitable, and of course making sure that they pass when they are supposed to.

    I wrote some about the idea here.