I'm new to test-driven development, but I'm working on a new application and thought this might be a good chance to start using it. I already have some of the basic functionality created, and I'm writing the test requirements to cover the main features using this basic outline (thanks, Wikipedia):
Then, when I begin to code the tests, I plan to follow this workflow:
I understand that tests should be independent of each other, but a question arises when writing a test for a specific feature (B) that relies on the functionality of a different feature (A), which is covered by its own test. Is it appropriate for the feature B test to assume that feature A is operational? Or would it be better to manually perform whatever steps feature A does inside the test code for feature B? What happens when there is a bug in feature A? That would break both tests and perhaps cause some ambiguity about what the issue is. To me it seems like the right choice would be to not let the feature B test run feature A, but then I might just end up replicating feature A's code in the feature B test code, and the test code might become too large or unmaintainable.
If you are using TDD as part of your development process, there are two distinct "running the tests" use cases to think about.
When you are refactoring, you don't typically care how many tests fail when you verify a change. If the count of failing tests is zero, then you keep going; if the count of failing tests is not zero, then you can discard your changes, confirm that you are back in a passing state, and try to make the change again with more care.
When you are merging, things get a lot more interesting: there are now many candidate changes that could explain why tests are failing. In that case, precise test failures can help.
If A is testable (in the sense that including the real implementation of A doesn't violate any of our other concerns about the test being fast, reliable, deterministic, etc.) and stable (it doesn't change often), then the investment odds usually favor just using the real A in the test.
When A is unstable, especially when the observable behavior of A is unstable, we may want to consider techniques that isolate the tests of B from that instability.
The two most common approaches are (a) to use a stable substitute (a test double) to stand in for the role played by A, or (b) to use the real A when we create the expression that will be used to evaluate B.
Consider this trivial example:

def A(x):
    return 2 * x

def B(y):
    if y:
        return A(7)
    # ...
If A is stable, we can just treat it as an implementation detail when we write our tests:

assert 14 == B(True)
But an equivalent way of describing the same behavior is to use the language of A:

assert A(7) == B(True)

Read: B(True) returns the same value as A(7).
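
For contrast, approach (a) would swap in a stable stand-in for A while exercising B. Here is a minimal sketch using Python's built-in unittest.mock; it assumes, purely for illustration, that A and B live in a module named features (that module name is an invention of this sketch, not part of the original example):

from unittest.mock import patch

import features  # hypothetical module holding A and B, named only for this sketch

def test_B_returns_whatever_A_produces():
    # Stand in for A with a stable substitute so this test stays isolated
    # from any instability in A's real implementation.
    with patch.object(features, "A", return_value=14) as fake_A:
        assert features.B(True) == 14
        # B should have delegated to the substitute with the expected argument.
        fake_A.assert_called_once_with(7)

The trade-off is that this test now encodes an assumption about how B collaborates with A (that it calls A(7)), which is exactly the coupling the "language of A" style above avoids.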
Useful reading: James Shore's Testing Without Mocks.