I am doing TDD for a project using PHP. Until now, my process has been: write a unit test, watch it fail, then write the least amount of code needed to make it pass. After the project is complete, I write acceptance tests using CasperJS.
Lately I have been looking into Codeception, Behat, and some other testing frameworks, and reading about the different types of tests: unit testing, integration testing, and so on.
Nowhere could I find the correct order of testing.
What I want to know is this: when I sit down to design the project, I do:
While this is not exact, it is a good indication of how I run my shop. So, where do integration testing and behavior testing fit in?
This really feels like an opinion-based question, so don't be surprised if it gets closed as such. There really isn't a perfect answer, and deciding how and when to write tests depends on the project and on you.
You could try to work out all the user stories and behaviors and write the acceptance tests before your step 3. This could help illuminate dark corners in the plan.
Or, you could write acceptance tests before starting a feature. This could help to get you in the mindset of what needs to be done with a given feature, its scope, and edge cases.
Or, you could write acceptance tests after the project is finished. This could serve as a final checklist of expected behaviors before handing off to the customer for whatever acceptance testing they want to do.
I'm sure there are other points in your workflow where writing acceptance tests might be appropriate, but these are three points where I've found myself writing such tests. IMO, the best place is right before starting a feature. At that point, I have a user story, I'm familiar with the code I've already written, and I have an idea of what the new code is expected to do.
The acceptance tests can be organized to guide coding in the same way unit tests do, but at a broader level. Still iterate through "write failing test, write code to make test pass, write failing test," but also have a larger loop driven by the acceptance tests. Once you get to a point in the inner cycle where you think you'll have a passing acceptance test, check by running the whole suite.
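To illustrate, the outer loop might be driven by a Behat scenario along these lines (a sketch only: the feature, page paths, and field names are hypothetical; the step phrasings follow Behat's stock Mink step definitions). Each step stays red until enough inner unit-test cycles have produced the code to make it pass:

```gherkin
# A Behat-style feature file describing one acceptance test.
Feature: User registration
  In order to use the site
  As a visitor
  I need to be able to create an account

  Scenario: Registering with a valid email address
    Given I am on "/register"
    When I fill in "email" with "alice@example.com"
    And I fill in "password" with "s3cret!pass"
    And I press "Sign up"
    Then I should see "Welcome, alice@example.com"
```

Running the whole suite after each inner cycle tells you how far through the scenario your code has gotten.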
There is another way in which you can ask "where integration and behavior testing fit in," and that's in the sense of "where does that testing fit in with the rest of my testing and code?" This is a little less grey. Unit testing should be run often. The entire unit test suite. Often. So it needs to be incredibly fast. You should be able to know if you broke something internal to your project immediately.
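To make "incredibly fast" concrete, here is a minimal sketch of a unit test in plain PHP (`SlugGenerator` is a hypothetical class under test; a real suite would use PHPUnit, but the point is the same: no I/O, no external dependencies, milliseconds to run):

```php
<?php
// SlugGenerator is a hypothetical class under test.
final class SlugGenerator
{
    public function slugify(string $title): string
    {
        $slug = strtolower(trim($title));
        // Collapse every run of non-alphanumeric characters into one dash.
        $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
        return trim($slug, '-');
    }
}

// A unit test touches only in-memory objects, so the whole suite
// stays fast enough to run on every save.
$gen = new SlugGenerator();
assert($gen->slugify('Hello, World!') === 'hello-world');
assert($gen->slugify('  PHP & TDD  ') === 'php-tdd');
echo "unit tests passed\n";
```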
Integration tests are there to verify that the ins and outs are working as expected. Outside of your app, your dependencies aren't going to change often, and when they do, it should be a big deal that you're aware of. So there's a clear demarcation between your code and their code. Your unit tests can carry you all the way to that interface. Integration tests verify that the interface you coded for really is the interface they're providing. You don't need to run these with every little code change. You do need to run them, but maybe only on every commit. They can be slower.
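One cheap way to guard that demarcation line is a small contract check that fails loudly when the dependency's interface drifts. Here is a sketch in plain PHP, using the built-in `DateTimeImmutable` purely as a stand-in for a vendor class (in a real project you would point this at the third-party dependency, and the method list would come from what your adapter actually calls):

```php
<?php
// Verify that the external class still exposes the methods our code
// relies on. DateTimeImmutable is a stand-in for a vendor class here.
$required = ['modify', 'format', 'setTimezone'];
$ref = new ReflectionClass(DateTimeImmutable::class);

foreach ($required as $method) {
    if (!$ref->hasMethod($method)) {
        fwrite(STDERR, "interface drift: missing {$method}\n");
        exit(1);
    }
}
echo "dependency interface intact\n";
```

A check like this is still no substitute for integration tests that exercise the dependency for real, but it catches the loud breakages at commit time.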
Acceptance tests are similar to integration tests, except that rather than enlisting an external dependency to verify the interfaces match, they define the interface. You could hold off on running them until near release, but the more often you can run them, the more value they provide.
YMMV.