unit-testing · tdd · acceptance-testing · end-to-end

Technique for TDD testing cycles differentiating types of test?


A newbie in this art... but so far, from my reading, I understand that there are broadly 3 categories: unit tests, acceptance/integration tests (not the same) and end-to-end tests.

The thing is, of these 3, it appears that only unit tests are meant to run lightning-fast. It seems perfectly reasonable to be running ALL the unit tests for the entire project, all the time during development. But the same, it seems, can't be said of the other types.

It seems to me, therefore, that you'd want to be running a single acceptance test (or maybe a group of related ones) at each test run, while running all the unit tests for the whole project.

As for the latest end-to-end test that is in the "red" state, given that these can be even slower than acceptance tests, mightn't you want to run that only intermittently? And the entire end-to-end collection maybe only when you're doing something else, or at night or something?

I'm using Gradle, and I'm aware you can create a special test task to only run, for example, all the unit tests under a tests\unittests directory... but, if my thinking is valid, is there a habitual way of skipping, or selecting, particular acceptance tests, other than by constantly editing the code - which can get pretty tiresome?

For example, by somehow tagging particular acceptance or end-to-end tests as a certain "category", or maybe by arranging these tests in a hierarchical folder structure?


Solution

  • I have not used Gradle, but in Python I regularly use both approaches you described:

    • tagging of specific classes of functional tests (a subset is usually tagged as "smoke" tests, to be run on each deploy; see the sketch after this list)
    • representing tests in hierarchies
      • small/unit
      • integration
      • functional (smoke tests are usually a tagged subset of the functional tests)
      • ui
      • e2e
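
    As a concrete sketch of the tagging approach, assuming pytest: the "smoke" marker name and the test body here are hypothetical, and the marker needs to be registered in pytest.ini to avoid warnings.

      import pytest

      def service_status():
          # Stand-in for a real health check against a deployed service.
          return "ok"

      @pytest.mark.smoke  # register in pytest.ini: markers = smoke: deploy checks
      def test_service_health():
          assert service_status() == "ok"

    Selection then happens on the command line rather than by editing code: `pytest -m smoke` runs only the tagged tests, and the directory hierarchy above gives you `pytest tests/unit` or `pytest tests/integration` for free.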

    it appears that only unit tests are meant to run lightning-fast. It seems perfectly reasonable to be running ALL the unit tests for the entire project,

    This is the goal: all unit tests are encouraged to be IO-free, so they run lightning-fast on every single commit. This process is usually codified with CI build jobs that trigger on every commit to the repo.
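
    To make "IO-free" concrete, a minimal sketch (the store and greeting function are hypothetical): the test substitutes an in-memory fake for the database, which is what keeps the whole suite fast enough to run on every commit.

      class FakeUserStore:
          """In-memory stand-in for a real database-backed store."""
          def __init__(self):
              self._users = {}

          def save(self, user_id, name):
              self._users[user_id] = name

          def get(self, user_id):
              return self._users.get(user_id)

      def greet(store, user_id):
          # Code under test depends only on the store interface, not on a real DB.
          name = store.get(user_id)
          return f"Hello, {name}!" if name else "Hello, stranger!"

      def test_greet_known_user():
          store = FakeUserStore()
          store.save(1, "Ada")
          assert greet(store, 1) == "Hello, Ada!"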

    But the same, it seems, can't be said of the other types.

    It really depends on what an acceptable build time is and on the size of your project. I have found that most projects don't actually have that many integrations, and an excessive number of them is usually a good indication that the service should be rethought. For every integration, how many tests are necessary to protect against hard-to-reproduce error cases and to ensure there are checks that will break on interface changes? In my experience, not many. I have recently started to use docker-compose for integration tests, which allows many tests (20-30) to be executed very quickly for every commit; a sketch of one such test follows below.
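
    Here is roughly what one such integration test might look like. It is a minimal sketch, assuming docker-compose has already started a hypothetical service on localhost:8080 and that the third-party requests library is installed; the /items endpoint and its JSON shape are invented for the example.

      import os

      import requests

      # docker-compose is assumed to have started the service beforehand,
      # e.g. `docker-compose up -d && pytest tests/integration`.
      BASE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080")

      def test_create_and_fetch_item():
          # Hypothetical endpoint: create a record, then read it back.
          created = requests.post(f"{BASE_URL}/items", json={"name": "widget"})
          assert created.status_code == 201

          fetched = requests.get(f"{BASE_URL}/items/{created.json()['id']}")
          assert fetched.status_code == 200
          assert fetched.json()["name"] == "widget"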

    docker-compose also allows a clean e2e environment to be brought up so that acceptance/functional tests can be executed against it.

    It is also my experience that the higher-level tests are executed less frequently, but they should be executed as frequently as they can be. For example, I work with an API that has 300 functional tests covering every method on every endpoint. Because they don't interact with a UI and only use HTTP, they take about a minute to execute. They are executed on every deploy to an environment and at regular intervals.
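
    For a sense of how that many tests stay cheap to write, a hedged sketch using pytest parametrization; the endpoints and the API itself are hypothetical, and the requests library is assumed.

      import itertools

      import pytest
      import requests

      BASE_URL = "http://localhost:8080"  # hypothetical API under test

      ENDPOINTS = ["/users", "/orders", "/items"]
      METHODS = ["GET", "POST", "PUT", "DELETE"]

      # One test function fans out into len(ENDPOINTS) * len(METHODS) cases.
      @pytest.mark.parametrize("endpoint,method",
                               list(itertools.product(ENDPOINTS, METHODS)))
      def test_endpoint_does_not_error(endpoint, method):
          response = requests.request(method, f"{BASE_URL}{endpoint}")
          # Any well-formed response is acceptable; 5xx means a server-side failure.
          assert response.status_code < 500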