continuous-integration · automated-tests · continuous-deployment · continuous-delivery · continuous-testing

What kinds of tests to use for legacy projects where we've just started using continuous integration and delivery


We have 12 legacy projects. One is an old Visual Basic application written 9 years ago; the others include C# (.NET) applications, two Java projects, and so on.

We've just finished cleaning and creating a repository for each project (some of them were just folders sitting on different computers...).

We have configured Jenkins with many useful plugins and bought two books, Continuous Integration and Continuous Delivery, which we haven't fully read yet.

We defined a deployment pipeline for our projects. All of them are compiled automatically after each commit to the repository, and code analysis (cyclomatic complexity, etc.) runs automatically as well.

However, we would like to know if there are tests (easy to add) that we could be using for our projects. We know about unit tests; however, writing unit tests for these projects would be too time consuming (if possible at all).

Are there other kinds of tests we could add, or other useful things we could be adding to our pipeline?

For some of the programs we are automatically generating an installer.

Also, at the end of the pipeline we have a manual step that moves the binary (installer) to a public folder on our Apache server, where people in the company can easily get the latest stable binary. "Stable" here means an application we install and test manually (exploratory testing, I think it's called); if we don't see anything wrong, we promote it as a stable release.


Solution

  • I usually apply three levels of tests:

    1. Unit tests - low-level tests that verify the correct behaviour of small, independent units of code. These tests directly call other code/APIs, run fast (during build time) and can also break relatively easily when doing extensive refactoring (see the first sketch after this list).
    2. Integration tests - medium-level tests that verify the correct behaviour of a number of units of code working together. For example, an API provided by the backend to an external system or to the front-end. These tests are not as low-level, operate above code level (HTTP requests, for example), run a bit slower than unit tests (still during build time) but break less easily since they test against the boundaries of the system (REST endpoints, for example); see the second sketch after this list.
    3. End-to-end tests - high-level tests that exercise the system as a whole. For a web application, this typically means browser testing (with Selenium, for example), where a browser is controlled by the tests and connects to a running instance of the system. These tests are pretty high-level (they simulate user behaviour), run slowly, and not during build time (since the system needs to be deployed first).
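    As a concrete illustration of level 1, here is a minimal unit test sketch using JUnit 5. The InvoiceCalculator class is a hypothetical stand-in for one of your own small, independent units of code:

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Stand-in for one of your own small, independent units of code.
    class InvoiceCalculator {
        private double total = 0.0;

        void addLine(String description, double amount) {
            total += amount;
        }

        double total() {
            return total;
        }
    }

    class InvoiceCalculatorTest {

        @Test
        void totalIsSumOfLineAmounts() {
            InvoiceCalculator calculator = new InvoiceCalculator();
            calculator.addLine("Licence", 100.0);
            calculator.addLine("Support", 50.0);

            // Runs in milliseconds during the build and breaks as soon
            // as the calculation logic changes.
            assertEquals(150.0, calculator.total(), 0.001);
        }
    }
    ```

    And for level 2, a sketch of an integration test that exercises the system above code level, over HTTP. It assumes JUnit 5, Java 11+ (for java.net.http), and a hypothetical REST endpoint at http://localhost:8080/api/customers/42 that is started before the tests run:

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;

    class CustomerApiIntegrationTest {

        @Test
        void knownCustomerCanBeFetched() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/customers/42"))
                    .GET()
                    .build();

            // Tests against a boundary of the system (the REST endpoint)
            // rather than against individual classes.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
        }
    }
    ```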

    In your case, I'd combine these types of tests. Start by building an automated regression test suite out of integration tests and/or end-to-end tests; these types of tests can cover a relatively large part of the system without too much effort (see the browser-test sketch below). When adding or changing functionality, first write one or more unit tests that verify the current behaviour of the system. Then add or change test cases that verify the desired/new behaviour, and change the system accordingly.
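    Here is a sketch of what such an end-to-end regression test could look like with Selenium WebDriver and JUnit 5. The URL, the form field names and the expected "Welcome" text are hypothetical, and it assumes a browser driver (here ChromeDriver) is available on the machine running the tests:

    ```java
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginRegressionTest {

        private WebDriver driver;

        @BeforeEach
        void startBrowser() {
            driver = new ChromeDriver();
        }

        @AfterEach
        void stopBrowser() {
            driver.quit();
        }

        @Test
        void userCanLogIn() {
            // Simulates user behaviour against a deployed test instance,
            // so this runs after the deployment step of the pipeline,
            // not during the build itself.
            driver.get("http://test-server/app/login");
            driver.findElement(By.name("user")).sendKeys("demo");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.name("submit")).click();

            assertTrue(driver.getPageSource().contains("Welcome"));
        }
    }
    ```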

    By the way: please reconsider the statement "writing unit tests for these projects would be too time consuming". Yes, it might be time consuming, but not writing tests at all would also be time consuming since you'd probably break functionality all the time without knowing, and find yourself needing to fix lots of issues.