I have a job in my pipeline whose script has three very important steps:

- mvn test to run JUnit tests against my code
- junit2html to convert the XML test results to HTML (the only way for me to see the results, as my pipelines aren't run through MRs), which is uploaded to GitLab as an artifact
- docker rm to destroy a container created earlier in the pipeline

My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached and the test results are never uploaded on failure, and docker rm is never executed either, so the container remains and messes up subsequent pipelines.
What I want is to be able to keep a job going till the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI/CD, but its entire script should be executed. How can I configure this?
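For reference, the job currently looks roughly like this (the junit2html arguments, report path and container name are just placeholders):

Unit Tests:
  stage: tests
  script:
    - mvn test
    - junit2html test-results.xml report.html
    - docker rm my-test-container
  artifacts:
    paths:
      - report.html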
For any job whose failure should not stop the pipeline, you can add a flag to that job in your .gitlab-ci.yml
file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's the allow_failure: true
flag that lets the pipeline continue even if that specific job fails. The GitLab CI documentation on allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments: If you need the script to keep going after a failure and still know that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true   # record the failure instead of letting the script abort
if [ "$FAILED" = "true" ]; then
  ./do_something.sh                     # runs even though the earlier command failed
fi
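Applied to the job from the question, a minimal sketch could look something like this (the junit2html arguments, report path and container name are placeholders, and it assumes the runner executes the script with bash/sh). The exit code of mvn test is saved so the later steps still run, and the job is marked failed at the end if the tests failed:

Unit Tests:
  stage: tests
  script:
    - EXIT_CODE=0
    - mvn test || EXIT_CODE=$?                         # remember the test failure, keep going
    - junit2html test-results.xml report.html || true  # still produce the HTML report
    - docker rm my-test-container || true              # always clean up the container
    - exit $EXIT_CODE                                  # fail the job if the tests failed
  artifacts:
    when: always                                       # upload the report even when the job fails
    paths:
      - report.html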