I am making a GitLab CI/CD pipeline that uses two different images.
One of them requires installing some packages using npm.
In order to avoid installing them more than once, I've added a cache.
Here is an example:
stages:
  - build
  - quality

cache:
  paths:
    - node_modules/

build-one:
  image: node:latest
  stage: build
  script:
    - npm install <some package>

build-two:
  image: foo_image:latest
  stage: build
  script:
    - some cmd

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>
Having two different Docker images forces me to specify the image inside each job definition. From my tests, the cache isn't actually used, and the command executed by the quality job fails because the package isn't installed.
Is there a solution to this problem?
Many thanks! Kev'.
There can be two cases:

1. The same runner is used to run all the jobs. In this case the cache, as you have specified it, should work fine (see the sketch after this list for a slightly more explicit version).
2. Different runners are used for different jobs. Suppose the build job runs on runner 1 and the quality job runs on runner 2; the cache will then only be present on runner 1.
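For case 1, a minimal sketch of your pipeline with an explicit cache key is shown below. Only the relevant jobs are included, and the key name node-deps is just an illustration; any fixed key makes every job that declares the cache look up the same entry.

cache:
  key: node-deps          # shared key: all jobs using this cache hit the same entry
  paths:
    - node_modules/

build-one:
  image: node:latest
  stage: build
  script:
    - npm install <some package>   # fills node_modules/, which is saved to the cache

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>   # node_modules/ is restored from the cache before this runs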
In order to make use of caching in case 2, you will have to use distributed caching. Runner 1 will then push the cache to S3 after running the build job, and runner 2 will pull the cache at the start of the quality job and can use it from there.
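Distributed caching is configured on the runners themselves, not in .gitlab-ci.yml. As a rough sketch (assuming GitLab Runner 11.3 or later and an S3-compatible bucket; the endpoint, bucket name and credentials below are placeholders), each runner's config.toml would contain something like:

[[runners]]
  name = "runner-1"
  # ... executor, token, etc. ...
  [runners.cache]
    Type = "s3"
    Shared = true                          # allow other runners to reuse this cache
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"   # placeholder: your S3-compatible endpoint
      BucketName = "gitlab-runner-cache"   # placeholder bucket name
      BucketLocation = "us-east-1"
      AccessKey = "ACCESS_KEY"             # placeholder credentials
      SecretKey = "SECRET_KEY"

With the same cache section on both runners, pointing at the same bucket, the cache pushed by the build job is visible to the quality job no matter which runner picks it up.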