There seems to be a lot of information around about this, but I cannot find a well-described use case with a modern approach (that is, one that does not leave traces of, or expose, credentials in the process).
Let's assume I am working from within a GitLab group. Inside this group there are two repositories, A and B.
I'd like to build a Docker image and push it to the GitLab Container Registry of project B (or even that of the top-level group itself).
Project B needs project A, which is only available from its repository, so currently my Dockerfile looks something like this:

```dockerfile
# Set the base image
FROM ubuntu:18.04

# Install git
RUN apt-get -y update && apt-get install -y git

# Group name, overridable at build time with --build-arg GITLAB_GROUP=...
ARG GITLAB_GROUP=group_name

# Clone project A (SSH variant)
RUN git clone git@gitlab.com:${GITLAB_GROUP}/A.git
# Clone project A (HTTPS variant)
RUN git clone https://gitlab.com/${GITLAB_GROUP}/A.git

[...]
```
(This of course cannot work as it is, since there are no access permissions to the repository from within the Dockerfile.)
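One modern way to clone at build time without the credential ever landing in an image layer is a BuildKit build secret. The following is a sketch, assuming BuildKit is enabled and that the token supplied has read access to project A; the secret id `gitlab_token` is a name chosen here, not a predefined one:

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:18.04
RUN apt-get -y update && apt-get install -y git

# The secret is mounted only for the duration of this RUN step;
# it is never written to a layer or to the image history.
RUN --mount=type=secret,id=gitlab_token \
    git clone "https://gitlab-ci-token:$(cat /run/secrets/gitlab_token)@gitlab.com/group_name/A.git"
```

The secret is supplied at build time, e.g. `docker build --secret id=gitlab_token,env=CI_JOB_TOKEN .` from a CI job (`CI_JOB_TOKEN` is predefined by GitLab for every job; older Docker versions take `src=<file>` instead of `env=`).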
The `.gitlab-ci.yml`, meanwhile, would be at least something like the following:

```yaml
build image:
  image: docker:20.10.17
  services:
    - docker:20.10.17-dind
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE
```
The most efficient way would be to use Git submodules and then set the `GIT_SUBMODULE_STRATEGY` variable in `.gitlab-ci.yml`:

```yaml
variables:
  GIT_SUBMODULE_STRATEGY: recursive
```

That way the runner pulls the repository for you, authenticating with the job's own credentials, so nothing needs to be cloned (or authenticated) from within the Dockerfile.
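For this to work across projects in the same group, a relative URL in project B's `.gitmodules` lets the runner resolve the submodule against the same GitLab instance and clone it with the job's credentials. A sketch, assuming project A sits next to project B in the group:

```ini
[submodule "A"]
	path = A
	url = ../A.git
```

The Dockerfile can then simply `COPY A /A` from the checked-out sources instead of cloning anything at build time.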