Tags: gitlab, gitlab-ci, gitlab-ci-runner

Run specific jobs on runners with specific tags


Context

I'm trying to learn automated testing on each commit/push/merge, and have started to explore GitLab CI. I installed GitLab Runner on my MacBook and registered two runners, one with the shell executor and the other with the kubernetes executor.

I am trying to set up a GitLab CI pipeline which works on both of these runners (or on runners of the same types installed on my colleagues' machines).

Problem Description

I've set up the pipeline following the official tutorials. The pipeline has a build stage, a few orchestration stages that basically do smoke testing, and a final deploy stage. The orchestration stages are runner independent, but the build steps differ between the kubernetes and shell runners: the former needs an image build, and the latter needs a virtual environment set up after building a wheel. Each runner works on its own (if I comment out the jobs that don't apply to it), but I want both of them to work with the same configuration file so that runner availability is less of an issue. I can't seem to make this work.

This is a dummy representation of what I created:

...  # default section

build_job_kubernetes:
    stage: build
    image:
        name: gcr.io/kaniko-project/executor:debug
        entrypoint: [""]
    script: ...  # commands
    rules:
        - if: '$CI_RUNNER_TAGS =~ /*kubernetes*/'
    tags:
        - kubernetes-executor

build_job_shell:
    stage: build
    script: ...  # commands
    rules:
        - if: '$CI_RUNNER_TAGS =~ /*shell*/'
    tags:
        - shell-executor

... # other sections

If I try to run the pipeline when both runners are available, it completely ignores these two jobs. And if I remove the rules, both jobs start executing on different runners, which is unnecessary. If one of the runners is paused, the pipeline does not proceed to the next stage and reports the job as stuck, even though I've marked the needs as optional.

Question

Why do the regex conditions I wrote not work? The solution need not use these rules specifically; I'd just like a configuration where only the required jobs run, and the pipeline doesn't get stuck when a runner of one type is missing.


I can think of an option where I'd essentially have two independent sets of pipelines, one for each type of runner, running in parallel. I'd like to avoid that scenario, as it isn't needed: I want my pipeline to pick a runner based on availability. Obviously, it has to be ensured that the orchestration stages don't run on the shell runner after the build stage ran on kubernetes, as they would then possibly run against an outdated repository. Compared to that, having two parallel pipelines is certainly the better option, but I'd like to avoid it if possible.


Solution

  • The reason the rules: condition on $CI_RUNNER_TAGS does not work is that rules: are evaluated at the time the pipeline is created, and simply determine whether a job is included in the created pipeline. Runners can only pick up jobs after they are created, so at evaluation time $CI_RUNNER_TAGS has no value and the condition cannot match.
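
    By contrast, a rule on a variable that is already known when the pipeline is created, such as the predefined $CI_COMMIT_BRANCH, does work. A minimal sketch (the job name and script path are placeholders):

```yaml
deploy_job:
  stage: deploy
  script: ./deploy.sh   # hypothetical script
  rules:
    # Works: $CI_COMMIT_BRANCH is known at pipeline-creation time,
    # unlike $CI_RUNNER_TAGS, which only gets a value once a runner
    # has picked up the job.
    - if: '$CI_COMMIT_BRANCH == "main"'
```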

    I want my pipeline to pick up the runner based on availability

    Unfortunately, the only mechanism by which you can control which runner is used is the job's tags. Pipelines can't pick their runners, partly because runners receive jobs through a "pull" mechanism: all runners periodically poll GitLab for available jobs (matching their own tags, if any). Runners choose jobs, not the other way around, so this isn't possible.

    What you might be able to do, however, is make both runners available to your project and allow them to pick up untagged jobs (the "Run untagged jobs" option in the runner settings), or give both runners a shared tag. You can then branch the script logic based on whichever runner picks up the job.

    For example, you might use the value of $CI_RUNNER_TAGS in your script: to determine what the job should do, based on which runner is running the job. Then you can have just one job instead of two.

    my_job:
      script: |
        # $CI_RUNNER_TAGS is a comma-separated list of the runner's tags,
        # so test for the tag as a substring rather than an exact match.
        if [[ "${CI_RUNNER_TAGS}" == *"kubernetes-executor"* ]]; then
            ./run_kube_build.sh
        else
            ./run_shell_build.sh
        fi
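
    Because $CI_RUNNER_TAGS is a comma-separated list, a plain substring test can false-match a partial tag name (e.g. a tag like my-kubernetes-executor-2). If that matters, a small helper can match whole tags only. This is a sketch run outside GitLab with simulated values; has_tag and the tag names are illustrative, not part of GitLab:

```shell
#!/bin/sh
# has_tag TAGS TAG -> succeeds if TAG appears as a whole entry in the
# comma-separated TAGS list (the format of $CI_RUNNER_TAGS).
has_tag() {
  case ",$1," in
    *",$2,"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Simulated values; in a real job GitLab sets CI_RUNNER_TAGS itself.
if has_tag "kubernetes-executor,cluster-a" "kubernetes-executor"; then
  echo "would run kube build"
fi
if has_tag "my-kubernetes-executor-2" "kubernetes-executor"; then
  echo "partial tag wrongly matched"
else
  echo "would run shell build"
fi
```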