kubernetes, gitlab, gitlab-ci-runner

How to configure a GitLab Runner installed on Kubernetes through the Web UI


After connecting a self-hosted GitLab instance to a self-hosted Kubernetes cluster and installing the GitLab Runner through the Web UI, there appears to be no way to configure it.

I need to add a configuration option so the runner executes jobs in privileged mode, but the documentation I've found so far only points to config.toml files, which I cannot locate on either the GitLab machine or the Kubernetes cluster (a separate machine).

The runner's configuration page in the Web UI offers no way to edit config.toml.

So how does one configure the runner installed from the Kubernetes Applications tab?

Or do I have to uninstall it, register a runner manually, and configure that instead?

Additional question: the Web UI only creates one pod; how do I request more pods?


Solution

  • You will have to use kubectl to open a shell inside the GitLab Runner pod. There are good articles out there that show how to gain access to your Kubernetes cluster through kubectl.

    Alternatively, you can SSH into the node that is running your GitLab Runner pod; for this you'll also need kubectl to find out which node that is (see the end of this answer).

    kubectl get ns

    NAME                  STATUS   AGE
    default               Active   11h
    gitlab-managed-apps   Active   10h
    kube-node-lease       Active   11h
    kube-public           Active   11h
    kube-system           Active   11h
    

    kubectl -n gitlab-managed-apps get pod

    NAME                                    READY   STATUS    RESTARTS   AGE
    runner-gitlab-runner-6987ddf6b5-rgjmw   1/1     Running   0          11h
    

    kubectl -n gitlab-managed-apps exec --stdin --tty runner-gitlab-runner-6987ddf6b5-rgjmw -c runner-gitlab-runner -- /bin/bash
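
    If /bin/bash happens not to be present in the runner image, the same command with /bin/sh should also get you a shell (this is just a fallback, not something the chart requires):

    kubectl -n gitlab-managed-apps exec --stdin --tty runner-gitlab-runner-6987ddf6b5-rgjmw -c runner-gitlab-runner -- /bin/sh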

    The config.toml file can be found in one of the following locations inside the runner-gitlab-runner container:

    • /etc/gitlab-runner/ on *nix systems when GitLab Runner is executed as root (this is also the path for service configuration)
    • ~/.gitlab-runner/ on *nix systems when GitLab Runner is executed as non-root
    • ./ on other systems
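
    If you're not sure which of these applies, a quick way to check both *nix locations from inside the container (assuming standard coreutils are available in the runner image) is:

    ls -l /etc/gitlab-runner/config.toml ~/.gitlab-runner/config.toml 2>/dev/null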

    I found the config.toml at ~/.gitlab-runner/config.toml. Run the following command and it will output your current config file.

    cat ~/.gitlab-runner/config.toml

    listen_address = ":9252"
    concurrent = 4
    check_interval = 3
    log_level = "info"
    
    [session_server]
      session_timeout = 1800
    
    [[runners]]
      name = "runner-gitlab-runner-6987ddf6b5-rgjmw"
      request_concurrency = 1
      url = "https://gitlab.com/"
      token = "******************"
      executor = "kubernetes"
      [runners.custom_build_dir]
      [runners.cache]
        [runners.cache.s3]
        [runners.cache.gcs]
        [runners.cache.azure]
      [runners.kubernetes]
        host = ""
        bearer_token_overwrite_allowed = false
        image = "ubuntu:16.04"
        namespace = "gitlab-managed-apps"
        namespace_overwrite_allowed = ""
        privileged = true
        service_account_overwrite_allowed = ""
        pod_annotations_overwrite_allowed = ""
        [runners.kubernetes.affinity]
        [runners.kubernetes.pod_security_context]
        [runners.kubernetes.volumes]
        [runners.kubernetes.dns_config]
    
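    In the output above privileged is already set to true; if your file still shows privileged = false, a minimal in-place edit (assuming sed is available in the container, otherwise use vi or edit the file some other way) would be:

    sed -i 's/privileged = false/privileged = true/' ~/.gitlab-runner/config.toml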

    The GitLab Runner does not require a restart when you change most options. This includes parameters in the [[runners]] section and most parameters in the global section, except for listen_address. If a runner was already registered, you don’t need to register it again.

    GitLab Runner checks for configuration modifications every 3 seconds and reloads if necessary. It also reloads the configuration in response to the SIGHUP signal.
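
    If you want to trigger the reload yourself instead of waiting for the periodic check, you can send SIGHUP to the runner process from inside the container (this assumes pgrep is available in the image; otherwise look the PID up with ps):

    kill -HUP "$(pgrep -o gitlab-runner)"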

    If your config.toml is not in the home directory, you'll need to gain root access to the node that is running your pod. You can find the node with the following command:

    kubectl -n gitlab-managed-apps get pod runner-gitlab-runner-6987ddf6b5-rgjmw -o yaml | grep nodeName

     nodeName: gke-cluster-1-default-pool-1937372-fhepo
    
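    The same information can be retrieved without grep by using a JSONPath query:

    kubectl -n gitlab-managed-apps get pod runner-gitlab-runner-6987ddf6b5-rgjmw -o jsonpath='{.spec.nodeName}'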

    Then use your cloud provider's instructions on how to SSH into a node.
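
    For example, the node name above looks like a GKE node, so on Google Cloud something along these lines would work (the zone is a placeholder you'd replace with your cluster's zone):

    gcloud compute ssh gke-cluster-1-default-pool-1937372-fhepo --zone=<your-zone>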