We have an AWS EKS cluster running (set up using Pulumi), where we installed Tekton as described in the Cloud Native Buildpacks Tekton docs. The example project is available.
Our Tekton pipeline is configured like this (also derived from the Cloud Native Buildpacks Tekton docs):
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
spec:
  params:
    - name: IMAGE
      type: string
      description: image URL to push
    - name: SOURCE_URL
      type: string
      description: A git repo url where the source code resides.
    - name: SOURCE_REVISION
      description: The branch, tag or SHA to checkout.
      default: ""
  workspaces:
    - name: source-workspace # Directory where application source is located. (REQUIRED)
    - name: cache-workspace # Directory where cache is stored (OPTIONAL)
  tasks:
    - name: fetch-repository # This task fetches a repository from GitHub, using the `git-clone` task you installed
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: "$(params.SOURCE_URL)"
        - name: revision
          value: "$(params.SOURCE_REVISION)"
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
    - name: buildpacks # This task uses the `buildpacks` task to build the application
      taskRef:
        name: buildpacks
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: source-workspace
        - name: cache
          workspace: cache-workspace
      params:
        - name: APP_IMAGE
          value: "$(params.IMAGE)"
        - name: BUILDER_IMAGE
          value: paketobuildpacks/builder:base # This is the builder we want the task to use (REQUIRED)
We added `SOURCE_URL` and `SOURCE_REVISION` as parameters already.
The question is: how can we trigger a Tekton PipelineRun from GitLab CI (inside our `.gitlab-ci.yml`)?
TL;DR
I created a fully comprehensible example project showing all necessary steps and running pipelines here: https://gitlab.com/jonashackt/microservice-api-spring-boot/ with the full `.gitlab-ci.yml` to directly trigger a Tekton pipeline:
image: registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0

variables:
  AWS_DEFAULT_REGION: 'eu-central-1'

before_script:
  - mkdir ~/.kube
  - echo "$EKSKUBECONFIG" > ~/.kube/config
  - echo "--- Testdrive connection to cluster"
  - kubectl get nodes

stages:
  - build

build-image:
  stage: build
  script:
    - echo "--- Create parameterized Tekton PipelineRun yaml"
    - tkn pipeline start buildpacks-test-pipeline
        --serviceaccount buildpacks-service-account-gitlab
        --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
        --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
        --param IMAGE=$CI_REGISTRY_IMAGE
        --param SOURCE_URL=$CI_PROJECT_URL
        --param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
        --dry-run
        --output yaml > pipelinerun.yml
    - echo "--- Trigger PipelineRun in Tekton / K8s"
    - PIPELINE_RUN_NAME=$(kubectl create -f pipelinerun.yml --output=jsonpath='{.metadata.name}')
    - echo "--- Show Tekton PipelineRun logs"
    - tkn pipelinerun logs $PIPELINE_RUN_NAME --follow
    - echo "--- Check if Tekton PipelineRun Failed & exit GitLab Pipeline accordingly"
    - kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}' | grep Failed && exit 1 || exit 0
Here are the steps in brief:
1. Choose a base image for your `.gitlab-ci.yml` providing the `aws` CLI, `kubectl` and the Tekton CLI (`tkn`)
This is entirely up to you. I created an example project https://gitlab.com/jonashackt/aws-kubectl-tkn that provides such an image, based on the official https://hub.docker.com/r/amazon/aws-cli image and accessible via `registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0`.
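If you want to build such an image yourself, a minimal sketch could look like this Dockerfile (the base image tag and the kubectl/tkn versions are assumptions, pick whatever is current for you):

# Base image providing the aws CLI (tag is an assumption)
FROM amazon/aws-cli:2.2.43

# Install kubectl (version is an assumption)
RUN curl -LO "https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl" \
    && install -m 0755 kubectl /usr/local/bin/kubectl \
    && rm kubectl

# Install the Tekton CLI tkn (version is an assumption)
RUN curl -L "https://github.com/tektoncd/cli/releases/download/v0.21.0/tkn_0.21.0_Linux_x86_64.tar.gz" \
    | tar -xz -C /usr/local/bin tkn

# Reset the aws entrypoint so GitLab CI can run its own script commands
ENTRYPOINT []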
2. CI/CD Variables for aws CLI & Kubernetes cluster access
Inside your GitLab CI project (or better: inside the group your GitLab CI project resides in) you need to create `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as CI/CD variables holding the aws CLI credentials (be sure to mask them when creating them, in order to prevent them from being printed into the GitLab CI logs). Depending on your EKS cluster's (or other K8s cluster's) config, you need to provide a `kubeconfig` to access your cluster. One way is to create a GitLab CI/CD variable like `EKSKUBECONFIG` providing the necessary file (e.g. in the example project this is provided by Pulumi with `pulumi stack output kubeconfig > kubeconfig`). In this setup using Pulumi there are no secret credentials inside the `kubeconfig`, so the variable doesn't need to be masked. But be aware of possible credentials here and protect them accordingly if needed.
Also define `AWS_DEFAULT_REGION` containing your EKS cluster's region:
# As we need kubectl, aws & tkn CLI we use https://gitlab.com/jonashackt/aws-kubectl-tkn
image: registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0
variables:
  AWS_DEFAULT_REGION: 'eu-central-1'
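To quickly verify that the credential variables are picked up, a check like the following could be added (the cluster name is a placeholder); `aws eks update-kubeconfig` is also an alternative to storing a complete kubeconfig in a variable:

# Check that AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are valid
aws sts get-caller-identity

# Alternative to the EKSKUBECONFIG variable: let the aws CLI generate a
# kubeconfig (replace 'my-eks-cluster' with your cluster's name)
aws eks update-kubeconfig --name my-eks-cluster --region $AWS_DEFAULT_REGION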
3. Use the `kubeconfig` and testdrive the cluster connection in the `before_script` section
Preparing things we need later in other steps can be done inside the `before_script` section. So let's create the directory `~/.kube` there and create the file `~/.kube/config` from the contents of the variable `EKSKUBECONFIG`. Finally, fire a `kubectl get nodes` to check whether the cluster connection works. Our `before_script` section now looks like this:
before_script:
  - mkdir ~/.kube
  - echo "$EKSKUBECONFIG" > ~/.kube/config
  - echo "--- Testdrive connection to cluster"
  - kubectl get nodes
4. Pass parameters to Tekton PipelineRun
Passing parameters via `kubectl` isn't trivial; it may even require a templating engine like Helm. But luckily the Tekton CLI has something for us: `tkn pipeline start` accepts parameters. So we can transform the Cloud Native Buildpacks Tekton PipelineRun YAML file into a `tkn` CLI command like this:
tkn pipeline start buildpacks-test-pipeline \
  --serviceaccount buildpacks-service-account-gitlab \
  --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc \
  --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc \
  --param IMAGE=registry.gitlab.com/jonashackt/microservice-api-spring-boot \
  --param SOURCE_URL=https://gitlab.com/jonashackt/microservice-api-spring-boot \
  --param SOURCE_REVISION=main \
  --timeout 240s \
  --showlog
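Note the `--serviceaccount` flag: it references a ServiceAccount that must already exist in the cluster and carry the registry push credentials. A sketch along the lines of the Cloud Native Buildpacks docs (the secret name, username and password are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: docker-user-pass
  annotations:
    tekton.dev/docker-0: https://registry.gitlab.com # registry the credentials are valid for
type: kubernetes.io/basic-auth
stringData:
  username: <registry-username>
  password: <registry-password>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: buildpacks-service-account-gitlab
secrets:
  - name: docker-user-pass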
Now here are some points to consider. First, the name `buildpacks-test-pipeline` right after `tkn pipeline start` is the equivalent of the `spec: pipelineRef: name: buildpacks-test-pipeline` definition in a PipelineRun YAML file. It works as a reference to the `Pipeline` object defined inside the file pipeline.yml, which starts with `metadata: name: buildpacks-test-pipeline` like this:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: buildpacks-test-pipeline
...
Second, defining workspaces isn't trivial either. Luckily there's help: we can define a workspace with the `tkn` CLI like this: `--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc`.
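The `claimName` in turn references a PersistentVolumeClaim that must already exist in the cluster. A sketch matching the Cloud Native Buildpacks docs (the storage size is an assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: buildpacks-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi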
Third, using the parameters as intended now becomes easy: simply pass `--param` accordingly. We also use `--showlog` to stream the Tekton logs directly into the command line (or GitLab CI!), together with a `--timeout`.
Finally, using GitLab CI predefined variables, our `.gitlab-ci.yml`'s build stage looks like this:
build-image:
  stage: build
  script:
    - echo "--- Run Tekton Pipeline"
    - tkn pipeline start buildpacks-test-pipeline
        --serviceaccount buildpacks-service-account-gitlab
        --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
        --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
        --param IMAGE=$CI_REGISTRY_IMAGE
        --param SOURCE_URL=$CI_PROJECT_URL
        --param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
        --timeout 240s
        --showlog
5. Solve the "every GitLab CI Pipeline is green" problem
This could have been everything we need to do. But right now every GitLab CI pipeline is green, regardless of the Tekton pipeline's status.
Therefore we remove `--showlog` and `--timeout` again, but add the `--dry-run` and `--output yaml` flags instead. Without `--dry-run` the `tkn pipeline start` command would already create the `PipelineRun` object, which we then couldn't create again using `kubectl`:
build-image:
  stage: build
  script:
    - echo "--- Create parameterized Tekton PipelineRun yaml"
    - tkn pipeline start buildpacks-test-pipeline
        --serviceaccount buildpacks-service-account-gitlab
        --workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
        --workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
        --param IMAGE=$CI_REGISTRY_IMAGE
        --param SOURCE_URL=$CI_PROJECT_URL
        --param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
        --dry-run
        --output yaml > pipelinerun.yml
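The generated pipelinerun.yml will look roughly like this (a sketch; the exact fields depend on your `tkn` version). Note the `generateName`: that's why we use `kubectl create` instead of `kubectl apply`, and why we capture the generated name in the next step:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  serviceAccountName: buildpacks-service-account-gitlab
  pipelineRef:
    name: buildpacks-test-pipeline
  params:
    - name: IMAGE
      value: registry.gitlab.com/jonashackt/microservice-api-spring-boot
    - name: SOURCE_URL
      value: https://gitlab.com/jonashackt/microservice-api-spring-boot
    - name: SOURCE_REVISION
      value: main
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc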
Now that we removed `--showlog` and no longer start an actual Tekton pipeline via the `tkn` CLI, we need to create the pipeline run using:
- PIPELINE_RUN_NAME=$(kubectl create -f pipelinerun.yml --output=jsonpath='{.metadata.name}')
Having the temporary variable `PIPELINE_RUN_NAME` available, containing the exact pipeline run name, we can stream the Tekton pipeline logs into our GitLab CI log again:
- tkn pipelinerun logs $PIPELINE_RUN_NAME --follow
Finally we need to check the Tekton pipeline run's status and exit our GitLab CI pipeline accordingly, in order to prevent red Tekton pipelines from resulting in green GitLab CI pipelines. So let's check the status of the Tekton pipeline run first. This can be achieved using `--output=jsonpath='{.status.conditions[*].reason}'` together with `kubectl get pipelineruns`:
kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}'
Then we pipe the result into a `grep`, which checks whether `Failed` appears inside the `status.conditions[*].reason` field.
Finally we use a bash one-liner of the form `<expression to check> && <command when true> || <command when false>` to issue the suitable `exit` command (see https://askubuntu.com/a/892605):
- kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}' | grep Failed && exit 1 || exit 0
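A stricter alternative (since `grep Failed` only matches the literal `Failed` reason, and e.g. not a `PipelineRunTimeout`) could be `kubectl wait`, assuming your kubectl version supports waiting on custom conditions:

# Exits non-zero if the PipelineRun doesn't reach Succeeded=True in time
- kubectl wait --for=condition=Succeeded --timeout=240s pipelinerun/$PIPELINE_RUN_NAME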
Now every GitLab CI pipeline turns green when the Tekton pipeline succeeds, and red when the Tekton pipeline fails. The example project has some logs if you're interested. It's pretty cool to see the Tekton logs inside the GitLab CI logs.