
Share variables of GitHub Actions job to multiple subsequent jobs while retaining specific order


We have a GitHub Actions workflow consisting of 3 jobs:

  1. provision-eks-with-pulumi: Provisions an AWS EKS cluster (using Pulumi here)
  2. install-and-run-argocd-on-eks: Installs & configures ArgoCD using the kubeconfig from job 1.
  3. install-and-run-tekton-on-eks: Installs & runs Tekton using the kubeconfig from job 1., but depends on job 2.

We are already aware of this answer and the docs and use jobs.<job_id>.outputs to define the variable in job 1. and jobs.<job_id>.needs together with the needs.<job_id>.outputs context to use the variable in the subsequent jobs. BUT it only works for our job 2. - it fails for job 3..
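
For reference, the basic output-passing mechanism we follow (a minimal sketch with hypothetical job and output names, not our actual workflow) looks like this:

name: outputs-demo

on: [push]

jobs:
  job1:
    runs-on: ubuntu-latest
    # map the step output to a job output so dependent jobs can read it
    outputs:
      myvar: ${{ steps.produce.outputs.myvar }}
    steps:
      - id: produce
        run: echo "::set-output name=myvar::hello"
  job2:
    runs-on: ubuntu-latest
    # 'needs' both orders the jobs and populates the 'needs' context
    needs: job1
    steps:
      - run: echo "${{ needs.job1.outputs.myvar }}"

Here's our workflow.yml: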

name: provision

on: [push]

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'

jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...

      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes

          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube

          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"

      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes

  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master

      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes

      - name: Install ArgoCD
        run: ...


  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: install-and-run-argocd-on-eks
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master

      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes

      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...

Job 2. gets the kubeconfig correctly using needs.provision-eks-with-pulumi.outputs.kubeconfig - but job 3. does not (see this GitHub Actions log). We also don't want job 3. to depend only on job 1., because then jobs 2. and 3. would run in parallel.

How can our job 3. run after job 2. - but still use the kubeconfig variable from job 1.?


Solution

  • That's easy, because a GitHub Actions job can depend on multiple jobs using the needs keyword. The catch is that the needs context only contains the jobs listed directly in a job's own needs key - which is why needs.provision-eks-with-pulumi.outputs.kubeconfig is currently empty in your job 3.. All you have to do in job 3. is to use the array notation needs: [job1, job2].

    So for your workflow it will look like this:

    name: provision
    
    on: [push]
    
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: 'eu-central-1'
    
    jobs:
      provision-eks-with-pulumi:
        runs-on: ubuntu-latest
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
        outputs:
          kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
        steps:
          ...
    
          - name: Provision AWS EKS cluster with Pulumi
            id: pulumi-up
            run: |
              pulumi stack select dev
              pulumi up --yes
    
              echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
              mkdir -p ~/.kube
    
              echo "Create kubeconfig and supply it for depending Action jobs"
              pulumi stack output kubeconfig > ~/.kube/config
              echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
    
          - name: Try to connect to our EKS cluster using kubectl
            run: kubectl get nodes
    
      install-and-run-argocd-on-eks:
        runs-on: ubuntu-latest
        needs: provision-eks-with-pulumi
        environment:
          name: argocd-dashboard
          url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
        steps:
          - name: Checkout
            uses: actions/checkout@master
    
          - name: Configure kubeconfig to use with kubectl from provisioning job
            run: |
              mkdir ~/.kube
              echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
              echo "--- Checking connectivity to cluster"
              kubectl get nodes
    
          - name: Install ArgoCD
            run: ...
    
    
      install-and-run-tekton-on-eks:
        runs-on: ubuntu-latest
        needs: [provision-eks-with-pulumi, install-and-run-argocd-on-eks]
        environment:
          name: tekton-dashboard
          url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
        steps:
          - name: Checkout
            uses: actions/checkout@master
    
          - name: Configure kubeconfig to use with kubectl from provisioning job
            run: |
              mkdir ~/.kube
              echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
              echo "--- Checking connectivity to cluster"
              kubectl get nodes
    
          - name: Install Tekton Pipelines, Dashboard, Triggers
            run: ...
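
    One caveat: the ::set-output workflow command used in the pulumi-up step has since been deprecated by GitHub in favor of writing to the $GITHUB_OUTPUT environment file. A sketch of the equivalent (same step id and output name; since a kubeconfig usually spans multiple lines, this uses the documented multiline delimiter syntax):

        {
          echo "kubeconfig<<EOF"
          pulumi stack output kubeconfig
          echo "EOF"
        } >> "$GITHUB_OUTPUT"

    The outputs: mapping on the job and the needs.<job_id>.outputs.kubeconfig lookups in the dependent jobs stay exactly the same.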