Tags: bash, kubernetes, github-actions, ci/cd

Retrieving Token from SSH Master Node in Kubernetes Cluster Setup


I'm currently using GitHub Actions to automate the installation and configuration of a Kubernetes cluster. The process includes setting up the master and then joining other nodes to this cluster.

Everything installs correctly, but I'm stuck during the cluster creation phase: I can't retrieve the join token from the master node or pass it to the other nodes, which is the crucial step for them to join the cluster.

Could anyone provide insights or solutions on how to effectively retrieve and distribute the token from the master node to other nodes in a GitHub Actions workflow? Any suggestions or examples of similar implementations would be greatly appreciated.

I removed the conditional statements (IFs) from my script to test things out.

Thank you in advance for your help!

Here is the code:

- name: Create Cluster
  uses: appleboy/ssh-action@v1.0.3
  with:
    host: ${{ env.FIRST_HOST }}
    username: root
    key: ${{ secrets.PRIVATE_KEY }}
    script: |
      # if ! kubectl cluster-info; then
        kubeadm init --apiserver-advertise-address=${{ env.FIRST_HOST }} &&
        mkdir -p $HOME/.kube &&
        cp -i /etc/kubernetes/admin.conf $HOME/.kube/config &&
        chown $(id -u):$(id -g) $HOME/.kube/config &&

        kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
        kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml

        kubectl taint nodes --all node-role.kubernetes.io/control-plane-
        kubectl taint nodes --all node-role.kubernetes.io/master-

        echo "JOIN_COMMAND=$(kubeadm token create --print-join-command)" >> $GITHUB_ENV
      # fi
  id: master_setup
- name: Join Other Nodes to Cluster
  # if: ${{ env.JOIN_COMMAND != '' }}
  run: |
    IFS=',' read -ra HOST_ARRAY <<< "${{ secrets.HOSTS }}"

    for index in "${!HOST_ARRAY[@]}"; do
      if [ $index -ne 0 ]; then
        ssh -o StrictHostKeyChecking=no root@${HOST_ARRAY[index]} "${{ env.JOIN_COMMAND }}"
      fi
    done

The problem lies within this line:

echo "JOIN_COMMAND=$(kubeadm token create --print-join-command)" >> $GITHUB_ENV
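A quick way to see why this fails: `GITHUB_ENV` is an environment variable that GitHub Actions sets only inside the runner process. In any other shell, such as one opened on the master node over SSH, it is unset, so the `>>` redirection has no target. A minimal local sketch of the same situation:

```shell
#!/usr/bin/env bash
# GITHUB_ENV exists only inside the GitHub Actions runner process.
# Simulate a remote shell where it was never set:
unset GITHUB_ENV

echo "GITHUB_ENV is: ${GITHUB_ENV:-<unset>}"

# With the variable unset, a line like
#   echo "KEY=value" >> $GITHUB_ENV
# fails in bash with an "ambiguous redirect" error, because the
# redirection target expands to nothing.
```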


Solution

  • Thanks to Hackerman, I've found a solution to my issue. The reason it wasn't working is that the script passed to ssh-action runs on the remote host over SSH, not in the GitHub Actions runner, so the GITHUB_ENV file the runner exposes doesn't exist there and can't be appended to. To overcome this, I added a new step that runs on the runner itself and fetches the token over SSH.

    - name: Get Token
      run: |
        JOIN_COMMAND=$(ssh -o StrictHostKeyChecking=no root@${{ env.FIRST_HOST }} "kubeadm token create --print-join-command")
        echo "JOIN_COMMAND=$JOIN_COMMAND" >> $GITHUB_ENV
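Putting it together, the workflow ends up with three steps: initialize the control plane over SSH, pull the join command back onto the runner, and fan it out to the worker hosts. A sketch of the resulting layout (the step names, the comma-separated `HOSTS` secret, and running as root are carried over from the question, not a definitive setup):

```yaml
- name: Create Cluster
  uses: appleboy/ssh-action@v1.0.3
  with:
    host: ${{ env.FIRST_HOST }}
    username: root
    key: ${{ secrets.PRIVATE_KEY }}
    script: |
      # kubeadm init, CNI install, taints — as in the question.
      # Note: no GITHUB_ENV writes here; this script runs on the remote host.
      kubeadm init --apiserver-advertise-address=${{ env.FIRST_HOST }}

# Runs on the runner, so $GITHUB_ENV is available for the next step.
- name: Get Token
  run: |
    JOIN_COMMAND=$(ssh -o StrictHostKeyChecking=no root@${{ env.FIRST_HOST }} "kubeadm token create --print-join-command")
    echo "JOIN_COMMAND=$JOIN_COMMAND" >> "$GITHUB_ENV"

- name: Join Other Nodes to Cluster
  run: |
    IFS=',' read -ra HOST_ARRAY <<< "${{ secrets.HOSTS }}"
    for index in "${!HOST_ARRAY[@]}"; do
      # Skip index 0: that is the master node itself.
      if [ "$index" -ne 0 ]; then
        ssh -o StrictHostKeyChecking=no root@"${HOST_ARRAY[index]}" "${{ env.JOIN_COMMAND }}"
      fi
    done
```

Because `env.JOIN_COMMAND` is populated by an earlier step in the same job, the join step needs no `if:` guard against an empty value unless the token step is made conditional again.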