kubernetes · github-actions · continuous-deployment · amazon-eks · argocd

How to log in to the ArgoCD CLI non-interactively in CI like GitHub Actions?


We have a full-blown setup using AWS EKS with Tekton installed and want to use ArgoCD for application deployment.

As the docs state, we installed ArgoCD on EKS in GitHub Actions with:

  - name: Install ArgoCD
    run: |
      echo "--- Create argo namespace and install it"
      kubectl create namespace argocd --dry-run=client -o yaml | kubectl apply -f -
      kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

We also exposed the ArgoCD server (incl. dashboard) as the docs told us:

  - name: Expose ArgoCD Dashboard
    run: |
      echo "--- Expose ArgoCD Dashboard via K8s Service"
      kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

      echo "--- Wait until Loadbalancer url is present (see https://stackoverflow.com/a/70108500/4964553)"
      until kubectl get service/argocd-server -n argocd --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done

Finally, we installed the argocd CLI with brew:

      echo "--- Install ArgoCD CLI"
      brew install argocd

Now how can we do an argocd login from GitHub Actions (without human interaction)? The argocd login command wants a username and password...


Solution

  • The same docs tell us how to extract the initial admin password for ArgoCD with:

    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
    

    Obtaining the ArgoCD server's hostname is also no big deal using:

    kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
    

    And as the argocd login command has the parameters --username and --password, we can craft our login command like this:

    argocd login $(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}') --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo) --insecure
    

    Mind the --insecure flag, which prevents the argocd CLI from prompting with things like WARNING: server certificate had error: x509: certificate is valid for localhost, argocd-server, argocd-server.argocd, argocd-server.argocd.svc, argocd-server.argocd.svc.cluster.local, not a5f715808162c48c1af54069ba37db0e-1371850981.eu-central-1.elb.amazonaws.com. Proceed insecurely (y/n)?.
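
    If the one-liner gets unwieldy, the same login can also be split into shell variables first (a sketch; the variable names are just illustrative, not part of the original answer):

    # Hostname of the exposed argocd-server LoadBalancer (illustrative variable name)
    ARGOCD_SERVER=$(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    # Initial admin password from the argocd-initial-admin-secret
    ARGOCD_PASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
    argocd login "$ARGOCD_SERVER" --username admin --password "$ARGOCD_PASSWORD" --insecure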

    A successful login should look something like this in the GitHub Actions UI (see a full log here):

    'admin:login' logged in successfully
    Context 'a5f715808162c48c1af54069ba37db0e-1371850981.eu-central-1.elb.amazonaws.com' updated
    

    Now your GitHub Actions workflow should be able to interact with the ArgoCD server.
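
    For example, a follow-up step could now list the registered applications or trigger a sync (the app name my-app below is just a placeholder, not from this setup):

    # List all applications known to the ArgoCD server we just logged into
    argocd app list
    # Trigger a sync of a single application (placeholder name)
    argocd app sync my-app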

    Prevent error FATA[0000] dial tcp: lookup a965bfb530e8449f5a355f221b2fd107-598531793.eu-central-1.elb.amazonaws.com on 8.8.8.8:53: no such host

    This error arises when the argocd-server Kubernetes service has been freshly created right before the argocd login command runs. In that case argocd login fails for a while until it eventually succeeds.

    Assuming a DNS propagation delay, we can prevent this error from breaking our CI pipeline by wrapping our argocd login command in an until loop, as already done in this answer for the LoadBalancer wait. The full command then looks like this:

    until argocd login $(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}') --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo) --insecure; do : ; done
    

    In GitHub Actions this then looks something like this:

    --- Login argocd CLI - now wrapped in until to prevent FATA[0000] dial tcp: lookup 12345.eu-central-1.elb.amazonaws.com on 8.8.8.8:53: no such host
    time="2022-02-21T12:57:32Z" level=fatal msg="dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host"
    time="2022-02-21T12:57:35Z" level=fatal msg="dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host"
    time="2022-02-21T12:57:37Z" level=fatal msg="dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host"
    [...]
    time="2022-02-21T12:58:27Z" level=fatal msg="dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host"
    time="2022-02-21T12:58:30Z" level=fatal msg="dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host"
    time="2022-02-21T12:58:32Z" level=fatal msg="dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host"
    'admin:login' logged in successfully
    Context 'a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com' updated
    

    Here's also a log.
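
    Putting it all together, the login could live as its own GitHub Actions step, analogous to the install and expose steps from the question (the step name is just a suggestion):

    - name: Login argocd CLI
      run: |
        echo "--- Login argocd CLI - wrapped in until (see above)"
        until argocd login $(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}') --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d) --insecure; do : ; done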