I have a GitHub Actions workflow that substitutes a value in a deployment manifest. I use `kubectl patch --local=true` to update the image. This used to work flawlessly until now: today the workflow started failing with a "Missing or incomplete configuration info" error.
I am running kubectl with the --local flag, so the config should not be needed. Does anyone know why kubectl suddenly started requiring a config? I can't find anything useful in the Kubernetes GitHub issues, and hours of googling didn't help.
Output of the failed step in the GitHub Actions workflow:
```
Run: kubectl patch --local=true -f authserver-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
 1. Via the command-line flag --kubeconfig
 2. Via the KUBECONFIG environment variable
 3. In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
Error: Process completed with exit code 1.
```
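For completeness, the same command as a workflow step (the step name below is made up; the command itself is verbatim from the log above):

```yaml
- name: Update image in manifest  # hypothetical step name
  run: |
    kubectl patch --local=true -f authserver-deployment.yaml \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' \
      -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
```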
Output of kubectl version:
```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0",
GitCommit:"ffd68360997854d442e2ad2f40b099f5198b6471", GitTreeState:"clean",
BuildDate:"2020-11-18T13:35:49Z", GoVersion:"go1.15.0", Compiler:"gc",
Platform:"linux/amd64"}
```
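The failure does not seem to be specific to the CI environment: with --local the patch is supposed to be a purely client-side operation, so the same error should reproduce on any machine with no kubeconfig present (a minimal check, assuming kubectl v1.19 is installed; pointing KUBECONFIG at /dev/null guarantees no config gets picked up):

```bash
# --local should make this a client-side operation, yet with no usable
# kubeconfig it still fails with "Missing or incomplete configuration info".
KUBECONFIG=/dev/null kubectl patch --local=true -f authserver-deployment.yaml \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' \
  -o yaml
```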
I ended up using sed to replace the image string instead:
```yaml
- name: Update manifests with new images
  working-directory: test/cloud
  run: |
    sed -i "s~image:.*$~image: ${{ steps.image_tags.outputs.your_new_tag }}~g" your-deployment.yaml
```
Works like a charm now.
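For reference, here is what the substitution does to a manifest; the file contents and the new tag below are made up for illustration. The ~ delimiter is used so the slashes in the image path don't need escaping:

```
$ grep 'image:' your-deployment.yaml
        image: test.azurecr.io/authserver:20201230-1712-d3a2ae4
$ sed -i "s~image:.*$~image: test.azurecr.io/authserver:20210101-0900-abc1234~g" your-deployment.yaml
$ grep 'image:' your-deployment.yaml
        image: test.azurecr.io/authserver:20210101-0900-abc1234
```

One caveat: the pattern matches every image: line in the file, so a manifest with multiple containers would need a more specific expression.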