Tags: kubernetes, kubectl

How to kubectl exec into a freshly created pod?


We use kubectl set image to roll out a new version (2.0.0) of an existing application, and then kubectl rollout status to wait for the new pod to become ready so that we can run some basic tests.

The problem is that kubectl rollout status returns (implying the new v2 pod is ready), but when we use kubectl exec we ALWAYS land in the old v1 pod.

$ date
Mon 13 Feb 2023 02:33:50 PM CET
$ k set image deploy/myapp myapp=myapp:2.0.0 && k rollout status deploy/myapp
deployment.apps/myapp image updated
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
deployment "myapp" successfully rolled out

Here, we assume the new version is running. Let's check:

$ k exec deploy/myapp -- show_version
1.0.0

Nope, it's still the old version.
Check the deployment:

$ k get deploy/myapp
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
myapp   1/1     1            1           273d

Looks ready (K9S shows 1 pod "Terminating" and 1 pod ready).
Check again:

$ date
Mon 13 Feb 2023 02:34:00 PM CET
$ k exec deploy/myapp -- show_version
1.0.0

Nope, check the pods:

$ kubectl get pod | grep myapp-
myapp-79454d746f-zw5kg         1/1     Running       0              31s
myapp-6c484f86d4-2zsk5         1/1     Terminating   0              3m5s

So our new pod is running; we just can't exec into it, because kubectl exec always "picks" the terminating pod:

$ date
Mon 13 Feb 2023 02:34:10 PM CET
$ k exec deploy/myapp -- show_version
1.0.0

Wait 20-30s:

$ date
Mon 13 Feb 2023 02:34:25 PM CET
$ k exec deploy/myapp -- show_version
2.0.0

Finally, we exec into the correct pod.

How can we wait for the old pod to terminate? Or, alternatively: how can we ensure we exec into the correct pod for testing?


Solution

  • Update

    Even better would be to get the new pod's name and exec directly into that.

    Also possible, yes. Try this:

    k rollout status deploy/myapp >/dev/null && \
      k get po -l app=myapp | grep Running | awk '{print $1}' | xargs -I{}  kubectl exec {} -- show_version
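
    If you want to avoid grepping the human-readable STATUS column, an alternative (a sketch, assuming the same app=myapp label and the show_version helper from the question) is to filter out pods that already carry a deletionTimestamp, i.e. the ones that are Terminating:

      k rollout status deploy/myapp >/dev/null && \
        k get po -l app=myapp -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
        | head -n 1 | xargs -I{} kubectl exec {} -- show_version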
    

    I would love to know what controls that 30s time.

    This can be configured using the terminationGracePeriodSeconds field in the pod's spec. The value defaults to, you guessed it right, 30s. If you're not concerned about data loss (due to the immediate shutdown), it can be set to 0. After that you can directly exec into the new pod:

        spec:
          terminationGracePeriodSeconds: 0
    
    k rollout status deploy/myapp >/dev/null && k exec deploy/myapp -- show_version
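
    If you'd rather not edit the manifest by hand, the same field can be set with a patch (a sketch; note that changing the pod template triggers another rollout by itself):

      k patch deploy/myapp --type merge \
        -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":0}}}}'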
    

    While it is Terminating, the old pod is still in phase Running, and kubectl exec deploy/myapp seems to use the first Running pod of the deployment.
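
    You can verify this yourself: a Terminating pod still reports phase Running, but it carries a deletionTimestamp (a quick check against the old pod from the output above):

      $ k get pod myapp-6c484f86d4-2zsk5 -o jsonpath='{.status.phase} {.metadata.deletionTimestamp}{"\n"}'
      # prints the phase (still "Running") followed by the time the pod was marked for deletion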

    I would suggest:

    1. Retrieve and store the name of the currently running pod in a temp variable prior to the deployment (assuming the pod has the label app=myapp):
    $ old_pod=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')

    2. Deploy:
    $ k apply -f Deployment.yaml

    3. Wait until the rollout is done:
    $ k rollout status deploy/myapp

    4. Wait until the old_pod is deleted:
    $ k wait --for=delete pod/$old_pod --timeout=-1s

    5. Check the new pod (a combined script follows these steps):
    $ k exec deploy/myapp -- show_version
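
    Put together, the whole flow might look like this (a sketch, assuming the app=myapp label, the myapp:2.0.0 image and the show_version helper from the question; adjust the names to your setup):

      #!/usr/bin/env bash
      set -euo pipefail

      # Remember the pod that currently serves the old version.
      old_pod=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')

      # Roll out the new image and wait for the Deployment to report success.
      kubectl set image deploy/myapp myapp=myapp:2.0.0
      kubectl rollout status deploy/myapp

      # Wait until the old pod is actually gone, so `kubectl exec deploy/...`
      # can no longer pick it up.
      kubectl wait --for=delete pod/"$old_pod" --timeout=-1s

      # Now exec lands in the new pod.
      kubectl exec deploy/myapp -- show_version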