As part of a rolling update, the version 1 Pod is replaced by a version 2 Pod.
We need to review the logs of the shutdown process of the service in the version 1 Pod.
Does the rolling update delete the version 1 Pod?
If yes, can we review the logs of the deleted Pod (version 1)? To verify the shutdown process of the service in the version 1 Pod...
- Does the rolling update delete the version 1 Pod?
The short answer is: Yes.
The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling update process.
See the examples below:
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
In this example there would be one additional Pod (maxSurge: 1) above the desired number of 2, and the number of available Pods cannot go lower than that number (maxUnavailable: 0).
With this configuration, Kubernetes will spin up an additional Pod first, then stop an “old” one. If there is another Node available to schedule this Pod on, the system will be able to handle the same workload during the deployment. If not, the Pod will be deployed on an already used Node at the cost of resources for the other Pods hosted on that Node.
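For example, a minimal sketch of triggering and watching such a rolling update (the Deployment name my-app, container name app, and image tag v2 are placeholders, not taken from the question):
kubectl set image deployment/my-app app=my-app:v2   # roll out version 2
kubectl rollout status deployment/my-app            # watch the rollout progress
kubectl get pods -w                                 # watch the old Pod terminate and the new one start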
You can also try something like this:
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
With the example above there would be no additional Pods (maxSurge: 0) and only a single Pod at a time would be unavailable (maxUnavailable: 1).
In this case, Kubernetes will first stop a Pod and then start a new one. The advantage is that the infrastructure doesn't need to scale up, but the maximum workload it can handle during the rollout will be lower.
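Either way, the old (version 1) Pods are deleted as part of the rollout. A quick way to observe this, again assuming a placeholder Deployment name my-app:
kubectl get replicasets                     # the old ReplicaSet is scaled down to 0 as the new one scales up
kubectl rollout history deployment/my-app   # revisions recorded for the Deployment
kubectl get pods                            # the version 1 Pods are gone once the rollout completes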
- If yes, can we review the logs of the deleted Pod (version 1)? To verify the shutdown process of the service in the version 1 Pod...
See the Debug Running Pods docs. You can find several useful ways of checking logs/events such as:
- Debugging Pods: execute kubectl describe pods ${POD_NAME} and check the events and status for the reason behind its failure (a short command sketch of these options follows this list).
- Examining pod logs: with kubectl logs ${POD_NAME} ${CONTAINER_NAME} or kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}.
- Debugging with container exec: run commands inside a specific container with kubectl exec.
- Debugging with an ephemeral debug container: ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
- Debugging via a shell on the node: if none of these approaches work, you can find the host machine that the Pod is running on and SSH into that host.
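A minimal command sketch for the methods above, assuming a placeholder Pod my-app-12345 with a container named app:
kubectl describe pod my-app-12345                             # events and status, including the termination reason
kubectl logs my-app-12345 -c app                              # logs of the current container instance
kubectl logs --previous my-app-12345 -c app                   # logs of the previous (restarted) container instance
kubectl exec -it my-app-12345 -c app -- sh                    # run a shell inside the container
kubectl debug -it my-app-12345 --image=busybox --target=app   # attach an ephemeral debug container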
However, the --previous flag works only if the previous container instance still exists in the Pod; once the Pod itself has been deleted, kubectl can no longer retrieve its logs. Check out this answer for further options.
Also, see this topic: How to list Kubernetes recently deleted pods?
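For instance, even after a Pod is deleted, its Killing/deletion events remain visible for a limited time (typically around an hour, depending on cluster configuration); the Pod name below is a placeholder:
kubectl get events --sort-by=.metadata.creationTimestamp                 # recent cluster events, including Killing entries
kubectl get events --field-selector involvedObject.name=my-app-12345    # events for a specific (possibly deleted) Pod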