Tags: kubernetes, kubernetes-helm

When does Kubernetes Helm trigger a pod recreation?


The Helm documentation suggests recreating pods by putting a checksum of the rendered configuration into the pod template's metadata annotations.

For example:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
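
For context, the checksum hashes the rendered ConfigMap template of the same chart. A minimal, hypothetical sketch of such a configmap.yaml (the resource name and data keys below are made up for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config   # hypothetical name
data:
  app.properties: |
    log.level=info

Whenever the rendered ConfigMap changes, the sha256sum changes, the annotation in the pod template changes, and the Deployment rolls out new pods.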

But there are situations in which a pod is not recreated:

  • The pod is failing and stuck in the state CrashLoopBackOff
  • Only the Deployment metadata has changed (see the sketch after this list)
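
To illustrate the second case with a rough sketch (the annotation keys and values are made up): annotations on the Deployment itself live outside spec.template, so changing them leaves the PodTemplate untouched:

kind: Deployment
metadata:
  annotations:
    team: backend              # changing this does NOT recreate pods
spec:
  template:
    metadata:
      annotations:
        checksum/config: abc123   # changing this DOES recreate pods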

I would like to know which events trigger a pod recreation:

  • Why is the pod in state CrashLoopBackOff not restarted?
  • Why are not all parts of the spec taken into account when deciding whether to recreate the pod?

Edit

The CrashLoopBackOff is an application problem. But if a new image (containing the bugfix) is provided, the pod should be restarted without the need to kill it explicitly.

Is there a reason not to restart the CrashLoopBackOff pod?


Solution

  • The template in a Deployment is a PodTemplate. Every time the PodTemplate is changed, a new ReplicaSet is created, and that ReplicaSet creates new Pods from the PodTemplate, up to the configured number of replicas.

    kind: Deployment
    spec:
      template:
        # any change here will lead to new Pods
    

    Every time a new Pod is created from the same template, it will be identical to the previous Pods.

    A CrashLoopBackOff is a Pod-level problem; it may, for example, be a problem with the application itself.

    But if a new image (containing the bugfix) is provided, the pod should be restarted without the need to kill it explicitly.

    If a new image is provided, it should have its own unique name (or tag): pushing a fix under the same tag does not change the PodTemplate, so no rollout happens. Whenever you change the image, change the image reference in the template. A change of the image reference is a change to the PodTemplate, so it will always create new Pods and delete, rather than reuse, the old ones.
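
    As a rough sketch of that last point (the values keys image.repository and image.tag are common chart conventions and assumed here, not taken from the question's chart): reference the image from values, then bumping the tag changes the PodTemplate, which triggers a rollout even for a Pod stuck in CrashLoopBackOff.

    # templates/deployment.yaml (sketch)
    kind: Deployment
    spec:
      template:
        spec:
          containers:
            - name: app
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

    # values.yaml (assumed layout)
    image:
      repository: myregistry/myapp
      tag: "1.0.1"   # bump to "1.0.2" with the bugfix -> new PodTemplate -> new Pods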