
Cannot update Kubernetes pod from yaml generated from kubectl get pod pod_name -o yaml


I have a pod in my Kubernetes cluster that needs to be updated with a securityContext. So I generated a YAML file using:

kubectl get pod pod_name -o yaml > mypod.yaml

After adding the required securityContext and executing:

kubectl apply -f mypod.yaml

no changes are observed in the pod.

Whereas a freshly created YAML file works perfectly fine. The new YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    name: myubuntuimage

Solution

  • Immutable fields

    In Kubernetes you can find information about Immutable fields.

    A lot of fields in APIs tend to be immutable: they can't be changed after creation. This is true, for example, for many of the fields in pods. There is currently no way to declaratively specify that fields are immutable, so one has to rely on either built-in validation for core types or build validating webhooks for CRDs.

    Why?

    Some resources in Kubernetes have immutable fields by design, i.e. after an object is created, those fields cannot be mutated anymore. E.g. a pod's specification is mostly unchangeable once it is created. To change the pod, it must be deleted, recreated, and rescheduled.

    Editing existing pod configuration

    If you try to apply the new config with the security context using kubectl apply, you will get an error like the one below:

    The Pod "mypod" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
    

    You will get the same output if you use kubectl patch:

    kubectl patch pod mypod -p '{"spec":{"securityContext":{"runAsUser":1010}}}'

    kubectl edit will also refuse to change this specific configuration:

    $ kubectl edit pod
    Edit cancelled, no changes made.
    

    Solution

    If you need only one pod, you must delete it and create a new one with the requested configuration.
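    A minimal sketch of the delete-and-recreate approach, assuming the updated manifest is saved as mypod.yaml as in the question (these commands require a running cluster):

    ```shell
    # Delete the existing pod; kubectl waits for graceful termination by default
    kubectl delete pod mypod

    # Recreate it from the updated manifest that now contains the securityContext
    kubectl apply -f mypod.yaml
    ```

    Note that the pod is rescheduled from scratch, so any data in its ephemeral storage is lost.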

    A better solution is to use a resource that continuously reconciles its own requirements, like a Deployment. When you update the PodTemplateSpec of a Deployment, a new ReplicaSet is created, and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
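    For example, the pod from the question could be wrapped in a Deployment like the sketch below; the name mydeployment and the app: myubuntu label are illustrative. The securityContext sits inside the PodTemplateSpec, so editing it and re-applying triggers a rollout instead of being rejected:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mydeployment        # illustrative name
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myubuntu         # must match the template labels below
      template:
        metadata:
          labels:
            app: myubuntu
        spec:
          securityContext:      # mutable here: changing it rolls out new pods
            runAsUser: 1010
          containers:
          - name: myubuntuimage
            image: ubuntu
            command:
            - sleep
            - "4800"
    ```

    Apply it with kubectl apply -f mydeployment.yaml; later changes to the template are rolled out automatically.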