Tags: kubernetes, kubectl, kubernetes-pod, kubernetes-deployment

Cannot delete pods in Kubernetes


I tried installing dgraph (single server) using Kubernetes.
I created the pod using:

kubectl create -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml

Now all I need to do is to delete the created pods.
I tried deleting the pod using:

kubectl delete pod pod-name

The command reports that the pod was deleted, but the pod keeps recreating itself.
I need to remove those pods from my Kubernetes cluster. What should I do now?


Solution

  • The link provided by the OP may be unavailable; see the update section below.

    Since you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, simply use the same manifest to delete the resources it created:

    $ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
    

    Update

    In short, here is an explanation of why the pod keeps coming back.

    Kubernetes has several workload resources (those that contain a PodTemplate in their manifest). Here is who controls whom:

    • ReplicationController -> Pod(s)
    • ReplicaSet -> Pod(s)
    • Deployment -> ReplicaSet(s) -> Pod(s)
    • StatefulSet -> Pod(s)
    • DaemonSet -> Pod(s)
    • Job -> Pod
    • CronJob -> Job(s) -> Pod

    a -> b means a creates and controls b, and the .metadata.ownerReferences field in b's manifest holds a reference to a. For example,

    apiVersion: v1
    kind: Pod
    metadata:
      ...
      ownerReferences:
      - apiVersion: apps/v1
        controller: true
        blockOwnerDeletion: true
        kind: ReplicaSet
        name: my-repset
        uid: d9607e19-f88f-11e6-a518-42010a800195
      ...
    

    This way, deleting the parent object also deletes its child objects via garbage collection.
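
    A hedged sketch of the idea: the snippet below walks the ownerReferences chain of plain dicts to find the root controller. The object names and the exact dict shapes are illustrative, not taken from the original post or from any client library.

```python
# Sketch: given objects keyed by (kind, name), follow ownerReferences
# upward until an object with no owner is found. Sample data mimics
# the Deployment -> ReplicaSet -> Pod chain; names are illustrative.

def find_root(objects, kind, name):
    """Walk ownerReferences until reaching an object with no owner."""
    obj = objects[(kind, name)]
    owners = obj.get("metadata", {}).get("ownerReferences", [])
    if not owners:
        return kind, name
    owner = owners[0]  # a Pod normally has a single controller owner
    return find_root(objects, owner["kind"], owner["name"])

objects = {
    ("Deployment", "dgraph"): {"metadata": {}},
    ("ReplicaSet", "dgraph-abc"): {"metadata": {"ownerReferences": [
        {"kind": "Deployment", "name": "dgraph", "controller": True}]}},
    ("Pod", "dgraph-abc-xyz"): {"metadata": {"ownerReferences": [
        {"kind": "ReplicaSet", "name": "dgraph-abc", "controller": True}]}},
}

print(find_root(objects, "Pod", "dgraph-abc-xyz"))  # ('Deployment', 'dgraph')
```

    Deleting the object this walk ends at is what removes the whole chain.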

    So a's controller ensures that a's current status matches a's spec. If one deletes b, b is indeed deleted, but a is still alive, and a's controller sees a difference between a's current status and a's spec. So a's controller creates a new b object to match a's spec.
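
    The control loop above can be sketched in a few lines. This is a toy simulation of a ReplicaSet-style reconcile pass, not real controller code; the pod naming is illustrative.

```python
# Sketch: a minimal reconcile loop in the spirit of a ReplicaSet
# controller. Deleting a pod only helps until the next reconcile,
# which recreates pods until status matches spec.replicas.

def reconcile(spec_replicas, pods):
    """Return the pod list after one reconcile pass."""
    pods = list(pods)
    i = 0
    while len(pods) < spec_replicas:
        pods.append(f"pod-{i}")  # illustrative names
        i += 1
    return pods

pods = reconcile(1, [])        # controller creates the pod
pods.remove(pods[0])           # user runs: kubectl delete pod ...
pods = reconcile(1, pods)      # controller sees status != spec, recreates
print(len(pods))  # 1 -- the pod is back
```

    This is why deleting the pod directly never sticks: the loop always runs again.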

    The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So the solution here was to delete the root object, which is the Deployment.

    $ kubectl get deploy -n {namespace}
    
    $ kubectl delete deploy {deployment name} -n {namespace}
    

    Note

    Another problem that may arise during deletion is the following: if there are any finalizers in the .metadata.finalizers[] section, the deletion will only proceed after the associated controller completes its task(s). If one wants to delete the object without waiting for the finalizers' actions, one has to remove those finalizers first. For example,

    $ kubectl patch -n {namespace} deploy {deployment name} --type merge --patch '{"metadata":{"finalizers":null}}'
    $ kubectl delete -n {namespace} deploy {deployment name}
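
    The finalizer semantics can be sketched as follows. This is a toy model of the behavior, not the API server's implementation, and the finalizer name is made up.

```python
# Sketch: how finalizers gate deletion. An object with a non-empty
# metadata.finalizers list is only *marked* for deletion; it actually
# goes away once every finalizer has been cleared by its controller.

def try_delete(obj):
    """Mark the object for deletion; it is removed only when finalizers is empty."""
    obj["deletionRequested"] = True
    return len(obj["metadata"].get("finalizers", [])) == 0

deploy = {"metadata": {"finalizers": ["example.com/cleanup"]}}  # illustrative name
print(try_delete(deploy))              # False: deletion is blocked
deploy["metadata"]["finalizers"] = []  # the patch command above clears these
print(try_delete(deploy))              # True: object is removed
```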