Tags: kubernetes, docker-desktop, statefulset

StatefulSet with replicas: "1 pod has unbound immediate PersistentVolumeClaims"


I'm trying to set up an Elasticsearch cluster in my single-node cluster (Docker Desktop on Windows). For this, I have created the following PV, which works:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv-data
  labels:
    type: local
spec:
  storageClassName: elasticdata
  accessModes:   
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  hostPath:
    path: "/mnt/data/elastic"

Then here is the StatefulSet configuration:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: esnode
spec:
  selector:
    matchLabels:
      app: es-cluster # has to match .spec.template.metadata.labels
  serviceName: elasticsearch
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: es-cluster
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
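        # Elasticsearch requires vm.max_map_count >= 262144 on the host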
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
      containers:
      - name: elasticsearch
        resources:
          requests:
            memory: 1Gi
        securityContext:
          privileged: true
          runAsUser: 1000
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
        env:
        - name: ES_JAVA_OPTS
          valueFrom:
              configMapKeyRef:
                  name: es-config
                  key: ES_JAVA_OPTS
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /_cluster/health?local=true
            port: 9200
          initialDelaySeconds: 5
        ports:
        - containerPort: 9200
          name: es-http
        - containerPort: 9300
          name: es-transport
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
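  # Each replica gets its own PVC named <template>-<statefulset>-<ordinal>,
  # e.g. es-data-esnode-0 and es-data-esnode-1, so one matching PV per replica is needed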
  volumeClaimTemplates:
    - metadata:
        name: es-data
      spec:
        storageClassName: elasticdata
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi

And the result is that only one pod has its PVC bound to the PV; the other one gets an error loop: "0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims". Here is the kubectl get pv,pvc result:

NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
persistentvolume/elastic-pv-data   20Gi       RWO            Retain           Bound    default/es-data-esnode-0   elasticdata             14m

NAME                                     STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/es-data-esnode-0   Bound    elastic-pv-data   20Gi       RWO            elasticdata    13m

If I understood correctly, I should have a second PersistentVolumeClaim with the identifier es-data-esnode-1. Is there something I missed or do not understand correctly? Thanks for your help.

I have skipped the non-relevant parts here (ConfigMap, LoadBalancer, ...).


Solution

  • Let me add a few details to what was already said both in the comments and in Jonas's answer.

    Judging from the comments, you haven't defined a StorageClass named elasticdata. If it doesn't exist, you shouldn't reference it in your PV and PVC.
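
    You can quickly check whether such a StorageClass actually exists in your cluster:

    kubectl get storageclass elasticdata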

    Take a quick look at how hostPath is used to define a PersistentVolume and how it is referenced in a PersistentVolumeClaim. There you can see that the example uses storageClassName: manual. The Kubernetes docs don't say it explicitly, but if you take a look at the OpenShift docs, they state very clearly that:

    A Pod that uses a hostPath volume must be referenced by manual (static) provisioning.

    It's not just some arbitrary value used to bind the PVC request to this specific PV. So if the elasticdata StorageClass hasn't been defined, you shouldn't use it here.
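
    If you do want to keep the elasticdata name, it has to be created first. A minimal sketch of such a StorageClass for static (manual) provisioning could look like this (kubernetes.io/no-provisioner means no dynamic provisioning will happen for it):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: elasticdata
    provisioner: kubernetes.io/no-provisioner  # static provisioning only
    volumeBindingMode: WaitForFirstConsumer    # bind when a pod is scheduled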

    Second thing: as Jonas already stated in his comment, the binding between a PVC and a PV is one-to-one, so even though your PV still has spare capacity, it has already been claimed by a different PVC and is no longer available. As you can read in the official docs:

    A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.

    Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.

    And vice versa: a single 100Gi PV won't be able to satisfy requests from two PVCs claiming 50Gi each. Note that in the kubectl get pv,pvc result you posted, both the PV and the PVC have a capacity of 20Gi, although each PVC created from the volumeClaimTemplates requests only 3Gi.

    You are not working with any dynamic storage provisioner here, so you need to manually provision as many PersistentVolumes as your use case needs.
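
    For example, a second PV for the es-data-esnode-1 claim could be a copy of your existing one with a different name and host path (a sketch; the path /mnt/data/elastic-1 is an assumption, any separate directory will do):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: elastic-pv-data-1
      labels:
        type: local
    spec:
      storageClassName: elasticdata
      accessModes:
        - ReadWriteOnce
      capacity:
        storage: 20Gi
      hostPath:
        path: "/mnt/data/elastic-1"  # assumed path; must differ from the first PV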

    By the way, instead of using hostPath I would rather recommend using a local volume with a properly defined StorageClass. It has a few advantages over hostPath. Additionally, an external static provisioner can be run separately for improved management of the local volume lifecycle.
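
    A minimal sketch of such a local PersistentVolume (used together with a no-provisioner StorageClass like the one sketched above; the node name docker-desktop is an assumption, verify it with kubectl get nodes):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: elastic-local-pv-0
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: elasticdata
      local:
        path: /mnt/data/elastic  # directory must already exist on the node
      nodeAffinity:  # required for local volumes
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - docker-desktop  # assumed node name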