Tags: kubernetes, local-storage, persistent-storage, microk8s

Default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind


I am trying to create some persistent storage for my MicroK8s Kubernetes project, but without success so far.

What I've done so far is:

First, I created a PV with the following YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv-0001
  labels:
    name: dev-pv-0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/dev
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - asfweb

After applying it, Kubernetes shows:
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS
dev-pv-0001   10Gi       RWO            Retain           Available           local-storage


Name:              dev-pv-0001
Labels:            name=dev-pv-0001
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Available
Claim:
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [asfweb]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /data/dev
Events:    <none>

And here is my deployment yaml:

apiVersion: "v1"
kind: PersistentVolumeClaim
metadata:
  name: "dev-pvc-0001"
spec:
 storageClassName: "local-storage"
 accessModes:
    - "ReadWriteMany"
 resources:
  requests:
    storage: "10Gi"
 selector:
    matchLabels:
      name: "dev-pv-0001"
---
# Source: server/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-server
  labels:
    helm.sh/chart: server-0.1.0
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 4000
  selector:
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
---
# Source: server/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-server
  labels:
    helm.sh/chart: server-0.1.0
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: server
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: server
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      imagePullSecrets:
        - name: gitlab-auth
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: server
          securityContext:
            {}
          image: "registry.gitlab.com/asfweb/asfk8s/server:latest"
          imagePullPolicy: Always
          resources:
            {}
          volumeMounts:
            - mountPath: /data/db
              name: server-pvc-0001
      volumes:
        - name: server-pvc-0001
          persistentVolumeClaim:
            claimName: dev-pvc-0001
---
# Source: server/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: RELEASE-NAME-server
  labels:
    helm.sh/chart: server-0.1.0
    app.kubernetes.io/name: server
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
    - hosts:
        - "dev.domain.com"
      secretName: dev.domain.com
  rules:
    - host: "dev.domain.com"
      http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: RELEASE-NAME-server
                port:
                  number: 4000

Everything else is working apart from the persistent volume claim. Here is some more info in case it helps:

kubectl get pvc -A

NAMESPACE                     NAME                           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
controller-micro              storage-controller-0           Bound     pvc-f0f97686-c59f-4209-b349-cacf3cd0f126   20Gi       RWO            microk8s-hostpath   69d
gitlab-managed-apps           prometheus-prometheus-server   Bound     pvc-abc7ea42-8c74-4698-9b40-db2005edcb42   8Gi        RWO            microk8s-hostpath   69d
asfk8s-25398156-development   dev-pvc-0001                   Pending                                                                        local-storage       28m

kubectl describe pvc dev-pvc-0001 -n asfk8s-25398156-development

Name:          dev-pvc-0001
Namespace:     asfk8s-25398156-development
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: asfk8s
               meta.helm.sh/release-namespace: asfk8s-25398156-development
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       asfk8s-server-6c6bc89c7b-hn44d
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  WaitForFirstConsumer  31m (x2 over 31m)    persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  WaitForPodScheduled   30m                  persistentvolume-controller  waiting for pod asfk8s-server-6c6bc89c7b-hn44d to be scheduled
  Normal  WaitForPodScheduled   12s (x121 over 30m)  persistentvolume-controller  waiting for pod asfk8s-server-6c6bc89c7b-hn44d to be scheduled

kubectl describe pod asfk8s-server-6c6bc89c7b-hn44d -n asfk8s-25398156-development

Name:           asfk8s-server-6c6bc89c7b-hn44d
Namespace:      asfk8s-25398156-development
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=asfk8s
                app.kubernetes.io/name=server
                pod-template-hash=6c6bc89c7b
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/asfk8s-server-6c6bc89c7b
Containers:
  server:
    Image:        registry.gitlab.com/asfweb/asfk8s/server:3751bf19e3f495ac804ae91f5ad417829202261d
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /data/db from server-pvc-0001 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lh7dl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  server-pvc-0001:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-pvc-0001
    ReadOnly:   false
  default-token-lh7dl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lh7dl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  33m   default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  32m   default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

Can somebody please help me to fix this problem? Thanks in advance.


Solution

  • The issue is the node affinity rule you are using while creating the PV.

    Think of it as telling Kubernetes "my disk is attached to this specific node". Because of that affinity rule, your disk (the PV) is tied to that one node only.

    When you deploy the workload (the Deployment's pod), nothing schedules it on that specific node, so the pod does not get that PV/PVC and stays unscheduled.

    In simple words:

    if you add a node affinity rule to the PV, add a matching constraint to the Deployment as well, so the pod and the volume end up on the same node.

    To resolve this issue, either:

    add the node affinity (or a nodeSelector) to the Deployment as well, so the pod gets scheduled on that node (a sketch of this is shown below),

    or else

    remove the node affinity rule from the PV and create a new PV and PVC without it, and use those instead (a sketch of this is at the end of this answer).

    Here is the place where you have set the node affinity rule:

    nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - asfweb
    

    I can see there is no such rule in your Deployment, so your pod can be scheduled anywhere in the cluster.
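
    For the first option, here is a minimal sketch of what that constraint could look like in your Deployment's pod template, assuming the node is named asfweb as in your PV's affinity rule (only the scheduling part is new; the container and volume sections stay as you already have them):

    # Hypothetical excerpt of the Deployment spec: a nodeSelector pins the pod
    # to the node that the local PV lives on.
    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/hostname: asfweb   # must match the value in the PV's nodeAffinity
          containers:
            - name: server
              image: "registry.gitlab.com/asfweb/asfk8s/server:latest"
              volumeMounts:
                - mountPath: /data/db
                  name: server-pvc-0001
          volumes:
            - name: server-pvc-0001
              persistentVolumeClaim:
                claimName: dev-pvc-0001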

    Here is a simple example of creating a PV and a PVC and using them for a MySQL database: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
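
    For the second option, here is a minimal sketch following the pattern of that linked example: a hostPath-backed PV (as the example uses; unlike a local: volume it does not need a node affinity rule) plus a PVC whose access mode matches the PV. The names and values below are illustrative only:

    # Hypothetical PV/PVC pair with no node affinity, modelled on the linked
    # MySQL example; adjust names, path, storage class and size to your setup.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: dev-pv-0002
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      hostPath:
        path: /data/dev
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: dev-pvc-0002
    spec:
      storageClassName: local-storage
      accessModes:
        - ReadWriteOnce   # must match an access mode offered by the PV
      resources:
        requests:
          storage: 10Gi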