Tags: kubernetes, persistent-volumes

Can't mount local host path in local kind cluster


Below is my Kubernetes file, and I need to do two things:

  1. mount a folder with a file
  2. mount a file with a startup script

I have both files in my local /tmp/zoo folder, but the files from my zoo folder never appear in /bitnami/zookeeper inside the pod.

Below are the updated Service, Deployment, PVC and PV.

kubernetes.yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
    name: zookeeper
  spec:
    ports:
    - name: "2181"
      port: 2181
      targetPort: 2181
    selector:
      io.kompose.service: zookeeper
    type: NodePort
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    name: zookeeper
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: zookeeper
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: zookeeper
      spec:
        containers:
        - image: bitnami/zookeeper:3
          name: zookeeper
          ports:
          - containerPort: 2181
          env:
          - name: ALLOW_ANONYMOUS_LOGIN
            value: "yes"
          resources: {}
          volumeMounts:
          - mountPath: /bitnami/zoo
            name: bitnamidockerzookeeper-zookeeper-data
        restartPolicy: Always
        volumes:
        - name: bitnamidockerzookeeper-zookeeper-data
          #hostPath:
            #path: /tmp/tmp1
          persistentVolumeClaim:
            claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}

- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: bitnamidockerzookeeper-zookeeper-data
      type: local
    name: bitnamidockerzookeeper-zookeeper-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: foo
  spec:
    storageClassName: manual
    claimRef:
      name: bitnamidockerzookeeper-zookeeper-data
    capacity:
      storage: 100Mi
    accessModes:
      - ReadWriteMany
    hostPath:
      path: /tmp/tmp1
  status: {}
kind: List
metadata: {}


Solution

  • There are a few potential issues in your YAML.

    First, the accessModes of the PersistentVolume don't match those of the PersistentVolumeClaim. One way to fix that is to list both ReadWriteMany and ReadWriteOnce in the accessModes of the PersistentVolume.
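
    A quick way to see whether the claim actually bound to the volume you declared (the resource names are taken from your manifest) is:

    kubectl get pv foo
    kubectl get pvc bitnamidockerzookeeper-zookeeper-data

    If the claim shows Pending, or Bound to a dynamically provisioned volume instead of foo, this mismatch (or the storageClassName issue described next) is the likely cause.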

    Then, the PersistentVolumeClaim doesn't specify a storageClassName. As a result, if a StorageClass is configured as the default on your cluster (you can see that with kubectl get sc), it will automatically provision a PersistentVolume dynamically instead of binding to the PersistentVolume that you declared. So you need to specify a storageClassName on the claim, matching the one on the PersistentVolume (manual here). The StorageClass doesn't have to exist for real, since we're using static provisioning rather than dynamic provisioning anyway.
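
    For instance, a KinD cluster usually ships with a default StorageClass named standard (backed by a local-path provisioner), so a claim with no storageClassName gets dynamically provisioned from it. You can check the default class, and which class the claim actually ended up with, like this:

    kubectl get sc
    kubectl get pvc bitnamidockerzookeeper-zookeeper-data -o jsonpath='{.spec.storageClassName}{"\n"}'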

    Next, the claimRef in the PersistentVolume needs to mention the Namespace of the PersistentVolumeClaim. As a reminder: PersistentVolumes are cluster resources, so they don't have a Namespace; but PersistentVolumeClaims belong to the same Namespace as the Pod that mounts them.
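
    After applying the manifest below, you can confirm that the claimRef points at the claim in the right Namespace (tmp-tmp1 is the PersistentVolume name used in the updated YAML):

    kubectl get pv tmp-tmp1 -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'

    This should print default/bitnamidockerzookeeper-zookeeper-data, assuming the claim lives in the default Namespace.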

    Another thing is that the path used for Zookeeper data in the bitnami image is /bitnami/zookeeper, not /bitnami/zoo.
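
    Once the Pod is up, a quick sanity check that the data really lands under that path (kubectl exec deploy/zookeeper targets one Pod of the Deployment):

    kubectl exec deploy/zookeeper -- ls -la /bitnami/zookeeper

    The same files should then appear under /tmp/tmp1 on the node that runs the Pod.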

    You will also need to initialize permissions in that volume: by default only root can write to it, and Zookeeper runs as a non-root user here, so it won't be able to write its data subdirectory.
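
    You can check which user the container runs as, and whether the init container in the YAML below did its job (this assumes id and ls are available in the image, which they normally are in the Debian-based bitnami images):

    kubectl exec deploy/zookeeper -- id
    kubectl exec deploy/zookeeper -- ls -ld /bitnami/zookeeper

    The second command should show mode drwxrwxrwx after the chmod 777 performed by the init container.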

    Here is an updated YAML that addresses all these points. I also rewrote it to use the YAML multi-document syntax (resources separated by ---) instead of the kind: List syntax, and removed a lot of fields that weren't used (like the empty status: fields and the labels that weren't strictly necessary). It works on my KinD cluster; I hope it will also work in your situation.

    If your cluster has only one node, this will work fine. If you have multiple nodes, you might need to tweak things a little to make sure the volume is bound to a specific node: I added a commented-out nodeAffinity section in the YAML, but you might also have to change the volume binding mode (I only have a one-node cluster to test on right now). The Kubernetes documentation and blog have abundant details on this, and https://stackoverflow.com/a/69517576/580281 also covers this binding-mode aspect.
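
    If you do uncomment the nodeAffinity section, you can list the hostname labels to put in its values with:

    kubectl get nodes -L kubernetes.io/hostname

    On a default single-node KinD cluster the only node is typically called kind-control-plane, hence that value in the commented-out example.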

    One last thing: in this scenario, I think it might make more sense to use a StatefulSet. It would not make a huge difference, but it would more clearly indicate intent (Zookeeper is a stateful service), and in the general case (beyond local hostPath volumes) it would avoid having two Zookeeper Pods accessing the volume simultaneously. A minimal sketch of that variant is included after the full manifest below.

    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper
    spec:
      ports:
      - name: "2181"
        port: 2181
        targetPort: 2181
      selector:
        io.kompose.service: zookeeper
      type: NodePort
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: zookeeper
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: zookeeper
      template:
        metadata:
          labels:
            io.kompose.service: zookeeper
        spec:
          initContainers:
          - image: alpine
            name: chmod
            volumeMounts:
            - mountPath: /bitnami/zookeeper
              name: bitnamidockerzookeeper-zookeeper-data
            command: [ sh, -c, "chmod 777 /bitnami/zookeeper" ]
          containers:
          - image: bitnami/zookeeper:3
            name: zookeeper
            ports:
            - containerPort: 2181
            env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            volumeMounts:
            - mountPath: /bitnami/zookeeper
              name: bitnamidockerzookeeper-zookeeper-data
          volumes:
          - name: bitnamidockerzookeeper-zookeeper-data
            persistentVolumeClaim:
              claimName: bitnamidockerzookeeper-zookeeper-data
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: bitnamidockerzookeeper-zookeeper-data
    spec:
      storageClassName: manual
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: tmp-tmp1
    spec:
      storageClassName: manual
      claimRef:
        name: bitnamidockerzookeeper-zookeeper-data
        namespace: default
      capacity:
        storage: 100Mi
      accessModes:
        - ReadWriteMany
        - ReadWriteOnce
      hostPath:
        path: /tmp/tmp1
      #nodeAffinity:
      #  required:
      #    nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: kubernetes.io/hostname
      #          operator: In
      #          values:
      #          - kind-control-plane
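
    For reference, here is a minimal sketch of the StatefulSet variant mentioned above. It deliberately keeps the same PersistentVolumeClaim, PersistentVolume and Service as the manifests above (instead of switching to volumeClaimTemplates), and it reuses the existing zookeeper Service as serviceName, where a dedicated headless Service would normally be used. I haven't tested this variant, so treat it as a starting point rather than a finished manifest.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: zookeeper
    spec:
      serviceName: zookeeper   # normally a dedicated headless Service
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: zookeeper
      template:
        metadata:
          labels:
            io.kompose.service: zookeeper
        spec:
          initContainers:
          - image: alpine
            name: chmod
            volumeMounts:
            - mountPath: /bitnami/zookeeper
              name: bitnamidockerzookeeper-zookeeper-data
            command: [ sh, -c, "chmod 777 /bitnami/zookeeper" ]
          containers:
          - image: bitnami/zookeeper:3
            name: zookeeper
            ports:
            - containerPort: 2181
            env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            volumeMounts:
            - mountPath: /bitnami/zookeeper
              name: bitnamidockerzookeeper-zookeeper-data
          volumes:
          - name: bitnamidockerzookeeper-zookeeper-data
            persistentVolumeClaim:
              claimName: bitnamidockerzookeeper-zookeeper-data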