Tags: docker, kubernetes, permission-denied, nfs, persistent-volumes

Kubernetes NFS persistent volumes permission denied


I have an application running in a pod in Kubernetes, and I would like to store its output log files on a persistent storage volume.

In order to do that, I created a volume on the NFS server and bound it to the pod through the related volume claim. When I try to write to or access the shared folder, I get a "permission denied" message, since the NFS share is apparently read-only.

The following is the JSON file I used to create the volume:

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "task-pv-test"
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "nfs": {
      "server": <IPAddress>,
      "path": "/export"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "standard"
  }
}

The following is the pod configuration file:

kind: Pod
apiVersion: v1
metadata:
  name: volume-test
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: volume-test
      image: <ImageName>
      volumeMounts:
        - mountPath: /home
          name: task-pv-test-storage
          readOnly: false

Is there a way to change permissions?


UPDATE

Here are the PVC and the NFS provisioner configs:

PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-test-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

NFS CONFIG

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nfs-client-provisioner-557b575fbc-hkzfp",
    "generateName": "nfs-client-provisioner-557b575fbc-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
    "uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
    "resourceVersion": "27228",
    "creationTimestamp": "2018-04-17T12:26:35Z",
    "labels": {
      "app": "nfs-client-provisioner",
      "pod-template-hash": "1136131967"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "ReplicaSet",
        "name": "nfs-client-provisioner-557b575fbc",
        "uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "nfs-client-root",
        "nfs": {
          "server": <IPAddress>,
          "path": "/Kubernetes"
        }
      },
      {
        "name": "nfs-client-provisioner-token-fdd2c",
        "secret": {
          "secretName": "nfs-client-provisioner-token-fdd2c",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "nfs-client-provisioner",
        "image": "quay.io/external_storage/nfs-client-provisioner:latest",
        "env": [
          {
            "name": "PROVISIONER_NAME",
            "value": "<IPAddress>/Kubernetes"
          },
          {
            "name": "NFS_SERVER",
            "value": <IPAddress>
          },
          {
            "name": "NFS_PATH",
            "value": "/Kubernetes"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "nfs-client-root",
            "mountPath": "/persistentvolumes"
          },
          {
            "name": "nfs-client-provisioner-token-fdd2c",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "nfs-client-provisioner",
    "serviceAccount": "nfs-client-provisioner",
    "nodeName": "det-vkube-s02",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ]
  },
  "status": {
    "phase": "Running",
    "hostIP": <IPAddress>,
    "podIP": "<IPAddress>,
    "startTime": "2018-04-17T12:26:35Z",
    "qosClass": "BestEffort"
  }
}

I have just removed some status information from the NFS config to make it shorter.


Solution

  • If you set the proper securityContext in the pod configuration, you can make sure the volume is mounted with the proper permissions.

    Example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      securityContext:
        fsGroup: 2000
      volumes:
        - name: task-pv-test-storage
          persistentVolumeClaim:
            claimName: task-pv-test-claim
      containers:
      - name: demo
        image: example-image
        volumeMounts:
        - name: task-pv-test-storage
          mountPath: /data/demo
    

    In the above example, the storage will be mounted at /data/demo with group ID 2000, which is set by fsGroup. With fsGroup set, all processes of the container will also be part of supplementary group ID 2000, so you should have access to the mounted files.
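
    If the files on the NFS export are owned by a specific non-root user, you can additionally pin the container's UID/GID by setting runAsUser and runAsGroup next to fsGroup. A minimal sketch (1000/2000 are placeholder IDs; adjust them to the ownership on your export):

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-nonroot
    spec:
      securityContext:
        runAsUser: 1000   # placeholder UID, should match the file owner on the export
        runAsGroup: 2000  # placeholder GID
        fsGroup: 2000     # supplementary group applied to all container processes
      volumes:
        - name: task-pv-test-storage
          persistentVolumeClaim:
            claimName: task-pv-test-claim
      containers:
      - name: demo
        image: example-image
        volumeMounts:
        - name: task-pv-test-storage
          mountPath: /data/demo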

    You can read more about pod security context here: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
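
  • If you rely on the nfs-client provisioner for dynamic provisioning instead of the manually created PV, note that it only serves claims whose StorageClass points at its PROVISIONER_NAME. A minimal sketch of such a class, assuming the provisioner name from your config:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
    provisioner: <IPAddress>/Kubernetes  # must match the PROVISIONER_NAME env of the provisioner pod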