kubernetes, kubernetes-helm, amazon-eks, amazon-efs

How to allow a non-root user to write to a mounted EFS in EKS


I am having trouble configuring a statically provisioned EFS such that multiple pods, which run as a non-root user, can read and write the file system.

I am using the AWS EFS CSI Driver. My version info is as follows:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.18", GitCommit:"6f6ce59dc8fefde25a3ba0ef0047f4ec6662ef24", GitTreeState:"clean", BuildDate:"2021-04-15T03:31:30Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:53:22Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

I followed the multiple-pods example from the GitHub repo (https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods), updating the volumeHandle appropriately. The busybox containers defined in the example specs are able to read and write the file system, but when I add the same PVC to a pod that does not run as the root user, that pod is unable to write to the mounted EFS. I have tried a couple of other things to get this working as I expected, including adding the pv.beta.kubernetes.io/gid annotation to the PersistentVolume and setting runAsUser, runAsGroup, and fsGroup in the pod's securityContext (both visible in the manifests below).

None of these configurations allowed a non-root user to write to the mounted EFS. What am I missing in terms of configuring a statically provisioned EFS so that multiple pods, all of which run as a non-root user, can read and write in the mounted EFS?

For reference here are the pod definitions:

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
---
apiVersion: v1
kind: Pod
metadata:
  name: app2
spec:
  containers:
  - name: app2
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out2.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
---
apiVersion: v1
kind: Pod
metadata:
  name: app3
spec:
  containers:
  - name: app3
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out3.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  securityContext:
    runAsUser: 1000
    runAsGroup: 1337
    fsGroup: 1337
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

And the SC/PVC/PV:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
  annotations:
    pv.beta.kubernetes.io/gid: {{ .Values.groupId | quote }}
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-asdf123

Solution

  • I worked out two ways of resolving this and thought I should post them in case someone else runs into the same problem.

    The first, and probably better, way is to use a dynamically provisioned EFS PersistentVolume. This approach creates an access point in EFS that is shared by all containers that use the PersistentVolumeClaim.

    Here is an example of the StorageClass, PersistentVolumeClaim, and a pod that utilizes the PVC.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap
      fileSystemId:  {{ .Values.efsVolumeHandle }}
      directoryPerms: "775"
    reclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-claim
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: efs-sc
      resources:
        requests:
          storage: 5Gi  # Not actually used - see https://aws.amazon.com/blogs/containers/introducing-efs-csi-dynamic-provisioning/
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app3
    spec:
      containers:
      - name: app3
        image: busybox
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo $(date -u) >> /data/out3.txt; sleep 5; done"]
        volumeMounts:
        - name: persistent-storage
          mountPath: /data
      securityContext:
        runAsUser: 1000
        runAsGroup: 1337
        fsGroup: 1337
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: efs-claim
    

    Note the directoryPerms (775) specified in the StorageClass, as well as the runAsGroup and fsGroup specified in the Pod. When using this PVC in a Pod that runs as a non-root user, a shared group ID is the key.

    runAsUser is only specified here to ensure the busybox container does not run as root.
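
    A quick way to sanity-check this (a sketch, assuming the app3 pod above is running) is to inspect the identity and the mount from inside the container and attempt a write:

        kubectl exec app3 -- sh -c 'id; ls -ld /data; echo test >> /data/out3.txt'

    id should report group 1337, ls -ld /data should show the group-writable mode set by directoryPerms (drwxrwxr-x), and the write should succeed without a permission error.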


    The second method is what I worked out initially; it is probably the "nuclear" option, but it does work for statically provisioned EFS.

    You can use an initContainer to ensure the necessary ownership and permissions are set on the mounted EFS volume before the application containers start. I have omitted the rest of the pod definition for the sake of brevity.

          initContainers:
          # Runs as root before the app containers start and hands the
          # EFS root directory over to the shared group.
          - name: fs-permission-update
            image: busybox
            command:
            - chown
            - "root:{{ .Values.groupId }}"
            - "/efs-fs"
            volumeMounts:
            - mountPath: /efs-fs
              name: efs-storage
    

    Again, make sure any Pod that mounts the volume and runs as a non-root user sets fsGroup and runAsGroup so that its user belongs to the allowed group. If the directory mode does not already permit group writes (a fresh EFS file system root defaults to 755), the initContainer may also need to chmod the directory, for example to 775.
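
    For example, reusing the same groupId Helm value that the initContainer chowns to, the consuming pod's securityContext might look like this (a sketch, not tied to any particular pod):

        securityContext:
          runAsUser: 1000                    # any non-root UID
          runAsGroup: {{ .Values.groupId }}  # must match the group set by chown
          fsGroup: {{ .Values.groupId }}     # applied to the volume as the supplemental group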


    In summary: probably don't use statically provisioned EFS; use dynamically provisioned EFS instead. Note that this is specific to the EFS CSI driver for Kubernetes. Check out the aws-efs-csi-driver GitHub repository for more examples and additional details.