Tags: amazon-web-services, amazon-ec2, docker, amazon-elastic-beanstalk, nfs

Docker nfs4 mount on Elastic Beanstalk


I am stuck accessing an NFSv4 share from inside a Docker container running on Elastic Beanstalk.

Netshare is up and running on the EC2 instance that runs the Docker container. Mounting the NFS share manually on the instance works; I can access the share from the EC2 instance without problems.

However, when I run a container and try to mount an NFSv4 volume, the files do not appear inside the container.

Here is what I do. First, I start the netshare daemon on the Docker host:

sudo ./docker-volume-netshare nfs
INFO[0000] == docker-volume-netshare :: Version: 0.18 - Built: 2016-05-27T20:14:07-07:00 == 
INFO[0000] Starting NFS Version 4 :: options: '' 

Then, on the Docker host, I start the Docker container, using -v to create a volume that mounts the NFSv4 share:

sudo docker run --volume-driver=nfs -v ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates:/home/ec2-user/xxx -ti aws_beanstalk/current-app /bin/bash
root@0a0c3de8a97e:/usr/src/app#

That worked, according to the netshare daemon:

INFO[0353] Mounting NFS volume ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates on /var/lib/docker-volumes/netshare/nfs/ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates 

So I list the contents of /home/ec2-user/xxx inside the newly launched container, but it is empty:

root@0a0c3de8a97e:/usr/src/app# ls /home/ec2-user/xxx/
root@0a0c3de8a97e:/usr/src/app# 

Strangely enough, the NFS volume has been mounted correctly on the host:

[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo ls -lh /var/lib/docker-volumes/netshare/nfs/ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com/home/ec2-user/nfs-share/templates | head -3
total 924K
drwxr-xr-x 5 ec2-user ec2-user 4.0K Dec 29 14:12 file1
drwxr-xr-x 4 ec2-user ec2-user 4.0K May  9 17:20 file2

Could this be a permissions problem? Both the NFS server and the client use the ec2-user user/group, while the Docker container runs as root.
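One way to probe the permissions theory: NFSv4 with sec=sys (the common default) compares numeric uid/gid values, not user names, so "ec2-user" must resolve to the same numbers on the server, the host, and inside the container. A quick check to run on each machine (a generic sketch, nothing here is specific to this setup):

```shell
# Print the numeric uid/gid of the current user; run this on the NFS
# server, on the Docker host, and inside the container, then compare.
printf 'uid=%s gid=%s\n' "$(id -u)" "$(id -g)"
```

In the privileged-mount listing further below, the entries show owner 500:500, i.e. the server's numeric ec2-user IDs; a root process can still read them because the directories are world-readable (drwxr-xr-x).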

What am I missing?

UPDATE

If I start the container in --privileged mode, mounting the NFS share directly inside the container works:

sudo docker run --privileged -it aws_beanstalk/current-app /bin/bash
mount -t nfs4 ec2-xxxx-xxxx-xxxx-xxxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates /mnt/
ls -lh /mnt | head -3
total 924K
drwxr-xr-x 5 500 500 4.0K Dec 29 14:12 file1
drwxr-xr-x 4 500 500 4.0K May  9 17:20 file2

Unfortunately, this does not solve the problem, because Elastic Beanstalk does not allow privileged containers (unlike ECS).

UPDATE 2

Here's another workaround:

  1. mount the NFS share on the host into /target
  2. restart Docker on the host
  3. run the container: docker run -it -v /target:/mnt image /bin/bash

/mnt is now populated as expected.
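The steps above can be sketched as a short script. This is a non-runnable sketch: the hostname, export path, and image name are the placeholders from the question, and it assumes a host with Docker and an NFS client installed.

```shell
# Sketch of the UPDATE 2 workaround -- adapt hostname, path, and image.
sudo mkdir -p /target

# 1. Mount the NFS share on the host.
sudo mount -t nfs4 ec2-xxx-xxx-xxx-xxx.us-west-2.compute.amazonaws.com:/home/ec2-user/nfs-share/templates /target

# 2. Restart Docker so containers started afterwards see the mount.
sudo service docker restart

# 3. Bind-mount the already-mounted host directory into the container.
sudo docker run -it -v /target:/mnt aws_beanstalk/current-app /bin/bash
```

The key point is the Docker restart in step 2: a bind mount of a directory that was NFS-mounted before the daemon (re)started propagates into containers, which is why the plain -v bind mount works here where the volume driver did not.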


Solution

  • @sebastian's "UPDATE 2" got me on the right track (thanks @sebastian).

    But for others who may reach this question via Google like I did, here's exactly how I was able to automatically mount an EFS (NFSv4) file system on Elastic Beanstalk and make it available to containers.

    Add this .config file (EFS_FILE_SYSTEM_ID and AWS_REGION are placeholders for your EFS file-system ID and AWS region):

    # .ebextensions/01-efs-mount.config
    commands:
      01umount:
        command: umount /mnt/efs
        ignoreErrors: true
      02mkdir:
        command: mkdir /mnt/efs
        ignoreErrors: true
      03mount:
        command: mount -t nfs4 -o vers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).EFS_FILE_SYSTEM_ID.efs.AWS_REGION.amazonaws.com:/ /mnt/efs
      04restart-docker:
        command: service docker stop && service docker start
      05restart-ecs:
        command: docker start ecs-agent
    
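    The 03mount command builds the EFS mount-target hostname from the instance's availability zone (queried from the EC2 metadata endpoint) plus the file-system ID and region. With hypothetical placeholder values, the string it assembles looks like this:

```shell
# Assemble the per-AZ EFS hostname the .ebextensions command constructs.
# All three values below are hypothetical placeholders.
AZ="us-west-2a"        # from http://169.254.169.254/latest/meta-data/placement/availability-zone
FS_ID="fs-12345678"    # your EFS_FILE_SYSTEM_ID
REGION="us-west-2"     # your AWS_REGION
echo "${AZ}.${FS_ID}.efs.${REGION}.amazonaws.com"
# → us-west-2a.fs-12345678.efs.us-west-2.amazonaws.com
```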

    Then eb deploy. After the deploy finishes, SSH to your EB EC2 instance and verify that it worked:

    ssh ec2-user@YOUR_INSTANCE_IP
    ls -la /mnt/efs
    

    You should see the files in your EFS filesystem. However, you still need to verify that the mount is readable and writable within containers.

    sudo docker run -v /mnt/efs:/nfs debian:jessie ls -la /nfs
    

    You should see the same file list.

    sudo docker run -v /mnt/efs:/nfs debian:jessie touch /nfs/hello
    sudo docker run -v /mnt/efs:/nfs debian:jessie ls -la /nfs
    

    You should see the file list plus the new hello file.

    ls -la /mnt/efs
    

    You should see the hello file outside of the container as well.

    Finally, here's the equivalent of -v /mnt/efs:/nfs in your Dockerrun.aws.json:

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "image": "AWS_ID.dkr.ecr.AWS_REGION.amazonaws.com/myimage:latest",
          "memory": 128,
          "mountPoints": [
            {
              "containerPath": "/nfs",
              "sourceVolume": "efs"
            }
          ],
          "name": "myimage"
        }
      ],
      "volumes": [
        {
          "host": {
            "sourcePath": "/mnt/efs"
          },
          "name": "efs"
        }
      ]
    }
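
    If your environment runs the single-container Docker platform instead, Dockerrun version 1 expresses the same host-to-container mapping with a Volumes array. A hedged sketch (the image name reuses the placeholder from above):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "AWS_ID.dkr.ecr.AWS_REGION.amazonaws.com/myimage:latest",
    "Update": "true"
  },
  "Volumes": [
    {
      "HostDirectory": "/mnt/efs",
      "ContainerDirectory": "/nfs"
    }
  ]
}
```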