I'm trying to mount my HDFS using the NFS gateway as it is documented here: http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
Unfortunately, following the documentation step by step does not work for me (Hadoop 2.7.1 on CentOS 6.6). When executing the mount command I receive the following error message:
[root@server1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync server1:/ /hdfsmount/
mount.nfs: mounting server1:/ failed, reason given by server: No such file or directory
I created the folder /hdfsmount beforehand, so it definitely exists. Any help is highly appreciated!
I found the problem deep in the logs: when executing the command (see below) to start the nfs3 component of HDFS, the executing user needs permission to delete /tmp/.hdfs-nfs, which is configured as nfs.dump.dir in core-site.xml.
If the permissions are not set, you'll receive a log message like:
15/08/12 01:19:56 WARN fs.FileUtil: Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
Exception in thread "main" java.io.IOException: Cannot remove current dump directory: /tmp/.hdfs-nfs
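A quick sanity check before starting nfs3 is to verify that the user can actually remove the dump directory. A minimal sketch (the path below is a demo placeholder; on a real system you would point it at the actual nfs.dump.dir value, /tmp/.hdfs-nfs by default):

```shell
#!/bin/sh
# Sketch: check whether the current user can remove the dump directory.
# DUMP_DIR is a demo path here; substitute your configured nfs.dump.dir.
DUMP_DIR="${TMPDIR:-/tmp}/.hdfs-nfs-demo"

mkdir -p "$DUMP_DIR"

if rm -rf "$DUMP_DIR" 2>/dev/null && [ ! -d "$DUMP_DIR" ]; then
    echo "dump dir is removable: nfs3 should be able to recreate it"
else
    echo "cannot remove $DUMP_DIR: fix its ownership before starting nfs3"
fi
```

If the check fails, fixing the directory's ownership (e.g. with chown) or starting nfs3 as a user who owns it resolves the IOException above.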
Another option is to simply start the nfs3 component as root:
[root]> /usr/local/hadoop/sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3