
hadoop setfacl --set not working


I am using Hadoop 2.6.0 secured with Kerberos, and I am trying to set an ACL on a directory with the command below.

Command

hadoop fs -setfacl --set user::rwx,user:user1:---,group::rwx,other::rwx /test1

It fails with the message "Too many arguments":

Error

-setfacl: Too many arguments
Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]

I am sure the command syntax is correct; moreover, the same ACL spec works fine when applied through the REST API.

How can I fix this?


Solution

  • The syntax of the HDFS setfacl command is correct. However, if you are running it from Windows cmd.exe, you may need to wrap command-line parameters in quotes when they contain any of cmd.exe's argument delimiters: space, comma, semicolon, and the equal sign. An ACL spec contains commas, so it must be quoted; otherwise cmd.exe splits it into multiple arguments before the Hadoop code is ever invoked, which is exactly why you see the "Too many arguments" error. When I ran the quoted form on Windows, it worked:

    hadoop fs -setfacl --set "user::rwx,user:user1:---,group::rwx,other::rwx" /test1
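To make the splitting behavior concrete, here is a small sketch that simulates cmd.exe-style tokenization on the delimiters named above (space, comma, semicolon, equal sign). The `cmd_split` helper is hypothetical, a simplified model rather than cmd.exe's actual parser, but it shows why the unquoted ACL spec arrives at Hadoop as several extra arguments while the quoted one stays intact:

```python
import re

# Hypothetical, simplified model of cmd.exe argument splitting:
# space, comma, semicolon, and '=' act as delimiters, and a
# double-quoted token is kept as a single argument.
def cmd_split(raw: str) -> list[str]:
    tokens = []
    for match in re.finditer(r'"([^"]*)"|([^ ,;=]+)', raw):
        quoted, bare = match.group(1), match.group(2)
        tokens.append(quoted if quoted is not None else bare)
    return tokens

acl = "user::rwx,user:user1:---,group::rwx,other::rwx"

unquoted = cmd_split(f"hadoop fs -setfacl --set {acl} /test1")
quoted = cmd_split(f'hadoop fs -setfacl --set "{acl}" /test1')

# The commas inside the unquoted ACL spec break it into four
# separate arguments, so -setfacl sees too many arguments.
print(unquoted)
# The quoted ACL spec survives as one argument, as -setfacl expects.
print(quoted)
```

The Linux shells do not treat commas as word separators, which is why the same unquoted command works there and the problem only shows up under cmd.exe.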