Tags: hadoop, hdfs, admin, bigdata

hadoop 2.6.2 , mkdir : Couldn't create proxy provider null


I am not able to create a new file or directory, nor to list the existing files and directories.

I am using the commands below; could you please suggest what is wrong?

 hduser@c:/usr/local/hadoop$ jps
8546 ResourceManager
9181 Jps
1503 NameNode
8674 NodeManager
4398 DataNode
hduser@c:/usr/local/hadoop$ bin/hadoop fs -ls /
ls: Couldn't create proxy provider null
hduser@c:/usr/local/hadoop$ bin/hadoop fs -mkdir /books
mkdir: Couldn't create proxy provider null
hduser@c:/usr/local/hadoop$

Below is the hdfs-site.xml I am using:

<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>

<property>
<name>dfs.replication</name>
<value>2</value>
<description>to specify replication</description>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>file:/h3iHA/name</value>
<final>true</final>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>file:/h3iHA/data2</value>
<final>true</final>
</property>

<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>c:9000</value>
</property>

<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>a:9000</value>
</property>

<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>c:50070</value>
</property>

<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>a:50070</value>
</property>

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>file:///mnt/filer</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.configuredFailoverProxyProvider</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hduser/.ssh/id_rsa</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
       shell(/bin/true)
</value>
</property>
</configuration>

Below is core-site.xml, which is the same on both nodes:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
</configuration>

Solution

  • The Java class name set for the property dfs.client.failover.proxy.provider.mycluster is incorrect: it is ConfiguredFailoverProxyProvider, not configuredFailoverProxyProvider. Java class names are case-sensitive, so the lowercase first letter makes the class lookup fail, and the HDFS client surfaces this as "Couldn't create proxy provider null".

    Edit the value of this property in hdfs-site.xml:

    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
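
    The failure mode can be reproduced outside Hadoop: the HDFS client instantiates the proxy provider via reflection (a Class.forName-style lookup on the configured value), and that lookup is case-sensitive. The sketch below illustrates this with java.util.ArrayList rather than the Hadoop class, so it runs without a cluster; the class names used are only stand-ins for the typo in the question.

    ```java
    // Demonstrates why the lowercase class name in the question fails:
    // Java class lookups by name are case-sensitive. java.util.ArrayList
    // stands in for ConfiguredFailoverProxyProvider so this runs standalone.
    public class ProxyProviderLookupDemo {

        // Try to load a class by name, mimicking what the HDFS client does
        // with the dfs.client.failover.proxy.provider.* value.
        static String tryLoad(String className) {
            try {
                Class.forName(className);
                return "loaded " + className;
            } catch (ClassNotFoundException e) {
                return "ClassNotFoundException: " + className;
            }
        }

        public static void main(String[] args) {
            // Wrong case on the first letter, like configuredFailoverProxyProvider
            System.out.println(tryLoad("java.util.arrayList"));
            // Correct case, like ConfiguredFailoverProxyProvider
            System.out.println(tryLoad("java.util.ArrayList"));
        }
    }
    ```

    Since dfs.client.failover.proxy.provider.mycluster is a client-side setting, rerunning bin/hadoop fs -ls / after editing hdfs-site.xml should succeed, assuming the NameNodes themselves are healthy.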