I have read other posts about HDFS configuration problems with Hadoop, but none of them were helpful, so I am posting my question. I followed this tutorial for Hadoop v1.2.1. When I run the hadoop fs -ls command I get this error:
16/08/29 15:20:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
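This retry log means the client could not open a TCP connection to localhost:9000, i.e. no NameNode is accepting connections there. A quick probe, assuming bash is the shell (it uses bash's built-in /dev/tcp redirection, so no netcat or telnet is needed):

```shell
# Probe localhost:9000; in bash, opening /dev/tcp/HOST/PORT attempts a
# TCP connection. The subshell's exit status tells us whether it worked.
listening=0
if (exec 3<>/dev/tcp/localhost/9000) 2>/dev/null; then
  listening=1
fi
if [ "$listening" -eq 1 ]; then
  echo "something is listening on localhost:9000"
else
  echo "connection refused: the NameNode is not up on port 9000"
fi
```

If the probe reports "connection refused", the configuration files are not the immediate problem; the NameNode process itself is not running.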
My core-site.xml file is:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/miczfs/hadoop/tmp/${user.name}</value>
  </property>
</configuration>
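The fs.default.name value is exactly what the failing client uses, so it is worth double-checking which host:port it encodes. A small self-contained sketch (the variable below just mirrors the core-site.xml from the question):

```shell
# Extract the host:port the HDFS client will dial from fs.default.name.
core_site='<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>'

# Pull out what sits between <value>hdfs:// and </value>.
uri=$(printf '%s\n' "$core_site" | sed -n 's@.*<value>hdfs://\([^<]*\)</value>.*@\1@p')
echo "client will connect to: $uri"
```

So the retry message in the question (localhost/127.0.0.1:9000) is consistent with this config; the client is dialing the right address, it just finds nobody there.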
Also, my hdfs-site.xml file is as follows:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/mnt/miczfs/hadoop/hdfs/${user.name}/namenode</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>localhost:0</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/mnt/miczfs/hadoop/hdfs/${user.name}/datanode</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>localhost:0</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:0</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>localhost:0</value>
  </property>
</configuration>
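One thing to verify here is where ${user.name} actually points: Hadoop substitutes the Java system property user.name, which for daemons started from a shell normally equals the login user. A sketch of the resolved paths, assuming the daemons run as the current shell user:

```shell
# Resolve the ${user.name} placeholder the way Hadoop would for a daemon
# started by the current user, and print the concrete directories.
u=$(whoami)
namedir="/mnt/miczfs/hadoop/hdfs/${u}/namenode"
datadir="/mnt/miczfs/hadoop/hdfs/${u}/datanode"
echo "dfs.name.dir -> $namedir"
echo "dfs.data.dir -> $datadir"
```

If the NameNode was formatted as one user but started as another, these paths differ and the NameNode will fail to come up, producing exactly the connection-retry symptom in the question.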
and my /etc/hosts file is this:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.1.1 micrasrelmond.local micrasrelmond #Generated-by-micctrl
172.31.1.1 mic0.local mic0 #Generated-by-micctrl
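To rule out a name-resolution problem, one quick check is that "localhost" maps to the loopback address the NameNode binds to. A self-contained sketch (the variable below just mirrors the /etc/hosts entries above):

```shell
# Find the first IPv4 loopback line that names "localhost".
hosts='127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6'
loopback=$(printf '%s\n' "$hosts" | awk '$2 == "localhost" && $1 ~ /^127\./ { print $1; exit }')
echo "localhost -> $loopback"
```

Here localhost resolves to 127.0.0.1 as expected, so the hosts file is not the cause either.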
If possible, please help me. Thanks.
First, check whether the NameNode is running by using the jps command. If it is not listed, format the NameNode with bin/hadoop namenode -format and then start the daemons again.
To avoid having to format the NameNode after every restart, point the HDFS directories (hadoop.tmp.dir, dfs.name.dir) away from the default location under /tmp, which is cleared on reboot.
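The steps above can be sketched as a command sequence; this is only a sketch assuming a Hadoop 1.2.1 install whose bin/ directory is on the PATH, and it is guarded so nothing runs if hadoop is absent. Note that formatting the NameNode erases any existing HDFS metadata:

```shell
# Recovery sequence for a NameNode that will not come up.
if command -v hadoop >/dev/null 2>&1; then
  jps                      # is "NameNode" listed among the Java processes?
  stop-all.sh              # stop any half-started daemons first
  hadoop namenode -format  # re-initialize dfs.name.dir (destroys HDFS data!)
  start-all.sh             # start the HDFS and MapReduce daemons
  hadoop fs -ls /          # should now list / without connection retries
  status="ran"
else
  status="skipped: hadoop not on PATH"
fi
echo "$status"
```

After start-all.sh, give the NameNode a few seconds to leave safe mode before retrying hadoop fs -ls.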