
Hadoop 2.2 - datanode doesn't start up


I had Hadoop 2.4 installed this morning (see my previous two questions). I removed it and installed 2.2, both because I had issues with 2.4 and because I believe 2.2 is the latest stable release. I then followed the tutorial here:

http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1

I am pretty sure I did everything right but I am facing similar issues again.

When I run jps, it is obvious that the datanode is not starting up.

What am I doing wrong again?

hduser@test02:~$ start-dfs.sh
14/06/06 18:12:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-test02.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-test02.out
localhost: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
localhost: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-test02.out
0.0.0.0: Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
0.0.0.0: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/06 18:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@test02:~$ jps
2201 Jps
hduser@test02:~$ jps
2213 Jps
hduser@test02:~$ start-yarn
start-yarn: command not found
hduser@test02:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-test02.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-test02.out
hduser@test02:~$ jps
2498 NodeManager
2264 ResourceManager
2766 Jps
hduser@test02:~$ jps
2784 Jps
2498 NodeManager
2264 ResourceManager
hduser@test02:~$ jps
2498 NodeManager
2264 ResourceManager
2796 Jps
hduser@test02:~$

Solution

  • My problem was that I took these instructions from the tutorial too literally.

    Paste following between <configuration>
    fs.default.name
    hdfs://localhost:9000

    I suspected this was wrong while doing it, but I did it anyway.
    It seemed incorrect because core-site.xml is an XML file, so a bare
    key and value pasted between the <configuration> tags is not valid.
    It actually needs to look like this:

    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
    </property>
    

    Changing it to this fixed my problem.
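For reference, here is a minimal complete core-site.xml with that property in place (a sketch based on the tutorial's single-node values; note that in Hadoop 2.x the fs.default.name key is deprecated in favor of fs.defaultFS, although the old name still works):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- fs.defaultFS is the current name for the deprecated fs.default.name -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

Without this property, HDFS cannot resolve the namenode address, which is exactly what the "dfs.namenode.rpc-address is not configured" warning in the start-dfs.sh output above was complaining about.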