Tags: java, hadoop, hdfs, hadoop-yarn, hadoop2

Setting up Hadoop YARN on Ubuntu (single node)


I set up Hadoop YARN (2.5.1) on Ubuntu 13 as a single-node cluster. When I run start-dfs.sh, it gives the following output and the processes do not start (I confirmed this using the jps and ps commands). My .bashrc additions are also copied below. Any thoughts on what I need to reconfigure?

bashrc additions:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_INSTALL=/opt/hadoop/hadoop-2.5.1
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
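Before rerunning the start scripts, it can help to confirm that the exported paths actually resolve to real files. A minimal sanity-check sketch (the `check_hadoop_paths` function is ours, for illustration only; adjust the default path if yours differs):

```shell
#!/bin/sh
# Hypothetical sanity check (not part of Hadoop): verify that the paths the
# .bashrc above relies on point at real, executable files before running
# any of the start scripts.
check_hadoop_paths() {
    install_dir=$1
    missing=0
    for f in "$install_dir/bin/hdfs" \
             "$install_dir/bin/hadoop" \
             "$install_dir/sbin/start-dfs.sh"; do
        if [ -x "$f" ]; then
            echo "OK: $f"
        else
            echo "MISSING: $f"
            missing=1
        fi
    done
    return $missing
}

# Check the install path from the question; falls back to it if the
# variable is unset in the current shell.
check_hadoop_paths "${HADOOP_INSTALL:-/opt/hadoop/hadoop-2.5.1}" \
    || echo "fix the paths above before running start-dfs.sh"
```

If any line prints MISSING, the daemons cannot start regardless of the other configuration.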

start-dfs.sh output:

14/09/22 12:24:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop/hadoop-2.5.1/logs/hadoop-hduser-namenode-zkserver1.fidelus.com.out
localhost: nice: $HADOOP_INSTALL/bin/hdfs: No such file or directory
localhost: starting datanode, logging to /opt/hadoop/hadoop-2.5.1/logs/hadoop-hduser-datanode-zkserver1.fidelus.com.out
localhost: nice: $HADOOP_INSTALL/bin/hdfs: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is cf:e1:ea:86:a4:0c:cd:ec:9d:b9:bc:90:9d:2b:db:d5.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.5.1/logs/hadoop-hduser-secondarynamenode-zkserver1.fidelus.com.out
0.0.0.0: nice: $HADOOP_INSTALL/bin/hdfs: No such file or directory
14/09/22 12:24:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The bin directory contains the hdfs file, and it is owned by hduser (the user I am running the process as). My $HADOOP_INSTALL variable points to the Hadoop directory (/opt/hadoop/hadoop-2.5.1). Should I change the permissions or configuration, or simply move the directory out of /opt and into, say, /usr/local?
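One detail worth noting in the log: the path appears as the literal text `$HADOOP_INSTALL/bin/hdfs`, meaning the variable was never expanded where the daemon scripts run (the start scripts re-enter the machine via ssh, where `.bashrc` may not be sourced). A minimal shell illustration (assumed diagnosis, not confirmed from the question) of how a literal, unexpanded variable ends up in a string:

```shell
#!/bin/sh
# Single quotes prevent expansion: the string keeps the literal text
# "$HADOOP_INSTALL/bin/hdfs", which is exactly what the error message shows.
literal='$HADOOP_INSTALL/bin/hdfs'
echo "$literal"

# Double quotes expand the variable at assignment time; with the value from
# the question set, this yields the real path.
HADOOP_INSTALL=/opt/hadoop/hadoop-2.5.1
expanded="$HADOOP_INSTALL/bin/hdfs"
echo "$expanded"
```

So it is worth checking hadoop-env.sh and any config files for a single-quoted or otherwise unexpanded `$HADOOP_INSTALL`.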

Update: When I run start-yarn.sh, I get the following message:

localhost: Error: Could not find or load main class org.apache.hadoop.yarn.server.nodemanager.NodeManager

Update: I moved the directory to /usr/local, but I get the same warning message.

Update: jps shows that the ResourceManager is running. However, when I try to start YARN, it fails with the error given above. I can access the ResourceManager UI on port 8088. Any ideas?


Solution

  • Try starting the daemons individually with the following commands (as opposed to using start-dfs.sh) and see if that works.

        hadoop-daemon.sh start namenode
        hadoop-daemon.sh start secondarynamenode
        hadoop-daemon.sh start datanode
        yarn-daemon.sh start nodemanager
        mr-jobhistory-daemon.sh start historyserver
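The same one-daemon-at-a-time startup can be wrapped in a small script so that a failing daemon stops the sequence immediately instead of scrolling past. This is a sketch, not part of Hadoop; the `start_daemon` helper is ours, and it assumes the daemon scripts above are on your PATH (they live under $HADOOP_INSTALL/sbin):

```shell
#!/bin/sh
# Sketch: run each start command in turn and stop at the first failure,
# printing which one failed.
start_daemon() {
    echo "starting: $*"
    if "$@"; then
        echo "started: $*"
    else
        echo "FAILED: $*" >&2
        return 1
    fi
}

# Uncomment to use with the commands from the answer:
# start_daemon hadoop-daemon.sh start namenode             || exit 1
# start_daemon hadoop-daemon.sh start secondarynamenode    || exit 1
# start_daemon hadoop-daemon.sh start datanode             || exit 1
# start_daemon yarn-daemon.sh start nodemanager            || exit 1
# start_daemon mr-jobhistory-daemon.sh start historyserver || exit 1
```

After the daemons start, `jps` should list each of them as a separate JVM.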