I am using Hadoop 2.6.0. While starting the Hadoop services, the SecondaryNameNode, DataNode, and NodeManager fail to start.
Each of them dies with a java.net.BindException.
NodeManager:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.createServer(ResourceLocalizationService.java:356)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.serviceStar
SecondaryNameNode:
2017-10-10 23:58:07,872 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:50090
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
DataNode:
java.net.BindException: Problem binding to [0.0.0.0:50010] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
Running the netstat -ntpl command shows that the ports below are already in use:
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN -
tcp6 0 0 :::8040 :::* LISTEN -
Could someone please suggest how to kill the processes holding these ports so I can resolve this issue? For reference, jps currently shows:
:~/hadoopinstall/hadoop-2.6.0$ jps
18255 org.eclipse.equinox.launcher_1.3.0.dist.jar
27492 RunJar
12387 Jps
11951 ResourceManager
11469 NameNode
After a lot of searching and trial and error, I found a way to kill the process holding a port even when netstat cannot show its PID, as sketched below.
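In general (just a sketch, assuming a Linux machine where fuser and lsof are available, and substituting your own port number), either of these kills whatever is listening on a given TCP port without having to look the PID up first:
sudo fuser -k <port>/tcp                  # sends SIGKILL to whatever owns the port
sudo kill -9 $(sudo lsof -t -i:<port>)    # lsof -t prints only the PID(s) on that port
That is what the session below does for each of the three blocked ports: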
*******@127:~$ sudo fuser -k 50010/tcp
[sudo] password for *******:
50010/tcp: 1514
*******@127:~$ sudo kill -9 $(lsof -t -i:50010)
*******@127:~$ sudo fuser -k 50010/tcp
*******@127:~$ sudo kill -9 $(lsof -t -i:50090)
*******@127:~$ sudo fuser -k 50090/tcp
50090/tcp: 2110
*******@127:~$ sudo kill -9 $(lsof -t -i:50090)
*******@127:~$ sudo fuser -k 50090/tcp
*******@127:~$ sudo fuser -k 8040/tcp
8040/tcp: 2304
*******@127:~$ sudo kill -9 $(lsof -t -i:8040)
*******@127:~$ sudo fuser -k 8040/tcp
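The same clean-up can be scripted; this is just a sketch (assuming bash), equivalent to the commands above, which also confirms the ports are free afterwards:
for port in 50010 50090 8040; do
    sudo fuser -k ${port}/tcp     # kill whatever still holds the port
    sudo lsof -i :${port}         # should print nothing once the port is free
done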
Now I am able to start all the Hadoop services:
/hadoopinstall/hadoop-2.6.0$ jps
6844 NodeManager
7150 Jps
6547 SecondaryNameNode
6202 NameNode
6702 ResourceManager
6358 DataNode
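As a final sanity check (assuming you run it from the hadoop-2.6.0 directory), bin/hdfs dfsadmin -report should now list the DataNode as a live node, and the SecondaryNameNode web UI should be reachable again on port 50090.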