Tags: hadoop, apache-zookeeper, failover

Name node fails after restarting the Hadoop HA cluster nodes following a power off


I have set up an HA Hadoop cluster with 2 name nodes, journal nodes, and automatic failover. It starts fine the first time, after the namenode format, but it fails when the cluster is restarted. I bring the cluster up in the following order (a command-level sketch follows the list):

  1. start all journal nodes
  2. start the active name node
  3. bootstrap the standby name node, then start it
  4. start zkserver on all nodes
  5. start all data nodes
  6. format zkfc on the active node, then start it
  7. format zkfc on the standby node, then start it
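
For reference, this is roughly what those steps look like at the command level. It is only a sketch, assuming the Hadoop 2.x sbin scripts (hadoop-daemon.sh) and ZooKeeper's zkServer.sh are on the PATH; on Hadoop 3.x the equivalent is hdfs --daemon start ...

# 1. on every journal node
hadoop-daemon.sh start journalnode

# 2. on the active name node (format only on the very first start)
hdfs namenode -format
hadoop-daemon.sh start namenode

# 3. on the standby name node (bootstrap only on the very first start)
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode

# 4. on every ZooKeeper node
zkServer.sh start

# 5. on every data node
hadoop-daemon.sh start datanode

# 6./7. on each name node (formatZK only on the very first start)
hdfs zkfc -formatZK
hadoop-daemon.sh start zkfc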

It works fine up to step 5 and all nodes are up (both name nodes running, in standby state). When I start zkfc, the name node fails with an error saying the journal node is not formatted.

(Before this, the first start succeeded because I formatted the active name node; on the second start I removed the name node format from step 2.)

How do I start the setup again after a shutdown and restart?

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/data/nameNode</value>
    <final>true</final>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/data/dataNode</value>
    <final>true</final>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>

  <property>
    <name>dfs.nameservices</name>
    <value>ha_cluster</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.ha_cluster</name>
    <value>sajan,sajan2</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.ha_cluster.sajan</name>
    <value>192.168.5.249:9000</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.ha_cluster.sajan2</name>
    <value>192.168.5.248:9000</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.ha_cluster.sajan</name>
    <value>192.168.5.249:50070</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.ha_cluster.sajan2</name>
    <value>192.168.5.248:50070</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://192.168.5.249:8485;192.168.5.248:8485;192.168.5.250:8485/ha_cluster</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.ha_cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>192.168.5.249:2181,192.168.5.248:2181,192.168.5.250:2181,192.168.5.251:2181,192.168.5.252:2181,192.168.5.253:2181</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
</configuration>
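
Once the daemons are up, the HA state can be checked with hdfs haadmin against the name node IDs defined in the configuration above (sajan, sajan2). This is just a quick sanity check, not part of the original setup steps:

# expect one "active" and one "standby"
hdfs haadmin -getServiceState sajan
hdfs haadmin -getServiceState sajan2

# confirm the data nodes have registered
hdfs dfsadmin -report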

Solution

  • If you want to stop the service, use the order below (a command sketch follows the list). I lost two working days figuring this out.

    1. stop all name nodes.
    2. stop all journal nodes.
    3. stop all data nodes.
    4. stop the failover service (zkfc).
    5. stop zkserver.
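
Roughly the same stop order as commands, again only a sketch assuming the Hadoop 2.x sbin scripts mentioned above:

# 1. on each name node
hadoop-daemon.sh stop namenode

# 2. on every journal node
hadoop-daemon.sh stop journalnode

# 3. on every data node
hadoop-daemon.sh stop datanode

# 4. on each name node (ZKFC failover controller)
hadoop-daemon.sh stop zkfc

# 5. on every ZooKeeper node
zkServer.sh stop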