Tags: hadoop, hdfs, cloudera, cloudera-manager

HDFS Under replicated blocks


I am using Cloudera Manager Free Edition on my "cluster", with all services running on a single machine.

My machine acts as the datanode, namenode, and secondary namenode.

Replication-related settings in HDFS:

dfs.replication                                   - 1
dfs.replication.min, dfs.namenode.replication.min - 1
dfs.replication.max                               - 1   
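For reference, in a client-side hdfs-site.xml these settings would look roughly like the following (a sketch; the actual files on a Cloudera Manager-managed cluster are generated by CM, so the layout may differ):

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.replication.min</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.replication.max</name>
      <value>1</value>
    </property>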

Still, I get under-replicated blocks and hence Bad Health.

The NameNode log says:

Requested replication 3 exceeds maximum 1
java.io.IOException: file /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2013_10_21-15_33_53 on client 111.222.333.444
Requested replication 3 exceeds maximum 1
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.verifyReplication(BlockManager.java:858)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1848)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1747)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:439)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:207)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44942)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)

I have altered the values, saved, deployed the client configuration, and restarted. It's still the same.

What property do I need to set to make CM use a replication factor of 1 instead of 3?


Solution

  • It's a client-side setting: the client asks for the file to be replicated 3 times, and the canary test acts as a client. So you will likely have to tune the HDFS canary test settings. Alternatively, you could use Cloudera Manager to mark the replication factor property as final, which forbids clients from overriding it.
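    For example, marking dfs.replication as final in the client hdfs-site.xml (a sketch, assuming you can inject it via Cloudera Manager's client configuration safety valve) uses Hadoop's final-parameter mechanism so that clients such as the canary test can no longer request replication 3:

        <property>
          <name>dfs.replication</name>
          <value>1</value>
          <!-- final parameters cannot be overridden by client/job configuration -->
          <final>true</final>
        </property>

    Files that were already written with a higher requested replication can be brought back to 1 with hadoop fs -setrep -w 1 -R /path, which should also clear the under-replicated block count reported by the NameNode.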