Tags: java, ambari, datanode, hdp

DataNode: Error occurred during initialization of VM / Too small initial heap


We restarted the DataNodes on our cluster.

We have 15 DataNode machines in the Ambari cluster, and each DataNode machine has 128G of RAM.

Versions: HDP 2.6.4, Ambari 2.6.1.

The DataNodes failed to start with the following error:

Error occurred during initialization of VM
Too small initial heap

This is strange, because dtnode_heapsize is 8G (DataNode maximum Java heap size = 8G), and in the log we can also see:

InitialHeapSize=8192 -XX:MaxHeapSize=8192

So we do not understand how this can happen.

Is the initial heap size related to the DataNode maximum Java heap size?
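
For reference, here is how we can check the values the JVM actually resolves on a DataNode host; a minimal sketch, assuming a JDK 8 java on the PATH (the same build as in the log below):

java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -E 'InitialHeapSize|MaxHeapSize'
# PrintFlagsFinal dumps each flag with the value the JVM resolved, expressed in bytes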

The log from the DataNode machine:

Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 197804180k(12923340k free), swap 16777212k(16613164k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:GCLogFileSize=1024000 -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:NumberOfGCLogFiles=5 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation -XX:+UseParNewGC 
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker01.sys242.com.out <==
Error occurred during initialization of VM
Too small initial heap
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 772550
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Another log example:

resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start datanode'' returned 1. starting datanode, logging to 
Error occurred during initialization of VM
Too small initial heap

Solution

  • The value you are providing is interpreted as bytes, so InitialHeapSize=8192 means 8192 bytes, not 8192 MB. Add a unit suffix: -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m.

    See https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html
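
    The behavior is easy to reproduce outside the cluster; a minimal sketch, assuming any JDK 8 java on the PATH (the flags are taken verbatim from the log above):

    java -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -version
    # Error occurred during initialization of VM
    # Too small initial heap   <- 8192 with no suffix is parsed as 8192 bytes

    java -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m -version
    # starts normally and prints the version: the m suffix makes it 8192 MB

    After correcting the Ambari setting, it is worth confirming that the unit suffix actually reaches the JVM, for example by grepping the generated config under /usr/hdp/2.6.4.0-91/hadoop/conf for Xms/Xmx, or by re-checking the "CommandLine flags:" line in the DataNode .out file.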