
Hadoop error log jvm sqoop


After 6-8 hours of running Java programs, I get the crash log hs_err_pid6662.log

and this console output:

  [testuser@apus ~]$ sh /home/progr/work/import.sh
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: retry: Resource temporarily unavailable
  /usr/bin/hadoop: fork: Resource temporarily unavailable

The programs run every five minutes and import/export data from Oracle.

How can I fix this?
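For reference, "fork: retry: Resource temporarily unavailable" (EAGAIN) usually means either the per-user process/thread limit or memory was exhausted at the moment of fork. A quick diagnostic sketch (the commands are standard on RHEL 6; nothing here is from the original logs):

```shell
# How many processes may this user create, and how many exist right now?
ulimit -u                                  # per-user process/thread limit
ps -u "$(whoami)" --no-headers | wc -l     # processes currently owned by this user
# Memory/swap headroom at the same moment:
free -m
```

If the process count is close to the `ulimit -u` value, overlapping five-minute jobs (each spawning many JVM threads) are a likely cause.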

# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (gcTaskThread.cpp:48), pid=6662, tid=0x00007f429a675700
#
---------------  T H R E A D  ---------------

Current thread (0x00007f4294019000):  JavaThread "Unknown thread" [_thread_in_vm, id=6696, stack(0x00007f429a575000,0x00007f429a676000)]

Stack: [0x00007f429a575000,0x00007f429a676000],  sp=0x00007f429a674550,  free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)


VM Arguments:
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -


Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_102


# JRE version:  (8.0_102-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

Memory: 4k page, physical 24591972k(6051016k free), swap 12369916k(11359436k free)

I run sqoop-import and sqoop-export jobs from Java every 5 minutes, for example:

#!/bin/bash

hadoop jar /home/progr/import_sqoop/oracle.jar

CDH version: 5.11.1

Java version: jdk1.8.0_102

OS: Red Hat Enterprise Linux Server release 6.9 (Santiago)

Output of free:

             total       used       free     shared    buffers     cached
Mem:      24591972   20080336    4511636     132036     334456    2825792
-/+ buffers/cache:   16920088    7671884
Swap:     12369916    1008664   11361252
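In this free(1) output the "-/+ buffers/cache" row is the meaningful one: roughly 7.6 GB is genuinely available once buffers and cache are discounted. On RHEL 6's procps that row can be extracted like this (a sketch; the awk pattern is an assumption, not from the post, and newer versions of free replaced this row with an "available" column):

```shell
# Print the truly-free memory (kB) from RHEL 6-era `free` output,
# i.e. the second value on the "-/+ buffers/cache" row.
free | awk '/buffers\/cache/ {print $4}'
```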

[Image: Host Memory Usage]


Solution

  • The maximum heap memory is limited to 1000 MB by default (-Xmx1000m in the crash log's VM arguments). You need to increase it:

    JRE version: (8.0_102-b14) (build )
    jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -

    Try the following to increase this to 2048 MB (or higher if required):

    export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
    

    Reference: Pig: Hadoop jobs Fail
    https://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201104.mbox/%[email protected]%3E
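Putting it together, the wrapper script from the question could set the larger heap before launching the job. A sketch (the jar path is from the post; the export line and the 2048m value are the additions, and 2048m is illustrative, so size it to what the Sqoop job actually needs):

```shell
#!/bin/bash
# import.sh - raise the Hadoop client JVM heap before launching the job.
# HADOOP_CLIENT_OPTS is read by the hadoop launcher script, so -Xmx2048m
# overrides the 1000m default seen in the crash log.
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
hadoop jar /home/progr/import_sqoop/oracle.jar
```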