python, hadoop-streaming, elastic-map-reduce, amazon-emr

Amazon Elastic MapReduce - SIGTERM


I have an EMR streaming job (Python) which normally works fine (e.g. 10 machines processing 200 inputs). However, when I run it against large data sets (12 machines processing a total of 6000 inputs, at about 20 seconds per input), after 2.5 hours of crunching I get the following error:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 143
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:372)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:586)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:441)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:377)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

If I am reading this correctly, the subprocess failed with code 143 because someone sent a SIGTERM signal to the streaming job.

Is my understanding correct? If so, when would the EMR infrastructure send a SIGTERM?
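
For reference, the usual convention is that a process killed by a signal exits with 128 plus the signal number, so 143 lines up with SIGTERM (signal 15). A one-liner to sanity-check that:

    import signal

    # A subprocess killed by a signal is reported as 128 + the signal number,
    # so SIGTERM (15) shows up as exit code 143.
    print(128 + signal.SIGTERM)  # -> 143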


Solution

  • I figured out what was happening, so here's some information in case anyone else runs into similar problems.

    The key for me was to look at the "jobtracker" logs. These live in your job flow's logs/ folder on S3, under:

    <logs folder>/daemons/<id of node running jobtracker>/hadoop-hadoop-jobtracker-XXX.log.
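
    If you would rather pull those logs down than browse them in the S3 console, a rough boto sketch is below; the bucket name and jobflow id are placeholders, and the prefix should match whatever log URI you gave the job flow.

    import boto

    # Placeholders: substitute your own log bucket and jobflow id.
    LOG_BUCKET = 'my-emr-log-bucket'
    LOG_PREFIX = 'logs/j-XXXXXXXXXXXX/daemons/'

    conn = boto.connect_s3()
    bucket = conn.get_bucket(LOG_BUCKET)
    for key in bucket.list(prefix=LOG_PREFIX):
        if 'jobtracker' in key.name:
            # Save each jobtracker log locally, flattening the key into a filename.
            key.get_contents_to_filename(key.name.replace('/', '_'))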
    

    There were multiple lines of the following kind:

    2012-08-21 08:07:13,830 INFO org.apache.hadoop.mapred.TaskInProgress 
      (IPC Server handler 29 on 9001): Error from attempt_201208210612_0001_m_000015_0: 
      Task attempt_201208210612_0001_m_000015_0 failed to report status 
      for 601 seconds. Killing!
    

    So my code was timing out and being killed: it was exceeding the 10-minute task timeout (the default mapred.task.timeout of 600,000 ms, which matches the "601 seconds" in the log). Ten minutes without doing any I/O was certainly not expected; I would typically do an I/O every 20 seconds.
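
    As an aside, when a mapper is legitimately slow rather than stuck, Hadoop streaming lets it reset that timeout by writing reporter lines to stderr. A minimal sketch, with the per-record work left as a placeholder:

    import sys

    def process(line):
        # Placeholder for the real per-record work.
        pass

    for line in sys.stdin:
        # Report status to the TaskTracker; this counts as progress and
        # resets the task-timeout clock.
        sys.stderr.write('reporter:status:processing\n')
        sys.stderr.flush()
        process(line)

    That only helps while your code is still returning control between records, though; a hard hang inside a C library will still blow through the timeout, which is exactly what turned out to be the problem here.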

    I then discovered this article:

    http://devblog.factual.com/practical-hadoop-streaming-dealing-with-brittle-code

    "In one of our science projects, we have a few Hadoop Streaming jobs that run over ruby and rely on libxml to parse documents. This creates a perfect storm of badness – the web is full of really bad html and libxml occasionally goes into infinite loops or outright segfaults. On some documents, it always segfaults."

    That nailed it. I must have been hitting one of these "libxml going into an infinite loop" situations (I use libxml heavily, though from Python rather than Ruby).
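
    One defensive pattern (not what I ended up using; see the skip-mode step below) is to isolate the brittle parse in a short-lived child process, so a hang or a segfault only costs you that one record instead of the whole task attempt. A rough sketch, assuming the parsing goes through lxml and with a placeholder extraction:

    import multiprocessing
    from lxml import etree  # assumption: libxml is driven via lxml

    def _extract(html, queue):
        # Child process: if libxml hangs or segfaults, only this process dies.
        root = etree.fromstring(html, parser=etree.HTMLParser())
        # Put something picklable on the queue (lxml elements are not).
        queue.put(root.findtext('.//title'))  # placeholder extraction

    def extract_with_guard(html, timeout=60):
        queue = multiprocessing.Queue()
        worker = multiprocessing.Process(target=_extract, args=(html, queue))
        worker.start()
        worker.join(timeout)
        if worker.is_alive():        # hung: kill it, treat the record as bad
            worker.terminate()
            worker.join()
            return None
        if worker.exitcode != 0:     # segfault or exception: treat as bad
            return None
        return queue.get() if not queue.empty() else None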

    The final step for me was to trigger skip mode (instructions here: Setting hadoop parameters with boto?).
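
    For completeness, a rough idea of what passing Hadoop parameters through boto looks like is sketched below. The S3 paths are placeholders, the jobconf keys are the old Hadoop 0.20/1.x skip-record settings, and the right values are the judgment call discussed in that linked question, so treat this as a sketch rather than the exact incantation:

    from boto.emr.step import StreamingStep

    # Assumed skip-mode settings (Hadoop 0.20/1.x-era keys); tune the values.
    skip_args = [
        '-jobconf', 'mapred.skip.attempts.to.start.skipping=2',
        '-jobconf', 'mapred.skip.map.max.skip.records=1',
        '-jobconf', 'mapred.map.max.attempts=12',
    ]

    step = StreamingStep(
        name='streaming step with skip mode',  # placeholder name
        mapper='s3://my-bucket/mapper.py',     # placeholder S3 paths
        reducer='s3://my-bucket/reducer.py',
        input='s3://my-bucket/input/',
        output='s3://my-bucket/output/',
        step_args=skip_args,                   # extra args handed to hadoop-streaming
    )
    # Hand the step to EmrConnection.run_jobflow(...) or add_jobflow_steps(...).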