
How do I submit more than one job to Hadoop in a step using the Elastic MapReduce API?


The Amazon EMR documentation on adding steps to a cluster says that a single Elastic MapReduce step can submit several jobs to Hadoop. However, the Amazon EMR documentation for step configuration suggests that a single step can accommodate just one execution of hadoop-streaming.jar (that is, HadoopJarStep is a single HadoopJarStepConfig rather than an array of HadoopJarStepConfigs).

What is the proper syntax for submitting several jobs to Hadoop in a step?


Solution

  • As the Amazon EMR documentation says, you can create a cluster that runs a script my_script.sh on the master instance in a step:

    aws emr create-cluster --name "Test cluster" --ami-version 3.11 --use-default-roles \
        --ec2-attributes KeyName=myKey --instance-type m3.xlarge --instance-count 3 \
        --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args=["s3://mybucket/script-path/my_script.sh"]
    
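    If the cluster is already running, the same script-runner step can be attached to it with add-steps instead (a sketch; the cluster ID below is a placeholder):

    aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
        --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args=["s3://mybucket/script-path/my_script.sh"]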

    my_script.sh should look something like this:

    #!/usr/bin/env bash
    
    hadoop jar my_first_step.jar [mainClass] args... &
    hadoop jar my_second_step.jar [mainClass] args... &
    # ... additional jobs, each backgrounded with & ...
    wait
    
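    Note that a bare wait returns zero once all children have been reaped, so the step reports success even if one of the jobs fails and ActionOnFailure never fires. A variant that collects PIDs and checks each exit status is one way around this (a sketch, keeping the placeholder jar names from above):

    #!/usr/bin/env bash

    pids=()
    hadoop jar my_first_step.jar [mainClass] args... &
    pids+=($!)
    hadoop jar my_second_step.jar [mainClass] args... &
    pids+=($!)

    # Fail the step if any background job exited non-zero
    rc=0
    for pid in "${pids[@]}"; do
        wait "$pid" || rc=1
    done
    exit "$rc"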

    This way, multiple jobs are submitted to Hadoop in the same step. Unfortunately, the EMR interface won't be able to track them; to do that, you can use the Hadoop web interfaces as shown here, or simply ssh to the master instance and explore with mapred job.
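    For example, after ssh-ing to the master instance (the job ID below is a placeholder):

    # List the jobs currently known to the cluster
    mapred job -list

    # Show status, progress, and counters for one job
    mapred job -status job_1234567890123_0001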