Tags: amazon-web-services, hadoop, amazon-s3, amazon-emr

distcp fails when copying from s3 to hdfs


I created a Spark cluster on Amazon EMR and tried to run the following command from the command line.

CLI:

hadoop distcp s3a://bucket/file1 /data

Exception:

org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:162)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:408)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Solution

  • Check the yarn.nodemanager.aux-services properties in /etc/hadoop/conf/yarn-site.xml:

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle,spark_shuffle</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    

    If mapreduce_shuffle is missing, add these properties and restart the YARN NodeManager:

    # Upstart-style commands (older EMR AMIs); newer releases use systemd:
    # sudo systemctl restart hadoop-yarn-nodemanager
    sudo stop hadoop-yarn-nodemanager
    sudo start hadoop-yarn-nodemanager
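
    You can verify the setting without opening the file by grepping for the property. The sketch below is self-contained (it writes a temporary sample file); on the cluster you would instead set CONF=/etc/hadoop/conf/yarn-site.xml.

    ```shell
    # Self-contained sketch: on a real cluster, point CONF at
    # /etc/hadoop/conf/yarn-site.xml instead of a sample file.
    CONF=$(mktemp)
    cat > "$CONF" <<'EOF'
    <configuration>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle,spark_shuffle</value>
      </property>
    </configuration>
    EOF
    # Print the line after the aux-services <name> tag (its <value>),
    # then check that mapreduce_shuffle is in it.
    OUT=$(grep -A1 '<name>yarn.nodemanager.aux-services</name>' "$CONF" | grep -o 'mapreduce_shuffle')
    echo "$OUT"    # → mapreduce_shuffle
    rm -f "$CONF"
    ```

    If the second grep prints nothing, mapreduce_shuffle is not registered and the properties above need to be added.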
    

    I recommend using the s3-dist-cp utility instead, because it is already available on EMR clusters.

    s3-dist-cp --src s3://my-tables/incoming/hourly_table --dest /data/hdfslocation/path
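
    If only a subset of objects should be copied, s3-dist-cp also accepts a --srcPattern regex (covered in the linked AWS article). A minimal sketch with a hypothetical pattern; it only assembles and prints the command, since s3-dist-cp itself is only present on an EMR node:

    ```shell
    # Hypothetical filter: copy only keys ending in .log.
    # Paths match the example above; the pattern is an illustration.
    SRC="s3://my-tables/incoming/hourly_table"
    DEST="/data/hdfslocation/path"
    CMD="s3-dist-cp --src $SRC --dest $DEST --srcPattern .*\.log"
    # On an EMR master node you would run $CMD directly; here we just print it.
    echo "$CMD"
    ```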
    

    https://aws.amazon.com/blogs/big-data/seven-tips-for-using-s3distcp-on-amazon-emr-to-move-data-efficiently-between-hdfs-and-amazon-s3/