Tags: hadoop, mapreduce, bigdata, hadoop2, hortonworks-data-platform

MapReduce job consuming too many resources and changing queue


I have a MapReduce job that runs over more than 170 million records. It consumes 98% of the queue's resources and 89% of the cluster's resources. The admin team recommends creating a new queue with limited capacity and having me push my job into that queue.

Here are my questions:

1- How can I push my MapReduce job ("hadoop jar") to the new queue with minimal changes?

2- Since the newly created queue has limited resources, what happens if the queue's capacity is full? Will the job just run longer, or will it fail?

3- Is there any other optimal way to prevent the job from consuming all resources? We are OK if the job runs a little longer.

Please advise. Any help would be great.


Solution

  • If you're using the Capacity Scheduler or the Fair Scheduler, and your admin assigns you a queue:

    First scenario (Capacity Scheduler):

    The job will take longer to complete, but it won't fail.

    If your job consumes all of its queue's resources and another queue has resources that no job is currently using, your job can borrow those idle resources as well.

    To improve performance, you can add more NodeManagers: the cluster's total resources increase, and the job is distributed across more nodes, which reduces latency.
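    As a sketch of how a limited queue like this is typically defined, here is an illustrative capacity-scheduler.xml fragment; the queue name "limitedq" and the percentages are hypothetical, not values from the question:

    ```xml
    <!-- Illustrative capacity-scheduler.xml fragment; "limitedq" is a placeholder queue name -->
    <property>
      <name>yarn.scheduler.capacity.root.limitedq.capacity</name>
      <!-- Guaranteed share: 20% of the cluster -->
      <value>20</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.limitedq.maximum-capacity</name>
      <!-- Elasticity cap: the queue may borrow idle resources up to 40% -->
      <value>40</value>
    </property>
    ```

    The gap between capacity and maximum-capacity is what allows a job to borrow idle resources from other queues while still being capped.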

    Second scenario (Fair Scheduler):

    Suppose a queue holds 100% of the resources. The first job consumes all of them; when another job arrives, the resources are divided equally among the running jobs, i.e. total resources / number of jobs.
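    The division above can be sketched quickly; the 100 GB queue size here is a made-up figure for illustration:

    ```python
    def fair_share(total_mb, num_jobs):
        """Instantaneous fair share per job: total resources / number of running jobs."""
        return total_mb // num_jobs

    # Hypothetical queue with 100 GB (102400 MB) of memory
    total_mb = 102400
    for jobs in (1, 2, 4):
        print(f"{jobs} job(s): {fair_share(total_mb, jobs)} MB each")
    # 1 job gets 102400 MB; with 2 jobs each gets 51200 MB; with 4, 25600 MB.
    ```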

    Again, the job will keep running as long as its minimum required resources are provided. It will take more time, which is not an issue in your case.
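Regarding question 1: the queue can usually be selected at submission time with no code change by passing the mapreduce.job.queuename property on the command line (this works when the driver uses ToolRunner/GenericOptionsParser; the jar, class, paths, and queue name below are placeholders):

```shell
# Submit the job to a specific queue; "limitedq" is a placeholder queue name
hadoop jar my-job.jar com.example.MyDriver \
    -Dmapreduce.job.queuename=limitedq \
    /input /output
```

If the driver does not parse generic options, the same property can instead be set on the job's Configuration before submission.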