Tags: hadoop, mapreduce, oozie, hue, oozie-coordinator

MapReduce Oozie workflow using Hue


I am working on AWS and trying to create an Oozie workflow for a map-only job using Hue. I chose the MapReduce action for it. After trying many approaches, I have not been able to complete it. I ran the job from the CLI and it works fine.
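For reference, a CLI invocation of this driver would look roughly like the following; the jar name and paths here are assumptions, but the three positional arguments (input dir, output dir, unzip parameter) match the driver code in UPDATE 1 below:

hadoop jar decompress.jar DecompressJob /user/uname/input /user/uname/out /user/uname/unzip_files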

I created a directory named mapreduce in HDFS and put my driver.java and mapper.java in it. Under the mapreduce directory, I created a lib directory and put my runnable jar in it. I am attaching a screenshot of the Hue interface.

[screenshot of the Hue interface]
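In text form, the layout described above is roughly the following (the jar name is an assumption; what matters is that the runnable jar sits under lib/ inside the workflow application directory, where Oozie picks it up):

/user/uname/mapreduce/        <- workflow application directory in HDFS
    driver.java
    mapper.java
    lib/
        decompress.jar        <- runnable jar containing DecompressJob and DecompressMapper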

Either I am missing something, or I am not placing the runnable jar in the appropriate location.

I also want to pass an extra parameter, apart from the input and output directories, in Hue. How can I do this?

My doubt centers on this warning:

2015-11-06 14:56:57,679 WARN [main] org.apache.hadoop.mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).

When I tried to view the Oozie action log, I got the message below.

No tasks found for job job_1446129655727_0306.

UPDATE 1

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/*
 * Driver class to decompress the zip files.
 */
public class DecompressJob extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new DecompressJob(), args);
        System.exit(res);
    }

    public int run(String[] args) throws Exception {
        // Use the configuration injected by ToolRunner rather than a fresh
        // Configuration, so options passed by the caller are not discarded.
        Configuration conf = getConf();
        conf.set("unzip_files", args[2]);

        Job job = Job.getInstance(conf);
        job.setJobName("mapper class");

        // Remove the output directory if it already exists.
        try {
            FileSystem fs = FileSystem.get(conf);
            if (fs.isDirectory(new Path(args[1]))) {
                fs.delete(new Path(args[1]), true);
            }
        } catch (Exception e) {
            // Ignored: the job will fail later if the output path is unusable.
        }

        job.setJarByClass(DecompressJob.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(DecompressMapper.class);
        job.setNumReduceTasks(0); // map-only job

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }
}

I also updated the screenshot and added a few more properties. Posting the error log as well.

2015-11-07 02:43:31,074 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2015-11-07 02:43:31,110 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.RuntimeException: java.lang.ClassNotFoundException: Class /user/Ajay/rad_unzip/DecompressMapper.class not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074)
    at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:751)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:171)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:166)
Caused by: java.lang.ClassNotFoundException: Class /user/uname/out/DecompressMapper.class not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1980)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2072)
    ... 8 more

2015-11-07 02:43:31,114 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
2015-11-07 02:43:31,125 WARN [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://uname/out/output/_temporary/1/_temporary/attempt_1446129655727_0336_m_000001_1

Solution

  • You should bundle your driver and mapper in the same jar. To pass extra arguments, you can click "add property" and supply a property name and value; in your MR program, you can then read the value with the getConf().get("propertyName") method.
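As a minimal sketch of the second point: in a Tool-based driver, getConf().get("propertyName") works as described above, while inside a task the same property is reached through the task context. The mapper below is hypothetical; the real DecompressMapper is not shown in the question, so its key/value types are an assumption based on the driver's output key/value classes.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch of the mapper side; the signature is an assumption.
public class DecompressMapper extends Mapper<LongWritable, Text, LongWritable, Text> {

    private String unzipFiles;

    @Override
    protected void setup(Context context) {
        // "unzip_files" is the property name set by the driver in UPDATE 1; a
        // property added through Hue's "add property" is read the same way.
        unzipFiles = context.getConfiguration().get("unzip_files");
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // ... decompression logic using unzipFiles would go here ...
    }
}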