Tags: hadoop, mapreduce, hadoop-yarn

Improve identity mapper in Wordcount


I have written a map method that reads the map output of the WordCount example [1]. This approach avoids the IdentityMapper.class that MapReduce offers, but it is the only way I have found to make a working identity mapper for WordCount. The only problem is that this mapper takes much longer than I expected. I am starting to think I may be doing something redundant. Any help to improve my WordCountIdentityMapper code?

[1] Identity mapper

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountIdentityMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input line is a "word count" pair produced by the first job,
        // so parse it and re-emit it as a (Text, IntWritable) pair.
        StringTokenizer itr = new StringTokenizer(value.toString());
        word.set(itr.nextToken());
        Integer val = Integer.valueOf(itr.nextToken());
        context.write(word, new IntWritable(val));
    }

    @Override
    public void run(Context context) throws IOException, InterruptedException {
        while (context.nextKeyValue()) {
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
    }
}

[2] Map class that generated the map output

public static class MyMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit (word, 1) for every token in the input line.
        StringTokenizer itr = new StringTokenizer(value.toString());

        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }

    @Override
    public void run(Context context) throws IOException, InterruptedException {
        try {
            while (context.nextKeyValue()) {
                map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
        } finally {
            cleanup(context);
        }
    }
}

Thanks,


Solution

  • The solution is to replace the StringTokenizer with indexOf()-based parsing. It works much better, and I get noticeably better performance (see the sketch below).
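
For reference, here is a minimal sketch of what that indexOf()-based map method might look like. It assumes the map output lines are tab-separated "word<TAB>count" pairs (TextOutputFormat's default separator), and it also reuses a single IntWritable field instead of allocating one per record, which is my own addition rather than part of the original code. It uses the same imports as [1]:

public class WordCountIdentityMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private Text word = new Text();
    private IntWritable count = new IntWritable();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        // Locate the separator directly instead of constructing a
        // StringTokenizer for every record (assumed tab, per TextOutputFormat).
        int sep = line.indexOf('\t');
        word.set(line.substring(0, sep));
        count.set(Integer.parseInt(line.substring(sep + 1)));
        context.write(word, count);
    }
}

Besides skipping the tokenizer, reusing the word and count objects avoids one allocation per record, which adds up over a large map input.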