
How to map a set of text as a whole to a node?


Suppose I have a plain text file with the following data:

DataSetOne
content
content
content

DataSetTwo
content
content
content
content

...and so on...

What I want to do is count how many content lines are in each data set. For example, the result should be

<DataSetOne, 3>, <DataSetTwo, 4>

I am a beginner to Hadoop. I wonder if there is a way to map a chunk of data as a whole to a node, for example, to send all of DataSetOne to node 1 and all of DataSetTwo to node 2.

Can anyone give me an idea of how to achieve this?


Solution

  • First of all, your datasets are split across multiple map tasks if they are in separate files or if they exceed the configured block size. So if you have one dataset of 128 MB and your block size is 64 MB, Hadoop will split this file into 2 blocks and set up 2 mappers, one per block.
    This is like the word count example in the Hadoop tutorials. Like David says, you'll need to map the lines to key/value pairs and then reduce on them. I would implement it like this (a driver sketch and the combiner setup follow the code):

    // Mapper<LongWritable, Text, Text, IntWritable>
    // field in the mapper class: remembers which dataset the current
    // line belongs to (assumes header lines such as "DataSetOne" start
    // with "DataSet", as in the sample input; as noted above, a group
    // that spans a split boundary loses its header in the later split)
    private Text currentGroup = new Text();
    
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString().trim();
        if (line.isEmpty()) {
            return; // skip the blank separator lines
        }
        if (line.startsWith("DataSet")) {
            currentGroup.set(line); // a header line starts a new group
        } else {
            // one content line for the current group
            context.write(currentGroup, new IntWritable(1));
        }
    }
    
    // Reducer<Text, IntWritable, Text, IntWritable>
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            // summing (rather than counting values) keeps this
            // reducer reusable as a combiner
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
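
    To wire these together you need a driver. Here is a minimal sketch; DatasetCount, DatasetMapper and DatasetReducer are placeholder class names for wherever you put the methods above, and the input/output paths come from the command line:

    // minimal driver sketch (class names are placeholders; Hadoop 2.x Job API)
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "dataset content count");
        job.setJarByClass(DatasetCount.class);
        job.setMapperClass(DatasetMapper.class);
        job.setReducerClass(DatasetReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }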
    

    Like David said as well, you could use a combiner. Combiners are essentially local reducers that run between the map and reduce phases to save resources (mainly network traffic). They are set on the job configuration.
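
    Because the mapper emits a 1 per content line and the reducer sums partial counts, the reducer class above can serve as the combiner unchanged. A minimal sketch, reusing the placeholder DatasetReducer name from the driver:

    job.setCombinerClass(DatasetReducer.class);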