I'm writing a job with mrjob to compute various statistics over the Google Ngrams data: https://aws.amazon.com/datasets/8172056142375670
I developed and tested my script locally against an uncompressed subset of the data in tab-delimited text. But when I tried to run the job on EMR against the full dataset, I got this error:
Traceback (most recent call last):
  File "ngram_counts.py", line 74, in <module>
    MRNGramCounts.run()
  File "/usr/lib/python2.6/dist-packages/mrjob/job.py", line 500, in run
    mr_job.execute()
  File "/usr/lib/python2.6/dist-packages/mrjob/job.py", line 509, in execute
    self.run_mapper(self.options.step_num)
  File "/usr/lib/python2.6/dist-packages/mrjob/job.py", line 574, in run_mapper
    for out_key, out_value in mapper(key, value) or ():
  File "ngram_counts.py", line 51, in mapper
    (ngram, year, _mc, _pc, _vc) = line.split('\t')
ValueError: need more than 2 values to unpack
(while reading from s3://datasets.elasticmapreduce/ngrams/books/20090715/eng-1M/5gram/data)
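For reference, line 51 sits in a mapper that looks roughly like this (trimmed down; the emitted value is just a placeholder, the real job computes more):

from mrjob.job import MRJob

class MRNGramCounts(MRJob):

    def mapper(self, _, line):
        # Works fine on the local test data, where every line has
        # exactly five tab-separated fields.
        (ngram, year, _mc, _pc, _vc) = line.split('\t')
        yield ngram, 1  # placeholder emit

if __name__ == '__main__':
    MRNGramCounts.run()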
Presumably this is because of the public data set's storage scheme (described on the dataset page linked above):
We store the datasets in a single object in Amazon S3. The file is in sequence file format with block level LZO compression. The sequence file key is the row number of the dataset stored as a LongWritable and the value is the raw data stored as TextWritable.
Any guidance on how to set up a workflow that can process these files? I've searched exhaustively for tips but haven't turned up anything useful...
(I'm a relative n00b to mrjob and Hadoop.)
I finally figured this out. It looks like EMR takes care of the LZO compression for you, but for the sequence file format, you need to add the following HADOOP_INPUT_FORMAT field to your MRJob class:
from mrjob.job import MRJob

class MyMRJob(MRJob):
    HADOOP_INPUT_FORMAT = 'org.apache.hadoop.mapred.SequenceFileAsTextInputFormat'

    def mapper(self, _, line):
        # mapper code...
        pass

    def reducer(self, key, values):
        # reducer code...
        pass
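With that in place, the job can be launched against the public dataset the usual way (assuming your EMR credentials are configured in mrjob.conf):

python ngram_counts.py -r emr \
    s3://datasets.elasticmapreduce/ngrams/books/20090715/eng-1M/5gram/data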
There's another gotcha, too (quoting from the AWS-hosted Google NGrams page):
The sequence file key is the row number of the dataset stored as a LongWritable and the value is the raw data stored as TextWritable.
That means each row is prepended with an extra Long + TAB, so any line parsing you do in your mapper method needs to account for that prefix as well.
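So the mapper has to peel off the row number before splitting the actual record. A minimal sketch, assuming the 5-gram layout of ngram, year, match count, page count, volume count (the placeholder emit stands in for whatever stats you actually compute):

from mrjob.job import MRJob

class MRNGramCounts(MRJob):
    HADOOP_INPUT_FORMAT = 'org.apache.hadoop.mapred.SequenceFileAsTextInputFormat'

    def mapper(self, _, line):
        # SequenceFileAsTextInputFormat hands each record to the mapper as:
        #   <row number>\t<ngram>\t<year>\t<match_count>\t<page_count>\t<volume_count>
        # so strip the LongWritable row number before parsing the fields.
        _rownum, record = line.split('\t', 1)
        (ngram, year, _mc, _pc, _vc) = record.split('\t')
        yield ngram, 1  # placeholder emit

if __name__ == '__main__':
    MRNGramCounts.run()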