hadoop · apache-spark · elastic-map-reduce · lzo

Spark/Hadoop throws exception for large LZO files


I'm running an EMR Spark job on some LZO-compressed log-files stored in S3. There are several logfiles stored in the same folder, e.g.:

...
s3://mylogfiles/2014-08-11-00111.lzo
s3://mylogfiles/2014-08-11-00112.lzo
...

In the spark-shell I'm running a job that counts the lines in the files. If I count the lines individually for each file, there is no problem, e.g.:

// Works fine
...
sc.textFile("s3://mylogfiles/2014-08-11-00111.lzo").count()
sc.textFile("s3://mylogfiles/2014-08-11-00112.lzo").count()
...

If I use a wildcard to load all the files with a one-liner, I get two kinds of exceptions.

// One-liner throws exceptions
sc.textFile("s3://mylogfiles/*.lzo").count()

The exceptions are:

java.lang.InternalError: lzo1x_decompress_safe returned: -6
    at com.hadoop.compression.lzo.LzoDecompressor.decompressBytesDirect(Native Method)

and

java.io.IOException: Compressed length 1362309683 exceeds max block size 67108864 (probably corrupt file)
    at com.hadoop.compression.lzo.LzopInputStream.getCompressedData(LzopInputStream.java:291)

It seems to me that the solution is hinted at by the message of the last exception, but I don't know how to proceed. Is there a limit on how large LZO files are allowed to be, or what is the issue?

My question is: Can I run Spark queries that load all LZO-compressed files in an S3 folder, without getting I/O related exceptions?

There are 66 files, each roughly 200 MB.

EDIT: The exception only occurs when running Spark with Hadoop2 core libs (ami 3.1.0). When running with Hadoop1 core libs (ami 2.4.5), things work fine. Both cases were tested with Spark 1.0.1.


Solution

  • I haven't run into this specific issue myself, but it looks like .textFile expects files to be splittable, much like Cedrik's problem of Hive insisting on using CombineFileInputFormat.

    You could either index your LZO files (there's a rough indexing sketch after the snippet below), or try using the LzoTextInputFormat - I'd be interested to hear if that works better on EMR:

    sc.newAPIHadoopFile("s3://mylogfiles/*.lz", 
        classOf[com.hadoop.mapreduce.LzoTextInputFormat],
        classOf[org.apache.hadoop.io.LongWritable],
        classOf[org.apache.hadoop.io.Text])
      .map(_._2.toString) // if you just want an RDD[String] without writing a new InputFormat
      .count
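
    For the indexing route, here is a rough sketch of running hadoop-lzo's LzoIndexer straight from the spark-shell. It's untested on EMR and assumes the hadoop-lzo jar is on the driver classpath and that LzoIndexer exposes an index(Path) method (as in the twitter/hadoop-lzo project) that recursively indexes a directory, writing a .index file next to each .lzo file:

    import org.apache.hadoop.fs.Path
    import com.hadoop.compression.lzo.LzoIndexer

    // Assumption: hadoop-lzo is on the classpath and LzoIndexer.index(Path)
    // recursively indexes a directory, writing foo.lzo.index next to each foo.lzo
    val indexer = new LzoIndexer(sc.hadoopConfiguration)
    indexer.index(new Path("s3://mylogfiles/"))

    Once the index files exist, LzoTextInputFormat should be able to split each file at LZO block boundaries instead of treating it as one unsplittable blob. For larger datasets, hadoop-lzo also ships a DistributedLzoIndexer that does the same indexing as a MapReduce job.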