
Appending to an ORC file


I'm new to Big Data and related technologies, so I'm unsure whether it is possible to append data to an existing ORC file. I'm writing the ORC file using the Java API, and once I close the Writer, I'm unable to open the file again to write new content to it; in other words, I cannot append new data.

Is there a way I can append data to an existing ORC file, either using the Java API, or Hive, or any other means?

One more clarification: when saving a Java util.Date object into an ORC file, the ORC type is stored as:

struct<timestamp:struct<fasttime:bigint,cdate:struct<cachedyear:int,cachedfixeddatejan1:bigint,cachedfixeddatenextjan1:bigint>>,

and for a Java BigDecimal it's:

<margin:struct<intval:struct<signum:int,mag:struct<>,bitcount:int,bitlength:int,lowestsetbit:int,firstnonzerointnum:int>

Are these correct, and is there any info on this?
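(Those nested structs look like the private internal fields of java.util.Date and java.math.BigInteger, captured by a reflection-based serializer rather than mapped to ORC's native timestamp and decimal types. A common approach, sketched here with hypothetical helper names, is to convert such objects to ORC-friendly primitives before writing:)

```java
import java.math.BigDecimal;
import java.util.Date;

public class OrcFriendlyTypes {
    // Hypothetical converter: ORC timestamp columns can be populated
    // from milliseconds since the epoch instead of a reflected Date.
    static long toEpochMillis(Date d) {
        return d.getTime();
    }

    // Hypothetical converter: write decimals via their plain string form
    // (or via HiveDecimal when using the Hive/ORC writer classes),
    // instead of letting BigDecimal's internals be reflected.
    static String toDecimalString(BigDecimal b) {
        return b.toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(toEpochMillis(new Date(1_000L)));
        System.out.println(toDecimalString(new BigDecimal("12.34")));
    }
}
```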


Solution

  • No, you cannot append directly to an ORC file. Nor to a Parquet file, nor to any columnar format whose complex internal structure interleaves metadata with the data.

    Quoting the official "Apache Parquet" site...

    Metadata is written after the data to allow for single pass writing.

    Then quoting the official "Apache ORC" site...

    Since HDFS does not support changing the data in a file after it is written, ORC stores the top level index at the end of the file (...) The file’s tail consists of 3 parts; the file metadata, file footer and postscript.

    Well, technically, nowadays you can append to an HDFS file; you can even truncate it. But these tricks are only useful for some edge cases (e.g. Flume feeding messages into an HDFS "log file", micro-batch-wise, with hflush from time to time).
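    For completeness, those raw HDFS operations are exposed on the command line (they require a running cluster, and the paths below are placeholders); note they work on plain byte streams only, since appending bytes to an ORC file would corrupt its footer:

    ```shell
    # Append a local file's bytes to an existing HDFS file
    hdfs dfs -appendToFile local-batch.log /flume/app.log

    # Truncate an HDFS file to a given length (Hadoop 2.7+); -w waits for completion
    hdfs dfs -truncate -w 1048576 /flume/app.log
    ```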

    Hive transaction support uses a different trick: each transaction (i.e. micro-batch) creates a new ORC delta file, and periodic compaction jobs merge them in the background, à la HBase.
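    A minimal sketch of that approach, assuming Hive 0.14+ with ACID enabled (the table name and layout are illustrative; transactional tables must be stored as ORC, and older Hive versions also require bucketing):

    ```sql
    -- Transactional table stored as ORC: each INSERT lands in a new delta file
    CREATE TABLE events (id INT, payload STRING)
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');

    -- Rows are "appended" as delta files; background compaction merges them
    INSERT INTO events VALUES (1, 'hello');

    -- Compaction can also be requested manually
    ALTER TABLE events COMPACT 'major';
    ```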