Tags: hadoop, hive, avro, parquet, spark-avro

Hive on Spark: reading a Parquet file


I'm trying to read a Parquet file into Hive on Spark.

I've found out that I should do something like this:

CREATE TABLE avro_test
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS AVRO
TBLPROPERTIES ('avro.schema.url'='/files/events/avro_events_scheme.avsc');

CREATE EXTERNAL TABLE parquet_test LIKE avro_test
STORED AS PARQUET
LOCATION '/files/events/parquet_events/';

where my Avro schema is:

{
    "type" : "record",
    "namespace" : "events",
    "name" : "events",
    "fields" : [
        { "name" : "category" , "type" : "string" },
        { "name" : "duration" , "type" : "long" },
        { "name" : "name" , "type" : "string" },
        { "name" : "user_id" , "type" : "string" },
        { "name" : "value" , "type" : "long" }
    ]
}

As a result, I receive this error:

org.apache.spark.sql.catalyst.parser.ParseException: 
Operation not allowed: ROW FORMAT SERDE is incompatible with format 'avro', 
which also specifies a serde(line 1, pos 0)
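
If I read the message correctly, in Spark SQL the clause STORED AS AVRO already specifies the Avro SerDe on its own, so adding an explicit ROW FORMAT SERDE clause conflicts with it. As a sketch (keeping the same schema path as above), dropping the explicit SerDe should at least avoid this particular parse error:

-- Sketch: STORED AS AVRO already binds the AvroSerDe,
-- so no explicit ROW FORMAT SERDE clause is needed.
CREATE TABLE avro_test
STORED AS AVRO
TBLPROPERTIES ('avro.schema.url'='/files/events/avro_events_scheme.avsc');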

Solution

  • I think we have to add the INPUTFORMAT and OUTPUTFORMAT classes.
    
    CREATE TABLE parquet_test
    ROW FORMAT SERDE
      'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    STORED AS INPUTFORMAT
      'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    OUTPUTFORMAT
      'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    TBLPROPERTIES (
      'avro.schema.url'='/hadoop/avro_events_scheme.avsc');
    
    I hope the above works.
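
    Once the table exists, a quick sanity check (a sketch; the table and column names just follow the examples above) is to confirm that Hive resolved the schema from the .avsc file and that the rows are readable:

    -- Show the SerDe, formats, and columns Hive resolved for the table
    DESCRIBE FORMATTED parquet_test;

    -- Read a few rows to confirm the data is accessible
    SELECT category, duration, name, user_id, value
    FROM parquet_test
    LIMIT 10;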