java · apache-spark · avro · spark-avro

How to convert a nested Avro GenericRecord to a Row


I have code that converts my Avro record to a Row using the function avroToRowConverter():

directKafkaStream.foreachRDD(rdd -> {
    JavaRDD<Row> newRDD = rdd.map(x -> {
        // Deserialize the Kafka message value (byte[]) back into a GenericRecord
        Injection<GenericRecord, byte[]> recordInjection =
                GenericAvroCodecs.toBinary(SchemaRegstryClient.getLatestSchema("poc2"));
        return avroToRowConverter(recordInjection.invert(x._2).get());
    });
});

This function does not work for nested schemas (TYPE = UNION).

private static Row avroToRowConverter(GenericRecord avroRecord) {
    if (null == avroRecord) {
        return null;
    }
    Object[] objectArray = new Object[avroRecord.getSchema().getFields().size()];
    StructType structType = (StructType) SchemaConverters.toSqlType(avroRecord.getSchema()).dataType();
    for (Schema.Field field : avroRecord.getSchema().getFields()) {
        if (field.schema().getType().toString().equalsIgnoreCase("STRING")
                || field.schema().getType().toString().equalsIgnoreCase("ENUM")) {
            // Strings and enums are copied as plain Java strings
            objectArray[field.pos()] = "" + avroRecord.get(field.pos());
        } else {
            // Everything else is copied as-is, which breaks for nested/union fields
            objectArray[field.pos()] = avroRecord.get(field.pos());
        }
    }
    return new GenericRowWithSchema(objectArray, structType);
}
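
For reference, here is the kind of schema that trips the converter up: a field whose type is a UNION wrapping a nested record. This is a hypothetical example built with Avro's SchemaBuilder, not my actual "poc2" schema:

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

// Equivalent to the JSON schema fragment:
// {"name": "inner", "type": ["null", {"type": "record", "name": "Inner", ...}]}
Schema schema = SchemaBuilder.record("Outer").namespace("example")
        .fields()
        .name("id").type().stringType().noDefault()
        .name("inner").type().unionOf().nullType().and()
            .record("Inner").fields()
                .name("value").type().stringType().noDefault()
            .endRecord()
        .endUnion().nullDefault()
        .endRecord();

For such a schema, avroRecord.get(field.pos()) returns another GenericRecord, which my converter just copies into the Row as-is.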

Can anyone suggest how I can convert a complex schema to a Row?


Solution

  • There is SchemaConverters.createConverterToSQL, but unfortunately it is private. There are PRs to make it public, but they were never merged.

    There is, however, a workaround that we used.

    You can expose it by creating a class in the com.databricks.spark.avro package; since that class lives in the same package, it is allowed to call the package-private method:

    package com.databricks.spark.avro
    
    import org.apache.avro.Schema
    import org.apache.avro.generic.GenericRecord
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.DataType
    
    object MySchemaConversions {
      def createConverterToSQL(avroSchema: Schema, sparkSchema: DataType): (GenericRecord) => Row =
        SchemaConverters.createConverterToSQL(avroSchema, sparkSchema).asInstanceOf[(GenericRecord) => Row]
    }
    

    Then you can use it in your code like this:

    final DataType myAvroType = SchemaConverters.toSqlType(MyAvroRecord.getClassSchema()).dataType();
    
    final Function1<GenericRecord, Row> myAvroRecordConverter =
            MySchemaConversions.createConverterToSQL(MyAvroRecord.getClassSchema(), myAvroType);
    
    Row[] convertAvroRecordsToRows(List<GenericRecord> records) {
        return records.stream().map(myAvroRecordConverter::apply).toArray(Row[]::new);
    }
    

    For a single record you can call it like this:

    final Row row = myAvroRecordConverter.apply(record);
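
    To close the loop with the original streaming code from the question, the converter can replace avroToRowConverter inside the map step. This is only a sketch under the question's assumptions (SchemaRegstryClient and the "poc2" topic come from the question); the converter is built inside the lambda so nothing non-serializable is captured by the closure:

    directKafkaStream.foreachRDD(rdd -> {
        JavaRDD<Row> rows = rdd.map(x -> {
            // Deserialize the Kafka value back into a GenericRecord
            Schema avroSchema = SchemaRegstryClient.getLatestSchema("poc2");
            Injection<GenericRecord, byte[]> recordInjection =
                    GenericAvroCodecs.toBinary(avroSchema);
            GenericRecord avroRecord = recordInjection.invert(x._2).get();

            // Build the converter from the live schema; unlike the hand-written
            // avroToRowConverter, it recurses into nested records and unions.
            DataType sqlType = SchemaConverters.toSqlType(avroSchema).dataType();
            Function1<GenericRecord, Row> converter =
                    MySchemaConversions.createConverterToSQL(avroSchema, sqlType);
            return converter.apply(avroRecord);
        });
    });

    Building the converter once per record keeps the example self-contained; in a real job you would cache it, for example in a static field or a broadcast variable, since the schema does not change between records.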