Tags: java, scala, apache-spark, parquet, spark-csv

Parquet schema and Spark


I am trying to convert CSV files to Parquet, and I am using Spark to accomplish this.

SparkSession spark = SparkSession
    .builder()
    .appName(appName)
    .config("spark.master", master)
    .getOrCreate();

Dataset<Row> logFile = spark.read().csv("log_file.csv");
logFile.write().parquet("log_file.parquet");

Now the problem is that I don't have a schema defined, so the columns look like this (output from printSchema() in Spark):

root
 |-- _c0: string (nullable = true)
 |-- _c1: string (nullable = true)
 |-- _c2: string (nullable = true)
 ....

The CSV has the column names on the first row, but I guess they are ignored. The problem is that only a few columns are actually strings; I also have ints and dates.

I am only using Spark, no Avro or anything else (I have never used Avro).

What are my options for defining a schema, and how do I do it? If I need to write the Parquet file in a different way, that is no problem, as long as it is a quick and easy solution.

(I am using Spark standalone for tests / I don't know Scala.)


Solution

  • Try using the .option("inferSchema", "true") option available in the spark-csv package. It will automatically infer the schema from the data (a Java sketch of this approach is at the end of this answer, since you are working in Java).

    You can also define a custom schema for your data using StructType and pass it via .schema(schema_name) to read the file on the basis of that schema. For example, in Scala (a Java equivalent follows below):

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

    val sqlContext = new SQLContext(sc)
    val customSchema = StructType(Array(
        StructField("year", IntegerType, true),
        StructField("make", StringType, true),
        StructField("model", StringType, true),
        StructField("comment", StringType, true),
        StructField("blank", StringType, true)))
    
    val df = sqlContext.read
        .format("com.databricks.spark.csv")
        .option("header", "true") // Use first line of all files as header
        .schema(customSchema)
        .load("cars.csv")