
Overwrite Hive partitions using Spark


I am working with AWS, and I have workflows that use Spark and Hive. My data is partitioned by date, so every day I have a new partition in my S3 storage. My problem is that when the data load fails one day, I have to re-execute that partition. The code that writes is the following:

df                              // my data in a DataFrame
  .write
  .format(getFormat(target))    // csv by default, but could be parquet, ORC...
  .mode(getSaveMode("overwrite")) // Append by default, but in the future it should be Overwrite
  .partitionBy(partitionName)   // column of the partition, the date
  .options(target.options)      // header, separator...
  .option("path", target.path)  // the path where it will be stored
  .saveAsTable(target.tableName) // the table name

What happens in my flow? If I use SaveMode.Overwrite, the complete table is deleted and only the new partition is saved. If I use SaveMode.Append, I could end up with duplicate data.

Searching around, I found that Hive supports this kind of partition-only overwrite, but only through HQL statements, which I am not using.
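
For reference, a minimal sketch of what that partition-level overwrite looks like when the HQL statement is issued through Spark SQL; my_table, staging_table and the partition value are hypothetical names used only for illustration:

// INSERT OVERWRITE ... PARTITION replaces a single partition, not the whole table.
// Table names and the date value here are placeholders, not the actual setup.
spark.sql(
  """INSERT OVERWRITE TABLE my_table PARTITION (date = '2018-04-01')
    |SELECT b, c
    |FROM staging_table
    |WHERE date = '2018-04-01'""".stripMargin)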

We need the solution in Hive, so we can't use the alternative option of writing directly to csv.

I found this Jira ticket that is supposed to solve the problem I am having, but trying it with the latest version of Spark (2.3.0), the behaviour was the same: it deletes the whole table and saves the partition, instead of overwriting only the partition my data belongs to.

To make this clearer, here is an example:

Partitioned by A

Data:

| A | B | C | 
|---|---|---| 
| b | 1 | 2 | 
| c | 1 | 2 |

Table:

| A | B | C | 
|---|---|---| 
| a | 1 | 2 | 
| b | 5 | 2 | 

What I want is: in the Table, partition a stays as it is, partition b is overwritten with the Data, and partition c is added. Is there any solution using Spark with which I can do this?

My last option is to first delete the partition that is going to be saved and then use SaveMode.Append, but I would only try this if there is no other solution. A sketch of that fallback is shown below.
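
Roughly, that fallback could look like the following (the partition value is a placeholder, and target, partitionName, getFormat come from the code above; for an external table the underlying S3 files would also have to be removed, since DROP PARTITION only deletes data for managed tables):

import org.apache.spark.sql.SaveMode

// Drop the partition that is about to be re-loaded.
spark.sql(
  s"ALTER TABLE ${target.tableName} DROP IF EXISTS PARTITION ($partitionName = '2018-04-01')")

// Then append the fresh data for that partition.
df.write
  .format(getFormat(target))
  .mode(SaveMode.Append)
  .partitionBy(partitionName)
  .options(target.options)
  .option("path", target.path)
  .saveAsTable(target.tableName)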


Solution

  • If you are on Spark 2.3.0, try setting spark.sql.sources.partitionOverwriteMode to dynamic. The dataset needs to be partitioned, and the write mode must be overwrite.

    spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
    data.write.mode("overwrite").insertInto("partitioned_table")
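
    Adapted to the question's code, a minimal sketch could look like the one below, assuming the table already exists and that A is its partition column (target comes from the question). Note that insertInto resolves columns by position rather than by name, and Hive tables keep partition columns last, so the DataFrame has to be reordered accordingly; insertInto also cannot be combined with partitionBy.

    // Overwrite only the partitions present in df, leaving the others untouched.
    // Assumes target.tableName already exists as a table partitioned by A.
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

    df.select("B", "C", "A")        // partition column A must come last (positional matching)
      .write
      .mode("overwrite")
      .insertInto(target.tableName)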