I want to save/write/upload a Spark dataframe from Databricks to an Azure Data Lake Store folder using R. I found the following Python code.
spark_df.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").mode("overwrite").save('...path to azure data lake store folder')
Can you suggest a SparkR equivalent of this code?
This should be:
library(SparkR)
library(magrittr)  # provides %>%; SparkR itself does not export a pipe operator

spark_df %>%
  coalesce(1L) %>%           # same as coalesce(1) in Python
  write.df(                  # generic writer; SparkR has no csv-specific writer
    "...path to azure...",   # path as before
    source = "csv",          # since Spark 2.0 you don't need com.databricks.spark.csv
    mode = "overwrite",
    header = "true"          # remaining ... arguments are passed through as options
  )