I am trying to write a PySpark DataFrame to a Parquet file like this:
df.write.parquet("temp.parquet", mode="overwrite")
but it only creates an empty folder named temp.parquet,
with no Parquet part files inside. What might cause this problem?
I downloaded hadoop.dll from here, added it to the System32 folder, and that solved the problem. On Windows, Spark relies on the native Hadoop libraries (hadoop.dll and winutils.exe); if they are missing, the write can fail silently and leave only an empty output directory.
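
In case it helps, here is a minimal sketch of an alternative that avoids copying hadoop.dll into System32: point HADOOP_HOME at a folder containing the native libraries and put its bin directory on PATH before the SparkSession (and its JVM) starts. The C:\hadoop path is an assumption; use whatever folder holds your winutils.exe and hadoop.dll.

import os

# Assumption: the native libraries were unpacked to C:\hadoop\bin; adjust to your layout.
os.environ["HADOOP_HOME"] = r"C:\hadoop"
os.environ["PATH"] = r"C:\hadoop\bin;" + os.environ["PATH"]

# Import and start Spark only after the environment is set,
# so the JVM picks up HADOOP_HOME when it launches.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-write-test").getOrCreate()

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
# With the native libraries found, this should produce part files inside temp.parquet.
df.write.parquet("temp.parquet", mode="overwrite")

Note that a folder named temp.parquet is expected either way; Spark always writes a directory of part files rather than a single file. The bug here was only that the directory stayed empty.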