As mentioned in many other places on the web, adding a new column to an existing DataFrame is not straightforward. Unfortunately, this functionality is important to have (even though it is inefficient in a distributed environment), especially when trying to concatenate two DataFrames using unionAll.
What is the most elegant workaround for adding a null column to a DataFrame to facilitate a unionAll?
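To make the problem concrete, here is a minimal sketch of what goes wrong without the workaround (the frames df_a and df_b and their column names are made up for illustration; sc is a SparkContext, as in the snippets below):
df_a = sc.parallelize([(1, "x")]).toDF(["id", "value"])
df_b = sc.parallelize([(2,)]).toDF(["id"])
# Fails: unionAll matches columns by position and requires both
# sides to have the same number of columns.
df_a.unionAll(df_b)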
My version goes like this:
from pyspark.sql.types import StringType
from pyspark.sql.functions import UserDefinedFunction
to_none = UserDefinedFunction(lambda x: None, StringType())
new_df = old_df.withColumn('new_column', to_none(old_df['any_col_from_old']))
All you need here is to import StringType and to use lit and cast:
from pyspark.sql.types import StringType
from pyspark.sql.functions import lit
new_df = old_df.withColumn('new_column', lit(None).cast(StringType()))
A full example:
from pyspark.sql import Row

row = Row("foo", "bar")  # Row constructor with the two field names
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()
df.printSchema()
# root
# |-- foo: long (nullable = true)
# |-- bar: string (nullable = true)
new_df = df.withColumn('new_column', lit(None).cast(StringType()))
new_df.printSchema()
# root
# |-- foo: long (nullable = true)
# |-- bar: string (nullable = true)
# |-- new_column: string (nullable = true)
new_df.show()
# +---+---+----------+
# |foo|bar|new_column|
# +---+---+----------+
# |  1|  2|      null|
# |  2|  3|      null|
# +---+---+----------+
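With the typed null column in place, the unionAll from the question goes through. A short sketch continuing the example above (the second frame, other_df, and its contents are made up for illustration):
# A second frame that already carries all three columns.
other_df = sc.parallelize([(3, "4", "baz")]).toDF(["foo", "bar", "new_column"])
# unionAll matches columns by position, so keep the column order identical.
combined = new_df.unionAll(other_df)
combined.show()
# +---+---+----------+
# |foo|bar|new_column|
# +---+---+----------+
# |  1|  2|      null|
# |  2|  3|      null|
# |  3|  4|       baz|
# +---+---+----------+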
A Scala equivalent can be found here: Create new Dataframe with empty/null field values