I want to do something like this:

df.replace('empty-value', None, 'NAME')

Basically, I want to replace some value in the NAME column with NULL, but df.replace does not accept None as an argument. How can I do this?
This will replace empty-value with None in your name column:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# sc is the SparkContext available in the Spark shell
df = sc.parallelize([(1, "empty-value"), (2, "something else")]).toDF(["key", "name"])
# UDF that returns None for "empty-value" and passes every other value through unchanged
new_column_udf = udf(lambda name: None if name == "empty-value" else name, StringType())
new_df = df.withColumn("name", new_column_udf(df.name))
new_df.collect()
Output:
[Row(key=1, name=None), Row(key=2, name=u'something else')]
By passing the existing column name ("name") as the first parameter of withColumn, the old name column is replaced with the new column generated by the UDF output.
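As a side note, a UDF is not strictly required here. A minimal sketch using the built-in when/otherwise column functions (assuming the same df with a name column as above; no_udf_df is just an illustrative name) should achieve the same null replacement without the overhead of calling into Python for every row:

from pyspark.sql.functions import when, col, lit

# NULL when the value is "empty-value", otherwise keep the original value
no_udf_df = df.withColumn("name", when(col("name") == "empty-value", lit(None)).otherwise(col("name")))
no_udf_df.collect()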