
Replace Null values with median in pyspark


How can I replace the null values with the median in the columns Age and Height of the data set df below?

df = spark.createDataFrame([(1, 'John', 1.79, 28, 'M', 'Doctor'),
                            (2, 'Steve', 1.78, 45, 'M', None),
                            (3, 'Emma', 1.75, None, None, None),
                            (4, 'Ashley', 1.6, 33, 'F', 'Analyst'),
                            (5, 'Olivia', 1.8, 54, 'F', 'Teacher'),
                            (6, 'Hannah', 1.82, None, 'F', None),
                            (7, 'William', None, 42, 'M', 'Engineer'),
                            (None, None, None, None, None, None),
                            (8, 'Ethan', 1.55, 38, 'M', 'Doctor'),
                            (9, 'Hannah', 1.65, None, 'F', 'Doctor'),
                            (10, 'Xavier', 1.64, 43, None, 'Doctor')],
                           ['Id', 'Name', 'Height', 'Age', 'Gender', 'Profession'])

Following the post Replace missing values with mean - Spark Dataframe, I used the Imputer from pyspark.ml.feature:

from pyspark.ml.feature import Imputer

imputer = Imputer(
    inputCols=df.columns,
    outputCols=["{}_imputed".format(c) for c in df.columns])

imputer.fit(df).transform(df)

It throws the following error:

IllegalArgumentException: 'requirement failed: Column Id must be of type equal to one of the following types: [DoubleType, FloatType] but was actually of type LongType.'

So please help. Thank you


Solution

  • It's likely an initial casting error: Imputer only accepts FloatType or DoubleType columns (I had some strings I needed to be floats). To cast all columns to float:

    from pyspark.sql.functions import col
    df = df.select(*(col(c).cast("float").alias(c) for c in df.columns))
    

    Then you should be fine to impute. Note: I set my strategy to median rather than mean. (Be aware that casting non-numeric columns such as Name to float turns their values into nulls, so in practice you may want to limit inputCols to the numeric columns.)

    from pyspark.ml.feature import Imputer
    
    imputer = Imputer(
        inputCols=df.columns, 
        outputCols=["{}_imputed".format(c) for c in df.columns]
        ).setStrategy("median")
    
    # Add imputation cols to df
    df = imputer.fit(df).transform(df)
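
    To see what the median strategy actually computes, here is a small plain-Python sketch of the same imputation logic, using the Age column from the question. (Spark's Imputer computes an approximate median internally, but on small data it matches the exact one.)

    ```python
    from statistics import median

    def impute_median(values):
        """Fill None entries with the median of the observed (non-null) values."""
        observed = [v for v in values if v is not None]
        med = median(observed)
        return [med if v is None else v for v in values]

    # The Age column from the question's df, in row order
    ages = [28, 45, None, 33, 54, None, 42, None, 38, None, 43]
    print(impute_median(ages))
    # The observed ages sorted are [28, 33, 38, 42, 43, 45, 54],
    # so every null is replaced by the middle value, 42.
    ```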
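
    If you want to keep the string columns intact (casting Name or Profession to float just turns them into nulls), a variant is to cast and impute only the numeric columns. A sketch below, using a trimmed version of the question's df; the local SparkSession setup is only there to make the snippet self-contained.

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col
    from pyspark.ml.feature import Imputer

    spark = SparkSession.builder.master("local[1]").appName("median-impute").getOrCreate()

    df = spark.createDataFrame([(1, 'John', 1.79, 28, 'M', 'Doctor'),
                                (2, 'Steve', 1.78, 45, 'M', None),
                                (3, 'Emma', 1.75, None, None, None),
                                (7, 'William', None, 42, 'M', 'Engineer')],
                               ['Id', 'Name', 'Height', 'Age', 'Gender', 'Profession'])

    # Cast only the columns to be imputed; leave Id, Name, Gender, Profession alone
    num_cols = ['Height', 'Age']
    df = df.select(*[col(c).cast('float').alias(c) if c in num_cols else col(c)
                     for c in df.columns])

    imputer = Imputer(inputCols=num_cols,
                      outputCols=[c + '_imputed' for c in num_cols]).setStrategy('median')

    df = imputer.fit(df).transform(df)
    df.show()
    ```

    The string columns survive unchanged, and only Height_imputed and Age_imputed get the median filled in.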