Tags: python, apache-spark, pyspark, apache-spark-sql, date-range

Combine date ranges in Spark dataframe


I have a problem similar to this one.

However, I am dealing with a huge dataset and want to see whether I can do the same thing in PySpark instead of pandas. Below is the solution in pandas. Can this be done in PySpark?

def merge_dates(grp):
    # Find contiguous date groups, and get the first/last start/end date for each group.
    dt_groups = (grp['StartDate'] != grp['EndDate'].shift()).cumsum()
    return grp.groupby(dt_groups).agg({'StartDate': 'first', 'EndDate': 'last'})

# Perform a groupby and apply the merge_dates function, followed by formatting.
df = df.groupby(['FruitID', 'FruitType']).apply(merge_dates)
df = df.reset_index().drop('level_2', axis=1) 
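
For a quick sanity check, here is a minimal sketch of what that pandas code produces on a handful of rows (it reuses the merge_dates helper above; the sample values mirror the Orange rows used in the answer below):

import pandas as pd

sample = pd.DataFrame({
    'FruitID':   [2, 2, 2],
    'FruitType': ['Orange', 'Orange', 'Orange'],
    'StartDate': pd.to_datetime(['2015-01-01', '2016-05-31', '2017-01-01']),
    'EndDate':   pd.to_datetime(['2016-01-01', '2017-01-01', '2018-01-01']),
})

merged = (sample.groupby(['FruitID', 'FruitType'])
                .apply(merge_dates)
                .reset_index()
                .drop('level_2', axis=1))
print(merged)
# The last two ranges touch at 2017-01-01, so they collapse into a single
# 2016-05-31 -> 2018-01-01 row; the 2015 range stays separate.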

Solution

  • We can use a Window with the lag function to compute the contiguous groups and then aggregate them in a similar way to the pandas function you shared. A working example is given below, hope this helps!

    import pandas as pd
    from dateutil.parser import parse
    from pyspark.sql import SparkSession
    from pyspark.sql.window import Window
    import pyspark.sql.functions as F
    
    spark = SparkSession.builder.getOrCreate()
    
    
    # EXAMPLE DATA -----------------------------------------------
    
    # Build the example data with the plain dict constructor
    # (pd.DataFrame.from_items was removed in pandas 1.0).
    pdf = pd.DataFrame({
        'FruitID': [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        'FruitType': ['Apple', 'Apple', 'Apple', 'Orange', 'Orange', 'Orange',
                      'Banana', 'Banana', 'Blueberry', 'Mango', 'Kiwi', 'Mango'],
        'StartDate': [parse(x) for x in ['2015-01-01', '2016-01-01', '2017-01-01', '2015-01-01', '2016-05-31',
                                         '2017-01-01', '2015-01-01', '2016-01-01', '2017-01-01', '2015-01-01', '2016-09-15', '2017-01-01']],
        'EndDate': [parse(x) for x in ['2016-01-01', '2017-01-01', '2018-01-01', '2016-01-01', '2017-01-01',
                                       '2018-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2016-01-01', '2017-01-01', '2018-01-01']],
    })
    
    pdf = pdf.sort_values(['FruitID', 'StartDate'])  # sort_values returns a new frame, so assign it back
    df = spark.createDataFrame(pdf)
    
    
    # FIND CONTIGUOUS GROUPS AND AGGREGATE ---------------------
    
    # A row starts a new contiguous group whenever its StartDate differs from the
    # previous row's EndDate within the same FruitType (lag is null on the first
    # row of a partition, so the condition falls through to the 0 branch).
    w = Window.partitionBy("FruitType").orderBy("StartDate")
    new_grp = F.when(F.datediff(F.lag("EndDate", 1).over(w), F.col("StartDate")) != 0,
                     F.lit(1)).otherwise(F.lit(0))
    
    # A running sum of that flag gives a group id per FruitType; min/max are used
    # for the aggregation because first/last are not deterministic after a groupBy.
    df = (df
          .withColumn('contiguous_grp', F.sum(new_grp).over(w))
          .groupBy('FruitType', 'contiguous_grp')
          .agg(F.min('StartDate').alias('StartDate'), F.max('EndDate').alias('EndDate'))
          .drop('contiguous_grp'))
    df.show()
    

    Output:

    +---------+-------------------+-------------------+
    |FruitType|          StartDate|            EndDate|
    +---------+-------------------+-------------------+
    |   Orange|2015-01-01 00:00:00|2016-01-01 00:00:00|
    |   Orange|2016-05-31 00:00:00|2018-01-01 00:00:00|
    |   Banana|2015-01-01 00:00:00|2017-01-01 00:00:00|
    |     Kiwi|2016-09-15 00:00:00|2017-01-01 00:00:00|
    |    Mango|2015-01-01 00:00:00|2016-01-01 00:00:00|
    |    Mango|2017-01-01 00:00:00|2018-01-01 00:00:00|
    |    Apple|2015-01-01 00:00:00|2018-01-01 00:00:00|
    |Blueberry|2017-01-01 00:00:00|2018-01-01 00:00:00|
    +---------+-------------------+-------------------+
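
    The pandas snippet in the question also groups by FruitID; if you need FruitID in the Spark result as well, the same pattern applies with FruitID added to the window partition and the grouping keys. A minimal sketch, reusing the pdf and spark objects from the example above (the df2/w2/new_grp2 names are only illustrative):

    df2 = spark.createDataFrame(pdf)   # start again from the raw rows
    
    w2 = Window.partitionBy("FruitID", "FruitType").orderBy("StartDate")
    new_grp2 = F.when(F.datediff(F.lag("EndDate", 1).over(w2), F.col("StartDate")) != 0,
                      F.lit(1)).otherwise(F.lit(0))
    
    df2 = (df2
           .withColumn('contiguous_grp', F.sum(new_grp2).over(w2))
           .groupBy('FruitID', 'FruitType', 'contiguous_grp')
           .agg(F.min('StartDate').alias('StartDate'), F.max('EndDate').alias('EndDate'))
           .drop('contiguous_grp'))
    df2.show()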