Tags: pyspark, apache-spark-sql, memory-efficient

What is an efficient way to make a new DataFrame (PySpark)?


I have a DataFrame like this:

+--------+------+-------+
|  date  |  ID  | count |
+--------+------+-------+
|20170101| 258  |  1003 |
|20170102| 258  |  13   |
|20170103| 258  |  1    |
|20170104| 258  |  108  |
|20170109| 258  |  25   |
|  ...   | ...  |  ...  |
|20170101| 2813 |  503  |
|20170102| 2813 |  139  |
|  ...   | ...  |  ...  |
|20170101| 4963 |  821  |
|20170102| 4963 |  450  |
|  ...   | ...  |  ...  |
+--------+------+-------+

Some rows are missing from this DataFrame.

For example, the dates 20170105 through 20170108 are missing for ID 258: a missing date simply does not appear as a row (i.e., its count is 0).

But I'd like to add rows with a count of 0 for those missing dates, like this:

+--------+------+-------+
|  date  |  ID  | count |
+--------+------+-------+
|20170101| 258  |  1003 |
|20170102| 258  |  13   |
|20170103| 258  |  1    |
|20170104| 258  |  108  |
|20170105| 258  |  0    |
|20170106| 258  |  0    |
|20170107| 258  |  0    |
|20170108| 258  |  0    |
|20170109| 258  |  25   |
|  ...   | ...  |  ...  |
|20170101| 2813 |  503  |
|20170102| 2813 |  139  |
|  ...   | ...  |  ...  |
|20170101| 4963 |  821  |
|20170102| 4963 |  450  |
|  ...   | ...  |  ...  |
+--------+------+-------+

A DataFrame is immutable, so to add the zero-count rows I have to build a new DataFrame.

But even though I have the date range (20170101 ~ 20171231) and the list of IDs, I can't fill the gaps with a for loop over the DataFrame.

How can I build this new DataFrame efficiently?

P.S. What I already tried: build a complete (date, ID) DataFrame, compare it with the original to produce another DataFrame containing only the zero-count rows, and finally union the original DataFrame with that zero-count DataFrame. This feels like a long and inefficient process; please recommend a more efficient solution. (A sketch of that full-grid idea follows below.)
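
A minimal sketch of that full-grid idea, where all_dates and all_ids are hypothetical single-column DataFrames built from the duration and ID list, and a left join plus fillna stands in for the compare-and-union step:

    # full (date, ID) grid -- all_dates and all_ids are assumed to be
    # single-column DataFrames named 'date' and 'ID'
    grid = all_dates.crossJoin(all_ids)

    # left-join the observed counts onto the grid; combinations with no
    # observation come back null, so replace those counts with 0
    filled = grid.join(df, on=['date', 'ID'], how='left').fillna(0, subset=['count'])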


Solution

  • from pyspark.sql.functions import unix_timestamp, from_unixtime, struct, datediff, lead, col, explode, lit, udf
    from pyspark.sql.window import Window
    from pyspark.sql.types import ArrayType, DateType
    from datetime import timedelta
    
    # sample data; the yyyyMMdd strings are cast to a proper DateType column
    df = sc.parallelize([
        ['20170101', 258, 1003],
        ['20170102', 258, 13],
        ['20170103', 258, 1],
        ['20170104', 258, 108],
        ['20170109', 258, 25],
        ['20170101', 2813, 503],
        ['20170102', 2813, 139],
        ['20170101', 4963, 821],
        ['20170102', 4963, 450]]).\
        toDF(('date', 'ID', 'count')).\
        withColumn("date", from_unixtime(unix_timestamp('date', 'yyyyMMdd')).cast('date'))
    df.show()
    
    # given a (date, diff) struct, return the dates strictly between
    # this row's date and the next observed date for the same ID
    def date_list_fn(d):
        return [d[0] + timedelta(days=x) for x in range(1, d[1])]
    date_list_udf = udf(date_list_fn, ArrayType(DateType()))
    
    # window over each ID's rows, ordered by date
    w = Window.partitionBy('ID').orderBy('date')
    
    # rows for the missing dates: flag gaps (diff > 1), expand each gap
    # into the in-between dates, explode to one row per date, count = 0
    df_missing = df.withColumn("diff", datediff(lead('date').over(w), 'date')).\
                    filter(col("diff") > 1).\
                    withColumn("date_list", date_list_udf(struct("date", "diff"))).\
                    withColumn("date_list", explode(col("date_list"))).\
                    select(col("date_list").alias("date"), "ID", lit(0).alias("count"))
    
    # final DataFrame: union the original rows with the zero-count rows
    final_df = df.union(df_missing).sort(col("ID"), col("date"))
    final_df.show()
    

    Sample data:

    +----------+----+-----+
    |      date|  ID|count|
    +----------+----+-----+
    |2017-01-01| 258| 1003|
    |2017-01-02| 258|   13|
    |2017-01-03| 258|    1|
    |2017-01-04| 258|  108|
    |2017-01-09| 258|   25|
    |2017-01-01|2813|  503|
    |2017-01-02|2813|  139|
    |2017-01-01|4963|  821|
    |2017-01-02|4963|  450|
    +----------+----+-----+
    

    Output is:

    +----------+----+-----+
    |      date|  ID|count|
    +----------+----+-----+
    |2017-01-01| 258| 1003|
    |2017-01-02| 258|   13|
    |2017-01-03| 258|    1|
    |2017-01-04| 258|  108|
    |2017-01-05| 258|    0|
    |2017-01-06| 258|    0|
    |2017-01-07| 258|    0|
    |2017-01-08| 258|    0|
    |2017-01-09| 258|   25|
    |2017-01-01|2813|  503|
    |2017-01-02|2813|  139|
    |2017-01-01|4963|  821|
    |2017-01-02|4963|  450|
    +----------+----+-----+
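
    On Spark 2.4+, the Python UDF can be avoided entirely: the built-in sequence function generates date ranges natively. A sketch of that variant, reusing the same sample df and window definition:

    from pyspark.sql.functions import lead, datediff, explode, expr, lit
    from pyspark.sql.window import Window

    w = Window.partitionBy('ID').orderBy('date')

    # expand each gap with the native sequence() function (Spark 2.4+):
    # all dates strictly between a row's date and the next observed date
    df_missing = df.withColumn("next_date", lead("date").over(w)).\
        filter(datediff("next_date", "date") > 1).\
        withColumn("date", explode(expr(
            "sequence(date_add(date, 1), date_sub(next_date, 1), interval 1 day)"))).\
        select("date", "ID", lit(0).alias("count"))

    final_df = df.union(df_missing).sort("ID", "date")

    This produces the same output as above but keeps the gap expansion inside Spark's native engine, avoiding the serialization overhead of a Python UDF.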