I have a column of dates and a column of counts, e.g.:
Date       Count
3/07/2010  1
2/01/2010  2
1/07/2012  5
I used the code below to change the data type to date:
# (assumes: from datetime import datetime, from pyspark.sql.functions import udf, col,
#  and from pyspark.sql.types import DateType)
func = udf(lambda x: datetime.strptime(x, '%d/%m/%Y'), DateType())
crime_mongodb_df = crime_mongodb_df.withColumn('Reported Date', func(col('Reported Date')))
Then, I want to group the data by year and find the total count per year. I am not sure how to do the grouping. Can I get some help? Thanks!
We can use functions from pyspark.sql.functions to do all of this, including the type conversion, quite easily :)
from pyspark.sql.functions import to_date, col, year
df = spark.createDataFrame([('3/07/2012', 1), ('2/07/2010', 2), ('1/07/2010', 5)], ["Date", "Count"])
df.show()
df.printSchema()
+---------+-----+
|     Date|Count|
+---------+-----+
|3/07/2012|    1|
|2/07/2010|    2|
|1/07/2010|    5|
+---------+-----+

root
 |-- Date: string (nullable = true)
 |-- Count: long (nullable = true)
adjustedDf = df.withColumn("Date", to_date(col("Date"), "d/MM/yyyy")) \
    .withColumn("year", year("Date"))
adjustedDf.show()
+----------+-----+----+
|      Date|Count|year|
+----------+-----+----+
|2012-07-03|    1|2012|
|2010-07-02|    2|2010|
|2010-07-01|    5|2010|
+----------+-----+----+
adjustedDf.groupBy("year").sum("Count").show()
+----+----------+
|year|sum(Count)|
+----+----------+
|2010|         7|
|2012|         1|
+----+----------+
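Applied to your own DataFrame, the same steps would look roughly like the sketch below. This assumes your date column is literally named 'Reported Date' and the count column is named 'Count', as in your snippet; adjust the names and the date pattern if yours differ.

from pyspark.sql.functions import to_date, col, year, sum as sum_

# Sketch using the column names from the question ('Reported Date', 'Count')
totals_per_year = (crime_mongodb_df
    .withColumn("Reported Date", to_date(col("Reported Date"), "d/MM/yyyy"))  # string -> date
    .withColumn("year", year(col("Reported Date")))                           # extract the year
    .groupBy("year")
    .agg(sum_("Count").alias("total_count")))                                 # total count per year

totals_per_year.show()

Using agg with alias is optional; it just gives the result column a friendlier name than the default sum(Count).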