How can I group a PySpark DataFrame by year?


You can use the following syntax to group rows by year in a PySpark DataFrame:

from pyspark.sql.functions import year, sum

df.groupBy(year('date').alias('year')).agg(sum('sales').alias('sum_sales')).show()

This particular example groups the rows of the DataFrame by the year of the date column and then calculates the sum of the values in the sales column for each year.
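
Note that importing sum from pyspark.sql.functions shadows Python's built-in sum in the current namespace. If you'd prefer to avoid that, one common alternative is to import the functions module under an alias and write the same aggregation like this:

import pyspark.sql.functions as F

#equivalent aggregation using the F alias instead of importing sum directly
df.groupBy(F.year('date').alias('year')).agg(F.sum('sales').alias('sum_sales')).show()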

The following example shows how to use this syntax in practice.

Example: How to Group by Year in PySpark

Suppose we have the following PySpark DataFrame that contains information about the sales made on various days at some company:

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

#define data
data = [['2021-04-11', 22],
        ['2021-04-15', 14],
        ['2021-04-17', 12],
        ['2022-05-21', 15],
        ['2022-05-23', 30],
        ['2023-10-26', 45],
        ['2023-10-28', 32],
        ['2023-10-29', 47]]
  
#define column names
columns = ['date', 'sales']
  
#create dataframe using data and column names
df = spark.createDataFrame(data, columns) 
  
#view dataframe
df.show()

+----------+-----+
|      date|sales|
+----------+-----+
|2021-04-11|   22|
|2021-04-15|   14|
|2021-04-17|   12|
|2022-05-21|   15|
|2022-05-23|   30|
|2023-10-26|   45|
|2023-10-28|   32|
|2023-10-29|   47|
+----------+-----+
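
Note that the date column here contains strings, which year() can still handle because Spark casts them to dates implicitly. If you'd rather make the type explicit, one possible approach is to convert the column with to_date() before grouping:

from pyspark.sql.functions import to_date

#convert the date column from string to DateType (default format is yyyy-MM-dd)
df = df.withColumn('date', to_date('date'))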

Suppose we would like to calculate the sum of the sales, grouped by year.

We can use the following syntax to do so:

from pyspark.sql.functions import year, sum

#calculate sum of sales by year
df.groupBy(year('date').alias('year')).agg(sum('sales').alias('sum_sales')).show()

+----+---------+
|year|sum_sales|
+----+---------+
|2021|       48|
|2022|       45|
|2023|      124|
+----+---------+

The resulting DataFrame shows the sum of sales by year.

For example, we can see:

  • The sum of sales for 2021 is 48.
  • The sum of sales for 2022 is 45.
  • The sum of sales for 2023 is 124.
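
Because groupBy does not guarantee any particular row order, you may also want to sort the output by year. The following sketch sorts the result and computes several aggregations at once; the aliases avg_sales and cnt_sales are just illustrative names:

from pyspark.sql.functions import year, sum, avg, count

#calculate sum, average, and count of sales by year, sorted by year
df.groupBy(year('date').alias('year')) \
  .agg(sum('sales').alias('sum_sales'),
       avg('sales').alias('avg_sales'),
       count('sales').alias('cnt_sales')) \
  .orderBy('year') \
  .show()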

Note that you can also aggregate the sales column using a different function if you’d like.

For example, you could use the following syntax to calculate the total count of sales, grouped by year:

from pyspark.sql.functions import year, count

#calculate count of sales by year
df.groupBy(year('date').alias('year')).agg(count('sales').alias('cnt_sales')).show()

+----+---------+
|year|cnt_sales|
+----+---------+
|2021|        3|
|2022|        2|
|2023|        3|
+----+---------+

The resulting DataFrame now shows the count of sales by year.
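
The same pattern extends to a finer grain. For example, one way to group by both year and month (using the month() function from pyspark.sql.functions) might look like this:

from pyspark.sql.functions import year, month, sum

#calculate sum of sales by year and month, sorted by year and month
df.groupBy(year('date').alias('year'), month('date').alias('month')) \
  .agg(sum('sales').alias('sum_sales')) \
  .orderBy('year', 'month') \
  .show()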
