How can I update column values based on a condition in PySpark?

In PySpark, you can update column values based on a condition by using the `when()` and `otherwise()` functions. The `when()` function builds a conditional expression, while `otherwise()` specifies the value to use when the condition is not met. Combined with `withColumn()`, this provides an efficient and flexible way to update column values in a PySpark DataFrame, making it a useful tool for data manipulation and analysis tasks.

PySpark: Update Column Values Based on Condition


You can use the following syntax to update column values based on a condition in a PySpark DataFrame:

import pyspark.sql.functions as F

#update all values in 'team' column equal to 'A' to now be 'Atlanta'
df = df.withColumn('team', F.when(df.team=='A', 'Atlanta').otherwise(df.team))

This particular example updates all values in the team column equal to ‘A’ to now be ‘Atlanta’ instead.

Any values in the team column not equal to ‘A’ are simply left untouched.
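Note that the otherwise() part is what preserves those existing values. As a minimal sketch (assuming the same DataFrame df with a team column), omitting otherwise() sets every non-matching row to null, which is usually not what you want when updating a column:

import pyspark.sql.functions as F

#without otherwise(): rows where team != 'A' are set to null
df_nulls = df.withColumn('team', F.when(df.team=='A', 'Atlanta'))

#with otherwise(): rows where team != 'A' keep their original value
df_kept = df.withColumn('team', F.when(df.team=='A', 'Atlanta').otherwise(df.team))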

The following examples show how to use this syntax in practice.

Example: Update Column Values Based on Condition in PySpark

Suppose we have the following PySpark DataFrame that contains information about various basketball players:

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

#define data
data = [['A', 'Guard', 11], 
        ['A', 'Guard', 8], 
        ['A', 'Forward', 22], 
        ['A', 'Forward', 22], 
        ['B', 'Guard', 14], 
        ['B', 'Guard', 14],
        ['B', 'Forward', 13],
        ['B', 'Forward', 7]] 
  
#define column names
columns = ['team', 'position', 'points'] 
  
#create dataframe using data and column names
df = spark.createDataFrame(data, columns) 
  
#view dataframe
df.show()

+----+--------+------+
|team|position|points|
+----+--------+------+
|   A|   Guard|    11|
|   A|   Guard|     8|
|   A| Forward|    22|
|   A| Forward|    22|
|   B|   Guard|    14|
|   B|   Guard|    14|
|   B| Forward|    13|
|   B| Forward|     7|
+----+--------+------+

We can use the following syntax to update all of the values in the team column equal to ‘A’ to now be ‘Atlanta’ instead:

import pyspark.sql.functions as F

#update all values in 'team' column equal to 'A' to now be 'Atlanta'
df = df.withColumn('team', F.when(df.team=='A', 'Atlanta').otherwise(df.team))

#view updated DataFrame
df.show()

+-------+--------+------+
|   team|position|points|
+-------+--------+------+
|Atlanta|   Guard|    11|
|Atlanta|   Guard|     8|
|Atlanta| Forward|    22|
|Atlanta| Forward|    22|
|      B|   Guard|    14|
|      B|   Guard|    14|
|      B| Forward|    13|
|      B| Forward|     7|
+-------+--------+------+

From the output we can see that each occurrence of ‘A’ in the team column has been updated to be ‘Atlanta’ instead.

All values in the team column not equal to ‘A’ were simply left the same.
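If you need to update values for more than one condition at once, you can chain multiple when() calls before the final otherwise(). The following is just an illustrative sketch applied to the original DataFrame (the ‘Boston’ value is made up for demonstration):

import pyspark.sql.functions as F

#update 'A' to 'Atlanta' and 'B' to 'Boston'; leave any other values unchanged
df = df.withColumn('team', F.when(df.team=='A', 'Atlanta')
                            .when(df.team=='B', 'Boston')
                            .otherwise(df.team))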

Note: You can find the complete documentation for the PySpark when() function in the official PySpark documentation.
