How do you drop duplicates in Pandas across multiple columns?

You can drop duplicates across multiple columns in pandas with the DataFrame.drop_duplicates() method. Its optional subset argument specifies which columns to consider when searching for duplicates; by default, all columns are considered. The inplace argument can be used to modify the DataFrame in place instead of returning a new DataFrame.
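As a quick sketch of the inplace behavior (the column names and data here are illustrative, not from the examples below):

```python
import pandas as pd

# small DataFrame with one duplicated row (illustrative data)
df = pd.DataFrame({'a': [1, 1, 2], 'b': ['x', 'x', 'y']})

# with inplace=True the method returns None and modifies df directly
df.drop_duplicates(inplace=True)

print(df)
```

Note that without inplace=True, drop_duplicates() leaves the original DataFrame untouched and returns a new one, which you must assign to a variable to keep.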


You can use the following methods to drop duplicate rows across multiple columns in a pandas DataFrame:

Method 1: Drop Duplicates Across All Columns

df.drop_duplicates()

Method 2: Drop Duplicates Across Specific Columns

df.drop_duplicates(['column1', 'column3'])

The following examples show how to use each method in practice with the following pandas DataFrame:

import pandas as pd

#create DataFrame
df = pd.DataFrame({'region': ['East', 'East', 'East', 'West', 'West', 'West'],
                   'store': [1, 1, 2, 1, 2, 2],
                   'sales': [5, 5, 7, 9, 12, 8]})

#view DataFrame
print(df)

  region  store  sales
0   East      1      5
1   East      1      5
2   East      2      7
3   West      1      9
4   West      2     12
5   West      2      8

Example 1: Drop Duplicates Across All Columns

The following code shows how to drop rows that have duplicate values across all columns:

#drop rows that have duplicate values across all columns
df.drop_duplicates()

	region	store	sales
0	East	1	5
2	East	2	7
3	West	1	9
4	West	2	12
5	West	2	8

The row in index position 1 had the same values across all columns as the row in index position 0, so it was dropped from the DataFrame.

By default, pandas keeps the first occurrence of each duplicate row. However, you can use the keep argument to keep the last occurrence instead:

#drop rows that have duplicate values across all columns (keep last duplicate)
df.drop_duplicates(keep='last')

	region	store	sales
1	East	1	5
2	East	2	7
3	West	1	9
4	West	2	12
5	West	2	8

Example 2: Drop Duplicates Across Specific Columns

You can use the following code to drop rows that have duplicate values across only the region and store columns:

#drop rows that have duplicate values across region and store columns
df.drop_duplicates(['region', 'store'])

	region	store	sales
0	East	1	5
2	East	2	7
3	West	1	9
4	West	2	12

A total of two rows (index positions 1 and 5) were dropped from the DataFrame because they had duplicate values in the region and store columns.
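Notice that the surviving rows keep their original index labels, so gaps appear where rows were dropped. If you want a fresh 0-based index, one option is the ignore_index argument (available in pandas 1.0 and later); a sketch using the same DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'region': ['East', 'East', 'East', 'West', 'West', 'West'],
                   'store': [1, 1, 2, 1, 2, 2],
                   'sales': [5, 5, 7, 9, 12, 8]})

# ignore_index=True renumbers the result 0, 1, 2, ... instead of
# keeping the original index labels 0, 2, 3, 4
result = df.drop_duplicates(['region', 'store'], ignore_index=True)

print(result)
```

On older pandas versions, chaining .reset_index(drop=True) after drop_duplicates() achieves the same result.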
