How can I calculate the Mahalanobis Distance in Python?

The Mahalanobis distance is a commonly used statistical measure of how far an observation lies from the center of a distribution. Because it accounts for the correlations between variables, it is often more informative than metrics such as the Euclidean distance when variables are related or measured on different scales. Calculating it in Python involves organizing the data into arrays or matrices, estimating the covariance matrix, and applying the Mahalanobis formula. Libraries such as NumPy and SciPy also provide functions for computing it, making the measure straightforward to incorporate into an analysis.
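For example, SciPy exposes scipy.spatial.distance.mahalanobis, which computes the distance between two vectors given the inverse of a covariance matrix. Here is a minimal sketch; the two points and the small dataset used to estimate the covariance matrix are made up purely for illustration:

import numpy as np
from scipy.spatial import distance

#toy dataset used only to estimate a covariance matrix (illustrative values)
data = np.array([[1.0, 2.0],
                 [2.0, 1.0],
                 [3.0, 4.0],
                 [4.0, 3.0],
                 [5.0, 5.0]])

#scipy expects the INVERSE of the covariance matrix
inv_cov = np.linalg.inv(np.cov(data.T))

#Mahalanobis distance between two illustrative points
d = distance.mahalanobis([2.0, 3.0], [4.0, 1.0], inv_cov)
print(d)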

Calculate Mahalanobis Distance in Python

The Mahalanobis distance measures the distance between a point and a distribution (or between two points) in multivariate space, taking into account the variance and correlation of the variables. It’s often used to find outliers in statistical analyses that involve several variables.
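For an observation x, a vector of variable means μ, and a covariance matrix S, the squared Mahalanobis distance is

D² = (x − μ)ᵀ S⁻¹ (x − μ)

Taking the square root gives the distance itself. The squared form is often used directly because, for multivariate normal data, it approximately follows a chi-square distribution with one degree of freedom per variable, which is what makes the p-value calculation in Step 3 below possible.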

This tutorial explains how to calculate the Mahalanobis distance in Python.

Example: Mahalanobis Distance in Python

Use the following steps to calculate the Mahalanobis distance for every observation in a dataset in Python.

Step 1: Create the dataset.

First, we’ll create a dataset that displays the exam scores of 20 students along with the number of hours they spent studying, the number of prep exams they took, and their current grade in the course:

import numpy as np
import pandas as pd
import scipy.stats as stats

data = {'score': [91, 93, 72, 87, 86, 73, 68, 87, 78, 99, 95, 76, 84, 96, 76, 80, 83, 84, 73, 74],
        'hours': [16, 6, 3, 1, 2, 3, 2, 5, 2, 5, 2, 3, 4, 3, 3, 3, 4, 3, 4, 4],
        'prep': [3, 4, 0, 3, 4, 0, 1, 2, 1, 2, 3, 3, 3, 2, 2, 2, 3, 3, 2, 2],
        'grade': [70, 88, 80, 83, 88, 84, 78, 94, 90, 93, 89, 82, 95, 94, 81, 93, 93, 90, 89, 89]
        }

df = pd.DataFrame(data, columns=['score', 'hours', 'prep', 'grade'])
df.head()

 score hours prep grade
0   91    16    3    70
1   93     6    4    88
2   72     3    0    80
3   87     1    3    83
4   86     2    4    88

Step 2: Calculate the Mahalanobis distance for each observation.

Next, we will write a short function to calculate the Mahalanobis distance. Note that the function returns the squared distance, which is the form the chi-square test in Step 3 expects.

#create function to calculate the squared Mahalanobis distance
def mahalanobis(x=None, data=None, cov=None):

    #center the observations at the column means of the data
    x_mu = x - data.mean()
    #estimate the covariance matrix from the data unless one is supplied
    #(note: "if not cov" would raise an error for an actual matrix, so test for None)
    if cov is None:
        cov = np.cov(data.values.T)
    inv_covmat = np.linalg.inv(cov)
    left = np.dot(x_mu, inv_covmat)
    mahal = np.dot(left, x_mu.T)
    #the diagonal holds (x - mu)' S^-1 (x - mu) for each row, i.e. the
    #squared Mahalanobis distance of each observation
    return mahal.diagonal()

#create new column in dataframe that contains the squared Mahalanobis distance for each row
df['mahalanobis'] = mahalanobis(x=df, data=df[['score', 'hours', 'prep', 'grade']])

#display first five rows of dataframe
df.head()

 score hours prep grade mahalanobis
0   91    16    3    70   16.501963
1   93     6    4    88    2.639286
2   72     3    0    80    4.850797
3   87     1    3    83    5.201261
4   86     2    4    88    3.828734
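As a quick sanity check, we can reproduce the first row’s value with SciPy’s built-in scipy.spatial.distance.mahalanobis. SciPy returns the distance itself rather than its square, so we square the result to match the column above:

from scipy.spatial import distance

#recompute the first observation's value using scipy
cols = ['score', 'hours', 'prep', 'grade']
mu = df[cols].mean().values
inv_cov = np.linalg.inv(np.cov(df[cols].values.T))

#square scipy's result to match the squared distances computed above
d0 = distance.mahalanobis(df[cols].iloc[0].values, mu, inv_cov)
print(d0**2)  #should print roughly 16.501963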

Step 3: Calculate the p-value for each Mahalanobis distance.

We can see that some of the Mahalanobis distances are much larger than others. To determine if any of the distances are statistically significant, we need to calculate their p-values.

The squared Mahalanobis distance of multivariate normal data approximately follows a chi-square distribution with k degrees of freedom, where k = the number of variables. The p-value for each distance is therefore the chi-square tail probability with k degrees of freedom. Since our dataset has four variables (score, hours, prep, and grade), we’ll use 4 degrees of freedom.

from scipy.stats import chi2

#calculate p-value for each squared Mahalanobis distance
df['p'] = 1 - chi2.cdf(df['mahalanobis'], 4)

#display p-values for first five rows in dataframe
df.head()

 score hours prep grade mahalanobis         p
0   91    16    3    70   16.501963  0.002415
1   93     6    4    88    2.639286  0.619880
2   72     3    0    80    4.850797  0.302952
3   87     1    3    83    5.201261  0.267263
4   86     2    4    88    3.828734  0.429680

Typically a p-value less than .001 is considered extreme enough to flag an outlier. None of the p-values here quite cross that strict cutoff, but the first observation’s p-value (about .002) is two orders of magnitude smaller than the rest, making it the clearest outlier candidate in the dataset.
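Equivalently, you can skip the p-value conversion and compare the squared distances directly against a chi-square critical value. A short sketch using the same .001 level and 4 degrees of freedom:

from scipy.stats import chi2

#squared distances above this critical value correspond to p-values below .001
cutoff = chi2.ppf(0.999, 4)
print(cutoff)  #roughly 18.47

#flag any rows whose squared Mahalanobis distance exceeds the cutoff
outliers = df[df['mahalanobis'] > cutoff]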

Depending on the context of the problem, you may decide to remove this observation from the dataset, since an extreme observation like this can distort the results of the analysis.
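If you do choose to remove it, a minimal sketch is to drop the row by its index label (0 here):

#drop the flagged observation and keep the remaining rows
df_clean = df.drop(index=0)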
