How to Calculate Mean Absolute Error in Python?


In statistics, the mean absolute error (MAE) is a way to measure the accuracy of a given model. It is calculated as:

MAE = (1/n) * Σ|yi – xi|

where:

  • Σ: A Greek symbol that means “sum”
  • yi: The observed value for the ith observation
  • xi: The predicted value for the ith observation
  • n: The total number of observations
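
To see what this formula does, here is a minimal sketch that computes the MAE directly from the definition above (the function name mae_by_hand is just an illustrative choice, not a library function):

def mae_by_hand(actual, pred):
    #sum the absolute differences between observed and predicted values
    abs_errors = [abs(y - x) for y, x in zip(actual, pred)]
    #divide by the total number of observations
    return sum(abs_errors) / len(abs_errors)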

In practice, we can easily calculate the mean absolute error in Python by using the mean_absolute_error() function from Scikit-learn.

This tutorial provides an example of how to use this function in practice.

Example: Calculating Mean Absolute Error in Python

Suppose we have the following arrays of actual values and predicted values in Python:

actual = [12, 13, 14, 15, 15, 22, 27]
pred = [11, 13, 14, 14, 15, 16, 18]

The following code shows how to calculate the mean absolute error for this model:

from sklearn.metrics import mean_absolute_error as mae

#calculate MAE
mae(actual, pred)

2.4285714285714284

The mean absolute error (MAE) turns out to be 2.42857.

This tells us that the average absolute difference between the actual data values and the values predicted by the model is 2.42857.
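
We can check this value by hand using the formula from above:

MAE = (1/7) * (|12 – 11| + |13 – 13| + |14 – 14| + |15 – 14| + |15 – 15| + |22 – 16| + |27 – 18|)
MAE = (1/7) * (1 + 0 + 0 + 1 + 0 + 6 + 9)
MAE = 17/7 ≈ 2.42857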

We can compare this MAE to the MAE obtained by other forecast models to see which model performs best.

The lower the MAE for a given model, the more closely the model is able to predict the actual values.

Note: The array of actual values and the array of predicted values should both be of equal length in order for this function to work correctly.
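
For example, a simple length check (shown here as an illustrative sketch, not something Scikit-learn requires you to write) can catch mismatched arrays before computing the error:

#make sure both arrays contain the same number of observations
if len(actual) != len(pred):
    raise ValueError("actual and pred must have the same length")

#safe to calculate MAE
mae(actual, pred)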
