What Is Considered a Low Standard Deviation?

A low standard deviation indicates that the data points in a dataset are clustered close to the mean. In other words, the data is relatively consistent, with little variation between individual values. Because of this, the standard deviation is a useful way to summarize how tightly a dataset is distributed around its typical value.

The standard deviation is used to measure the spread of values in a sample.

We can use the following formula to calculate the standard deviation of a given sample:

s = √(Σ(xi – xbar)² / (n-1))

where:

  • s: The sample standard deviation
  • Σ: A symbol that means “sum”
  • xi: The ith value in the sample
  • xbar: The mean of the sample
  • n: The sample size
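
As a quick check on this formula, here is a minimal Python sketch that applies it step by step to a small, made-up sample (the scores below are hypothetical, chosen only to illustrate the calculation):

```python
import math

# Hypothetical sample of exam scores
sample = [72, 75, 78, 81, 84]

n = len(sample)
xbar = sum(sample) / n  # sample mean

# Sum of squared deviations from the mean
squared_devs = sum((x - xbar) ** 2 for x in sample)

# Divide by (n - 1), then take the square root to get the sample standard deviation
s = math.sqrt(squared_devs / (n - 1))

print(round(s, 2))  # 4.74 (matches statistics.stdev(sample))
```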

The higher the value for the standard deviation, the more spread out the values in a sample are. Conversely, the lower the value for the standard deviation, the more closely packed together the values are.

One question students often have is: What is considered a low value for the standard deviation?

The answer: There is no cut-off value for what is considered a “low” standard deviation because it depends on the type of data you’re working with.

For example, consider the following scenarios:

Scenario 1: A professor collects data on the exam scores of students in his class and finds that the standard deviation of exam scores is 7.8.

Scenario 2: An economist measures the total income tax collected by different countries around the world and finds that the standard deviation of total income tax collected is $1.2 million.

The standard deviation in scenario 2 is much higher, but that’s only because the values being measured in scenario 2 are considerably higher than those being measured in scenario 1.

This means there is no single number we can use to tell whether a standard deviation is “low” or not. It all depends on the situation.

Using the Coefficient of Variation

One way to determine if a standard deviation is “low” is to compare it to the mean of the dataset.

A coefficient of variation, often abbreviated CV, is a way to measure how spread out values are in a dataset relative to the mean. It is calculated as:

CV = s / x

  • s: The standard deviation of the dataset
  • x: The mean of the dataset

The lower the CV, the lower the standard deviation relative to the mean.
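
Since the CV is just one extra division, it is easy to compute alongside the standard deviation. The sketch below does so for the same hypothetical sample used earlier, using Python's built-in statistics module:

```python
import statistics

# Hypothetical sample of exam scores
sample = [72, 75, 78, 81, 84]

s = statistics.stdev(sample)    # sample standard deviation
xbar = statistics.mean(sample)  # sample mean

cv = s / xbar
print(round(cv, 3))  # 0.061
```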

For example, suppose a professor collects data on the exam scores of students and finds that the mean score is 80.3 and the standard deviation of scores is 7.8. The CV would be calculated as:

  • CV: 7.8 / 80.3 = 0.097

Suppose another professor at a different university collects data on the exam scores of his students and finds that the mean score is 90.2 and the standard deviation of scores is 8.5. The CV would be calculated as:

  • CV: 8.5 / 90.2 = 0.094

Although the standard deviation of exam scores is lower for the first professor’s students, the coefficient of variation is actually higher than that of the exam scores for the second professor’s students.

This means the variation of exam scores relative to the mean score is higher for the first professor’s students.
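
To verify the arithmetic, here is a minimal sketch that recomputes both CVs from the summary statistics above:

```python
# Summary statistics from the two examples above
cv_prof1 = 7.8 / 80.3  # first professor's class
cv_prof2 = 8.5 / 90.2  # second professor's class

print(round(cv_prof1, 3))   # 0.097
print(round(cv_prof2, 3))   # 0.094
print(cv_prof1 > cv_prof2)  # True: more relative variation in the first class
```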

Comparing Standard Deviations Between Samples

Rather than classifying a standard deviation as “low” or not, we often simply compare the standard deviations of several samples to determine which sample has the lowest one.

For example, suppose a professor administers three exams to his students during the course of one semester. He then calculates the sample standard deviation of scores for each exam:

  • Sample standard deviation of Exam 1 Scores: 4.9
  • Sample standard deviation of Exam 2 Scores: 14.4
  • Sample standard deviation of Exam 3 Scores: 2.5

The professor can see that Exam 3 had the lowest standard deviation of scores among all three exams, which means the exam scores were most closely packed together for that exam.

Conversely, he can see that Exam 2 had the highest standard deviation, which means the exam scores were most spread out for that exam.
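
If the professor had the raw scores on hand, this comparison takes only a few lines of Python. The score lists below are hypothetical stand-ins, not the actual exam data:

```python
import statistics

# Hypothetical score lists standing in for the three exams
exam_scores = {
    "Exam 1": [78, 82, 85, 88, 90],
    "Exam 2": [55, 70, 82, 91, 95],
    "Exam 3": [83, 85, 86, 88, 89],
}

# Sample standard deviation for each exam
sds = {exam: statistics.stdev(scores) for exam, scores in exam_scores.items()}
for exam, sd in sds.items():
    print(f"{exam}: {sd:.1f}")

print("Lowest spread:", min(sds, key=sds.get))   # Exam 3
print("Highest spread:", max(sds, key=sds.get))  # Exam 2
```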
