Mann-Whitney U Test

The Mann-Whitney U Test is a statistical method used to compare two independent groups or samples. It is a non-parametric test, meaning it does not assume the data follow any particular distribution (such as the normal distribution). It is commonly used when the data are not normally distributed or when the sample sizes are small. The test works by ranking the data from both groups together and calculating a U statistic, which summarizes how the ranks of the two groups differ. It then determines whether this difference is statistically significant, providing insight into whether the two groups come from different population distributions. The test is widely used in research studies and can provide valuable information for making informed decisions.
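
If you want to see the mechanics, here is a minimal sketch in Python of how the ranking and the U statistic work. The group values are made up purely for illustration, and the final line simply checks the hand calculation against SciPy's built-in function.

    # A minimal sketch of how the U statistic is computed; the values are made up.
    from scipy.stats import rankdata, mannwhitneyu

    group_a = [12, 15, 9, 22, 30]   # hypothetical values for group A
    group_b = [8, 11, 14, 10, 7]    # hypothetical values for group B

    # Rank every observation from both groups together (ties would get average ranks).
    ranks = rankdata(group_a + group_b)
    rank_sum_a = ranks[:len(group_a)].sum()

    # U for group A is its rank sum minus the smallest rank sum that group could have.
    n_a, n_b = len(group_a), len(group_b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2
    u_b = n_a * n_b - u_a            # the two U statistics always sum to n_a * n_b

    print("U (group A):", u_a, "U (group B):", u_b)

    # SciPy reports the U statistic for the first sample and a p-value.
    print(mannwhitneyu(group_a, group_b, alternative="two-sided"))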


What is a Mann-Whitney U Test?

The Mann-Whitney U Test is a statistical test used to determine if 2 groups are significantly different from each other on your variable of interest. Your variable of interest should be continuous and should have a similar shape and spread in both groups. Your 2 groups should be independent (not related to each other), and you should have enough data (more than 5 values in each group, though how much you need also depends on how big the difference between the groups is).

The Mann-Whitney U Test compares two different groups on your variable of interest (dependent variable) when your variable of interest is skewed. Skewed means your data lean right or left, with most of the data piled up toward one end rather than in the center.

The Mann-Whitney U Test is also called the Mann-Whitney Wilcoxon Test, the Wilcoxon Rank-Sum Test, or the Wilcoxon Mann-Whitney Test.


Assumptions for a Mann-Whitney U Test

Every statistical method has assumptions. Assumptions mean that your data must satisfy certain properties in order for statistical method results to be accurate.

The assumptions for the Mann-Whitney U Test include:

  1. Continuous
  2. Skewed Distribution
  3. Random Sample
  4. Enough Data
  5. Similar Shape Between Groups

Let’s dive into each one of these separately.

Continuous

The variable that you care about (and want to see if it is different between the two groups) must be continuous. Continuous means that the variable can take on any reasonable value.

Some good examples of continuous variables include age, weight, height, test scores, survey scores, yearly salary, etc.

If the variable that you care about is a proportion (48% of males voted vs 56% of females voted) then you should probably use the Two Proportion Z-Test instead.

Skewed Distribution

The variable that you care about does not need to be bell shaped. In statistics, a bell-shaped variable is called normally distributed (it looks like a bell curve when you graph the data). You are free to use a Mann-Whitney U Test when the variable you care about is skewed rather than normally distributed.

A normal distribution is bell shaped, with most of the data in the middle. A skewed distribution leans left or right, with most of the data toward one end.

If your variable is normally distributed, you should use the Independent Samples T-Test instead.
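
If you are not sure whether your variable is skewed, a quick way to check is to look at its skewness or simply plot it. The sketch below uses simulated data (not data from this page) and SciPy's skew function; a skewness value well away from zero suggests a skewed variable.

    # A quick sketch for checking skew, using simulated data for illustration.
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(0)
    right_skewed = rng.exponential(scale=2.0, size=200)     # long right tail
    roughly_normal = rng.normal(loc=5.0, scale=1.0, size=200)

    # Values well above 0 indicate a right skew; values near 0 suggest symmetry.
    print("skewness (exponential sample):", skew(right_skewed))
    print("skewness (normal sample):     ", skew(roughly_normal))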

Random Sample

The data points for each group in your analysis must have come from a simple random sample. This means that if you wanted to see if drinking sugary soda makes you gain weight, you would need to randomly select a group of soda drinkers for your soda drinker group, and then randomly select a group of non-soda drinkers for your non-soda drinking group.

The key here is that the data points for each group were randomly selected. This is important because if your groups were not randomly determined then your analysis will be incorrect. In statistical terms this is called bias, or a tendency to have incorrect results because of bad data.

If you do not have a random sample, the conclusions you can draw from your results are very limited. You should try to get a simple random sample. If you have paired samples (2 measurements from the same group of subjects), then you should use a Paired Samples T-Test (or a Wilcoxon Signed-Rank Test if your variable of interest is skewed) instead.

Enough Data

The sample size (or data set size) should be greater than 5 in each group. Some people argue for more, but more than 5 is probably sufficient.

The sample size also depends on the expected size of the difference between groups. If you expect a large difference between groups, then you can get away with a smaller sample size. If you expect a small difference between groups, then you likely need a larger sample.

How big of a sample do I need to run a Mann-Whitney U Test? For a small effect size, you need 824 participants (412 in each group), for a medium effect size, you need 134 (67 in each group), and for a large effect size, you need 54 (27 in each group).
*Sample size calculations were conducted in G*Power with a power of 0.80, a critical value (alpha) of 0.05, and 0.20, 0.50, and 0.80 used as the effect size values (Cohen’s d) for small, medium, and large effects, respectively.
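
The exact numbers above come from G*Power. If you prefer to stay in Python, a rough approximation (this is an assumption on my part, not the calculation G*Power performs) is to compute the sample size for an Independent Samples T-Test with statsmodels and then inflate it by the Mann-Whitney test's asymptotic relative efficiency of about 0.955:

    # A rough sketch of the sample size logic: a t-test power calculation adjusted by
    # the Mann-Whitney asymptotic relative efficiency (~0.955). This approximates,
    # but does not reproduce, the G*Power procedure.
    import math
    from statsmodels.stats.power import TTestIndPower

    are = 3 / math.pi  # asymptotic relative efficiency of Mann-Whitney vs. the t-test
    solver = TTestIndPower()

    for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
        n_t = solver.solve_power(effect_size=d, alpha=0.05, power=0.80)
        n_mw = math.ceil(n_t / are)  # inflate the t-test sample size for the rank test
        print(f"{label} effect (d={d}): about {n_mw} participants per group")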

If your sample size is greater than 30 and your variable of interest is normally distributed (and you know the average and spread of the population), you should run an Independent Samples Z-Test instead.

Similar Shape Between Groups

In order to say that your 2 groups are different based on their median (the non-parametric counterpart of the average), your 2 groups must be similarly shaped when you graph them as histograms. If they are similarly shaped and the Mann-Whitney U Test is significant, you can say the medians are different.

If your 2 groups are not similarly shaped, then you can talk about the difference between the groups in your results, but you cannot argue for a difference in average value (or median).
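
The easiest way to check this assumption is to overlay histograms of the two groups. The sketch below uses simulated data purely to illustrate the idea:

    # A minimal sketch for comparing group shapes with histograms (simulated data).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    group_1 = rng.exponential(scale=4.0, size=150)   # skewed group
    group_2 = rng.exponential(scale=6.0, size=150)   # similarly skewed, shifted group

    # If the two shapes look alike, a significant Mann-Whitney U result can
    # reasonably be described as a difference in medians.
    plt.hist(group_1, bins=20, alpha=0.5, label="Group 1")
    plt.hist(group_2, bins=20, alpha=0.5, label="Group 2")
    plt.xlabel("Variable of interest")
    plt.ylabel("Count")
    plt.legend()
    plt.show()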


When to use a Mann-Whitney U Test?

You should use a Mann-Whitney U Test in the following scenario:

  1. You want to know if two groups are different on your variable of interest
  2. Your variable of interest is continuous
  3. You have two and only two groups
  4. You have independent samples
  5. You have a skewed variable of interest

Let’s clarify these to help you know when to use a Mann-Whitney U Test.

Difference

You are looking for a statistical test to see whether two groups are significantly different on your variable of interest. This is a difference question. Other types of analyses include examining the relationship between two variables (correlation) or predicting one variable using another variable (prediction).

Continuous Data

Your variable of interest must be continuous. Continuous means that your variable of interest can basically take on any value, such as heart rate, height, weight, number of ice cream bars you can eat in 1 minute, etc.

Types of data that are NOT continuous include ordered data (such as finishing place in a race, best business rankings, etc.), categorical data (gender, eye color, race, etc.), or binary data (purchased the product or not, has the disease or not, etc.).

Two Groups

A Mann-Whitney U Test can only be used to compare two groups on your variable of interest.

If you have three or more groups, you should use a One-Way ANOVA if your variable of interest is normally distributed, or a Kruskal-Wallis One-Way ANOVA if your variable of interest is skewed. If you only have one group and you would like to compare it to a known or hypothesized population value, you should use a Single Sample T-Test if your variable of interest is normally distributed, or a Single Sample Wilcoxon Signed-Rank Test if your variable of interest is skewed.
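
For reference, a Kruskal-Wallis test for three or more skewed groups looks like this in SciPy; the values are made up for illustration only:

    # A minimal sketch of a Kruskal-Wallis test for three groups; the data are made up.
    from scipy.stats import kruskal

    group_a = [4, 7, 6, 9, 12]
    group_b = [5, 8, 10, 14, 11]
    group_c = [15, 18, 13, 20, 17]

    stat, p = kruskal(group_a, group_b, group_c)
    print("H statistic:", stat, "p-value:", p)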

Independent Samples

Independent samples means that your two groups are not related in any way. For example, if you randomly sample men and then separately randomly sample women to get their heights, the groups should not be related.

If you get a group of students to take a pre-test and the same students to take a post-test, you have two measurements for the same group of students. This is paired data, and in that case you would need to use a Paired Samples T-Test if your variable of interest is normally distributed, or a Wilcoxon Signed-Rank Test if your variable of interest is skewed.
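
For comparison, a Wilcoxon Signed-Rank Test on paired (pre-test/post-test) data looks like this in SciPy, again with made-up scores:

    # A minimal sketch of a Wilcoxon Signed-Rank Test for paired data; scores are made up.
    from scipy.stats import wilcoxon

    pre_test  = [55, 60, 48, 72, 66, 59, 63, 70]
    post_test = [61, 62, 50, 78, 70, 58, 69, 74]

    stat, p = wilcoxon(pre_test, post_test)
    print("W statistic:", stat, "p-value:", p)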

Skewed Variable of Interest

The Mann-Whitney U Test is used when your variable of interest is skewed rather than normally distributed. Normality was discussed earlier on this page and simply means your plotted data is bell shaped with most of the data in the middle. If you would like to formally check whether your data is normal or skewed, you can use the Kolmogorov-Smirnov test or the Shapiro-Wilk test.
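
As a sketch, here is how a Shapiro-Wilk normality check might look in Python on simulated data; a small p-value suggests the data are not normally distributed, which supports using the Mann-Whitney U Test:

    # A minimal sketch of a Shapiro-Wilk normality check on simulated (made-up) data.
    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(2)
    sample = rng.exponential(scale=3.0, size=80)   # deliberately skewed sample

    stat, p = shapiro(sample)
    # p <= 0.05 suggests the data are not normally distributed.
    print("Shapiro-Wilk statistic:", stat, "p-value:", p)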


Mann-Whitney U Test Example

Group 1: Received the experimental medical treatment.
Group 2: Received a placebo or control condition.
Variable of interest: Time to recover from the disease in days.

In this example, group 1 is our treatment group because they received the experimental medical treatment. Group 2 is our control group because they received the control condition.

The null hypothesis, which is statistical lingo for what would happen if the treatment does nothing, is that group 1 and group 2 will recover from the disease in about the same number of days, on average. We are trying to determine if receiving the experimental medical treatment will shorten the number of days it takes for patients to recover from the disease.

As we run the experiment, we track how long it takes for each patient to fully recover from the disease. We usually use a Mann-Whitney U Test when our variable of interest is skewed, meaning it is not normally distributed (skewed means leaning left or right with the majority of the data on the edge). In this case, recovery from the disease in days is skewed for both groups.

After the experiment is over, we compare the two groups on our variable of interest (days to fully recover) using a Mann-Whitney U Test. When we run the analysis, we get a U statistic (some software, such as R, reports it as W) and a p-value.

The U statistic is a measure of how different the two groups are on our recovery variable of interest. The p-value is the probability of seeing results at least this extreme if the treatment actually does nothing. A p-value less than or equal to 0.05 means that our result is statistically significant, meaning the observed difference is unlikely to be due to chance alone.
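
Here is a hypothetical version of this example in Python. The recovery times are simulated from skewed distributions and are not real patient data; the point is only to show how the test is run and read:

    # A hypothetical sketch of the recovery-time example using simulated data.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(42)
    treatment_days = rng.gamma(shape=2.0, scale=4.0, size=40)   # Group 1: treatment
    control_days = rng.gamma(shape=2.0, scale=6.0, size=40)     # Group 2: control

    # Two-sided test of whether the two groups differ on days to recover.
    stat, p = mannwhitneyu(treatment_days, control_days, alternative="two-sided")

    print("U statistic:", stat)
    print("p-value:", p)
    print("Median recovery, treatment:", np.median(treatment_days))
    print("Median recovery, control:  ", np.median(control_days))
    if p <= 0.05:
        print("The difference between the groups is statistically significant.")
    else:
        print("No statistically significant difference was detected.")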
