What is the success/failure condition in statistics?

The success/failure condition in statistics is a rule of thumb used to decide whether the normal distribution can be used to approximate the sampling distribution of a proportion (or, equivalently, a binomial distribution). It requires that a sample contain at least 10 expected successes and at least 10 expected failures – that is, np ≥ 10 and n(1-p) ≥ 10, where n is the sample size and p is the probability of success. When the condition is met, normal-based methods such as z confidence intervals and z tests for proportions are considered valid; when it is not, exact or alternative methods should be used instead.

What is the Success/Failure Condition in Statistics?


A Bernoulli trial is an experiment with only two possible outcomes – “success” or “failure” – in which the probability of success is the same each time the experiment is conducted.

An example of a Bernoulli trial is a coin flip. The coin can only land on two sides (we could call heads a “success” and tails a “failure”) and the probability of success on each flip is 0.5, assuming the coin is fair.
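The coin flip described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original tutorial; the function name `bernoulli_trial` is our own:

```python
import random

def bernoulli_trial(p):
    """Simulate one Bernoulli trial: return 1 (success) with probability p, else 0 (failure)."""
    return 1 if random.random() < p else 0

# Simulate 10 fair coin flips, where heads = "success" with p = 0.5
flips = [bernoulli_trial(0.5) for _ in range(10)]
print(flips)
```

Each entry of `flips` is 1 or 0, and over many trials roughly half will be successes.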

Often in statistics when we want to calculate probabilities involving more than just a few Bernoulli trials, we use the normal distribution as an approximation. However, in order to do so we must check that the Success/Failure Condition is met:

Success/Failure Condition: There should be at least 10 expected successes and 10 expected failures in a sample in order to use the normal distribution as an approximation.

Written using notation, we must verify both of the following:

  • Expected number of successes is at least 10: np ≥ 10
  • Expected number of failures is at least 10: n(1-p) ≥ 10

where n is the sample size and p is the probability of success on a given trial.

Note: Some textbooks instead say that only 5 expected successes and 5 expected failures are needed in order to use the normal approximation. However, 10 is more commonly used and it is a more conservative number, thus we’ll use that number in this tutorial.

Example: Checking the Success/Failure Condition

Suppose we would like to create a confidence interval for the proportion of residents in a county that are in favor of a certain law. We select a random sample of 100 residents and ask them about their stance on the law. Here are the results:

  • Sample size n = 100
  • Proportion in favor of law p = 0.56

We would like to use the following formula to calculate the confidence interval:

Confidence Interval = p ± z*√(p(1-p)/n)

where:

  • p: sample proportion
  • z: the z critical value from the normal distribution that corresponds to the chosen confidence level
  • n: sample size

This formula uses a z-value, which comes from the normal distribution. Thus, in this formula we’re using the normal distribution to approximate the binomial distribution.

However, in order to do so we need to check that the Success/Failure Condition is met. Let’s verify that both the expected number of successes and the expected number of failures in the sample are at least 10:

Number of successes: np = 100*(.56) = 56

Number of failures: n(1-p) = 100*(1-.56) = 44

Both numbers are equal to or greater than 10, so we’re okay to proceed with the formula shown above to calculate the confidence interval.
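Putting the pieces together, the confidence interval for this example can be computed as follows. This is an illustrative sketch assuming a 95% confidence level (z ≈ 1.96), which the tutorial does not specify:

```python
import math

n, p_hat = 100, 0.56
z = 1.96  # z critical value for an assumed 95% confidence level

# Confidence Interval = p +/- z*sqrt(p(1-p)/n)
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - margin, p_hat + margin
print(f"95% CI: ({lower:.4f}, {upper:.4f})")  # roughly (0.4627, 0.6573)
```

So, under these assumptions, we would be 95% confident that the true proportion of residents in favor of the law lies between about 46% and 66%.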

Additional Resources

Another condition that must be met in order to use the normal distribution as an approximation to the binomial distribution is that the sample size we’re working with does not exceed 10% of the population size. This is known as The 10% Condition.

Also keep in mind that if you’re working with two proportions (e.g. creating a confidence interval for the difference between proportions), you must verify that the expected number of successes and failures in both samples are at least 10.
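The two-sample check described above can be sketched the same way as the one-sample version. The function name and the sample figures below are hypothetical, chosen only to show a case where one sample passes and the other fails:

```python
def two_sample_condition(n1, p1, n2, p2, threshold=10):
    """Verify expected successes and failures are at least `threshold` in BOTH samples."""
    return all(count >= threshold
               for n, p in [(n1, p1), (n2, p2)]
               for count in (n * p, n * (1 - p)))

# Hypothetical samples: 80 residents with p = 0.5, and 60 residents with p = 0.9
print(two_sample_condition(80, 0.5, 60, 0.9))  # False: second sample has only 6 expected failures
```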
