
This chapter introduces a new probability density function, the F distribution. This distribution is used in many applications, including analysis of variance (ANOVA) and tests of equality across multiple means. We begin with the F distribution and the test of hypothesis of differences in variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be the same. A supermarket might be interested in the variability of check-out times for two checkers.

In order to perform an F test of two variances, it is important that the following are true:

  1. The populations from which the two samples are drawn are normally distributed.
  2. The two populations are independent of each other.

Unlike most other tests in this book, the F test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, the test can give misleading results.

Suppose we sample randomly from two independent normal populations. Let σ₁² and σ₂² be the population variances and s₁² and s₂² be the sample variances. Let the sample sizes be n₁ and n₂. Since we are interested in comparing the two sample variances, we use the F ratio:

F = (s₁² / σ₁²) / (s₂² / σ₂²)

F has the distribution F ~ F(n₁ − 1, n₂ − 1)

where n₁ − 1 are the degrees of freedom for the numerator and n₂ − 1 are the degrees of freedom for the denominator.

If the null hypothesis is σ₁² = σ₂², then the F ratio becomes F = (s₁² / σ₁²) / (s₂² / σ₂²) = s₁² / s₂².
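As a concrete sketch of this reduction, the F ratio under the null hypothesis can be computed directly from two samples using only Python's standard library. The data below are hypothetical, invented for illustration:

```python
from statistics import variance  # sample variance, divides by n - 1

# Hypothetical grading data for two professors (made-up values)
sample_1 = [10, 12, 9, 11, 13]   # n1 = 5
sample_2 = [10, 10, 11, 9, 10]   # n2 = 5

s1_sq = variance(sample_1)  # s1^2 = 2.5
s2_sq = variance(sample_2)  # s2^2 = 0.5

# Under H0: sigma1^2 = sigma2^2, the F ratio reduces to s1^2 / s2^2
F = s1_sq / s2_sq
print(F)  # 5.0, with degrees of freedom (n1 - 1, n2 - 1) = (4, 4)
```

Note that `statistics.variance` computes the sample variance (dividing by n − 1), which is exactly the s² that appears in the F ratio.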

H₀: σ₁² / σ₂² = δ₀
Hₐ: σ₁² / σ₂² ≠ δ₀

If δ₀ = 1, then

H₀: σ₁² = σ₂²
Hₐ: σ₁² ≠ σ₂²

The test statistic is:

F_c = s₁² / s₂²

The various forms of the hypotheses tested are:

Two-Tailed Test:   H₀: σ₁² = σ₂²    H₁: σ₁² ≠ σ₂²
One-Tailed Test:   H₀: σ₁² ≤ σ₂²    H₁: σ₁² > σ₂²
One-Tailed Test:   H₀: σ₁² ≥ σ₂²    H₁: σ₁² < σ₂²

A more general form of the null and alternative hypotheses for a two-tailed test would be:

H₀: σ₁² / σ₂² = δ₀
Hₐ: σ₁² / σ₂² ≠ δ₀

where δ₀ = 1 gives the simple test of the hypothesis that the two variances are equal. This form of the hypothesis has the benefit of allowing tests beyond simple equality: it can accommodate a specific hypothesized ratio δ₀, just as we tested for specific differences in means and proportions. This form also shows the relationship between the F distribution and the χ²: the F statistic is the ratio of two independent chi-square random variables, each divided by its degrees of freedom. This is helpful in determining the degrees of freedom of the resultant F distribution.
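The chi-square connection can be written out explicitly. Under normal sampling, the scaled sample variance follows a chi-square distribution, and the F ratio is the quotient of two such quantities, each divided by its degrees of freedom:

```latex
\frac{(n_1 - 1)\,s_1^2}{\sigma_1^2} \sim \chi^2_{\,n_1 - 1},
\qquad
\frac{(n_2 - 1)\,s_2^2}{\sigma_2^2} \sim \chi^2_{\,n_2 - 1}

F
= \frac{\chi^2_{\,n_1-1} \big/ (n_1 - 1)}{\chi^2_{\,n_2-1} \big/ (n_2 - 1)}
= \frac{s_1^2 / \sigma_1^2}{s_2^2 / \sigma_2^2}
\sim F(n_1 - 1,\; n_2 - 1)
```

This is why the numerator and denominator degrees of freedom of the F distribution are n₁ − 1 and n₂ − 1, the degrees of freedom of the two chi-square variables.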

If the two populations have equal variances, then s₁² and s₂² are close in value and the test statistic, F_c = s₁² / s₂², is close to one. But if the two population variances are very different, s₁² and s₂² tend to be very different, too. Choosing s₁² as the larger sample variance causes the ratio s₁² / s₂² to be greater than one. If s₁² and s₂² are far apart, then F_c = s₁² / s₂² is a large number.

Therefore, if F is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if F is much larger than one, then the evidence is against the null hypothesis.
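This behavior can be checked by simulation: when two normal populations truly share the same variance, repeated F ratios cluster around one. The population parameters below are arbitrary choices for illustration:

```python
import random
from statistics import variance, median

random.seed(42)  # fixed seed for reproducibility

def f_ratio(n1=20, n2=20, sigma1=3.0, sigma2=3.0):
    """Draw one sample from each population and return s1^2 / s2^2.
    Both populations here have the SAME standard deviation (sigma = 3)."""
    s1 = variance([random.gauss(0, sigma1) for _ in range(n1)])
    s2 = variance([random.gauss(0, sigma2) for _ in range(n2)])
    return s1 / s2

ratios = [f_ratio() for _ in range(5000)]
print(round(median(ratios), 2))  # close to 1 when the variances are equal
```

The median of the simulated ratios sits near one, as the text predicts; repeating the experiment with unequal sigmas (say sigma1 = 6, sigma2 = 3) shifts the ratios well above one.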





Source:  OpenStax, Introductory statistics. OpenStax CNX. Aug 09, 2016 Download for free at http://legacy.cnx.org/content/col11776/1.26
