This course is a short series of lectures on Introductory Statistics. Topics covered are listed in the Table of Contents. The notes were prepared by Ewa Paszek and Marek Kimmel. The development of this course has been supported by NSF grant 0203396.

Test of the equality of two independent normal distributions

Let X and Y have independent normal distributions $N(\mu_X, \sigma_X^2)$ and $N(\mu_Y, \sigma_Y^2)$, respectively. There are times when we are interested in testing whether the distributions of X and Y are the same. If the assumption of normality is valid, we would be interested in testing whether the two variances are equal and whether the two means are equal.

Let us first consider a test of the equality of the two means. When X and Y are independent and normally distributed, we can test hypotheses about their means using the same t-statistic that was used previously. Recall that the t-statistic used for constructing the confidence interval assumed that the variances of X and Y are equal. That is why we shall later consider a test for the equality of two variances.

Let us start with an example and then give a table that lists some hypotheses and critical regions.

A botanist is interested in comparing the growth response of dwarf pea stems to two different levels of the hormone indoleacetic acid (IAA). Using 16-day-old pea plants, the botanist obtains 5-millimeter sections and floats these sections in solutions with different hormone concentrations to observe the effect of the hormone on the growth of the pea stem.

Let X and Y denote, respectively, the independent growths that can be attributed to the hormone during the first 26 hours after sectioning for $(0.5)(10)^{-4}$ and $(10)^{-4}$ levels of concentration of IAA. The botanist would like to test the null hypothesis $H_0: \mu_X - \mu_Y = 0$ against the alternative hypothesis $H_1: \mu_X - \mu_Y < 0$. If we can assume X and Y are independent and normally distributed with common variance, respective random samples of sizes n and m give a test based on the statistic

$$ T = \frac{\bar{X} - \bar{Y}}{\sqrt{\left\{ \left[ (n-1)S_X^2 + (m-1)S_Y^2 \right] / (n+m-2) \right\} \left( 1/n + 1/m \right)}} = \frac{\bar{X} - \bar{Y}}{S_P \sqrt{1/n + 1/m}}, $$

where $S_P = \sqrt{\dfrac{(n-1)S_X^2 + (m-1)S_Y^2}{n+m-2}}$.

T has a t distribution with $r = n + m - 2$ degrees of freedom when $H_0$ is true and the variances are (approximately) equal. The hypothesis $H_0$ will be rejected in favor of $H_1$ if the observed value of T is less than $-t_{\alpha}(n+m-2)$.
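To make the formula concrete, here is a minimal Python sketch of the pooled t statistic; the function name pooled_t_statistic and its interface are ours for illustration, not part of the original text. It follows the definitions of T and $S_P$ above, and the one-sided test rejects $H_0$ when the returned statistic is less than $-t_{\alpha}(n+m-2)$.

```python
import math

def pooled_t_statistic(x, y):
    """Pooled two-sample t statistic T = (xbar - ybar) / (S_P * sqrt(1/n + 1/m)),
    with r = n + m - 2 degrees of freedom (assumes equal variances)."""
    n, m = len(x), len(y)
    xbar = sum(x) / n
    ybar = sum(y) / m
    sx2 = sum((v - xbar) ** 2 for v in x) / (n - 1)   # sample variance of x
    sy2 = sum((v - ybar) ** 2 for v in y) / (m - 1)   # sample variance of y
    # pooled standard deviation S_P
    sp = math.sqrt(((n - 1) * sx2 + (m - 1) * sy2) / (n + m - 2))
    t = (xbar - ybar) / (sp * math.sqrt(1.0 / n + 1.0 / m))
    return t, n + m - 2   # statistic and degrees of freedom
```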


In Example 1, the botanist measured the growths of pea stem segments, in millimeters, for n = 11 observations of X given in Table 1:

Table 1. Growths of X (mm)
0.8  1.8  1.0  0.1  0.9  1.7  1.0  1.4  0.9  1.2  0.5

and m = 13 observations of Y given in Table 2:

Table 2. Growths of Y (mm)
1.0  0.8  1.6  2.6  1.3  1.1  2.4  1.8  2.5  1.4  1.9  2.0  1.2

For these data, $\bar{x} = 1.03$, $s_x^2 = 0.24$, $\bar{y} = 1.66$, and $s_y^2 = 0.35$. The critical region for testing $H_0: \mu_X - \mu_Y = 0$ against $H_1: \mu_X - \mu_Y < 0$ is $t \leq -t_{0.05}(22) = -1.717$. The observed value of the statistic is $t = (1.03 - 1.66)/(s_P\sqrt{1/11 + 1/13}) = -2.81$. Since $-2.81 < -1.717$, $H_0$ is clearly rejected at the $\alpha = 0.05$ significance level.

An approximate p-value of this test is 0.005 because $t_{0.005}(22) = 2.819$. Also, the sample variances do not differ too much; thus most statisticians would use this two-sample t-test.
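The calculation above can be checked numerically. A brief sketch, assuming NumPy and a SciPy version recent enough that scipy.stats.ttest_ind accepts the alternative keyword; the pooled, equal-variance test is requested with equal_var=True:

```python
import numpy as np
from scipy import stats   # SciPy >= 1.6 assumed for the `alternative` keyword

x = np.array([0.8, 1.8, 1.0, 0.1, 0.9, 1.7, 1.0, 1.4, 0.9, 1.2, 0.5])            # Table 1
y = np.array([1.0, 0.8, 1.6, 2.6, 1.3, 1.1, 2.4, 1.8, 2.5, 1.4, 1.9, 2.0, 1.2])  # Table 2

# Pooled two-sample t test of H0: mu_x - mu_y = 0 against H1: mu_x - mu_y < 0
t_stat, p_value = stats.ttest_ind(x, y, equal_var=True, alternative='less')
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Approximately t = -2.81 and p = 0.005; since -2.81 < -t_0.05(22) = -1.717, H0 is rejected.
```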





Source:  OpenStax, Introduction to statistics. OpenStax CNX. Oct 09, 2007 Download for free at http://cnx.org/content/col10343/1.3
