
In statistics, hypothesis testing is sometimes known as decision theory or simply testing. The key result around which all decision theory revolves is the likelihood ratio test.

The likelihood ratio test

In a binary hypothesis testing problem, four possible outcomes can result. Model $\mathcal{M}_0$ did in fact represent the best model for the data and the decision rule said it was (a correct decision) or said it wasn't (an erroneous decision). The other two outcomes arise when model $\mathcal{M}_1$ was in fact true with either a correct or incorrect decision made. The decision process operates by segmenting the range of observation values into two disjoint decision regions $\Re_0$ and $\Re_1$. All values of $r$ fall into either $\Re_0$ or $\Re_1$. If a given $r$ lies in $\Re_0$, for example, we will announce our decision "model $\mathcal{M}_0$ was true"; if in $\Re_1$, model $\mathcal{M}_1$ would be proclaimed. To derive a rational method of deciding which model best describes the observations, we need a criterion to assess the quality of the decision process. Optimizing this criterion will specify the decision regions.

The Bayes' decision criterion seeks to minimize a cost function associated with making a decision. Let $C_{ij}$ be the cost of mistaking model $j$ for model $i$ ($i \neq j$) and $C_{ii}$ the presumably smaller cost of correctly choosing model $i$: $C_{ij} > C_{ii}$, $i \neq j$. Let $\pi_i$ be the a priori probability of model $i$. The so-called Bayes' cost $\overline{C}$ is the average cost of making a decision.

$$\overline{C} = \sum_{i,j} C_{ij} \Pr\left[\text{say } \mathcal{M}_i \text{ when } \mathcal{M}_j \text{ true}\right] = \sum_{i,j} C_{ij}\, \pi_j \Pr\left[\text{say } \mathcal{M}_i \mid \mathcal{M}_j \text{ true}\right]$$
The Bayes' cost can be expressed as
$$\begin{aligned}
\overline{C} &= \sum_{i,j} C_{ij}\, \pi_j \Pr\left[r \in \Re_i \mid \mathcal{M}_j \text{ true}\right]
 = \sum_{i,j} C_{ij}\, \pi_j \int_{\Re_i} p_{r|\mathcal{M}_j}(r)\, dr \\
&= \int_{\Re_0} \left[\pi_0 C_{00}\, p_{r|\mathcal{M}_0}(r) + \pi_1 C_{01}\, p_{r|\mathcal{M}_1}(r)\right] dr
 + \int_{\Re_1} \left[\pi_0 C_{10}\, p_{r|\mathcal{M}_0}(r) + \pi_1 C_{11}\, p_{r|\mathcal{M}_1}(r)\right] dr
\end{aligned}$$
$p_{r|\mathcal{M}_i}(r)$ is the conditional probability density function of the observed data $r$ given that model $\mathcal{M}_i$ was true. To minimize this expression with respect to the decision regions $\Re_0$ and $\Re_1$, ponder which integral would yield the smallest value if its integration domain included a specific observation vector. This selection process defines the decision regions; for example, we choose $\Re_0$ for those values of $r$ which yield a smaller value for the first integral.
$$\pi_0 C_{00}\, p_{r|\mathcal{M}_0}(r) + \pi_1 C_{01}\, p_{r|\mathcal{M}_1}(r) < \pi_0 C_{10}\, p_{r|\mathcal{M}_0}(r) + \pi_1 C_{11}\, p_{r|\mathcal{M}_1}(r)$$
We choose $\Re_1$ when the inequality is reversed. This expression is easily manipulated to obtain the decision rule known as the likelihood ratio test.
$$\frac{p_{r|\mathcal{M}_1}(r)}{p_{r|\mathcal{M}_0}(r)} \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \frac{\pi_0\,(C_{10}-C_{00})}{\pi_1\,(C_{01}-C_{11})}$$
The comparison relation means selecting model $\mathcal{M}_1$ if the left-hand ratio exceeds the value on the right; otherwise, $\mathcal{M}_0$ is selected. Thus, the likelihood ratio $\frac{p_{r|\mathcal{M}_1}(r)}{p_{r|\mathcal{M}_0}(r)}$, symbolically represented by $\Lambda(r)$, is computed from the observed value of $r$ and then compared with a threshold $\eta$ equaling $\frac{\pi_0(C_{10}-C_{00})}{\pi_1(C_{01}-C_{11})}$. Thus, when two models are hypothesized, the likelihood ratio test can be succinctly expressed as the comparison of the likelihood ratio with a threshold.
$$\Lambda(r) \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \eta$$
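A minimal sketch in code may help make the mechanics concrete. The function names, and the idea of passing the conditional densities as callables, are illustrative choices for this sketch rather than part of the derivation; the densities themselves must come from the problem at hand.

```python
def bayes_threshold(pi0, pi1, C00, C01, C10, C11):
    """Threshold eta = pi0*(C10 - C00) / (pi1*(C01 - C11)), from the priors and costs."""
    return (pi0 * (C10 - C00)) / (pi1 * (C01 - C11))

def likelihood_ratio_test(r, p_r_given_m0, p_r_given_m1, eta):
    """Announce model 1 when the likelihood ratio exceeds the threshold, else model 0."""
    ratio = p_r_given_m1(r) / p_r_given_m0(r)
    return 1 if ratio > eta else 0
```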

The data processing operations are captured entirely by the likelihood ratio $\frac{p_{r|\mathcal{M}_1}(r)}{p_{r|\mathcal{M}_0}(r)}$. Furthermore, note that only the value of the likelihood ratio relative to the threshold matters; to simplify the computation of the likelihood ratio, we can perform any positively monotonic operations simultaneously on the likelihood ratio and the threshold without affecting the comparison. We can multiply the ratio by a positive constant, add any constant, or apply a monotonically increasing function which simplifies the expressions. We single out one such function, the logarithm, because it simplifies likelihood ratios that commonly occur in signal processing applications. Known as the log-likelihood, we explicitly express the likelihood ratio test with it as

$$\ln \Lambda(r) \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \ln \eta$$
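As one illustration of why the logarithm helps, suppose (purely as an assumption for this sketch, not a case treated in the text) that the observation is Gaussian with the same variance under both models but different means. The log-likelihood ratio then collapses to an affine function of $r$, which is far easier to compute and analyze than a ratio of exponentials.

```python
import math

# Illustrative assumption: under model 0, r ~ N(m0, s**2); under model 1, r ~ N(m1, s**2).
# Taking the logarithm of the Gaussian likelihood ratio leaves an affine function of r.
def log_likelihood_ratio(r, m0, m1, s):
    return ((m1 - m0) * r + 0.5 * (m0 ** 2 - m1 ** 2)) / s ** 2

def log_lrt(r, m0, m1, s, eta):
    """Decide model 1 when ln(Lambda(r)) exceeds ln(eta), else model 0."""
    return 1 if log_likelihood_ratio(r, m0, m1, s) > math.log(eta) else 0
```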
Useful simplifying transformations are problem-dependent; by laying bare that aspect of the observations essential to the model testing problem, we reveal the sufficient statistic $\Upsilon(r)$: the scalar quantity which best summarizes the data (Lehmann, pp. 18-22). The likelihood ratio test is best expressed in terms of the sufficient statistic.
$$\Upsilon(r) \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \gamma$$
We will denote the threshold value by $\gamma$ when the sufficient statistic is used or by $\eta$ when the likelihood ratio appears prior to its reduction to a sufficient statistic.

As we shall see, if we use a criterion other than the Bayes' criterion, the decision rule often still involves the likelihood ratio. The likelihood ratio is comprised of the quantities $p_{r|\mathcal{M}_i}(r)$, termed the likelihood function, which is also important in estimation theory. It is this conditional density that portrays the probabilistic model describing data generation. The likelihood function completely characterizes the kind of "world" assumed by each model; for each model, we must specify the likelihood function so that we can solve the hypothesis testing problem.

A complication, which arises in some cases, is that the sufficient statistic may not be monotonic. If monotonic, the decision regions $\Re_0$ and $\Re_1$ are simply connected (all portions of a region can be reached without crossing into the other region). If not, the regions are not simply connected and decision region islands are created (see this problem). Such regions usually complicate calculations of decision performance. Monotonic or not, the decision rule proceeds as described: the sufficient statistic is computed for each observation vector and compared to a threshold.
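The island effect can be seen numerically. The quadratic statistic below is an assumed stand-in, chosen only for illustration; thresholding it makes the region where model 1 is declared split into two disconnected intervals.

```python
import numpy as np

# Assumed, non-monotonic sufficient statistic, used only to illustrate "islands".
def upsilon(r):
    return (r - 1.0) ** 2

gamma = 2.0
r = np.linspace(-3.0, 5.0, 10001)
declare_m1 = upsilon(r) > gamma   # True where model 1 is announced

# Points where the decision flips mark boundaries between the regions R0 and R1.
boundaries = r[np.flatnonzero(np.diff(declare_m1.astype(int)))]
print(boundaries)                 # roughly [-0.414, 2.414]: region R1 is two disjoint pieces
```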

An instructor in a course in detection theory wants to determine if a particular student studied for his last test. The observed quantity is the student's grade, which we denote by $r$. Failure may not indicate studiousness: conscientious students may fail the test. Define the models as

  • $\mathcal{M}_0$: did not study
  • $\mathcal{M}_1$: did study
The conditional densities of the grade are shown in the figure below.
Conditional densities for the grade distributions assuming that a student did not study ($\mathcal{M}_0$) or did ($\mathcal{M}_1$) are shown in the top row. The lower portion depicts the likelihood ratio formed from these densities.
Based on knowledge of student behavior, the instructor assigns a priori probabilities of $\pi_0 = \frac{1}{4}$ and $\pi_1 = \frac{3}{4}$. The costs $C_{ij}$ are chosen to reflect the instructor's sensitivity to student feelings: $C_{01} = 1 = C_{10}$ (an erroneous decision either way is given the same cost) and $C_{00} = 0 = C_{11}$. The likelihood ratio is plotted in the figure, and the threshold value $\eta$, which is computed from the a priori probabilities and the costs to be $\frac{1}{3}$, is indicated. The calculations of this comparison can be simplified in an obvious way.
$$\frac{r}{50} \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \frac{1}{3} \quad\text{or}\quad r \mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1} \frac{50}{3} = 16.7$$
The multiplication by the factor of 50 is a simple illustration of the reduction of the likelihood ratio to a sufficient statistic. Based on the assigned costs and a priori probabilities, the optimum decision rule says the instructor must assume that the student did not study if the student's grade is less than 16.7; if greater, the student is assumed to have studied despite receiving an abysmally low grade such as 20. Note that the densities given by each model overlap entirely: the possibility of making the wrong interpretation always haunts the instructor. However, no other procedure will be better!
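The arithmetic of this example can be checked with a short script. The density shapes below (uniform under $\mathcal{M}_0$, triangular under $\mathcal{M}_1$, grades on $[0, 100]$) are assumptions made only so that the likelihood ratio works out to $r/50$, consistent with the figure; the threshold and grade boundary then follow from the stated priors and costs.

```python
# Numerical check of the course example; the density shapes are assumed for illustration.
pi0, pi1 = 1 / 4, 3 / 4
C00, C01, C10, C11 = 0, 1, 1, 0

eta = (pi0 * (C10 - C00)) / (pi1 * (C01 - C11))   # threshold on the likelihood ratio: 1/3

def likelihood_ratio(r):
    p0 = 1 / 100            # assumed p_{r|M0}(r): uniform on [0, 100]
    p1 = 2 * r / 100 ** 2   # assumed p_{r|M1}(r): triangular on [0, 100]
    return p1 / p0          # equals r / 50

grade_boundary = 50 * eta   # decide "did study" for grades above about 16.7
print(eta, grade_boundary)  # 0.333..., 16.666...
```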

Source:  OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007 Download for free at http://cnx.org/content/col10422/1.5