
In hypothesis testing, as in all other areas of statistical inference, there are two major schools of thought on designing good tests: Bayesian and frequentist (or classical). Consider the simple binary hypothesis testing problem

$$\mathcal{H}_0 : x \sim f_0(x)$$
$$\mathcal{H}_1 : x \sim f_1(x)$$

In the Bayesian setup, the prior probability $\pi_i$ of each hypothesis occurring is assumed known. This approach to hypothesis testing is represented by the minimum Bayes risk criterion and the minimum probability of error criterion.

In some applications, however, it may not be reasonable to assign an a priori probability to a hypothesis. For example, what is the a priori probability of a supernova occurring in any particular region of the sky? What is the prior probability of being attacked by a ballistic missile? In such cases we need a decision rule that does not depend on making assumptions about the a priori probability of each hypothesis. Here the Neyman-Pearson criterion offers an alternative to the Bayesian framework.

The Neyman-Pearson criterion is stated in terms of certain probabilities associated with a particular hypothesis test. The relevant quantities are summarized in the table below. Depending on the setting, different terminology is used.

Probability              Statistics name   Signal processing name     Notation
P(declare H1 | H0)       size              false-alarm probability    P_F
P(declare H1 | H1)       power             detection probability      P_D

Here $P(\text{declare } \mathcal{H}_j \mid \mathcal{H}_i)$ denotes the probability that we declare hypothesis $\mathcal{H}_j$ to be in effect when $\mathcal{H}_i$ is actually in effect. The probabilities $P(\text{declare } \mathcal{H}_0 \mid \mathcal{H}_0)$ and $P(\text{declare } \mathcal{H}_0 \mid \mathcal{H}_1)$ (the latter sometimes called the miss probability) are equal to $1 - P_F$ and $1 - P_D$, respectively. Thus, $P_F$ and $P_D$ represent the two degrees of freedom in a binary hypothesis test. Note that $P_F$ and $P_D$ do not involve a priori probabilities of the hypotheses.

These two probabilities are related to each other through the decision regions. If $R_1$ is the decision region for $\mathcal{H}_1$, we have

$$P_F = \int_{R_1} f_0(x) \, dx$$
$$P_D = \int_{R_1} f_1(x) \, dx$$

The densities $f_i(x)$ are nonnegative, so as $R_1$ shrinks, both probabilities tend to zero. As $R_1$ expands, both tend to one. The ideal case, where $P_D = 1$ and $P_F = 0$, cannot occur unless the distributions do not overlap (i.e., $f_0(x) f_1(x) = 0$ for all $x$). Therefore, in order to increase $P_D$, we must also increase $P_F$. This represents the fundamental tradeoff in hypothesis testing and detection theory.
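To make the integral relationship concrete, here is a minimal numerical sketch (an illustration, not from the original module) that evaluates both integrals for an assumed pair of Gaussian densities and a one-sided decision region $R_1 = \{x : x > \nu\}$:

```python
# Compute P_F and P_D by integrating each density over the decision
# region R_1 = [nu, inf). The densities and the threshold value are
# assumptions chosen for illustration.
import numpy as np
from scipy.integrate import quad

def f0(x):
    """Density under H0: standard normal (assumed)."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f1(x):
    """Density under H1: unit-variance normal with mean 1 (assumed)."""
    return np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)

nu = 0.5                          # boundary of the decision region
P_F, _ = quad(f0, nu, np.inf)     # integral of f_0 over R_1
P_D, _ = quad(f1, nu, np.inf)     # integral of f_1 over R_1
print(f"P_F = {P_F:.4f}, P_D = {P_D:.4f}")
```

Lowering nu expands $R_1$ and pushes both probabilities toward one; raising it shrinks $R_1$ and pushes both toward zero.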

Consider the simple binary hypothesis test of a scalar measurement $x$:

$$\mathcal{H}_0 : x \sim \mathcal{N}(0, 1)$$
$$\mathcal{H}_1 : x \sim \mathcal{N}(1, 1)$$

Suppose we use a threshold test: declare $\mathcal{H}_1$ if $x > \nu$ and $\mathcal{H}_0$ otherwise, where $\nu$ is a free parameter. Then the false-alarm and detection probabilities are

$$P_F = \int_{\nu}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-t^2/2} \, dt = Q(\nu)$$
$$P_D = \int_{\nu}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-(t-1)^2/2} \, dt = Q(\nu - 1)$$

where $Q$ denotes the Q-function. These quantities are depicted in the figure below.

[Figure: False alarm and detection values for a certain threshold.]
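As a quick sanity check on these closed-form expressions, here is a small sketch (not part of the original module) that evaluates $P_F = Q(\nu)$ and $P_D = Q(\nu - 1)$ for a few illustrative thresholds, using scipy.stats.norm.sf as the Q-function:

```python
# Evaluate the false-alarm and detection probabilities of the threshold
# test for the Gaussian example: P_F = Q(nu), P_D = Q(nu - 1).
from scipy.stats import norm

def Q(x):
    """Standard normal tail probability, Q(x) = P(N(0,1) > x)."""
    return norm.sf(x)

for nu in [0.0, 0.5, 1.0, 2.0]:   # illustrative threshold values (assumed)
    print(f"nu = {nu:4.1f}:  P_F = {Q(nu):.4f}  P_D = {Q(nu - 1):.4f}")
```

Raising the threshold $\nu$ drives both probabilities down together, exactly the tradeoff described above.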
Since the Q-function is monotonically decreasing, it is evident that both $P_D$ and $P_F$ decay to zero as $\nu$ increases. There is also an explicit relationship

$$P_D = Q\!\left(Q^{-1}(P_F) - 1\right)$$

A common means of displaying this relationship is with a receiver operating characteristic (ROC) curve, which is nothing more than a plot of $P_D$ versus $P_F$ (see the figure below).
[Figure: ROC curve for this example.]
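A hypothetical plotting sketch for reproducing this ROC curve, using norm.isf for $Q^{-1}$ (the specific grid of $P_F$ values is an arbitrary choice):

```python
# Plot the ROC curve P_D = Q(Q^{-1}(P_F) - 1) for the Gaussian example.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

p_f = np.linspace(1e-4, 1 - 1e-4, 500)   # sweep of false-alarm levels
p_d = norm.sf(norm.isf(p_f) - 1)         # Q(Q^{-1}(P_F) - 1)

plt.plot(p_f, p_d, label="threshold test")
plt.plot([0, 1], [0, 1], "k--", label="chance line ($P_D = P_F$)")
plt.xlabel("$P_F$")
plt.ylabel("$P_D$")
plt.title("ROC curve for the Gaussian example")
plt.legend()
plt.show()
```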

The Neyman-Pearson lemma: a first look

The Neyman-Pearson criterion says that we should construct our decision rule to have maximum probability of detection while not allowing the probability of false alarm to exceed a certain value $\alpha$. In other words, the optimal detector according to the Neyman-Pearson criterion is the solution to the following constrained optimization problem:

$$\max P_D \quad \text{subject to} \quad P_F \le \alpha$$
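Anticipating the lemma, for the scalar Gaussian example above the optimizer turns out to be the threshold test with $\nu = Q^{-1}(\alpha)$, so the constraint is met with equality. A minimal sketch, assuming a target false-alarm level $\alpha = 0.05$ (an illustrative choice, not from the original module):

```python
# Neyman-Pearson detector for the Gaussian mean-shift example:
# maximize P_D subject to P_F <= alpha. For these densities the
# optimal rule thresholds x at nu = Q^{-1}(alpha).
from scipy.stats import norm

alpha = 0.05             # allowed false-alarm probability (assumed value)
nu = norm.isf(alpha)     # Q^{-1}(alpha): threshold meeting P_F = alpha exactly
P_D = norm.sf(nu - 1)    # resulting detection probability Q(nu - 1)

def np_detector(x):
    """Declare H1 (return 1) when the measurement exceeds the threshold."""
    return int(x > nu)

print(f"nu = {nu:.3f}, P_F = {alpha:.3f}, P_D = {P_D:.4f}")
```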

Source: OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007. Download for free at http://cnx.org/content/col10422/1.5
