
The Neyman-Pearson lemma: general case

In our initial statement of the Neyman-Pearson Lemma, we assumed that for all $\eta$, the set $\{x : \Lambda(x) = \eta\}$ had probability zero under $\mathcal{H}_0$. This eliminated many important problems from consideration, including tests of discrete data. In this section we remove this restriction.

It is helpful to introduce a more general way of writing decision rules. Let $\phi$ be a function of the data $x$ with $\phi(x) \in [0, 1]$. $\phi$ defines the decision rule "declare $\mathcal{H}_1$ with probability $\phi(x)$." In other words, upon observing $x$, we flip a "$\phi(x)$ coin." If it turns up heads, we declare $\mathcal{H}_1$; otherwise we declare $\mathcal{H}_0$. Thus far, we have only considered rules with $\phi(x) \in \{0, 1\}$.

Neyman-Pearson lemma

Consider the hypothesis testing problem
$$\mathcal{H}_0 : x \sim f_0(x) \qquad \mathcal{H}_1 : x \sim f_1(x)$$
where $f_0$ and $f_1$ are both pdfs or both pmfs. Let $\alpha \in [0, 1]$ be the size (false-alarm probability) constraint. The decision rule
$$\phi(x) = \begin{cases} 1 & \Lambda(x) > \eta \\ \gamma & \Lambda(x) = \eta \\ 0 & \Lambda(x) < \eta \end{cases}$$
is the most powerful test of size $\alpha$, where $\eta$ and $\gamma$ are uniquely determined by requiring $P_F = \alpha$. If $\alpha = 0$, we take $\eta = \infty$, $\gamma = 0$. This test is unique up to sets of probability zero under $\mathcal{H}_0$ and $\mathcal{H}_1$.

When $\Pr[\Lambda(x) = \eta] > 0$ for certain $\eta$, we choose $\eta$ and $\gamma$ as follows: Write
$$P_F = \Pr[\Lambda(x) > \eta] + \gamma \, \Pr[\Lambda(x) = \eta]$$
Choose $\eta$ such that
$$\Pr[\Lambda(x) > \eta] \le \alpha \le \Pr[\Lambda(x) \ge \eta]$$
Then choose $\gamma$ such that
$$\gamma = \frac{\alpha - \Pr[\Lambda(x) > \eta]}{\Pr[\Lambda(x) = \eta]}$$

Repetition code

Suppose we have a friend who is trying to transmit a bit (0 or 1) to us over a noisy channel. The channel causes an error in the transmission (that is, the bit is flipped) with probability $p$, where $0 < p < 1/2$, and $p$ is known. In order to increase the chance of a successful transmission, our friend sends the same bit $N$ times. Assume the $N$ transmissions are statistically independent. Under these assumptions, the bits you receive are Bernoulli random variables: $x_n \sim \text{Bernoulli}(\theta)$. We are faced with the following hypothesis test:
$$\mathcal{H}_0 : \theta = p \ \text{(0 sent)} \qquad \mathcal{H}_1 : \theta = 1 - p \ \text{(1 sent)}$$
We decide to decode the received sequence $x = (x_1, \dots, x_N)$ by designing a Neyman-Pearson rule. The likelihood ratio is

$$\Lambda(x) = \frac{\prod_{n=1}^{N} (1-p)^{x_n} p^{1-x_n}}{\prod_{n=1}^{N} p^{x_n} (1-p)^{1-x_n}} = \frac{(1-p)^k \, p^{N-k}}{p^k \, (1-p)^{N-k}} = \left(\frac{1-p}{p}\right)^{2k-N}$$
where $k = \sum_{n=1}^{N} x_n$ is the number of 1s received. $k$ is a sufficient statistic for $\theta$.
The LRT is
$$\left(\frac{1-p}{p}\right)^{2k-N} \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \eta$$
Taking the natural logarithm of both sides and rearranging, we have
$$k \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \frac{N}{2} + \frac{1}{2} \frac{\ln \eta}{\ln \frac{1-p}{p}} =: \kappa$$
The false alarm probability is
$$P_F = \Pr[k > \kappa] + \gamma \, \Pr[k = \kappa] = \sum_{k=\kappa+1}^{N} \binom{N}{k} p^k (1-p)^{N-k} + \gamma \binom{N}{\kappa} p^{\kappa} (1-p)^{N-\kappa}$$
$\kappa$ and $\gamma$ are chosen so that $P_F = \alpha$, as described above.

The corresponding detection probability is

$$P_D = \Pr[k > \kappa] + \gamma \, \Pr[k = \kappa] = \sum_{k=\kappa+1}^{N} \binom{N}{k} (1-p)^k p^{N-k} + \gamma \binom{N}{\kappa} (1-p)^{\kappa} p^{N-\kappa}$$
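The choice of $\kappa$ and $\gamma$, and the resulting $P_D$, can be computed numerically. A minimal sketch (the function names are my own): search the integer threshold $\kappa$ from $N$ downward until the size constraint brackets $\alpha$.

```python
from math import comb

def binom_pmf(k, n, q):
    # Binomial(n, q) probability mass at k
    return comb(n, k) * q**k * (1 - q)**(n - k)

def repetition_np(N, p, alpha):
    """Randomized NP rule for the repetition code: declare H1 when k > kappa,
    flip a gamma-coin when k == kappa (k = number of 1s received)."""
    for kappa in range(N, -1, -1):
        # Under H0 each received bit is Bernoulli(p)
        p_gt = sum(binom_pmf(k, N, p) for k in range(kappa + 1, N + 1))
        p_eq = binom_pmf(kappa, N, p)
        if p_gt <= alpha <= p_gt + p_eq:
            gamma = (alpha - p_gt) / p_eq
            # Under H1 each received bit is Bernoulli(1 - p)
            pd = sum(binom_pmf(k, N, 1 - p) for k in range(kappa + 1, N + 1)) \
                 + gamma * binom_pmf(kappa, N, 1 - p)
            return kappa, gamma, pd
    return N, 0.0, 0.0

# Example (my own numbers): N = 5 transmissions, p = 0.1, size alpha = 0.01
kappa, gamma, pd = repetition_np(5, 0.1, 0.01)
```

Note that the randomization is what allows $P_F$ to equal $\alpha$ exactly; restricting to deterministic rules ($\gamma = 0$) would force $P_F$ onto a discrete grid of achievable sizes.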

Problems

Design a hypothesis testing problem involving continuous random variables such that $\Pr[\Lambda(x) = \eta] > 0$ for certain values of $\eta$. Write down the false-alarm probability as a function of the threshold. Make as general a statement as possible about when this technical condition is satisfied.

Consider the scalar hypothesis testing problem
$$\mathcal{H}_0 : x \sim f_0(x) \qquad \mathcal{H}_1 : x \sim f_1(x)$$
where
$$f_i(x) = \frac{1}{\pi \left(1 + (x - i)^2\right)}, \quad i = 0, 1$$

Write down the likelihood ratio test.

Determine the decision regions as a function of $\eta$, for all $\eta > 0$. Draw a representative of each. What are the "critical" values of $\eta$?

There are five distinct cases.

Compute the size and power ($P_F$ and $P_D$) in terms of the threshold $\eta$ and plot the ROC.

Hint: $\int \frac{1}{1 + x^2} \, dx = \arctan(x) + C$

Suppose we decide to use a simple threshold test $x \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \eta$ instead of the Neyman-Pearson rule. Does our performance suffer much? Plot the ROC for this decision rule on the same graph as the previous ROC.
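For the simple threshold test, the size and power follow directly from the Cauchy tail probabilities via the arctangent. A minimal sketch (my own function names, assuming the test $x > \eta$) that traces this ROC numerically:

```python
import math

# ROC of the simple threshold test x > eta for the Cauchy hypotheses
# f_i(x) = 1 / (pi * (1 + (x - i)^2)), i = 0, 1.
def pf(eta):
    # P(x > eta | H0): tail of a Cauchy centered at 0
    return 0.5 - math.atan(eta) / math.pi

def pd(eta):
    # P(x > eta | H1): tail of a Cauchy centered at 1
    return 0.5 - math.atan(eta - 1.0) / math.pi

# Sweep the threshold to trace out (P_F, P_D) pairs
roc = [(pf(eta), pd(eta)) for eta in [t / 10.0 for t in range(-100, 101)]]
```

Comparing this curve with the Neyman-Pearson ROC from the previous part shows how much (or how little) is lost by the simpler rule.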

Suppose we observe $N$ independent realizations of a Poisson random variable $k$ with intensity parameter $\lambda$:
$$f(k) = \frac{\lambda^k}{k!} e^{-\lambda}$$
We must decide which of two intensities is in effect:
$$\mathcal{H}_0 : \lambda = \lambda_0 \qquad \mathcal{H}_1 : \lambda = \lambda_1$$
where $\lambda_0 < \lambda_1$.

Write down the likelihood ratio test.

Simplify the LRT to a test statistic involving only a sufficient statistic. Apply a monotonically increasing transformation to simplify further.

Determine the distribution of the sufficient statistic under both hypotheses.

Use the characteristic function to show that a sum of IID Poisson variates is again Poisson distributed.
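A quick numerical cross-check of this fact (not a substitute for the characteristic-function proof): convolving two Poisson pmfs term by term reproduces a Poisson pmf with the summed intensity. A sketch with intensities of my own choosing:

```python
import math

def poisson_pmf(k, lam):
    # Poisson probability mass: lam^k e^{-lam} / k!
    return math.exp(-lam) * lam**k / math.factorial(k)

# pmf of X ~ Poisson(5) on a truncated support 0..M-1
M = 60
p = [poisson_pmf(k, 5.0) for k in range(M)]

# pmf of X1 + X2 (X1, X2 IID Poisson(5)) by discrete convolution:
# P(X1 + X2 = k) = sum_i P(X1 = i) P(X2 = k - i)
conv = [sum(p[i] * p[k - i] for i in range(k + 1)) for k in range(M)]

# conv should match the Poisson(10) pmf term by term
err = max(abs(conv[k] - poisson_pmf(k, 10.0)) for k in range(30))
```

The convolution sum for each $k$ uses only indices $\le k$, so no truncation error enters for the terms compared.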

Derive an expression for the probability of error.

Assuming the two hypotheses are equally likely, and $\lambda_0 = 5$ and $\lambda_1 = 6$, what is the minimum number $N$ of observations needed to attain a false-alarm probability no greater than 0.01?

If you have numerical trouble, try rewriting the log-factorial so as to avoid evaluating the factorial of large integers.
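One standard way to do this (a sketch, not prescribed by the text) is to evaluate $\ln k! = \ln \Gamma(k+1)$ directly, for example with `math.lgamma` in Python, so that no large factorial is ever formed:

```python
import math

# Poisson log-pmf without forming k! explicitly:
# ln f(k) = k ln(lambda) - lambda - ln(k!),  with ln(k!) = lgamma(k + 1)
def poisson_log_pmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

# Works even where factorial(k) would overflow a double, e.g. k = 500
lp = poisson_log_pmf(500, 500.0)
```

Probabilities can then be recovered as `math.exp(lp)` at the end, or kept in the log domain throughout the error-probability computation.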

In the repetition code example, suppose $p = 0.1$. What is the smallest value of $N$ needed to ensure $P_F \le 0.01$? What is $P_D$ in this case?





Source:  OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007 Download for free at http://cnx.org/content/col10422/1.5
