
When this situation occurs, that is, when the sufficient statistic and the false-alarm probability can be computed without needing the parameter in question, we have established what is known as a uniformly most powerful test (or UMP test) (Cramér; p.529-531), (van Trees; p.89ff). If a UMP test does not exist, which can only be demonstrated by explicitly finding the sufficient statistic and evaluating its probability distribution, then the composite hypothesis testing problem cannot be solved without some value for the parameter being used.

This seemingly impossible situation (we need the value of the parameter that is assumed unknown) can be approached by noting that some data are available for "guessing" the value of the parameter. If a reasonable guess could be obtained, it could then be used in the model evaluation procedures developed in this chapter. The data available for estimating unknown parameters are precisely the data used in the decision rule. Procedures intended to yield "good" guesses of a parameter's value are termed parameter estimates. Estimation procedures are the topic of the next chapter; there we will explore a variety of estimation techniques and develop measures of estimate quality. For the moment, these issues are secondary; even if we knew the size of the estimation error, for example, the more pertinent issue is how the imprecise parameter value affects the performance probabilities. We can compute these probabilities without explicitly determining the estimate's error characteristics.

One parameter estimation procedure that fits nicely into the composite hypothesis testing problem is the maximum likelihood estimate .

The maximum likelihood estimation procedure and its characteristics are fully described in the next chapter.
Letting $\mathbf{r}$ denote the vector of observables and $\boldsymbol{\theta}$ a vector of parameters, the maximum likelihood estimate of $\boldsymbol{\theta}$, $\hat{\boldsymbol{\theta}}_{\mathrm{ML}}$, is that value of $\boldsymbol{\theta}$ that maximizes the conditional density $p_{\mathbf{r}|\boldsymbol{\theta}}(\mathbf{r}|\boldsymbol{\theta})$ of the observations given the parameter values. To use $\hat{\boldsymbol{\theta}}_{\mathrm{ML}}$ in our decision rule, we estimate the parameter vector separately for each model, use the estimated value in the conditional density of the observations, and compute the likelihood ratio. This procedure is termed the generalized likelihood ratio test for the unknown-parameter problem in hypothesis testing (Lehmann; p.16), (van Trees; p.92ff).
$$\Lambda(\mathbf{r}) = \frac{\max_{\boldsymbol{\theta}}\, p_{\mathbf{r}|\mathcal{M}_1,\boldsymbol{\theta}}(\mathbf{r}|\boldsymbol{\theta})}{\max_{\boldsymbol{\theta}}\, p_{\mathbf{r}|\mathcal{M}_0,\boldsymbol{\theta}}(\mathbf{r}|\boldsymbol{\theta})} \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \eta$$
Note that we do not find the value of the parameter that (necessarily) maximizes the likelihood ratio. Rather, we estimate the parameter value most consistent with the observed data in the context of each assumed model (hypothesis) of data generation. In this way, the estimate conforms with each potential model rather than being determined by some amalgam of supposedly mutually exclusive models.
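To make this recipe concrete, here is a minimal numerical sketch (not from the original text) that forms the log generalized likelihood ratio for a scalar unknown parameter. The callables `loglike_m1` and `loglike_m0` and the search `bounds` are hypothetical stand-ins for whatever conditional densities a particular problem supplies; a model with no unknown parameter can simply ignore the parameter argument.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def glrt_log_statistic(r, loglike_m1, loglike_m0, bounds=(-10.0, 10.0)):
    """Log generalized likelihood ratio: maximize the log-likelihood over
    the unknown scalar parameter separately under each model, then take
    the difference of the two maxima."""
    # Maximize log p(r | M_i, theta) by minimizing its negative.
    best1 = minimize_scalar(lambda th: -loglike_m1(r, th),
                            bounds=bounds, method="bounded")
    best0 = minimize_scalar(lambda th: -loglike_m0(r, th),
                            bounds=bounds, method="bounded")
    # max log p1 - max log p0 = log Lambda(r)
    return best0.fun - best1.fun
```

The decision rule then compares this statistic against the logarithm of the threshold $\eta$.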

Returning to our Gaussian example, assume that the variance $\sigma^2$ is known but that the mean under $\mathcal{M}_1$ is unknown.

$$\mathcal{M}_0:\ \mathbf{r} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$$
$$\mathcal{M}_1:\ \mathbf{r} \sim \mathcal{N}(\mathbf{m}, \sigma^2 \mathbf{I}), \quad \mathbf{m} = \begin{bmatrix} m & \cdots & m \end{bmatrix}^T, \quad m = ?$$

The unknown quantity occurs only in the exponent of the conditional density under $\mathcal{M}_1$; to maximize this density, we need only maximize the exponent. Thus, we consider the derivative of the exponent with respect to $m$:

$$\frac{d}{dm}\left[-\frac{1}{2\sigma^2}\sum_{l=0}^{L-1}(r_l - m)^2\right]_{m = \hat{m}_{\mathrm{ML}}} = 0 \quad\Longrightarrow\quad \sum_{l=0}^{L-1}\left(r_l - \hat{m}_{\mathrm{ML}}\right) = 0$$

The solution of this equation is the average value of the observations.

$$\hat{m}_{\mathrm{ML}} = \frac{1}{L}\sum_{l=0}^{L-1} r_l$$

To derive the decision rule, we substitute this estimate into the conditional density for $\mathcal{M}_1$. The critical term, the exponent of this density, is manipulated to obtain

$$-\frac{1}{2\sigma^2}\sum_{l=0}^{L-1}\left(r_l - \frac{1}{L}\sum_{k=0}^{L-1} r_k\right)^2 = -\frac{1}{2\sigma^2}\left[\sum_{l=0}^{L-1} r_l^2 - \frac{1}{L}\left(\sum_{l=0}^{L-1} r_l\right)^2\right]$$

Noting that the first term in this exponent is identical to the exponent of the denominator in the likelihood ratio, the generalized likelihood ratio becomes

$$\Lambda(\mathbf{r}) = \exp\left\{\frac{1}{2L\sigma^2}\left(\sum_{l=0}^{L-1} r_l\right)^2\right\}$$

The sufficient statistic thus becomes the square (or, equivalently, the magnitude) of the summed observations. Compare this result with that obtained earlier for the known-mean case. There, a UMP test existed if we knew the sign of $m$, and the sufficient statistic was the sum of the observations. Here, where we employed the generalized likelihood ratio test, we made no such assumptions about $m$; this generality accounts for the difference in sufficient statistic. Which test do you think would lead to a greater detection probability for a given false-alarm probability?
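As a numerical check on this derivation, the short sketch below (variable names and the simulation setup are illustrative, not from the original text) computes $\log \Lambda(\mathbf{r}) = \left(\sum_l r_l\right)^2 / (2L\sigma^2)$ directly from simulated observations.

```python
import numpy as np

def gaussian_glrt_log_statistic(r, sigma):
    """Log generalized likelihood ratio for an unknown mean in white
    Gaussian noise: the squared sum of the observations, scaled as in
    the derivation above."""
    L = len(r)
    # Substituting m_ML (the sample mean) into the exponent leaves
    # (sum r_l)^2 / (2 L sigma^2) as log Lambda(r).
    return (np.sum(r) ** 2) / (2 * L * sigma**2)

# Example: L = 100 observations with true mean m = 0.5 and sigma = 1.
rng = np.random.default_rng(0)
r = 0.5 + rng.standard_normal(100)
print(gaussian_glrt_log_statistic(r, sigma=1.0))
```

Note that the statistic grows whether the true mean is positive or negative, which is exactly the sign-insensitivity discussed above.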

Once the generalized likelihood ratio is determined, we need to determine the threshold. If the a priori probabilities $\pi_0$ and $\pi_1$ are known, the evaluation of the threshold proceeds in the usual way. If they are not known, all of the conditional densities must not depend on the unknown parameters, lest the performance probabilities also depend upon them. In most cases, the original model evaluation problem is posed in such a way that one of the models does not depend on the unknown parameter; a criterion on the performance probability related to that model can then be established via the Neyman-Pearson procedure. If this is not the case, the threshold cannot be computed analytically and must be set experimentally: we force one of the models to be true and adjust the threshold on the sufficient statistic until the desired level of performance is reached. Despite this non-mathematical approach, the overall performance of the model evaluation procedure will be optimum because of the results surrounding the Neyman-Pearson criterion.
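In the Gaussian example above, $\mathcal{M}_0$ is free of the unknown mean, so the Neyman-Pearson route applies: under $\mathcal{M}_0$ the summed observations are $\mathcal{N}(0, L\sigma^2)$, and a threshold $\gamma$ on the sufficient statistic $\left(\sum_l r_l\right)^2$ meeting a given $P_F$ follows from the Gaussian tail function. The sketch below (function names and the Monte Carlo check are illustrative assumptions, not from the original text) sets the threshold this way and verifies it by simulation.

```python
import numpy as np
from scipy.stats import norm

def threshold_for_pf(pf, L, sigma):
    """Threshold gamma on (sum r_l)^2 giving false-alarm probability pf.
    Under M0, sum r_l ~ N(0, L sigma^2), so the statistic exceeds gamma
    exactly when |sum r_l| > sqrt(gamma); hence P_F = 2 Q(sqrt(gamma) /
    (sigma sqrt(L)))."""
    root_gamma = sigma * np.sqrt(L) * norm.isf(pf / 2)  # isf = inverse of Q
    return root_gamma ** 2

# Example: P_F = 0.01 with L = 100 observations and sigma = 1.
gamma = threshold_for_pf(1e-2, L=100, sigma=1.0)

# Monte Carlo check: generate data under M0 and measure the alarm rate.
rng = np.random.default_rng(1)
trials = rng.standard_normal((100_000, 100))
stats = trials.sum(axis=1) ** 2
print((stats > gamma).mean())   # should be close to 0.01
```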

The two-model testing problem can be abstractly described as a communication channel where the inputs are the models and the outputs are the decisions. The transition probabilities are related to the false-alarm ($P_F$) and detection ($P_D$) probabilities.
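A small sketch of this abstraction, with illustrative values for $P_F$ and $P_D$ (not from the original text): each row of the matrix gives the decision probabilities when the corresponding model is true.

```python
import numpy as np

def channel_matrix(pf, pd):
    """Transition probabilities of the binary 'channel' whose inputs are
    the true models and whose outputs are the decisions.  Row i holds
    [P(say M0 | M_i true), P(say M1 | M_i true)]."""
    return np.array([[1 - pf, pf],
                     [1 - pd, pd]])

print(channel_matrix(pf=0.01, pd=0.9))
```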

Source:  OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007 Download for free at http://cnx.org/content/col10422/1.5