
By far the easiest detection problem to solve occurs when the noise vector consists of statistically independent, identically distributed, Gaussian random variables. In this book, a white sequence consists of statistically independent random variables. The white sequence's mean is usually taken to be zero (the zero-mean assumption is realistic for the detection problem: if the mean were non-zero, simply subtracting it from the observed sequence results in a zero-mean noise component) and each component's variance is $\sigma^2$. The equal-variance assumption implies that the noise characteristics are unchanging throughout the entire set of observations. The probability density of the zero-mean noise vector evaluated at $\mathbf{r} - \mathbf{s}_i$ equals that of a Gaussian random vector having independent components ($\mathbf{K} = \sigma^2 \mathbf{I}$) with mean $\mathbf{s}_i$:

$$ p_{\mathbf{n}}(\mathbf{r} - \mathbf{s}_i) = \frac{1}{(2\pi\sigma^2)^{L/2}} \exp\!\left(-\frac{1}{2\sigma^2}(\mathbf{r} - \mathbf{s}_i)^T(\mathbf{r} - \mathbf{s}_i)\right) $$

The resulting detection problem is similar to the Gaussian example examined so frequently in the hypothesis testing sections, the distinction here being a non-zero mean under both models. The logarithm of the likelihood ratio becomes

$$ \frac{(\mathbf{r} - \mathbf{s}_0)^T(\mathbf{r} - \mathbf{s}_0) - (\mathbf{r} - \mathbf{s}_1)^T(\mathbf{r} - \mathbf{s}_1)}{2\sigma^2} \;\underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}}\; \ln\eta $$

and the usual simplifications yield

$$ \left(\mathbf{r}^T\mathbf{s}_1 - \frac{\mathbf{s}_1^T\mathbf{s}_1}{2}\right) \;\underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}}\; \left(\mathbf{r}^T\mathbf{s}_0 - \frac{\mathbf{s}_0^T\mathbf{s}_0}{2}\right) + \sigma^2\ln\eta $$

The quantities in parentheses express the signal processing operations for each model. If more than two signals were assumed possible, quantities such as these would need to be computed for each signal and the largest selected. This decision rule is optimum for the additive, white Gaussian noise problem.
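This decision rule maps almost directly onto a few lines of code. The following sketch (Python with NumPy; the function name awgn_detect, the example signals, and the noise level are illustrative assumptions, not part of the original text) compares $\mathbf{r}^T\mathbf{s}_1 - E_1/2$ against $\mathbf{r}^T\mathbf{s}_0 - E_0/2 + \sigma^2\ln\eta$:

import numpy as np

def awgn_detect(r, s0, s1, sigma2, eta=1.0):
    """Decide between two known signals observed in additive white
    Gaussian noise with variance sigma2 and likelihood-ratio threshold
    eta (eta = 1 corresponds to equally likely models).
    Returns 1 if model M1 is chosen, otherwise 0."""
    stat1 = r @ s1 - 0.5 * (s1 @ s1)  # dot product minus half the signal energy E1
    stat0 = r @ s0 - 0.5 * (s0 @ s0)  # dot product minus half the signal energy E0
    return 1 if stat1 > stat0 + sigma2 * np.log(eta) else 0

# Illustrative use: when s1 is actually present, M1 should usually win.
rng = np.random.default_rng(1)
L = 50
s0 = np.zeros(L)                             # "no signal" alternative
s1 = np.sin(2 * np.pi * 0.1 * np.arange(L))  # example signal
sigma2 = 0.5
r = s1 + rng.normal(scale=np.sqrt(sigma2), size=L)
print(awgn_detect(r, s0, s1, sigma2))        # typically prints 1

For more than two candidate signals, the same structure carries over: compute $\mathbf{r}^T\mathbf{s}_i - E_i/2$ for each signal and select the largest.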

Each term in the computations for the optimum detector has a signal processing interpretation. When expanded, the term $\mathbf{s}_i^T\mathbf{s}_i$ equals $\sum_{l=0}^{L-1} s_i^2(l)$, which is the signal energy $E_i$. The remaining term, $\mathbf{r}^T\mathbf{s}_i$, is the only one involving the observations and hence constitutes the sufficient statistic $\Upsilon_i(\mathbf{r})$ for the additive white Gaussian noise detection problem:

$$ \Upsilon_i(\mathbf{r}) = \mathbf{r}^T\mathbf{s}_i $$

An abstract, but physically relevant, interpretation of this important quantity comes from the theory of linear vector spaces. There, the quantity $\mathbf{r}^T\mathbf{s}_i$ would be termed the dot product between $\mathbf{r}$ and $\mathbf{s}_i$, or the projection of $\mathbf{r}$ onto $\mathbf{s}_i$. By the Schwarz inequality, the largest value of this quantity occurs when these vectors are proportional to each other. Thus, a dot product computation measures how much alike two vectors are: they are completely alike when they are parallel (proportional) and completely dissimilar when orthogonal (the dot product is zero). More precisely, the dot product removes those components from the observations which are orthogonal to the signal. The dot product thereby generalizes the familiar notion of filtering a signal contaminated by broadband noise. In filtering, the signal-to-noise ratio of a bandlimited signal can be drastically improved by lowpass filtering; the output would consist only of the signal and "in-band" noise. The dot product serves a similar role, ideally removing those "out-of-band" components (the orthogonal ones) and retaining the "in-band" ones (those parallel to the signal).
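To make the projection interpretation concrete, the following sketch (again Python with NumPy; the particular signal, noise level, and seed are arbitrary assumptions) computes the sufficient statistic $\Upsilon_i(\mathbf{r}) = \mathbf{r}^T\mathbf{s}_i$ and splits the observation into components parallel and orthogonal to the signal; only the parallel component influences the detector:

import numpy as np

rng = np.random.default_rng(0)

L = 100
s = np.cos(2 * np.pi * 0.05 * np.arange(L))  # example signal
r = s + rng.normal(scale=2.0, size=L)        # observation in white noise

# Sufficient statistic: the dot product (projection of r onto s).
upsilon = r @ s

# Decompose r into a component parallel to s and a remainder.
r_parallel = (upsilon / (s @ s)) * s
r_orthogonal = r - r_parallel

# The remainder is orthogonal to s and carries no weight in the detector.
print(np.allclose(r_orthogonal @ s, 0.0))  # True (up to floating point)

The decomposition shows the filtering analogy directly: r_parallel plays the role of the "in-band" output, while r_orthogonal is the "out-of-band" noise the dot product discards.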

Source: OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007. Download for free at http://cnx.org/content/col10422/1.5