
One extension of parametric estimation theory necessary for its application to array processing is the estimation of signal parameters. We assume that we observe a signal $s(l)$, whose characteristics are known save a few parameters, in the presence of noise. Signal parameters, such as amplitude, time origin, and frequency if the signal is sinusoidal, must be determined in some way. In many cases of interest, we would find it difficult to justify a particular form for the unknown parameters' a priori density. Because of such uncertainties, the minimum mean-squared error and maximum a posteriori estimators cannot be used in many cases. The minimum mean-squared error linear estimator does not require this density, but it is most fruitfully used when the unknown parameter appears in the problem in a linear fashion (such as signal amplitude, as we shall see).

Linear minimum mean-squared error estimator

The only parameter that is linearly related to a signal is the amplitude. Consider, therefore, the problem where the observations at an array's output are modeled as

$$ r(l) = \theta s(l) + n(l), \qquad l = 0, \dots, L-1 $$
The signal waveform $s(l)$ is known, and its energy is normalized to unity ($\sum_l s^2(l) = 1$). The linear estimate of the signal's amplitude is assumed to be of the form $\hat{\theta} = \sum_l h(l) r(l)$, where $h(l)$ minimizes the mean-squared error. To use the Orthogonality Principle, an inner product must be defined for scalars. Little choice avails itself but multiplication as the inner product of two scalars. The Orthogonality Principle states that the estimation error must be orthogonal to all linear transformations defining the kind of estimator being sought.

$$ \forall h:\quad E\!\left[\left(\theta - \sum_{l=0}^{L-1} h_{\mathrm{LIN}}(l) r(l)\right) \sum_{k=0}^{L-1} h(k) r(k)\right] = 0 $$

Manipulating this equation to make the universality constraint more transparent results in

$$ \forall h:\quad \sum_{k=0}^{L-1} h(k)\, E\!\left[\left(\theta - \sum_{l=0}^{L-1} h_{\mathrm{LIN}}(l) r(l)\right) r(k)\right] = 0 $$

Written in this way, the expected value must be 0 for each value of $k$ to satisfy the constraint. Thus, the unit-sample response $h_{\mathrm{LIN}}(l)$ of the estimator of the signal's amplitude must satisfy

$$ \forall k:\quad \sum_{l=0}^{L-1} h_{\mathrm{LIN}}(l)\, E[r(l) r(k)] = E[\theta\, r(k)] $$

Assuming that the signal's amplitude has zero mean and is statistically independent of the zero-mean noise, the expected values in this equation are given by

$$ E[r(l) r(k)] = \sigma_\theta^2 s(l) s(k) + K_n(k, l), \qquad E[\theta\, r(k)] = \sigma_\theta^2 s(k) $$

where $K_n(k, l)$ is the covariance function of the noise. The equation that must be solved for the unit-sample response $h_{\mathrm{LIN}}(l)$ of the optimal linear MMSE estimator of signal amplitude becomes

$$ \forall k:\quad \sum_{l=0}^{L-1} h_{\mathrm{LIN}}(l) K_n(k, l) = \sigma_\theta^2 s(k) \left(1 - \sum_{l=0}^{L-1} h_{\mathrm{LIN}}(l) s(l)\right) $$
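Before moving to the matrix solution, note that the normal equations $\sum_l h_{\mathrm{LIN}}(l) E[r(l) r(k)] = E[\theta\, r(k)]$ can be solved numerically. The following is a minimal Python/NumPy sketch; the sinusoidal signal, the exponentially correlated noise covariance, and every parameter value are assumptions chosen for illustration, not part of the text's development.

```python
import numpy as np

# Sketch: solve the normal equations R h = p for the linear MMSE
# amplitude estimator, where R = E[r r^T] and p = E[theta r].
L = 32
sigma_theta2 = 1.0                       # assumed a priori amplitude variance
l = np.arange(L)
s = np.cos(2 * np.pi * 0.1 * l)
s /= np.linalg.norm(s)                   # normalize signal energy to unity

# Assumed noise covariance: K_n(k, l) = sigma_n^2 * a^|k - l|
sigma_n2, a = 0.5, 0.9
Kn = sigma_n2 * a ** np.abs(np.subtract.outer(l, l))

R = sigma_theta2 * np.outer(s, s) + Kn   # E[r(l) r(k)]
p = sigma_theta2 * s                     # E[theta r(k)]
h_lin = np.linalg.solve(R, p)
```

The closed-form solution derived next shows that this same $\mathbf{h}_{\mathrm{LIN}}$ is simply a scaled matched filter.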
This equation is easily solved once phrased in matrix notation. Letting $\mathbf{K}_n$ denote the covariance matrix of the noise, $\mathbf{s}$ the signal vector, and $\mathbf{h}_{\mathrm{LIN}}$ the vector of coefficients, this equation becomes

$$ \mathbf{K}_n \mathbf{h}_{\mathrm{LIN}} = \sigma_\theta^2 \left(1 - \mathbf{s}^T \mathbf{h}_{\mathrm{LIN}}\right) \mathbf{s} $$

The matched filter for colored-noise problems consisted of the dot product between the vector of observations and $\mathbf{K}_n^{-1}\mathbf{s}$ (see the detector result). Assume that the solution to the linear estimation problem is proportional to the detection-theoretical one: $\mathbf{h}_{\mathrm{LIN}} = c \mathbf{K}_n^{-1}\mathbf{s}$, where $c$ is a scalar constant. Substituting this form yields $c = \sigma_\theta^2 (1 - c\, \mathbf{s}^T \mathbf{K}_n^{-1}\mathbf{s})$, so the proposed solution satisfies the equation; the MMSE estimate of signal amplitude corresponds to applying a matched filter to the observations with

$$ \mathbf{h}_{\mathrm{LIN}} = \frac{\sigma_\theta^2}{1 + \sigma_\theta^2\, \mathbf{s}^T \mathbf{K}_n^{-1}\mathbf{s}}\, \mathbf{K}_n^{-1} \mathbf{s} $$
The mean-squared estimation error of signal amplitude is given by

$$ E[\epsilon^2] = E\!\left[\theta \left(\theta - \sum_{l=0}^{L-1} h_{\mathrm{LIN}}(l) r(l)\right)\right] $$

Substituting the vector expression for $\mathbf{h}_{\mathrm{LIN}}$ yields the result that the mean-squared estimation error equals the proportionality constant $c$ defined earlier.

$$ E[\epsilon^2] = \frac{\sigma_\theta^2}{1 + \sigma_\theta^2\, \mathbf{s}^T \mathbf{K}_n^{-1}\mathbf{s}} $$
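This closed form is easy to check by simulation. The sketch below draws Gaussian amplitudes and noise purely for convenience (the linear MMSE result itself uses only second moments, not distributional assumptions); the signal, covariance, and parameter values are the same illustrative assumptions as before.

```python
import numpy as np

# Sketch: verify E[eps^2] = sigma_theta^2 / (1 + sigma_theta^2 s^T Kn^{-1} s)
# by Monte Carlo, using assumed (illustrative) signal and noise models.
rng = np.random.default_rng(0)
L = 32
l = np.arange(L)
s = np.cos(2 * np.pi * 0.1 * l)
s /= np.linalg.norm(s)

sigma_theta2 = 1.0
sigma_n2, a = 0.5, 0.9
Kn = sigma_n2 * a ** np.abs(np.subtract.outer(l, l))
Kn_inv_s = np.linalg.solve(Kn, s)

c = sigma_theta2 / (1 + sigma_theta2 * (s @ Kn_inv_s))
h_lin = c * Kn_inv_s                      # scaled matched filter

ntrials = 100_000
theta = np.sqrt(sigma_theta2) * rng.standard_normal(ntrials)
noise = rng.multivariate_normal(np.zeros(L), Kn, size=ntrials)
r = theta[:, None] * s + noise            # r(l) = theta s(l) + n(l)

mse = np.mean((theta - r @ h_lin) ** 2)
print(f"empirical MSE = {mse:.4f}, predicted c = {c:.4f}")
```

The two printed numbers should agree to within Monte Carlo error, confirming that the mean-squared error equals $c$.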

Thus, the linear filter that produces the optimal estimate of signal amplitude is equivalent to the matched filter used to detect the signal's presence. We have found this situation to occur when estimates of unknown parameters are needed to solve the detection problem (see Detection in the Presence of Uncertainties). If we had not assumed the noise to be Gaussian, however, this detection-theoretic result would be different, but the estimator would be unchanged. To repeat, this invariance occurs because the linear MMSE estimator requires no assumptions on the noise's amplitude characteristics.

Let the noise be white so that its covariance matrix is proportional to the identity matrix ($\mathbf{K}_n = \sigma_n^2 \mathbf{I}$). The weighting factor in the minimum mean-squared error linear estimator is then proportional to the signal waveform.

$$ h_{\mathrm{LIN}}(l) = \frac{\sigma_\theta^2}{\sigma_n^2 + \sigma_\theta^2}\, s(l), \qquad \hat{\theta}_{\mathrm{LIN}} = \frac{\sigma_\theta^2}{\sigma_n^2 + \sigma_\theta^2} \sum_{l=0}^{L-1} s(l) r(l) $$

This proportionality constant depends only on the relative variances of the noise and the parameter. If the noise variance can be considered to be much smaller than the a priori variance of the amplitude, then this constant does not depend on these variances and equals unity. Otherwise, the variances must be known.
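In the white-noise case the estimator reduces to a scaled correlation of the observations with the signal. Here is a short, self-contained sketch; as before, the signal waveform and all variance values are illustrative assumptions.

```python
import numpy as np

# Sketch of the white-noise case: theta_hat = gain * sum_l s(l) r(l),
# with gain = sigma_theta^2 / (sigma_n^2 + sigma_theta^2). Values assumed.
rng = np.random.default_rng(1)
L = 32
l = np.arange(L)
s = np.cos(2 * np.pi * 0.1 * l)
s /= np.linalg.norm(s)                    # unit signal energy

sigma_theta2, sigma_n2 = 1.0, 0.1
gain = sigma_theta2 / (sigma_n2 + sigma_theta2)

def estimate_amplitude(r):
    """Linear MMSE amplitude estimate for white noise."""
    return gain * (s @ r)

# One synthetic trial: draw an amplitude, observe signal plus white noise.
theta = np.sqrt(sigma_theta2) * rng.standard_normal()
r = theta * s + np.sqrt(sigma_n2) * rng.standard_normal(L)
print(f"true theta = {theta:.3f}, estimate = {estimate_amplitude(r):.3f}")
```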

We find the mean-squared estimation error to be

$$ E[\epsilon^2] = \frac{\sigma_\theta^2}{1 + \dfrac{\sigma_\theta^2}{\sigma_n^2}} $$

This error is significantly reduced from its nominal value $\sigma_\theta^2$ only when the variance of the noise is small compared with the a priori variance of the amplitude. Otherwise, this admittedly optimum amplitude estimate performs poorly, and we might as well have ignored the data and "guessed" that the amplitude was zero; in other words, the problem is difficult in this case.
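A quick worked example (values chosen purely for illustration) makes the point: with $\sigma_\theta^2 = 1$ and $\sigma_n^2 = 0.01$, $E[\epsilon^2] = 1/(1 + 100) \approx 0.0099$, roughly a hundredfold improvement over guessing zero; with $\sigma_n^2 = 1$, $E[\epsilon^2] = 0.5$, only a factor-of-two improvement despite using all $L$ observations optimally.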

Source: OpenStax, Estimation Theory. OpenStax CNX. May 14, 2006. Download for free at http://cnx.org/content/col10352/1.2