
The size of the errors encountered in the time-delay estimation problem can be more accurately assessed by a bounding technique tailored to the problem: the Ziv-Zakai bound (Weiss and Weinstein; Ziv and Zakai). The derivation of this bound relies on results from detection theory (Chazan, Zakai, and Ziv); it is an example of detection and estimation theory complementing each other to advantage. Consider the detection problem in which we must distinguish the signals $s(l-\tau)$ and $s(l-\tau-\Delta)$ while observing them in the presence of white noise that is not necessarily Gaussian. Let hypothesis 0 represent the case in which the delay, denoted by our parameter symbol $\theta$, is $\tau$, and hypothesis 1 the case in which $\theta = \tau + \Delta$. The suboptimum test statistic consists of estimating the delay, then determining the closest a priori delay to the estimate:
$$
\hat{\theta} \;\underset{\text{hypothesis } 0}{\overset{\text{hypothesis } 1}{\gtrless}}\; \tau + \frac{\Delta}{2}
$$
By using this ad hoc hypothesis test as an essential part of the derivation, the bound can apply to many situations. Furthermore, by not restricting the type of parameter estimate, the bound applies to any estimator. The probability of error for the optimum hypothesis test (derived from the likelihood ratio) is denoted by $P_e(\tau, \Delta)$. Assuming equally likely hypotheses, the probability of error resulting from the ad hoc test must be greater than that of the optimum:
$$
P_e(\tau, \Delta) \;\le\; \frac{1}{2}\Pr\!\left(\varepsilon \ge \frac{\Delta}{2} \,\middle|\, \text{hypothesis } 0\right) + \frac{1}{2}\Pr\!\left(\varepsilon < -\frac{\Delta}{2} \,\middle|\, \text{hypothesis } 1\right)
$$
Here, $\varepsilon$ denotes the estimation error appropriate to the hypothesis:
$$
\varepsilon = \begin{cases} \hat{\theta} - \tau, & \text{under hypothesis } 0 \\ \hat{\theta} - (\tau + \Delta), & \text{under hypothesis } 1 \end{cases}
$$
The delay is assumed to range uniformly between $0$ and $L$. Combining this restriction with the hypothesized delays yields bounds on both $\tau$ and $\Delta$: $0 \le \tau < L - \Delta$ and $0 \le \Delta < L$. Simple manipulations show that the integral of this inequality with respect to $\tau$ over the possible range of delays is given by
$$
\int_0^{L-\Delta} P_e(\tau, \Delta)\, d\tau \;\le\; \frac{1}{2}\int_0^{L} \Pr\!\left(|\varepsilon| \ge \frac{\Delta}{2} \,\middle|\, \theta = \tau\right) d\tau
$$
Here again, the issue of the discrete nature of the delay becomes a consideration; this step in the derivation implicitly assumes that the delay is continuous valued. This approximation can be accepted more readily as it involves integration rather than differentiation (as in the Cramér-Rao bound). If we define $\frac{L}{2}\bar{P}\!\left(\frac{\Delta}{2}\right)$ to be the right side of this equation, so that
$$
\bar{P}\!\left(\frac{\Delta}{2}\right) = \frac{1}{L}\int_0^{L} \Pr\!\left(|\varepsilon| \ge \frac{\Delta}{2} \,\middle|\, \theta = \tau\right) d\tau,
$$
then $\bar{P}(\cdot)$ is the complementary distribution function (the complementary distribution function of a probability distribution function $P(x)$ is defined to be $\bar{P}(x) = 1 - P(x)$, the probability that a random variable exceeds $x$) of the magnitude of the average estimation error. Multiplying $\bar{P}\!\left(\frac{\Delta}{2}\right)$ by $\Delta$ and integrating, the result is
$$
\int_0^{L} \Delta\, \bar{P}\!\left(\frac{\Delta}{2}\right) d\Delta = -2\int_0^{L/2} x^2\, d\bar{P}(x)
$$
The reason for these rather obscure manipulations is now revealed: because $\bar{P}(\cdot)$ is related to the probability distribution function of the absolute error, the right side of this equation is twice the mean-squared error $E\!\left[\varepsilon^2\right]$. The general Ziv-Zakai bound for the mean-squared estimation error of signal delay is thus expressed as
$$
E\!\left[\varepsilon^2\right] \;\ge\; \frac{1}{L}\int_0^{L} \Delta \int_0^{L-\Delta} P_e(\tau, \Delta)\, d\tau\, d\Delta
$$
In many cases, the optimum probability of error $P_e(\tau, \Delta)$ does not depend on $\tau$, the time origin of the observations. This lack of dependence is equivalent to ignoring edge effects and simplifies calculation of the bound. Thus, the Ziv-Zakai bound for time-delay estimation relates the mean-squared estimation error for delay to the probability of error incurred by the optimal detector that is deciding whether a nonzero delay is present or not.
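To make the role of the ad hoc test concrete, here is a small Monte Carlo sketch (not from the original text: the pulse shape, sample size, noise level, and the grid-search delay estimator are all illustrative assumptions). It simulates both detectors in white Gaussian noise and shows that the "estimate the delay, pick the nearest hypothesized delay" rule cannot beat the optimum likelihood-ratio test, which is the inequality the derivation exploits.

```python
# Monte Carlo sketch: ad hoc delay-based test vs. optimum test (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def make_signal(delay, n=64):
    # A smooth pulse s(l - delay); an arbitrary illustrative waveform
    l = np.arange(n)
    return np.exp(-0.5 * ((l - 20 - delay) / 3.0) ** 2)

tau, Delta, sigma = 5.0, 2.0, 1.0
s0, s1 = make_signal(tau), make_signal(tau + Delta)

candidate_delays = np.arange(0, 10.25, 0.25)      # grid for the delay estimator
templates = np.stack([make_signal(d) for d in candidate_delays])
bias = 0.5 * np.sum(templates ** 2, axis=1)       # energy correction for the matched filter

n_trials = 20000
err_opt = err_adhoc = 0
for _ in range(n_trials):
    truth = rng.integers(2)                       # equally likely hypotheses 0 and 1
    x = (s1 if truth else s0) + sigma * rng.standard_normal(s0.size)

    # Optimum test for known signals in white Gaussian noise: nearest-signal rule
    opt = int(np.sum((x - s1) ** 2) < np.sum((x - s0) ** 2))

    # Ad hoc test: estimate the delay, then pick the a priori delay nearest the estimate
    theta_hat = candidate_delays[np.argmax(templates @ x - bias)]
    adhoc = int(theta_hat >= tau + Delta / 2)

    err_opt += (opt != truth)
    err_adhoc += (adhoc != truth)

print("optimum P_e ~", err_opt / n_trials)
print("ad hoc  P_e ~", err_adhoc / n_trials)      # no smaller, up to Monte Carlo noise
```

Because the derivation only requires that some estimator be plugged into this decision rule, any delay estimator could replace the grid search used here.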

$$
E\!\left[\varepsilon^2\right] \;\ge\; \frac{1}{L}\int_0^{L} \Delta\,(L-\Delta)\, P_e(\Delta)\, d\Delta \;=\; \frac{L^2}{6}\, P_e(L) \;-\; \int_0^{L} \left(\frac{\Delta^2}{2} - \frac{\Delta^3}{3L}\right) dP_e(\Delta)
$$
To apply this bound to time-delay estimates (unbiased or not), the optimum probability of error for the type of noise and the relative delay between the two signals must be determined. Substituting this expression into either integral yields the Ziv-Zakai bound.
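As a small illustration of that last step, the following sketch (assuming SciPy is available; the error-probability curve passed in is purely hypothetical) evaluates the first form of the bound by numerical quadrature once a $P_e(\Delta)$ has been determined.

```python
# Sketch: evaluate the first form of the Ziv-Zakai bound,
#   E[eps^2] >= (1/L) * integral_0^L Delta * (L - Delta) * P_e(Delta) dDelta,
# by numerical quadrature, given some optimum error probability P_e(Delta).
from scipy.integrate import quad
from scipy.stats import norm

def ziv_zakai_bound(p_e, L):
    value, _ = quad(lambda d: d * (L - d) * p_e(d), 0.0, L)
    return value / L

# Usage with a purely hypothetical error-probability curve (illustration only)
Q = norm.sf
print(ziv_zakai_bound(lambda d: Q(2.0 * d), L=20.0))
```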

The general behavior of this bound at parameter extremes can be evaluated in some cases. Note that the Cramér-Rao bound in this problem approaches infinity as either the noise variance grows or the observation interval shrinks to 0 (either forces the signal-to-noise ratio to approach 0). This result is unrealistic, as the actual delay is bounded, lying between $0$ and $L$. In this very noisy situation, one should ignore the observations and "guess" any reasonable value for the delay; the resulting estimation error is smaller. The probability of error approaches $\frac{1}{2}$ in this situation no matter what the delay $\Delta$ may be. Considering the simplified form of the Ziv-Zakai bound, the integral in the second form is $0$ in this extreme case, leaving
$$
E\!\left[\varepsilon^2\right] \;\ge\; \frac{L^2}{12}
$$
The Ziv-Zakai bound is exactly the variance of a random variable uniformly distributed over $[0, L)$. The Ziv-Zakai bound thus predicts the size of mean-squared errors more accurately than does the Cramér-Rao bound.
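A quick numerical check of this limiting case (a sketch assuming SciPy): substituting $P_e(\Delta) = 1/2$ for every $\Delta$ into the first form of the bound gives exactly $L^2/12$.

```python
# Sketch: with P_e(Delta) = 1/2 for all Delta, the bound equals L^2 / 12
from scipy.integrate import quad

L = 20.0
value, _ = quad(lambda d: d * (L - d) * 0.5, 0.0, L)
print(value / L, L ** 2 / 12)   # both print 33.333...
```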

Let the noise be Gaussian of variance $\sigma_n^2$ and the signal have energy $E$. The probability of error resulting from the likelihood ratio test is given by
$$
P_e(\Delta) = Q\!\left(\sqrt{\frac{E}{2\sigma_n^2}\bigl(1 - \rho(\Delta)\bigr)}\right)
$$
The quantity $\rho(\Delta)$ is the normalized autocorrelation function of the signal evaluated at the delay $\Delta$:
$$
\rho(\Delta) = \frac{1}{E}\sum_{l} s(l)\, s(l - \Delta)
$$
Evaluation of the Ziv-Zakai bound for a general signal is very difficult in this Gaussian noise case. Fortunately, the normalized autocorrelation function can be bounded by a relatively simple expression to yield a more manageable result. The key quantity $1 - \rho(\Delta)$ in the probability-of-error expression can be rewritten using Parseval's Theorem:
$$
1 - \rho(\Delta) = \frac{1}{2\pi E}\int_{-\pi}^{\pi} |S(\omega)|^2 \bigl(1 - \cos(\omega\Delta)\bigr)\, d\omega
$$
Using the inequality $1 - \cos x \le \min\!\left(\frac{x^2}{2},\, 2\right)$, $1 - \rho(\Delta)$ is bounded from above by $\min\!\left(\frac{\Delta^2\beta^2}{2},\, 2\right)$, where $\beta$ is the root-mean-squared (RMS) signal bandwidth.

$$
\beta^2 = \frac{\displaystyle\int_{-\pi}^{\pi} \omega^2\, |S(\omega)|^2\, d\omega}{\displaystyle\int_{-\pi}^{\pi} |S(\omega)|^2\, d\omega}
$$
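The sketch below (not from the text: the Gaussian pulse is an arbitrary example and the DFT stands in for the discrete-time Fourier transform) computes $\rho(\Delta)$ and $\beta^2$ for a sampled signal and verifies the bound $1 - \rho(\Delta) \le \min\!\left(\Delta^2\beta^2/2,\, 2\right)$ numerically.

```python
# Sketch (illustrative pulse): compute rho(Delta) and the mean-squared bandwidth beta^2
# for a sampled signal, and check 1 - rho(Delta) <= min(Delta^2 * beta^2 / 2, 2).
import numpy as np

n = 256
l = np.arange(n)
s = np.exp(-0.5 * ((l - 64) / 4.0) ** 2)           # arbitrary smooth pulse
E = np.sum(s ** 2)                                 # signal energy

def rho(delta):
    # rho(Delta) = (1/E) * sum_l s(l) s(l - Delta), for integer Delta >= 0
    return np.dot(s[delta:], s[:n - delta]) / E if delta > 0 else 1.0

# beta^2 = integral(omega^2 |S(omega)|^2) / integral(|S(omega)|^2), approximated by the DFT
S = np.fft.fft(s)
omega = 2 * np.pi * np.fft.fftfreq(n)              # radian frequencies in [-pi, pi)
beta2 = np.sum(omega ** 2 * np.abs(S) ** 2) / np.sum(np.abs(S) ** 2)

for delta in (2, 4, 8, 16):
    lhs = 1 - rho(delta)
    rhs = min(delta ** 2 * beta2 / 2, 2.0)
    print(f"Delta = {delta:2d}: 1 - rho = {lhs:.4f} <= {rhs:.4f}")
```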
Because $Q(\cdot)$ is a decreasing function, we have $P_e(\Delta) \ge Q\bigl(\mu \min(\Delta, \Delta^*)\bigr)$, where $\mu$ is a combination of all of the constants involved in the argument of $Q(\cdot)$: $\mu = \sqrt{\dfrac{E\beta^2}{4\sigma_n^2}}$. This quantity varies with the product of the signal-to-noise ratio $E/\sigma_n^2$ and the squared RMS bandwidth $\beta^2$. The parameter $\Delta^* = 2/\beta$ is known as the critical delay and is twice the reciprocal RMS bandwidth. We can use this lower bound for the probability of error in the Ziv-Zakai bound to produce a lower bound on the mean-squared estimation error. The integral in the first form of the bound yields the complicated, but computable, result
$$
E\!\left[\varepsilon^2\right] \;\ge\; \frac{L^2}{6}\, Q\bigl(\mu \min(L, \Delta^*)\bigr) \;+\; \frac{1}{4\mu^2}\, P_{\chi_3^2}\!\bigl(\mu^2 \min(L^2, {\Delta^*}^2)\bigr) \;-\; \frac{2}{3\sqrt{2\pi}\, L\, \mu^3}\left[1 - \left(1 + \frac{\mu^2 \min(L^2, {\Delta^*}^2)}{2}\right) e^{-\mu^2 \min(L^2, {\Delta^*}^2)/2}\right]
$$
The quantity $P_{\chi_3^2}(\cdot)$ is the probability distribution function of a $\chi^2$ random variable having three degrees of freedom. This distribution function has the "closed-form" expression $P_{\chi_3^2}(x) = 1 - 2Q(\sqrt{x}) - \sqrt{\frac{2x}{\pi}}\, e^{-x/2}$. Thus, the threshold effects in this expression for the mean-squared estimation error depend on the relation between the critical delay and the signal duration. In most cases, the minimum equals the critical delay $\Delta^*$, with the opposite choice possible for very low bandwidth signals.
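Because the closed-form expression is easy to mistranscribe, the following sketch (SciPy assumed; the values of $L$, $\mu$, and $\Delta^*$ are arbitrary illustrative choices) evaluates it and cross-checks it against direct numerical integration of $\frac{1}{L}\int_0^L \Delta(L-\Delta)\, Q\bigl(\mu\min(\Delta,\Delta^*)\bigr)\, d\Delta$; the two results should agree.

```python
# Sketch: evaluate the closed-form bound above and cross-check it against direct
# numerical integration of (1/L) * int_0^L Delta*(L - Delta)*Q(mu*min(Delta, Delta*)) dDelta.
import numpy as np
from scipy.integrate import quad
from scipy.stats import chi2, norm

Q = norm.sf

def zz_closed_form(L, mu, delta_star):
    D = min(L, delta_star)
    a = (mu * D) ** 2                               # mu^2 * min(L^2, Delta*^2)
    return (L ** 2 / 6 * Q(mu * D)
            + chi2.cdf(a, df=3) / (4 * mu ** 2)
            - 2 / (3 * np.sqrt(2 * np.pi) * L * mu ** 3)
              * (1 - (1 + a / 2) * np.exp(-a / 2)))

def zz_numerical(L, mu, delta_star):
    value, _ = quad(lambda d: d * (L - d) * Q(mu * min(d, delta_star)), 0.0, L)
    return value / L

L, mu, delta_star = 20.0, 0.8, 3.0                  # arbitrary illustrative values
print(zz_closed_form(L, mu, delta_star), zz_numerical(L, mu, delta_star))  # both ~ 0.868
```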

The Ziv-Zakai bound and the Cramér-Rao bound for the estimation of the time delay of a signal observed in the presence of Gaussian noise are shown as a function of the signal-to-noise ratio. For this plot, $L = 20$ and $\beta^2 = 0.2$. The Ziv-Zakai bound is much larger than the Cramér-Rao bound for signal-to-noise ratios less than 13 dB; the Ziv-Zakai bound can be as much as 30 times larger.
The Ziv-Zakai bound and the Cramér-Rao bound for the time-delay estimation problem are shown in [link]. Note how the Ziv-Zakai bound matches the Cramér-Rao bound only for large signal-to-noise ratios, where they both equal $\frac{1}{4\mu^2} = \frac{\sigma_n^2}{E\beta^2}$. For smaller values, the former bound is much larger and provides a better indication of the size of the estimation errors. These errors are due to the "cycle skipping" phenomenon described earlier. The Ziv-Zakai bound describes them well, whereas the Cramér-Rao bound ignores them.
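As a sanity check on the high signal-to-noise-ratio statement, the short sketch below (SciPy assumed; $L$ and $\Delta^*$ are arbitrary illustrative values) evaluates the Ziv-Zakai integral with $P_e(\Delta) = Q\bigl(\mu\min(\Delta,\Delta^*)\bigr)$ and divides it by the Cramér-Rao value $1/(4\mu^2)$; the ratio approaches 1 as $\mu$ grows.

```python
# Sketch: the ratio (Ziv-Zakai bound) / (Cramer-Rao bound) approaches 1 as mu grows.
from scipy.integrate import quad
from scipy.stats import norm

Q = norm.sf
L, delta_star = 20.0, 4.0        # illustrative values; delta_star plays the role of 2/beta

for mu in (0.5, 1.0, 2.0, 5.0):
    zz, _ = quad(lambda d: d * (L - d) * Q(mu * min(d, delta_star)), 0.0, L)
    zz /= L
    cr = 1.0 / (4.0 * mu * mu)   # Cramer-Rao value sigma_n^2 / (E * beta^2)
    print(f"mu = {mu}: Ziv-Zakai / Cramer-Rao = {zz / cr:.3f}")
```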

Source:  OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007 Download for free at http://cnx.org/content/col10422/1.5