In this section, we questioned the existence of an efficient estimator for signal parameters. We found in the succeeding example that an unbiased efficient estimator exists for the signal amplitude. Can a nonlinearly represented parameter, such as time delay, have an efficient estimator?
Simplify the condition for the existence of an efficient estimator by assuming it to be unbiased. Note carefully the dimensions of the matrices involved.
Show that the only solution in this case occurs when the signal depends "linearly" on the parameter vector.
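As a hedged orientation for this derivation (the notation here is assumed, not taken from the text): for an unbiased estimator $\hat{\theta}(r)$ of a parameter vector $\theta$, efficiency requires the score to satisfy $$\frac{\partial}{\partial \theta}\ln p(r;\theta)=F(\theta)\left(\hat{\theta}(r)-\theta\right)$$ where $F(\theta)$ is the Fisher information matrix, so the left side must be a column vector whose dimension matches that of $\theta$. For the additive Gaussian model $r=s(\theta)+n$, the score is linear in $r$ while $\hat{\theta}(r)$ cannot depend on $\theta$; matching the two sides for every $r$ is what forces $s(\theta)$ to depend linearly on $\theta$.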
In Poisson problems, the number of events $n$ occurring in the interval $\left[0, T\right)$ is governed by the probability distribution (see The Poisson Process ) $$\Pr(n)=\frac{(\lambda T)^{n}}{n!}e^{-\lambda T}$$ where $\lambda$ is the average rate at which events occur.
What is the maximum likelihood estimate of average rate?
Does this estimate satisfy the Cramér-Rao bound?
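As a hedged numerical check (all parameter values below are illustrative assumptions): maximizing $\frac{(\lambda T)^{n}}{n!}e^{-\lambda T}$ over $\lambda$ gives the estimate $n/T$, and its variance can be compared against the Cramér-Rao bound by Monte Carlo.

```python
import numpy as np

# Sketch: the ML estimate of the Poisson rate works out to n/T. Its
# variance, lam/T, can be compared against the Cramer-Rao bound
# 1/F(lam) = lam/T (the Fisher information is T/lam). Parameter values
# here are assumptions chosen only for the simulation.
rng = np.random.default_rng(0)
lam_true, T, trials = 5.0, 10.0, 200_000

n = rng.poisson(lam_true * T, size=trials)   # event counts in [0, T)
lam_hat = n / T                              # ML estimate for each trial

crb = lam_true / T                           # Cramer-Rao bound on the variance
print(np.mean(lam_hat))                      # close to lam_true: unbiased
print(np.var(lam_hat), crb)                  # variances agree: efficient
```

Since the empirical variance meets the bound, this sketch suggests the estimate is efficient.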
In the "classic" radar problem, not only is the time of arrival of the radar pulse unknown but also the amplitude. In this problem, we seek methods of simultaneously estimating these parameters. The received signal $r(l)$ is of the form $$r(l)={\theta}_{1}s(l-{\theta}_{2})+n(l)$$ where ${\theta}_{1}$ is Gaussian with zero mean and variance ${\sigma}_{1}^{2}$ and ${\theta}_{2}$ is uniformly distributed over the observation interval.
Find the receiver that computes the maximum a posteriori estimates of ${\theta}_{1}$ and ${\theta}_{2}$ jointly.
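A hedged sketch of the estimator structure this problem leads to: the delay estimate maximizes a matched-filter output, and, given the delay, the Gaussian prior on the amplitude yields a linear (shrinkage) estimate. The pulse shape and every parameter value below are illustrative assumptions, not part of the problem statement.

```python
import numpy as np

# Sketch: joint delay/amplitude estimation under the model
# r(l) = theta1 * s(l - theta2) + n(l), with a Gaussian prior on theta1.
rng = np.random.default_rng(1)
L, sig_n, sig_1 = 256, 0.2, 1.0
s = np.exp(-0.5 * ((np.arange(64) - 32) / 6.0) ** 2)   # assumed pulse shape
E = np.sum(s ** 2)                                      # pulse energy

theta1, theta2 = 1.3, 100   # true amplitude (one assumed prior draw) and delay
r = np.zeros(L)
r[theta2:theta2 + s.size] += theta1 * s
r += rng.normal(0.0, sig_n, L)

# Matched filter: correlate r against the pulse at every candidate delay.
corr = np.correlate(r, s, mode="valid")
tau_hat = int(np.argmax(np.abs(corr)))                  # delay estimate
# Amplitude estimate given the delay: the Gaussian prior shrinks the
# matched-filter output toward zero.
amp_hat = sig_1 ** 2 * corr[tau_hat] / (sig_1 ** 2 * E + sig_n ** 2)
```

Note how the amplitude estimate reduces to the usual least-squares projection $\frac{r \cdot s}{E}$ as the prior variance grows.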
We state without derivation the Cramér-Rao bound for estimates of signal delay (see this equation ).
The parameter $\tau$ is the delay of the signal $s(\cdot)$ observed in additive, white Gaussian noise: $r(l)=s(l-\tau)+n(l)$ , $l\in \{0, \dots, L-1\}$ . Derive the Cramér-Rao bound for this problem.
In Time-delay Estimation , this bound is claimed to be given by $\frac{{\sigma}_{n}^{2}}{E{\beta}^{2}}$ , where ${\beta}^{2}$ is the mean-squared bandwidth. Derive this result from your general formula. Does the bound make sense for all values of signal-to-noise ratio $\frac{E}{{\sigma}_{n}^{2}}$ ?
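As a hedged numerical companion (the pulse and noise level below are illustrative assumptions): the mean-squared bandwidth can be computed either in the frequency domain or, via Parseval's relation, from the squared derivative of the pulse, and the two should agree when forming the bound.

```python
import numpy as np

# Sketch: compute the mean-squared bandwidth beta^2 of a sample pulse two
# ways and form the delay bound sigma_n^2 / (E * beta^2).
sig_n = 0.1
t = np.arange(512)
s = np.exp(-0.5 * ((t - 256) / 20.0) ** 2)      # assumed Gaussian pulse
E = np.sum(s ** 2)

# Frequency-domain definition: beta^2 = sum (2 pi f)^2 |S|^2 / sum |S|^2.
S = np.fft.fft(s)
f = np.fft.fftfreq(s.size)
beta2_freq = np.sum((2 * np.pi * f) ** 2 * np.abs(S) ** 2) / np.sum(np.abs(S) ** 2)

# Time-domain equivalent (Parseval): E * beta^2 = sum of squared derivative.
beta2_time = np.sum(np.gradient(s) ** 2) / E

crb = sig_n ** 2 / (E * beta2_freq)             # Cramer-Rao bound on delay
```

For this assumed Gaussian pulse of width 20 samples, both computations give $\beta^{2} \approx 1/800$, illustrating that wider pulses (smaller bandwidth) make delay harder to estimate.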
Using optimal detection theory, derive the expression (see Time-Delay Estimation ) for the probability of error incurred when trying to distinguish between a delay of $\tau$ and a delay of $\tau+\Delta$ . Consistent with the problem posed for the Cramér-Rao bound, assume the delayed signals are observed in additive, white Gaussian noise.
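A hedged Monte Carlo check of the result this derivation should produce (pulse shape and parameter values below are illustrative assumptions): for two equally likely delayed versions of a pulse in white Gaussian noise, the minimum-distance detector's error probability should be $Q\left(\frac{d}{2\sigma_{n}}\right)$ with $d^{2}=2E(1-\rho(\Delta))$, where $\rho(\Delta)$ is the pulse's normalized autocorrelation.

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability via the complementary error function.
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(2)
sig_n, delta = 1.0, 4
t = np.arange(128)
pulse = np.exp(-0.5 * ((t - 40) / 5.0) ** 2)   # assumed pulse shape
s0, s1 = pulse, np.roll(pulse, delta)          # delays tau and tau + Delta

E = np.sum(s0 ** 2)
rho = np.sum(s0 * s1) / E                      # normalized autocorrelation
d = np.sqrt(2 * E * (1 - rho))                 # distance between the signals
pe_theory = Q(d / (2 * sig_n))

trials = 50_000
noise = rng.normal(0.0, sig_n, (trials, t.size))
r = s0 + noise                                 # the delay-tau signal is sent
# Minimum-distance decision: choose s1 when r correlates better with it.
pe_mc = np.mean(r @ (s1 - s0) > 0)
```

The simulated and theoretical error rates should agree to within Monte Carlo error, and both approach $Q(0)=\frac{1}{2}$ as $\Delta \to 0$.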
In formulating detection problems, the signal as well as the noise are sometimes modeled as Gaussian processes. Let's explore what differences arise in the Cramér-Rao bound derived when the signal is deterministic. Assume that the signal contains unknown parameters $\theta$ , that it is statistically independent of the noise, and that the noise covariance matrix is known.
What forms do the conditional densities of the observations take under the two assumptions? What are the two covariance matrices?
Assuming the stochastic signal model, show that each element of the Fisher information matrix has the form $$F_{i, j}=\frac{1}{2}\mathrm{tr}\left(K^{-1}\frac{\partial K}{\partial {\theta}_{i}}K^{-1}\frac{\partial K}{\partial {\theta}_{j}}\right)$$ where $K$ denotes the covariance matrix of the observations. Make this expression more specific by assuming the noise component has no unknown parameters.
Compare the stochastic and deterministic bounds, the latter given by this equation , when the unknown signal parameters are amplitude and delay. Assume the noise covariance matrix equals ${\sigma}_{n}^{2}I$ . Do these bounds have similar dependence on signal-to-noise ratio?
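As a hedged sanity check of the trace formula above on the simplest assumed model, $K(\theta)=\theta I$ (a single unknown common variance): the formula should reduce to the familiar Fisher information $N/(2\theta^{2})$ for the variance of $N$ independent Gaussians.

```python
import numpy as np

# Sketch: evaluate F = (1/2) tr(K^{-1} dK/dtheta K^{-1} dK/dtheta)
# for the toy covariance K = theta * I and compare with N / (2 theta^2).
N, theta = 8, 2.0
K = theta * np.eye(N)
dK = np.eye(N)                      # dK/dtheta for K = theta * I

Kinv = np.linalg.inv(K)
F = 0.5 * np.trace(Kinv @ dK @ Kinv @ dK)
print(F, N / (2 * theta ** 2))      # both equal 1.0 here
```

The same machinery applies when $\theta$ parameterizes amplitude or delay; only $\partial K/\partial \theta_{i}$ changes.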
The histogram probability density estimator is a special case of a more general class of estimators known as kernel estimators . $$\hat{p}(r; x)=\frac{1}{L}\sum_{l=0}^{L-1} k(x-r(l))$$ Here, the kernel $k(\cdot)$ is usually taken to be a density itself.
What is the kernel for the histogram estimator?
Interpret the kernel estimator in signal processing terminology. Predict what the most time-consuming computation of this estimate might be. Why?
Show that the sample average equals the expected value of a random variable having the density $\hat{p}(r; x)$ regardless of the choice of kernel.
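A hedged sketch of the kernel estimator with one common kernel choice (the Gaussian kernel and the bandwidth below are illustrative assumptions): the estimate superimposes $L$ shifted copies of the kernel, which is why evaluating it over a fine grid of $x$ values tends to be the costly step for large $L$.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 2.0, 500)        # observations r(l)
h = 0.4                                  # kernel bandwidth (assumed)

def kernel(u):
    # Zero-mean Gaussian density of width h, used as the kernel k.
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def p_hat(x):
    # Average of kernels centered on each observation: a convolution of
    # the kernel with the empirical distribution of the data.
    return np.mean(kernel(x[:, None] - data[None, :]), axis=1)

x = np.linspace(-10.0, 12.0, 2000)
dx = x[1] - x[0]
px = p_hat(x)
mass = np.sum(px) * dx                   # integrates to about 1
mean_est = np.sum(x * px) * dx           # matches the sample average
```

The final line illustrates part (c) numerically for this zero-mean kernel: the mean of the estimated density reproduces the sample average of the data.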
Random variables can be generated quite easily if the probability distribution function is "nice." Let $X$ be a random variable having distribution function $P_{X}(x)$ .
Show that the random variable $U=P_{X}(X)$ is uniformly distributed over $\left(0, 1\right)$ .
Based on this result, how would you generate a random variable having a specific density with a uniform random variable generator, which is commonly supplied with most computer and calculator systems?
How would you generate random variables having the hyperbolic secant density $p_{X}(x)=\frac{1}{2}\mathrm{sech}\left(\frac{\pi x}{2}\right)$ ?
Why is the Gaussian not in the class of "nice" probability distribution functions? Despite this fact, the Gaussian and other similarly unfriendly random variables can be generated using tabulated rather than analytic forms for the distribution function.
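A hedged sketch of the inverse-transform method for the hyperbolic secant case: integrating the density gives the distribution function $P_{X}(x)=\frac{2}{\pi}\arctan\left(e^{\pi x/2}\right)$, which, unlike the Gaussian's, inverts in closed form to $x=\frac{2}{\pi}\ln\tan\left(\frac{\pi u}{2}\right)$.

```python
import numpy as np

# Sketch: generate hyperbolic-secant variates by applying the closed-form
# inverse distribution function to uniform draws.
rng = np.random.default_rng(4)
u = rng.uniform(0.0, 1.0, 100_000)
x = (2.0 / np.pi) * np.log(np.tan(np.pi * u / 2.0))  # P^{-1}(u)

# This density is symmetric with zero mean and unit variance, which the
# sample moments should reproduce.
print(np.mean(x), np.var(x))
```

For the Gaussian, the same recipe requires a tabulated or numerically inverted distribution function, since no elementary closed form for $P_{X}^{-1}$ exists.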