If we are very uncertain about model accuracy, assuming a form for the nominal density may be questionable, or quantifying the degree of uncertainty may be unreasonable. In such cases, any formula for the underlying probability densities may be unjustified, but the model evaluation problem remains. For example, we may want to determine whether a signal is present or absent in an array output (non-zero mean versus zero mean) without much knowledge of the contaminating noise. When only minimal assumptions can be made about the probability densities, non-parametric model evaluation can be used (Gibson and Melsa). In this theoretical framework, no formula for the conditional densities is required; instead, we use worst-case densities that conform to the weak problem specification. Because few assumptions about the probability models are made, non-parametric decision rules are robust: they are insensitive to modeling assumptions precisely because so few are used. The "robust" tests of the previous section are so named because they explicitly encapsulate model imprecision. In neither case should one expect greater performance (smaller error probabilities) from a non-parametric decision rule than from a "robust" one.
Two hypothesized models are to be tested; $\mathcal{M}_0$ is intended to describe the situation "the observed data have zero mean," and the other, $\mathcal{M}_1$, "a non-zero mean is present." We make the usual assumption that the observed data values are statistically independent. The only assumption we will make about the probabilistic descriptions underlying these models is that the median of the observations is zero in the first instance and non-zero in the second. The median of a random variable is the "half-way" value: the probability that the random variable is less than the median is one-half, as is the probability that it is greater. The median and mean of a random variable are not necessarily equal; they are equal in the special case of a symmetric probability density. In any case, the non-parametric models will be stated in terms of the probability that an observation is greater than zero: $\mathcal{M}_0: \Pr[r \ge 0] = 1/2$ and $\mathcal{M}_1: \Pr[r \ge 0] > 1/2$. The first model is equivalent to a zero-median model for the data; the second implies that the median is greater than zero. Note that the forms of the two underlying probability densities need not be the same to correspond to the two models; they can differ in more general ways than in their means.
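A quick numerical illustration of the mean/median distinction: for a unit-rate exponential density (an asymmetric density, chosen here purely for illustration), the mean is 1 while the median is $\ln 2 \approx 0.693$. A minimal sketch confirming this by simulation:

```python
import math
import random

# Draw many samples from a unit-rate exponential density, an
# asymmetric density whose mean (1) and median (ln 2) differ.
random.seed(1)
samples = sorted(random.expovariate(1.0) for _ in range(200_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]

print(mean)    # near 1.0
print(median)  # near ln 2 = 0.693...
```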
To solve this model evaluation problem, we seek (as do all robust techniques) the worst-case density: the density satisfying the conditions for one model that is maximally difficult to distinguish from a given density under the other. Several interesting issues arise in this approach. First of all, we seek a non-parametric answer: the solution must not depend on unstated parameters (we should not have to specify how large the non-zero mean might be). Secondly, the model evaluation rule must not depend on the form of the given density. These seemingly impossible properties are easily satisfied. To find the worst-case density, first define $p_+(r)$ to be the probability density of the observation assuming that $\mathcal{M}_1$ is true and that the observation was non-negative. A similar definition $p_-(r)$ for negative values is needed. In terms of these quantities, the conditional density of an observation under $\mathcal{M}_1$ is given by $p_{r|\mathcal{M}_1}(r) = P\,p_+(r)$ for $r \ge 0$ and $p_{r|\mathcal{M}_1}(r) = (1-P)\,p_-(r)$ for $r < 0$, where $P = \Pr[r \ge 0 \mid \mathcal{M}_1]$. The worst-case density under $\mathcal{M}_0$ would have exactly the same functional form as this one for positive and negative values while having a zero median: $p_{r|\mathcal{M}_0}(r) = \frac{1}{2}\,p_+(r)$ for $r \ge 0$ and $\frac{1}{2}\,p_-(r)$ for $r < 0$.
The likelihood ratio for a single observation would be $2P$ for non-negative values and $2(1-P)$ for negative values. While the likelihood ratio depends on $P$, which is not specified in our non-parametric model, the sufficient statistic will not depend on it! To see this, note that the likelihood ratio varies only with the sign of the observation. Hence, the optimal decision rule amounts to counting how many of the observations are positive; this count can be succinctly expressed with the unit-step function as $\sum_{l=0}^{L-1} u(r_l)$.
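The counting statistic above can be sketched in a few lines; a minimal sketch, in which the function name and test data are our own choices, not part of the text:

```python
import numpy as np

def sign_test_statistic(r):
    """Sufficient statistic for the sign test: the number of
    non-negative observations, i.e. the sum of u(r_l) over l."""
    return int(np.sum(np.asarray(r) >= 0))

# Under the zero-median model, roughly half the observations are
# non-negative, so the statistic hovers near L/2.
rng = np.random.default_rng(0)
L = 1000
print(sign_test_statistic(rng.standard_normal(L)))
```

Note that the statistic never looks at the magnitudes of the observations, only their signs, which is exactly why no density form or value of $P$ need be specified.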
To find the threshold $\gamma$, we can use the Central Limit Theorem to approximate the probability distribution of the sum by a Gaussian. Under $\mathcal{M}_0$, the expected value of $u(r_l)$ is $1/2$ and the variance is $1/4$. To the degree that the Central Limit Theorem reflects the false-alarm probability (see this problem), $P_F$ is approximately given by $P_F \approx Q\left(\frac{\gamma - L/2}{\sqrt{L/4}}\right)$, and the threshold is found to be $\gamma = \sqrt{L/4}\,Q^{-1}(P_F) + \frac{L}{2}$. As it makes no sense for the threshold to be greater than $L$ (how many positively valued observations can there be?), the specified false-alarm probability must satisfy $P_F \ge Q(\sqrt{L})$. This restriction means that increasingly stringent requirements on the false-alarm probability can only be met if we have sufficient data.
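The threshold formula and its feasibility constraint can be checked numerically; a sketch using the standard library's `NormalDist` to evaluate $Q^{-1}$ (the function names here are our own):

```python
from statistics import NormalDist

def Q_inv(p):
    """Inverse of the Gaussian tail function Q(x) = 1 - Phi(x)."""
    return NormalDist().inv_cdf(1.0 - p)

def sign_test_threshold(L, PF):
    """CLT approximation of the sign-test threshold:
    gamma = sqrt(L/4) * Qinv(PF) + L/2.
    Feasible only when gamma <= L, i.e. PF >= Q(sqrt(L))."""
    gamma = (L / 4.0) ** 0.5 * Q_inv(PF) + L / 2.0
    if gamma > L:
        raise ValueError("false-alarm probability too small for L observations")
    return gamma

# With L = 100 and PF = 0.01, Qinv(0.01) is about 2.33,
# so gamma is about 5 * 2.33 + 50, i.e. roughly 61.6.
print(sign_test_threshold(100, 0.01))
```

Asking for a very small $P_F$ with few observations (say $L = 4$, $P_F = 10^{-6}$) pushes the threshold above $L$, which is the data-sufficiency restriction stated above.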