Sufficient statistics arise in nearly every aspect of statistical inference. It is important to understand them before progressing to areas such as hypothesis testing and parameter estimation.
Suppose we observe an $N$ -dimensional random vector $X$ , characterized by the density or mass function $f_{\theta}(x)$ , where $\theta$ is a $p$ -dimensional vector of parameters to be estimated. The functional form of $f_{\theta}(x)$ is assumed known. The parameter $\theta$ completely determines the distribution of $X$ . Conversely, a measurement $x$ of $X$ provides information about $\theta$ through the probability law $f_{\theta}(x)$ .
Suppose $X=\left(\begin{array}{c}{X}_{1}\\ {X}_{2}\end{array}\right)$ , where ${X}_{i}\sim \mathcal{N}(\theta ,1)$ are IID. Here $\theta$ is a scalar parameter specifying the mean. The distribution of $X$ is determined by $\theta$ through the density $$f_{\theta}(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{({x}_{1}-\theta)^{2}}{2}}\cdot \frac{1}{\sqrt{2\pi}}e^{-\frac{({x}_{2}-\theta)^{2}}{2}}$$ On the other hand, if we observe $x=\left(\begin{array}{c}100\\ 102\end{array}\right)$ , then we may safely conclude that $\theta =0$ is highly unlikely.
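The claim that $\theta =0$ is implausible for the observation $(100, 102)$ can be checked numerically by comparing log-likelihoods. The sketch below (function and variable names are ours, not from the text) evaluates $\log f_{\theta}(x)$ for the two IID Gaussian observations at two candidate values of $\theta$ :

```python
import math

def gaussian_log_likelihood(theta, x):
    """Log of f_theta(x) for IID N(theta, 1) observations x."""
    return sum(-0.5 * math.log(2 * math.pi) - (xi - theta) ** 2 / 2
               for xi in x)

x = (100.0, 102.0)
ll_zero = gaussian_log_likelihood(0.0, x)    # astronomically negative
ll_near = gaussian_log_likelihood(101.0, x)  # theta near the sample mean
print(ll_zero, ll_near)
```

Working in log space avoids the underflow that would occur if we tried to evaluate $e^{-100^{2}/2}$ directly.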
The $N$ -dimensional observation $X$ carries information about the $p$ -dimensional parameter vector $\theta$ . If $p< N$ , one may ask the following question: Can we compress $x$ into a low-dimensional statistic without any loss of information? Does there exist some function $t=T(x)$ , where the dimension of $t$ is $M< N$ , such that $t$ carries all the useful information about $\theta$ ?
If so, for the purpose of studying $\theta$ we could discard the raw measurements $x$ and retain only the low-dimensional statistic $t$ . We call $t$ a sufficient statistic. The following definition captures this notion precisely:
1. Let $f_{\theta}(x,t)$ denote the joint density or probability mass function of $(X,T(X))$ . If $T(X)$ is a sufficient statistic for $\theta$ , then the conditional distribution of $X$ given $T(X)=t$ does not depend on $\theta$ : $$f_{\theta}(x\mid t)=f(x\mid t)$$
2. Given $t=T(x)$ , full knowledge of the measurement $x$ brings no additional information about $\theta$ . Thus, we may discard $x$ and retain only the compressed statistic $t$ .
3. Any inference strategy based on $f_{\theta}(x)$ may be replaced by a strategy based on $f_{\theta}(t)$ .
( Scharf, pp. 78 ) Suppose a binary information source emits a sequence of binary (0 or 1) valued, independent variables ${x}_{1},\ldots ,{x}_{N}$ . Each binary symbol may be viewed as a realization of a Bernoulli trial: ${x}_{n}\sim \mathrm{Bernoulli}(\theta)$ , IID. The parameter $\theta \in \left[0,1\right]$ is to be estimated.
The probability mass function for the random sample $x=\left(\begin{array}{c}{x}_{1}\\ \vdots \\ {x}_{N}\end{array}\right)$ is $$f_{\theta}(x)=\prod_{n=1}^{N}\theta^{{x}_{n}}(1-\theta)^{1-{x}_{n}}=\theta^{k}(1-\theta)^{N-k}$$ where $k=\sum_{n=1}^{N}{x}_{n}$ denotes the number of ones in the sample.
We will show that $k$ is a sufficient statistic for $\theta$ . This will entail showing that the conditional probability mass function $f_{\theta}(x\mid k)$ does not depend on $\theta$ .
The distribution of the number of ones in $N$ independent Bernoulli trials is binomial: $$f_{\theta}(k)=\binom{N}{k}\theta^{k}(1-\theta)^{N-k}$$ Next, consider the joint distribution of $\left(x,\sum {x}_{n}\right)$ . Since $k=\sum {x}_{n}$ is completely determined by $x$ , we have $$f_{\theta}(x)=f_{\theta}\left(x,\sum {x}_{n}\right)$$ Thus, the conditional probability may be written $$f_{\theta}(x\mid k)=\frac{f_{\theta}(x,k)}{f_{\theta}(k)}=\frac{\theta^{k}(1-\theta)^{N-k}}{\binom{N}{k}\theta^{k}(1-\theta)^{N-k}}=\frac{1}{\binom{N}{k}}$$ This does not depend on $\theta$ , so $k$ is a sufficient statistic for $\theta$ .
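The cancellation above can be verified numerically: for any fixed sample, the ratio $f_{\theta}(x)/f_{\theta}(k)$ should equal $1/\binom{N}{k}$ no matter which $\theta$ we plug in. A minimal sketch (helper names are ours):

```python
import math

def sample_pmf(x, theta):
    """f_theta(x) for a Bernoulli sample x: theta^k (1-theta)^(N-k)."""
    n, k = len(x), sum(x)
    return theta ** k * (1 - theta) ** (n - k)

def binomial_pmf(k, n, theta):
    """f_theta(k): binomial pmf of the number of ones."""
    return math.comb(n, k) * theta ** k * (1 - theta) ** (n - k)

x = [1, 0, 1, 1, 0, 1, 0, 0]  # an arbitrary sample: N = 8, k = 4
n, k = len(x), sum(x)
for theta in (0.2, 0.5, 0.9):
    cond = sample_pmf(x, theta) / binomial_pmf(k, n, theta)
    print(theta, cond)  # always 1 / C(8, 4) = 1/70, independent of theta
```

The loop makes the sufficiency argument concrete: $\theta$ cancels from the conditional for every value tried.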
In the previous example, suppose we wish to store in memory the information we possess about $\theta$ . Compare the savings, in terms of bits, we gain by storing the sufficient statistic $k$ instead of the full sample ${x}_{1},\ldots ,{x}_{N}$ .
In the example above, we had to guess the sufficient statistic and work out the conditional probability by hand. In general, this is a tedious way to go about finding sufficient statistics. Fortunately, spotting sufficient statistics can be made much easier by the Fisher-Neyman Factorization Theorem .
Sufficient statistics have many uses in statistical inference problems. In hypothesis testing, the Likelihood Ratio Test can often be reduced to a sufficient statistic of the data. In parameter estimation, the Minimum Variance Unbiased Estimator of a parameter $\theta$ can be characterized by sufficient statistics and the Rao-Blackwell Theorem .
Minimal sufficient statistics are, roughly speaking, sufficient statistics that cannot be compressed any further without losing information about the unknown parameter. Completeness is a technical characterization of sufficient statistics that allows one to prove minimality. These topics are covered in detail in this module.
Further examples of sufficient statistics may be found in the module on the Fisher-Neyman Factorization Theorem .