
Proof. Part (1):

$$H(x_n \mid x_1, \dots, x_{n-1}) \le H(x_n \mid x_2, \dots, x_{n-1}) = H(x_{n-1} \mid x_1, \dots, x_{n-2}) \le \dots \le H(x_2 \mid x_1) \le H(x_1),$$

where each inequality holds because conditioning reduces entropy, and each equality follows from stationarity.

Part (2):

$$H_n(x) = \frac{1}{n}\left[ H(x_1) + H(x_2 \mid x_1) + \dots + H(x_n \mid x_1, \dots, x_{n-1}) \right] \ge \frac{1}{n}\left[ H(x_n \mid x_1, \dots, x_{n-1}) + \dots + H(x_n \mid x_1, \dots, x_{n-1}) \right] = H(x_n \mid x_1, \dots, x_{n-1}),$$

where the inequality holds because, by part (1), each term in the sum is at least as large as the last term.

Part (3): This follows from the first equality in the proof of (2): $H_n(x)$ is the average of a monotonically non-increasing sequence, and such averages are themselves non-increasing.

Part (4): Both sequences are monotone non-increasing (parts (1) and (3)) and bounded below (by zero). Therefore, they both have limits. Denote $\bar{H}(x) = \lim_{n \to \infty} H_n(x)$ and $\tilde{H}(x) = \lim_{n \to \infty} H(x_n \mid x_1, \dots, x_{n-1})$.

Owing to part (2), $\bar{H} \ge \tilde{H}$. Therefore, it suffices to prove $\tilde{H} \ge \bar{H}$.

$$H_{n+m}(x) = \frac{1}{n+m}\left[ H(x_1^{n-1}) + \sum_{i=n}^{n+m} H(x_i \mid x_1, \dots, x_{i-1}) \right] \le \frac{H(x_1^{n-1})}{n+m} + \frac{m+1}{n+m}\, H(x_n \mid x_1, \dots, x_{n-1}),$$

where the inequality applies part (1) to each of the $m+1$ terms of the sum.

Now fix $n$ and let $m \to \infty$. The left-hand side tends to $\bar{H}(x)$, the first term on the right vanishes, and the factor $\frac{m+1}{n+m}$ tends to one, so $\bar{H}(x) \le H(x_n \mid x_1, \dots, x_{n-1})$ for every $n$. Letting $n \to \infty$ gives the inequality $\bar{H} \le \tilde{H}$, which proves that both limits are equal.

Coding theorem : Theorem 3 yields, for fixed-to-variable length coding of a stationary source, that there exists a lossless code whose compression rate $\rho_n$ obeys

$$\rho_n = \frac{E[l(x_1, \dots, x_n)]}{n} \le H_n(x) + \frac{1}{n}.$$

This can be proved, for example, by choosing $l(x_1, \dots, x_n) = \lceil -\log P(x_1, \dots, x_n) \rceil$, which is a Shannon code. As $n$ is increased, the compression rate $\rho_n$ converges to the entropy rate.
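As a concrete check, here is a minimal Python sketch (illustrative, not from the original notes) that assigns Shannon code lengths $\lceil -\log_2 P(x_1, \dots, x_n) \rceil$ to blocks of a hypothetical i.i.d. binary source and verifies $\rho_n \le H_n(x) + 1/n$ numerically.

```python
import itertools
import math

# Hypothetical i.i.d. binary source with P(x_i = 1) = 0.2; block length n = 8.
p1, n = 0.2, 8

expected_len = 0.0   # E[l(x_1, ..., x_n)] under the source distribution
block_entropy = 0.0  # H(x_1, ..., x_n)
for block in itertools.product([0, 1], repeat=n):
    k = sum(block)
    prob = p1 ** k * (1 - p1) ** (n - k)                # P(x_1, ..., x_n)
    expected_len += prob * math.ceil(-math.log2(prob))  # Shannon code length
    block_entropy -= prob * math.log2(prob)

rho_n = expected_len / n   # compression rate of the Shannon code
H_n = block_entropy / n    # n-th order entropy H_n(x)
print(f"rho_n = {rho_n:.4f} <= H_n + 1/n = {H_n + 1 / n:.4f}")
assert rho_n <= H_n + 1 / n
```

For an i.i.d. source $H_n(x) = H(x_1)$, so the $1/n$ term is at most the per-symbol overhead of rounding $-\log P$ up to an integer codeword length.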

We also have a converse theorem for lossless coding of stationary sources. That is, $\rho_n \ge H_n(x) \ge \bar{H}$.

Stationary ergodic sources

Consider the two-sided sequence $x = (\dots, x_{-1}, x_0, x_1, \dots)$. Let $x' = Sx$ denote the shifted sequence, that is, $x'_n = x_{n+1}$ for every $n \in \mathbb{Z}$, so that $S^i x$ shifts $x$ by $i$ steps. Let $f_k(x)$ be a function that operates on the coordinates $(x_0, \dots, x_{k-1})$. An ergodic source has the property that empirical averages converge to statistical averages,

$$\frac{1}{n} \sum_{i=0}^{n-1} f_k(S^i x) \xrightarrow[n \to \infty]{\text{a.s.}} E[f_k(x)].$$
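A quick numerical illustration, assuming a hypothetical i.i.d. binary source (i.i.d. sources are stationary and ergodic): the empirical average of a window function $f_k$ over the shifted sequences $S^i x$ approaches $E[f_k(x)]$ as $n$ grows.

```python
import random

random.seed(0)

# Hypothetical i.i.d. binary source with P(x_i = 1) = 0.3 (hence ergodic).
p1, k, n = 0.3, 2, 100_000
x = [1 if random.random() < p1 else 0 for _ in range(n + k)]

def f_k(window):
    """A function of k consecutive coordinates; here, the indicator of (1, 1)."""
    return 1 if all(window) else 0

# Empirical average (1/n) * sum_i f_k(S^i x), reading k coordinates per shift.
emp = sum(f_k(x[i:i + k]) for i in range(n)) / n
print(f"empirical average = {emp:.4f}, E[f_k(x)] = {p1 ** k:.4f}")
```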

In block coding, we want

$$\frac{1}{nN} \sum_{i=0}^{n-1} l(x_{iN+1}, \dots, x_{(i+1)N}) \xrightarrow{\text{a.s.}} \bar{H}.$$

We will be content with convergence in probability, although almost sure convergence is stronger.

Theorem 4 Let $X$ be a stationary ergodic source with $H_1(x) < \infty$. Then for every $\epsilon > 0$ and $\delta > 0$, there exists $n_0(\delta, \epsilon)$ such that for all $n \ge n_0(\delta, \epsilon)$,

$$\Pr\left\{ \left| \frac{1}{n} I(x_1, \dots, x_n) - \bar{H} \right| > \delta \right\} \le \epsilon,$$

where $I(x_1, \dots, x_n) = -\log\left( \Pr(x_1, \dots, x_n) \right)$.
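To visualize the concentration, the sketch below assumes a hypothetical memoryless binary source, for which $\bar{H}$ equals the single-letter entropy, and estimates $\Pr\{ |\frac{1}{n} I(x_1, \dots, x_n) - \bar{H}| > \delta \}$ for several block lengths.

```python
import math
import random

random.seed(1)

# Hypothetical memoryless binary source; for i.i.d. sources, H_bar = H(x_1).
p1 = 0.25
H_bar = -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def information_rate(n):
    """Draw x_1, ..., x_n and return (1/n) I(x_1^n) = -(1/n) log2 Pr(x_1^n)."""
    info = sum(-math.log2(p1) if random.random() < p1 else -math.log2(1 - p1)
               for _ in range(n))
    return info / n

delta, trials = 0.05, 1000
for n in (10, 100, 1000):
    misses = sum(abs(information_rate(n) - H_bar) > delta for _ in range(trials))
    print(f"n = {n:4d}: Pr(|I/n - H_bar| > delta) ~ {misses / trials:.3f}")
```

The estimated probability shrinks as $n$ grows, which is exactly the statement of Theorem 4.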

The proof of this result is quite lengthy; we discussed it in detail, but we skip it here.

Theorem 4 is called the ergodic theorem of information theory or the ergodic theorem of entropy. Shannon (1948) proved convergence in probability for stationary ergodic Markov sources. McMillan (1953) proved $L_1$ convergence for stationary ergodic sources. Breiman (1957/1960) proved convergence with probability 1 for stationary ergodic sources.

Parametric models of information sources

In this section, we will discuss several parametric models and see what their entropy rates are.

Memoryless sources : We have seen that, for memoryless sources,

$$p(x) = \prod_{i=1}^{n} p(x_i),$$

where the parameters, denoted by $\theta$, are

$$\theta = \{ p(a),\ a = 1, 2, \dots, r-1 \},$$

$\alpha = \{1, 2, \dots, r\}$ is the alphabet, and there are $r - 1$ free parameters in total, because the probabilities must sum to one.
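A short Python sketch of this parameterization (the alphabet size and probabilities are hypothetical): $r - 1$ free probabilities determine the whole distribution, and the entropy rate of a memoryless source is simply $H(x_1)$.

```python
import math

# Hypothetical memoryless source over alphabet {1, ..., r} with r = 4.
# Only r - 1 = 3 probabilities are free parameters; the last one is implied.
theta = [0.5, 0.25, 0.15]        # p(1), p(2), p(3)
p = theta + [1.0 - sum(theta)]   # p(4) = 1 - (p(1) + p(2) + p(3))

# Entropy rate of a memoryless source = single-letter entropy H(x_1).
H = -sum(pa * math.log2(pa) for pa in p if pa > 0)
print(f"{len(theta)} free parameters, entropy rate H(x_1) = {H:.4f} bits")
```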

Markov sources : The distribution of a Markov source is defined as

$$p(x_1, x_2, \dots, x_n) = p(x_1, \dots, x_k) \prod_{i=k+1}^{n} p(x_i \mid x_{i-k}^{i-1}),$$

where $n \ge k$. We must define the initial probabilities $\{ p(a_1, a_2, \dots, a_k) \}_{(a_1, a_2, \dots, a_k) \in \alpha^k}$ and the transition probabilities $\{ p(a_{k+1} \mid a_1^k) \}$. There are $r^k - 1$ free initial probabilities and $(r-1) r^k$ free transition probabilities, giving a total of $r^{k+1} - 1$ parameters. For example, a binary first-order source ($r = 2$, $k = 1$) has $1$ free initial probability and $2$ free transition probabilities, for $2^2 - 1 = 3$ parameters. Note that

$$E\{ -\log p(x_i \mid x_{i-k}^{i-1}) \} = H(X_i \mid X_{i-k}^{i-1}) = \bar{H}(X) \quad \text{for } i > k.$$
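For a concrete instance, this sketch (with hypothetical transition probabilities) computes the entropy rate $\bar{H}(X) = H(X_i \mid X_{i-1})$ of a first-order binary Markov source ($r = 2$, $k = 1$) under its stationary distribution.

```python
import math

# Hypothetical first-order (k = 1) binary Markov source (r = 2):
# r^k - 1 = 1 free initial probability, (r - 1) r^k = 2 free transitions.
P = [[0.9, 0.1],   # p(x_i = 0 | x_{i-1} = 0), p(x_i = 1 | x_{i-1} = 0)
     [0.4, 0.6]]   # p(x_i = 0 | x_{i-1} = 1), p(x_i = 1 | x_{i-1} = 1)

# Stationary distribution pi solves pi = pi P; explicit for two states.
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1.0 - pi0]

# H_bar(X) = sum_a pi(a) H(X_i | X_{i-1} = a)
#          = -sum_{a,b} pi(a) P[a][b] log2 P[a][b]
H_bar = -sum(pi[a] * P[a][b] * math.log2(P[a][b])
             for a in range(2) for b in range(2) if P[a][b] > 0)
print(f"entropy rate H_bar(X) = {H_bar:.4f} bits/symbol")
```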

Source: OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX. May 16, 2013. Download for free at http://cnx.org/content/col11524/1.1