
Claim 2 The probability $Q(T(P_x))$ of the type class $T(P_x)$ obeys

$$(n+1)^{-(r-1)} \cdot 2^{-n D(P_x \| Q)} \le Q(T(P_x)) \le 2^{-n D(P_x \| Q)}.$$

Consider now an event $A$ that is a union of type classes $T(P_x)$. Suppose $T(Q) \not\subseteq A$; then $A$ is rare with respect to (w.r.t.) the prior $Q$, and we have $\lim_{n \to \infty} Q(A) = 0$. That is, the probability is concentrated around $Q$. In general, the probability assigned by the prior $Q$ to an event $A$ satisfies

$$Q(A) = \sum_{x \in A} Q(x) = \sum_{T(P_x) \subseteq A} Q(T(P_x)) \doteq \sum_{T(P_x) \subseteq A} 2^{-n D(P_x \| Q)} \doteq 2^{-n \cdot \min_{P \in A} D(P \| Q)},$$

where we denote $a_n \doteq b_n$ when $\frac{1}{n} \log \left( \frac{a_n}{b_n} \right) \to 0$.
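To make Claim 2 concrete, here is a minimal numerical sketch for a binary alphabet ($r = 2$); the Bernoulli parameter $q$ and block length $n$ are illustrative assumptions, not values from the text.

```python
import math

# Minimal check of Claim 2 for a binary alphabet (r = 2), assuming an
# illustrative Bernoulli(q) prior Q and block length n.
q, n = 0.3, 20

for k in range(n + 1):             # k ones: empirical type P_x = (k/n, 1 - k/n)
    p = k / n
    # Exact probability of the type class: all C(n, k) sequences with k ones.
    Q_T = math.comb(n, k) * q**k * (1 - q)**(n - k)
    # KL divergence D(P_x || Q) in bits, with the 0*log(0) = 0 convention.
    D = sum(a * math.log2(a / b) for a, b in ((p, q), (1 - p, 1 - q)) if a > 0)
    lower = (n + 1) ** (-1) * 2 ** (-n * D)    # (n+1)^{-(r-1)} * 2^{-nD}
    upper = 2 ** (-n * D)
    assert lower <= Q_T <= upper * (1 + 1e-9)  # both bounds of Claim 2 hold

print("Claim 2 bounds verified for all", n + 1, "type classes")
```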

Fixed and variable length coding

Fixed to fixed length source coding: As before, we have a sequence $x$ of length $n$, and each element of $x$ is from the alphabet $\alpha$ of size $r$. A source code maps the input $x \in \alpha^n$ (one of $r^n$ possibilities) to a set of $2^{Rn}$ bit vectors, each of length $Rn$. The rate $R$ quantifies the number of output bits of the code per input element of $x$. We assume without loss of generality that $Rn \in \mathbb{Z}$; if not, then we can round $Rn$ up to $\lceil Rn \rceil$, where $\lceil \cdot \rceil$ denotes rounding up. That is, the output of the code consists of $nR$ bits. If $n$ and $R$ are both fixed, then we call this a fixed to fixed length source code.

The decoder processes the $nR$ bits and yields $\hat{x} \in \alpha^n$. Ideally we have $\hat{x} = x$, but if $2^{nR} < r^n$ then there are inputs that are not mapped to any output, and $\hat{x}$ may differ from $x$. Therefore, we want $\Pr(\hat{x} \ne x)$ to be small. If $R$ is too small, then the error probability will go to 1. On the other hand, a sufficiently large $R$ will drive this error probability to 0 as $n$ is increased.

If $\log(r) > R$ and $\Pr(\hat{x} \ne x)$ vanishes as $n$ is increased, then we are compressing, because $2^{n \log(r)} = r^n > 2^{Rn}$, where $r^n$ is the number of possible inputs $x$ and there are $2^{Rn}$ possible outputs.

What is a good fixed to fixed length source code? One option is to map $2^{Rn} - 1$ outputs to inputs with high probabilities, and the last output can be mapped to a "don't care" input. We will discuss the performance of this style of code.

An input $x \in \alpha^n$ is called δ-typical if $Q(x) > 2^{-(H+\delta)n}$. We denote the set of δ-typical inputs by $T_Q(\delta)$; this set includes the type classes whose empirical probabilities are equal (or closest) to the true prior $Q$. Note that for each type class $T_x$, all inputs $x' \in T_x$ in the type class have the same probability, i.e., $Q(x') = Q(x)$. Therefore, the set $T_Q(\delta)$ is a union of type classes, and can be thought of as an event $A$ ([link]) that contains type classes consisting of high-probability sequences. It is easily seen that the event $A$ contains the true i.i.d. distribution $Q$, because sequences whose empirical probabilities satisfy $P_x = Q$ also satisfy

$$Q(x) = 2^{-Hn} > 2^{-(H+\delta)n}.$$

Using the principles discussed in [link], it is readily seen that the probability under the prior $Q$ of the inputs in $T_Q(\delta)$ satisfies $Q(T_Q(\delta)) = Q(A) \to 1$ as $n \to \infty$. Therefore, a code $C$ that enumerates $T_Q(\delta)$ will encode $x$ correctly with high probability.

The key question is the size of $C$, or the cardinality of $T_Q(\delta)$. Because each $x \in T_Q(\delta)$ satisfies $Q(x) > 2^{-(H+\delta)n}$, and $\sum_{x \in T_Q(\delta)} Q(x) \le 1$, we have $|T_Q(\delta)| < 2^{(H+\delta)n}$. Therefore, a rate $R \ge H + \delta$ allows near-lossless coding, because the probability of error vanishes (recall that $Q((T_Q(\delta))^C) \to 0$, where $(\cdot)^C$ denotes the complement).
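The following sketch enumerates the typical set directly for a small binary source and checks both the size bound $|T_Q(\delta)| < 2^{(H+\delta)n}$ and the error probability; the Bernoulli($q$) source and the values of $q$, $n$, and $\delta$ are illustrative assumptions, with $n$ kept small so that all $2^n$ inputs can be listed.

```python
import itertools, math

# Sketch of the typical-set code, assuming an illustrative Bernoulli(q)
# source; n is small enough to enumerate alpha^n = {0,1}^n exhaustively.
q, n, delta = 0.3, 14, 0.1
H = -(q * math.log2(q) + (1 - q) * math.log2(1 - q))   # entropy, bits/symbol

typical, p_typical = 0, 0.0
for x in itertools.product((0, 1), repeat=n):
    k = sum(x)
    Qx = q**k * (1 - q)**(n - k)
    if Qx > 2 ** (-(H + delta) * n):   # delta-typical: Q(x) > 2^{-(H+delta)n}
        typical += 1                    # the code enumerates these inputs
        p_typical += Qx

print("|T_Q(delta)| =", typical, "<", 2 ** ((H + delta) * n))  # size bound
print("error probability Q(T^C) =", 1 - p_typical)             # -> 0 as n grows
```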

On the other hand, a rate $R \le H - \delta$ will not allow lossless coding, and the probability of error will go to 1. To see this intuitively: because the type class whose empirical probability is $Q$ dominates, the type classes $T_x$ whose sequences have larger individual probability, e.g., $Q(x) > 2^{-(H-\delta)n}$, have small probability in aggregate. That is,

$$\sum_{x: \, Q(x) > 2^{-n(H-\delta)}} Q(x) \xrightarrow{n \to \infty} 0.$$

In words, choosing a code $C$ with rate $R = H - \delta$ that contains the words $x$ with highest probability will fail: it will not cover enough probability mass. We conclude that near-lossless coding is possible at rates above $H$ and impossible at rates below $H$.

To see things from a more intuitive angle, consider the definition of entropy, $H(Q) = -\sum_{a \in \alpha} Q(a) \log(Q(a))$. If we consider each bit as reducing uncertainty by a factor of 2, then the average log-likelihood of a length-$n$ input $x$ generated by $Q$ satisfies

$$E[-\log(Q(x))] = E\left[-\log\left(\prod_{i=1}^{n} Q(x_i)\right)\right] = -\sum_{i=1}^{n} E[\log(Q(x_i))] = -\sum_{i=1}^{n} \sum_{a \in \alpha} Q(a) \cdot \log(Q(a)) = nH.$$

Because the expected negative log-likelihood of $x$ is $nH$, it takes $nH$ bits to reduce the uncertainty by this factor.
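A quick Monte Carlo check of this statement, assuming an illustrative Bernoulli($q$) source: the negative log-likelihood of a random length-$n$ input concentrates near $nH$.

```python
import math, random

# Sanity check, assuming a Bernoulli(q) source with illustrative q, n,
# and trial count: -log2 Q(x) over random inputs x should average n*H.
q, n, trials = 0.3, 200, 5000
H = -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

total = 0.0
for _ in range(trials):
    x = [random.random() < q for _ in range(n)]          # draw x_i i.i.d. ~ Q
    total -= sum(math.log2(q if xi else 1 - q) for xi in x)

print("average -log2 Q(x):", total / trials, " n*H:", n * H)
```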

Fixed to variable length source coding: The near-lossless coding above relies on enumerating a collection of high-probability codewords $T_Q(\delta)$. However, this approach fails entirely for $x \notin T_Q(\delta)$. To solve this problem, we use a code that maps $x$ to an output consisting of a variable number of bits. That is, the length of the code will be approximately $nH$ on average, but could be greater or smaller.

One possible variable length code is due to Shannon. Consider all possible $x \in \alpha^n$. For each $x$, allocate $\lceil -\log(Q(x)) \rceil$ bits to $x$. It can be shown that it is possible to construct an invertible (uniquely decodable) code as long as the length $l(x)$ in bits allocated to each $x$ satisfies

$$\sum_x 2^{-l(x)} \le 1.$$

This result is known as the Kraft Inequality. Seeing that

$$\sum_x 2^{-l(x)} = \sum_x 2^{-\lceil -\log(Q(x)) \rceil} \le \sum_x 2^{-(-\log(Q(x)))} = \sum_x Q(x) = 1,$$

we see that the length allocation we suggested satisfies the Kraft Inequality. Therefore, it is possible to construct an invertible (and hence lossless) code with lengths upper bounded by

$$l(x) = \lceil -\log(Q(x)) \rceil \le -\log(Q(x)) + 1,$$

and we have

$$E[l(x)] \le E[-\log(Q(x))] + 1 = nH + 1.$$

This simple construction approaches the entropy to within 1 bit.
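Here is a minimal sketch of the Shannon length allocation for a small Bernoulli($q$) source (an illustrative assumption), verifying the Kraft inequality and the bound $E[l(x)] \le nH + 1$ by direct enumeration.

```python
import itertools, math

# Shannon length allocation l(x) = ceil(-log2 Q(x)), assuming an
# illustrative Bernoulli(q) source; n is small enough to enumerate alpha^n.
q, n = 0.3, 10
H = -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

kraft, avg_len = 0.0, 0.0
for x in itertools.product((0, 1), repeat=n):
    k = sum(x)
    Qx = q**k * (1 - q)**(n - k)
    l = math.ceil(-math.log2(Qx))      # Shannon code length for x
    kraft += 2.0 ** (-l)               # contribution to the Kraft sum
    avg_len += Qx * l                  # contribution to E[l(x)]

print("Kraft sum:", kraft, "(<= 1, so a uniquely decodable code exists)")
print("E[l(x)] =", avg_len, "<= nH + 1 =", n * H + 1)
```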

Unfortunately, a Shannon code is impractical, because it requires constructing a codebook of exponential size $|\alpha|^n$. Instead, arithmetic codes [link] are used; we discussed arithmetic codes in detail in class, but they appear in all standard textbooks, and so we do not describe them here.







Source: OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX. May 16, 2013. Download for free at http://cnx.org/content/col11524/1.1
