
Claim 2: The probability $Q(T(P_x))$ of the type class $T(P_x)$ obeys

$$(n+1)^{-(r-1)} \cdot 2^{-n D(P_x \| Q)} \le Q(T(P_x)) \le 2^{-n D(P_x \| Q)}.$$

Consider now an event $A$ that is a union of type classes $T(P_x)$. Suppose $T(Q) \nsubseteq A$; then $A$ is rare with respect to (w.r.t.) the prior $Q$, and we have $\lim_{n \to \infty} Q(A) = 0$. That is, the probability is concentrated around $Q$. In general, the probability assigned by the prior $Q$ to an event $A$ satisfies

$$Q(A) = \sum_{x \in A} Q(x) = \sum_{T(P_x) \subseteq A} Q(T(P_x)) \doteq \sum_{T(P_x) \subseteq A} 2^{-n D(P_x \| Q)} \doteq 2^{-n \cdot \min_{P \in A} D(P \| Q)},$$

where we denote $a_n \doteq b_n$ when $\frac{1}{n} \log \left( \frac{a_n}{b_n} \right) \to 0$.
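To make Claim 2 and the $\doteq$ notation concrete, here is a minimal numerical sketch in Python. It assumes a binary alphabet ($r = 2$) and a Bernoulli prior $Q$, with the block length, prior, and type chosen arbitrarily for illustration; the exact type-class probability is computed by counting and compared against both bounds.

```python
from math import comb, log2

# Numerical sketch of Claim 2 (assumes a binary alphabet, r = 2, and a
# Bernoulli(q) prior Q; n, q, and the type are chosen for illustration).
# The exact probability of the type class T(P_x) is compared against
# (n+1)^(-(r-1)) * 2^(-n D(P||Q)) <= Q(T(P)) <= 2^(-n D(P||Q)).

def kl(p, q):
    """Binary divergence D(P || Q) in bits."""
    total = 0.0
    for pi, qi in ((p, q), (1 - p, 1 - q)):
        if pi > 0:
            total += pi * log2(pi / qi)
    return total

n, q = 100, 0.3    # block length and prior Q = Bernoulli(0.3)
k = 40             # type class of sequences with k ones, so P = k/n
p = k / n

# Exact: |T(P)| = C(n, k) sequences, each of probability q^k (1-q)^(n-k).
exact = comb(n, k) * q**k * (1 - q)**(n - k)

upper = 2 ** (-n * kl(p, q))
lower = upper / (n + 1)       # the (n+1)^(-(r-1)) factor, with r = 2

print(f"{lower:.3e} <= {exact:.3e} <= {upper:.3e}")
assert lower <= exact <= upper
```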

Fixed and variable length coding

Fixed to fixed length source coding: As before, we have a sequence $x$ of length $n$, where each element of $x$ is drawn from the alphabet $\alpha$ of size $r$. A source code maps the input $x \in \alpha^n$, one of $r^n$ possibilities, to a set of $2^{Rn}$ bit vectors, each of length $Rn$. The rate $R$ quantifies the number of output bits of the code per input element of $x$. We assume without loss of generality that $Rn \in \mathbb{Z}$; if not, then we can round $Rn$ up to $\lceil Rn \rceil$, where $\lceil \cdot \rceil$ denotes rounding up. That is, the output of the code consists of $nR$ bits. If $n$ and $R$ are fixed, then we call this a fixed to fixed length source code.
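For illustration (the numbers are chosen arbitrarily): with a ternary alphabet ($r = 3$), $n = 100$, and $R = 1.2$, the code outputs $Rn = 120$ bits, whereas describing an arbitrary input exactly would require $\lceil \log_2(3^{100}) \rceil = \lceil 158.5 \rceil = 159$ bits.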

The decoder processes the $nR$ bits and yields $\hat{x} \in \alpha^n$. Ideally we have $\hat{x} = x$, but if $2^{nR} < r^n$ then there are inputs that are not mapped to any output, and $\hat{x}$ may differ from $x$. Therefore, we want $\Pr(\hat{x} \ne x)$ to be small. If $R$ is too small, then the error probability will go to 1. On the other hand, a sufficiently large $R$ will drive this error probability to 0 as $n$ is increased.

If $\log(r) > R$ and $\Pr(\hat{x} \ne x)$ vanishes as $n$ is increased, then we are compressing, because $2^{n \log(r)} = r^n > 2^{Rn}$, where $r^n$ is the number of possible inputs $x$ and there are $2^{Rn}$ possible outputs.

What is a good fixed to fixed length source code? One option is to map $2^{Rn} - 1$ outputs to the inputs with highest probabilities, and the last output can be mapped to a "don't care" input. We will discuss the performance of this style of code.

An input $x \in \alpha^n$ is called $\delta$-typical if $Q(x) > 2^{-(H+\delta)n}$. We denote the set of $\delta$-typical inputs by $T_Q(\delta)$; this set includes the type classes whose empirical probabilities are equal (or closest) to the true prior $Q$. Note that for each type class $T_x$, all inputs $x' \in T_x$ in the type class have the same probability, i.e., $Q(x') = Q(x)$. Therefore, the set $T_Q(\delta)$ is a union of type classes, and can be thought of as an event $A$ ([link]) that contains the type classes consisting of high-probability sequences. It is easily seen that the event $A$ contains the true i.i.d. distribution $Q$, because sequences whose empirical probabilities satisfy $P_x = Q$ also satisfy

$$Q(x) = 2^{-Hn} > 2^{-(H+\delta)n}.$$

Using the principles discussed in [link], it is readily seen that the probability under the prior $Q$ of the inputs in $T_Q(\delta)$ satisfies $Q(T_Q(\delta)) = Q(A) \to 1$ as $n \to \infty$. Therefore, a code $C$ that enumerates $T_Q(\delta)$ will encode $x$ correctly with high probability.

The key question is the size of $C$, or the cardinality of $T_Q(\delta)$. Because each $x \in T_Q(\delta)$ satisfies $Q(x) > 2^{-(H+\delta)n}$, and $\sum_{x \in T_Q(\delta)} Q(x) \le 1$, we have $|T_Q(\delta)| < 2^{(H+\delta)n}$. Therefore, a rate $R \ge H + \delta$ allows near-lossless coding, because the probability of error vanishes (recall that $Q((T_Q(\delta))^C) \to 0$, where $(\cdot)^C$ denotes the complement).
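As a sanity check, the typical set and its cardinality bound can be verified by brute force for a toy source. The sketch below assumes a Bernoulli(q) source with small, arbitrarily chosen parameters; it enumerates all $2^n$ binary inputs, which is feasible only at this scale.

```python
import itertools
from math import log2

# Brute-force sketch of the typical-set code (assumes a Bernoulli(q)
# source; q, n, and delta are illustrative). Enumerate all 2^n binary
# inputs, keep the delta-typical ones, and check the cardinality bound
# |T_Q(delta)| < 2^((H + delta) * n) and the probability mass covered.

q, n, delta = 0.2, 16, 0.15
H = -q * log2(q) - (1 - q) * log2(1 - q)   # source entropy in bits

def prob(x):
    """i.i.d. probability Q(x) of a binary tuple x."""
    k = sum(x)
    return q**k * (1 - q)**(n - k)

typical = [x for x in itertools.product((0, 1), repeat=n)
           if prob(x) > 2 ** (-(H + delta) * n)]

coverage = sum(prob(x) for x in typical)
print(f"|T_Q(delta)| = {len(typical)}, bound = {2 ** ((H + delta) * n):.0f}")
print(f"Q(T_Q(delta)) = {coverage:.3f}")   # approaches 1 as n grows
assert len(typical) < 2 ** ((H + delta) * n)
```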

On the other hand, a rate $R \le H - \delta$ will not allow lossless coding, and the probability of error will go to 1. We can see this intuitively. Because the type class whose empirical probability is $Q$ dominates, the type classes $T_x$ whose sequences have larger probability, e.g., $Q(x) > 2^{-(H-\delta)n}$, will have small probability in aggregate. That is,

$$\sum_{x : Q(x) > 2^{-n(H-\delta)}} Q(x) \xrightarrow{n \to \infty} 0.$$

In words, choosing a code $C$ with rate $R = H - \delta$ that contains the words $x$ with highest probability will fail: it will not cover enough probability mass. We conclude that near-lossless coding is possible at rates above $H$ and impossible at rates below $H$.
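The converse can also be observed numerically. The following sketch, again assuming a Bernoulli(q) source with illustrative parameters, gives a budget of $2^{(H-\delta)n}$ codewords to the most probable sequences (grouped by type, since all sequences of a type are equiprobable) and shows that the probability mass they capture shrinks as $n$ grows.

```python
from math import comb, log2

# Converse sketch (assumes a Bernoulli(q) source; parameters are
# illustrative): fill a budget of 2^((H - delta) * n) codewords greedily
# with the most probable sequences and measure the mass captured.

q, delta = 0.2, 0.1

def covered_mass(n):
    H = -q * log2(q) - (1 - q) * log2(1 - q)
    budget = 2 ** ((H - delta) * n)       # number of codewords allowed
    mass = 0.0
    # For q < 1/2, sequences with fewer ones are more probable, so fill
    # the budget type by type, starting from k = 0 ones.
    for k in range(n + 1):
        count = comb(n, k)                # size of the type class
        p_each = q**k * (1 - q)**(n - k)  # probability of each sequence
        take = min(count, budget)
        mass += take * p_each
        budget -= take
        if budget <= 0:
            break
    return mass

for n in (50, 200, 800):
    print(n, f"{covered_mass(n):.4f}")    # captured mass decreases with n
```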

To see things from a more intuitive angle, consider the definition of entropy, $H(Q) = -\sum_{a \in \alpha} Q(a) \log(Q(a))$. If we consider each bit as reducing uncertainty by a factor of 2, then the average log-likelihood of a length-$n$ input $x$ generated by $Q$ satisfies

$$E[-\log(\Pr(x))] = E\left[-\log\left(\prod_{i=1}^{n} Q(x_i)\right)\right] = -\sum_{i=1}^{n} E[\log(Q(x_i))] = -\sum_{i=1}^{n} \sum_{a \in \alpha} Q(a) \log(Q(a)) = nH.$$

Because the expected log-likelihood of $x$ is $nH$, it will take $nH$ bits to reduce the uncertainty by this factor.
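A quick Monte Carlo check of this identity, assuming a Bernoulli(q) source with parameters picked for illustration: the empirical average of $-\log_2 Q(x)$ over random length-$n$ inputs should be close to $nH$.

```python
import random
from math import log2

# Monte Carlo check (assumes a Bernoulli(q) source; parameters are
# illustrative): average -log2 Q(x) over random inputs vs. n * H.

q, n, trials = 0.3, 100, 20_000
H = -q * log2(q) - (1 - q) * log2(1 - q)

total = 0.0
for _ in range(trials):
    k = sum(1 for _ in range(n) if random.random() < q)  # number of ones
    total += -(k * log2(q) + (n - k) * log2(1 - q))      # -log2 Q(x)

print(f"empirical E[-log Q(x)] = {total / trials:.2f}, nH = {n * H:.2f}")
```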

Fixed to variable length source coding: The near-lossless coding above relies on enumerating a collection of high-probability codewords $T_Q(\delta)$. However, this approach suffers from a troubling failure for $x \notin T_Q(\delta)$. To solve this problem, we incorporate a code that maps $x$ to an output consisting of a variable number of bits. That is, the length of the code will be approximately $nH$ on average, but could be larger or smaller.

One possible variable length code is due to Shannon. Consider all possible $x \in \alpha^n$. For each $x$, allocate $\lceil -\log(Q(x)) \rceil$ bits to $x$. It can be shown that it is possible to construct an invertible (uniquely decodable) code as long as the length $l(x)$ in bits allocated to each $x$ satisfies

$$\sum_x 2^{-l(x)} \le 1.$$

This result is known as the Kraft Inequality. Seeing that

$$\sum_x 2^{-l(x)} = \sum_x 2^{-\lceil -\log(Q(x)) \rceil} \le \sum_x 2^{-(-\log(Q(x)))} = \sum_x Q(x) = 1,$$

we see that the length allocation we suggested satisfies the Kraft Inequality. Therefore, it is possible to construct an invertible (and hence lossless) code with lengths upper bounded by

$$l(x) = \lceil -\log(Q(x)) \rceil \le -\log(Q(x)) + 1,$$

and we have

$$E[l(x)] \le E[-\log(Q(x))] + 1 = nH + 1.$$

This simple construction approaches the entropy to within 1 bit.
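A small sketch of this length allocation, assuming a Bernoulli(q) source and brute-forcing all $2^n$ inputs (feasible only for tiny $n$): it computes $l(x) = \lceil -\log_2 Q(x) \rceil$ for every input and verifies both the Kraft inequality and the bound $E[l(x)] \le nH + 1$.

```python
from itertools import product
from math import ceil, log2

# Sketch of the Shannon length allocation (assumes a small Bernoulli(q)
# source; brute-forces all 2^n inputs). Set l(x) = ceil(-log2 Q(x)) and
# verify the Kraft inequality and the bound E[l(x)] <= n * H + 1.

q, n = 0.3, 10
H = -q * log2(q) - (1 - q) * log2(1 - q)

def Q(x):
    """i.i.d. probability of a binary tuple x."""
    k = sum(x)
    return q**k * (1 - q)**(n - k)

lengths = {x: ceil(-log2(Q(x))) for x in product((0, 1), repeat=n)}

kraft = sum(2.0 ** -l for l in lengths.values())
avg_len = sum(Q(x) * l for x, l in lengths.items())

print(f"Kraft sum = {kraft:.4f} (should be <= 1)")
print(f"E[l(x)] = {avg_len:.3f} <= nH + 1 = {n * H + 1:.3f}")
assert kraft <= 1 and avg_len <= n * H + 1
```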

Unfortunately, a Shannon code is impractical, because it requires constructing a codebook of exponential size $|\alpha|^n$. Instead, arithmetic codes [link] are used; we discussed arithmetic codes in detail in class, but they appear in all standard textbooks, and so we do not describe them here.

Source: OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX, May 16, 2013. Download for free at http://cnx.org/content/col11524/1.1