Perhaps the most fundamental idea in communication theory is that arbitrary symbols may be represented by strings of binary digits. These strings are called binary words, binary addresses, or binary codes. In the simplest of cases, a finite alphabet consisting of the M letters or symbols s_0, s_1, ..., s_{M-1} is represented by binary codes. The obvious way to implement the representation is to let the ith binary code a_i be the binary representation for the subscript i:

s_0 ↔ a_0 = 00...00
s_1 ↔ a_1 = 00...01
⋮
s_{M-1} ↔ a_{M-1} = 11...11
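This subscript-to-code assignment can be sketched in a few lines of Python. The eight-symbol alphabet below is a hypothetical example chosen for illustration; any M symbols work the same way.

```python
# Assign each symbol in a finite alphabet the binary code of its subscript.
alphabet = ["a", "b", "c", "d", "e", "f", "g", "h"]  # hypothetical, M = 8
M = len(alphabet)
N = (M - 1).bit_length()  # number of bits: 2^(N-1) < M <= 2^N

codes = {s: format(i, f"0{N}b") for i, s in enumerate(alphabet)}
print(codes["a"], codes["h"])  # first and last codes: 000 and 111
```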
The number of bits N required for the binary code satisfies

2^(N-1) < M ≤ 2^N,

that is, N is the smallest integer for which 2^N ≥ M. We say, roughly, that N = log_2 M.
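The relation between M and N is easy to check numerically. In Python, `(M - 1).bit_length()` happens to compute exactly the smallest N with 2^N ≥ M (a convenience of the standard library, not a formula from the text):

```python
import math

def bits_required(M):
    """Smallest N with 2^(N-1) < M <= 2^N, for M >= 2 symbols."""
    return (M - 1).bit_length()

# The rough rule N = log2(M) is exact when M is a power of two.
for M in (2, 8, 9, 100, 128):
    print(M, bits_required(M), math.log2(M))
```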
Octal Codes. When the number of symbols is large and the corresponding binary codes contain many bits, we typically group the bits into groups of three and replace the binary code by its corresponding octal code. For example, the seven-bit binary code 1 010 011 maps into the three-digit octal code '123: group the bits into threes from the right and read each group as one octal digit.
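The three-bits-per-octal-digit grouping can be verified directly in Python (the particular seven-bit code here is just an illustration):

```python
# Group a seven-bit binary code into threes (from the right) and read off octal.
binary_code = "1010011"          # seven-bit binary code
value = int(binary_code, 2)      # 1010011 in base 2 is 83 in decimal
octal_code = format(value, "o")  # 83 in base 8 is "123"
print(binary_code, "->", "'" + octal_code)
```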
The octal ASCII codes for representing letters, numbers, and special characters are tabulated in Table 1.
| '0 | '1 | '2 | '3 | '4 | '5 | '6 | '7 |
'00x | ␀ | ␁ | ␂ | ␃ | ␄ | ␅ | ␆ | ␇ |
'01x | ␈ | ␉ | ␊ | ␋ | ␌ | ␍ | ␎ | ␏ |
'02x | ␐ | ␑ | ␒ | ␓ | ␔ | ␕ | ␖ | ␗ |
'03x | ␘ | ␙ | ␚ | ␛ | ␜ | ␝ | ␞ | ␟ |
'04x | ␠ | ! | " | # | $ | % | & | ' |
'05x | ( | ) | * | + | , | - | . | / |
'06x | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
'07x | 8 | 9 | : | ; | < | = | > | ? |
'10x | @ | A | B | C | D | E | F | G |
'11x | H | I | J | K | L | M | N | O |
'12x | P | Q | R | S | T | U | V | W |
'13x | X | Y | Z | [ | \ | ] | ^ | _ |
'14x | ` | a | b | c | d | e | f | g |
'15x | h | i | j | k | l | m | n | o |
'16x | p | q | r | s | t | u | v | w |
'17x | x | y | z | { | | | } | ~ | ␡ |
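Since Table 1 is standard ASCII, its entries can be cross-checked with Python's built-in `ord`: the row prefix supplies the leading octal digits and the column supplies the last one (e.g., row '12x, column '3 is the letter S, so 'S' has octal code '123).

```python
# Recover the octal ASCII code of a character from its code point.
for ch in ("S", "0", "a", "~"):
    print(ch, format(ord(ch), "03o"))
```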
Quantizers and A/D Converters. What if the source alphabet is infinite? Our only hope is to approximate it with a finite collection of finite binary words. For example, suppose the output of the source is an analog voltage V that lies between -V_0 and +V_0. We might break this peak-to-peak range up into 2^N little voltage cells of size Δ = 2V_0/2^N and approximate the voltage in each cell by its midpoint. This scheme is illustrated in Figure 1. In the figure, the cell C_i is defined to be the set of voltages that fall between iΔ and iΔ + Δ:

C_i = {V : iΔ ≤ V < iΔ + Δ},  i = -2^(N-1), ..., 2^(N-1) - 1.

The mapping from continuous values of V to a finite set of approximations is

Q(V) = iΔ + Δ/2  whenever V lies in C_i.

That is, V is replaced by the quantized approximation iΔ + Δ/2 whenever V lies in cell C_i. We may represent the quantized values with binary codes by simply representing the subscript of the cell by a binary word. In a subsequent course on digital electronics and microprocessors you will study A/D (analog-to-digital) converters for quantizing variables.
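A minimal sketch of this midpoint quantizer, assuming the cell definition above (the values of V_0 and N are arbitrary illustrative choices):

```python
import math

V0 = 1.0  # peak voltage (illustrative)
N = 3     # number of bits (illustrative)
Delta = 2 * V0 / 2**N  # cell width

def quantize(V):
    """Map V to the midpoint i*Delta + Delta/2 of its cell C_i."""
    i = math.floor(V / Delta)                      # cell index
    i = max(-2**(N - 1), min(2**(N - 1) - 1, i))   # clip to the 2^N cells
    return i * Delta + Delta / 2

print(quantize(0.3))  # 0.3 lies in C_1 = [0.25, 0.5), so Q = 3*Delta/2 = 0.375
```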
If N = 3, corresponding to a three-bit quantizer, we may associate quantizer cells and quantized levels with binary codes as follows:

V ∈ C_3  ⇔ Q(V) = 7Δ/2  ⇔ 011
V ∈ C_2  ⇔ Q(V) = 5Δ/2  ⇔ 010
V ∈ C_1  ⇔ Q(V) = 3Δ/2  ⇔ 001
V ∈ C_0  ⇔ Q(V) = Δ/2   ⇔ 000
V ∈ C_{-1} ⇔ Q(V) = -Δ/2  ⇔ 100
V ∈ C_{-2} ⇔ Q(V) = -3Δ/2 ⇔ 101
V ∈ C_{-3} ⇔ Q(V) = -5Δ/2 ⇔ 110
V ∈ C_{-4} ⇔ Q(V) = -7Δ/2 ⇔ 111
This particular code is called a sign-magnitude code, wherein the leading bit is a sign bit and the remaining bits are magnitude bits (e.g., 101 ~ -1 and 001 ~ +1). One of the defects of the sign-magnitude code is that it wastes one code by using 000 for 0 and 100 for -0. An alternative code that has many other advantages is the 2's complement code. The 2's complement codes for positive numbers are the same as the sign-magnitude codes, but the codes for negative numbers are generated by complementing all bits of the corresponding positive number and adding 1:

+3 ↔ 011    -3 ↔ 101
+2 ↔ 010    -2 ↔ 110
+1 ↔ 001    -1 ↔ 111
 0 ↔ 000    -4 ↔ 100
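The two integer codes can be compared with a short sketch (a 3-bit width is assumed here to match the example):

```python
N = 3  # code width in bits

def sign_magnitude(x):
    """Sign bit followed by |x| in binary; x in -(2^(N-1)-1) .. 2^(N-1)-1."""
    sign = "1" if x < 0 else "0"
    return sign + format(abs(x), f"0{N - 1}b")

def twos_complement(x):
    """N-bit 2's complement; x in -2^(N-1) .. 2^(N-1)-1."""
    return format(x & (2**N - 1), f"0{N}b")

for x in (3, 1, 0, -1, -3):
    print(x, sign_magnitude(x), twos_complement(x))
```

Note that Python integers have no -0, which is exactly the code the sign-magnitude scheme wastes; 2's complement uses that pattern (100) for -4 instead.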