Section 15 Digital Communications
DEFINITION. Information: knowledge or intelligence communicated or received.
DEFINITION. Data: information (for example, numbers, text, images, and sounds) in a form that is suitable for storage in or processing by a computer.
DEFINITION. Digital Transmission: the transmittal of digital pulses between two or more points in a communication system.
DEFINITION. Digital Radio: the transmittal of a digitally modulated analog carrier between two or more points in a communication system.
A. INFORMATION THEORY

1. Information Measure
The information sent from a digital source when the i-th message is transmitted is given by

Ii = logb (1/Pi) = -logb (Pi)

where: Pi = probability of the i-th message
2. Average Information (Entropy)
In general, the information content will vary from message to message because the probabilities of the messages are not equal. Consequently, we need an average information measure for the source, considering all the possible messages it can send.

H = Σ Pi logb (1/Pi), summed over i = 1 to N
For your information... If the symbols have the same probability of occurrence (P1 = P2 = P3 = ... = PN), then the entropy is maximum (H = logb N).

3. Relative Entropy
The ratio of the entropy of a source to the maximum value the entropy could take for the same source symbols.
HR = H/HMAX
HMAX = logb N; N = total number of symbols
4. Redundancy

r = 1 - HR

5. Rate of Information

R = H/T

T = Σ Pi ti = average time required to transmit a symbol, summed over i = 1 to N

where: ti = time required to transmit symbol xi
Sample Problem:
A telephone touch-tone keypad has the digits 0 to 9, plus the * and # keys. Assume the probability of sending * or # is 0.005 and the probability of sending each of 0 to 9 is 0.099. If the keys are pressed at a rate of 2 keys/s, compute the entropy and data rate for this source.

Solution:
I0 = I1 = I2 = ... = I9 = log2 (1/0.099) = 3.34 bits
I* = I# = log2 (1/0.005) = 7.64 bits
H = Σ Pi Ii = P0I0 + P1I1 + ... + P9I9 + P*I* + P#I#
Since P0I0 = P1I1 = ... = P9I9 and P*I* = P#I#:
H = 10 {0.099 (3.34)} + 2 {0.005 (7.64)} = 3.38 bits/key
R = H x (key rate) = 3.38 bits/key x 2 keys/sec = 6.76 bps

Sample Problem:
A source emits seven symbols with the following probabilities of occurrence and transmission times:

Symbol   P(xi)   Time to transmit xi
x1       0.21    10 µs
x2       0.14    15 µs
x3       0.09    20 µs
x4       0.11    30 µs
x5       0.15    25 µs
x6       0.18    15 µs
x7       0.12    25 µs
Determine the following:
a. Entropy (H)
b. Relative Entropy (HR)
c. Rate of Information (R)

Solution:
a. Entropy (H)
H = Σ Pi log2 (1/Pi)
  = 0.21 log2 (1/0.21) + 0.14 log2 (1/0.14) + 0.09 log2 (1/0.09) + 0.11 log2 (1/0.11)
  + 0.15 log2 (1/0.15) + 0.18 log2 (1/0.18) + 0.12 log2 (1/0.12)
  = 2.76 bits/symbol

b. Relative Entropy (HR)
HMAX = log2 N = log2 7 = 2.81 bits/symbol
HR = H/HMAX = 2.76/2.81 = 0.98

c. Rate of Information (R)
T = Σ Pi ti = 0.21 (10) + 0.14 (15) + 0.09 (20) + 0.11 (30) + 0.15 (25) + 0.18 (15) + 0.12 (25) = 18.75 µs
R = H/T = 2.76 bits / 18.75 µs = 147 kbps
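The entropy, relative entropy and information-rate calculations above can be checked numerically. A minimal Python sketch using the symbol table from this problem (variable names are ours):

```python
import math

# Symbol probabilities and transmission times (in microseconds) from the table above
probs = [0.21, 0.14, 0.09, 0.11, 0.15, 0.18, 0.12]
times_us = [10, 15, 20, 30, 25, 15, 25]

# Entropy: H = sum of Pi * log2(1/Pi), in bits/symbol
H = sum(p * math.log2(1 / p) for p in probs)

# Relative entropy: HR = H / HMAX, with HMAX = log2(N)
HR = H / math.log2(len(probs))

# Average symbol duration T = sum of Pi * ti, then R = H / T
T_us = sum(p * t for p, t in zip(probs, times_us))
R_kbps = H / T_us * 1e3  # bits per microsecond converted to kbps

print(f"H  = {H:.2f} bits/symbol")                    # ~ 2.76
print(f"HR = {HR:.2f}")                               # ~ 0.98
print(f"T  = {T_us:.2f} us, R = {R_kbps:.0f} kbps")   # 18.75 us, ~ 147 kbps
```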
Parameter                   Equation
Code Word Length            l = logb (M)
Average Code Word Length    L = Σ Pi li, summed over i = 1 to N
Coding Efficiency           η = (Lmin / L) x 100%
Coding Redundancy           γ = 1 - η
In the equation Ii = logb (1/Pi):
If b = 2, the unit of information is the bit.
If b = 10, the unit of information is the dit (Hartley, decit).
If b = e, the unit of information is the nat (nepit).
1 Hartley = 3.32 bits and 1 nat = 1.443 bits
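The code-word-length and coding-efficiency formulas above reduce to a few lines of Python. A sketch (the function name is ours):

```python
import math

def coding_efficiency(num_symbols: int, base: int) -> float:
    """Efficiency = (minimum code word length / whole digits actually used) x 100%."""
    l_min = math.log(num_symbols, base)  # theoretical code word length
    l_used = math.ceil(l_min)            # whole digits must be used
    return l_min / l_used * 100

# 26 letters of the alphabet, as in the sample problem that follows
print(f"binary : {coding_efficiency(26, 2):.0f}%")   # ~ 94%
print(f"decimal: {coding_efficiency(26, 10):.0f}%")  # ~ 71%
```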
Sample Problem:
Calculate the coding efficiency in representing the 26 letters of the alphabet using binary and decimal systems.
Solution:
Binary: l = log2 (26) = 4.7 bits, so 5 bits must be used
η = (4.7/5) x 100% = 94%
Decimal: l = log10 (26) = 1.415 dits, so 2 dits must be used
η = (1.415/2) x 100% = 71%
This shows that binary coding is more efficient than decimal coding.

D. CHANNEL CLASSIFICATIONS
1. Lossless Channel
A channel described by a channel matrix with only one non-zero element in each column.
2. Deterministic Channel
A channel described by a channel matrix with only one non-zero element in each row.
3. Noiseless Channel
A channel which is both lossless and deterministic.

E. CHANNEL CAPACITY
The maximum rate at which information can be transmitted through a channel.

1. For Lossless Channels

C1 = log2 (N)

The lossless channel capacity is equal to the source entropy, and no source information is lost in transmission.

2. For Deterministic Channels
C2 = log2 (M)

The deterministic channel capacity is equal to the destination entropy. Each member of the source set is uniquely associated with one, and only one, member of the destination alphabet.

3. For Noiseless Channels
C3 = log2 (M) = log2 (N)

4. For Additive White Gaussian Noise (AWGN) Channels
C4 = (1/2) log2 (1 + S/N)

Note: The units of the previous equation are on a per-sample basis. Since there are 2BW samples per unit time (Nyquist sampling theorem), the capacity per unit time can be written as C = 2BW C4.

5. Shannon Limit for Information Capacity (Shannon-Hartley Theorem)

C = 2BW C4 = BW log2 (1 + S/N)

6. Nyquist (Deterministic) Channel Capacity

C = 2BW C2 = 2BW log2 (M)

Where:
C = channel capacity in bps
N = number of input symbols
M = number of output symbols
BW = channel bandwidth in Hz
ECE Board Exam: April 2003
A binary digital signal is to be transmitted at 10 kbits/s. What is the absolute minimum bandwidth required to pass the fastest information change undistorted?
Solution:
From the deterministic (Nyquist) channel capacity:
C = 2BW log2 (M); M = 2 for binary signalling
BW = C/2 = 10000/2 = 5 KHz

ECE Board Exam: April 2003
What is the bandwidth needed to support a capacity of 20000 bits/s (using Shannon's theory) when the ratio of signal power to noise is 200? Also compute the information density.
Solution:
C = BW log2 (1 + S/N)
BW = C/log2 (1 + S/N) = 20000/[log2 (1 + 200)] = 2614 Hz
η = C/BW = log2 (1 + S/N) = 20000 bps/2614 Hz = 7.65 bps/Hz

ECE Board Exam: April 2003
What is the channel capacity for a signal power of 200 W, noise power of 10 W and a bandwidth of 2 KHz of a digital system? Also calculate the spectral efficiency.
Solution:
C = BW log2 (1 + S/N) = 2x10^3 log2 (1 + 200/10) = 8.779 kbps
η = C/BW = log2 (1 + S/N) = 8.779 kbps/2 KHz = 4.4 bps/Hz

Sample Problem:
Consider a digital source that converts an analog signal (fm(max) = 4 KHz) to digital form. The input analog signal is sampled at 1.25x the Nyquist rate and each sample is quantized into one of 256 equally likely levels. Assume that successive samples are statistically independent.
a. What is the information rate of this source?
b. Can this source be transmitted without error over an AWGN channel with a BW of 10 KHz and a 20 dB S/N ratio?
c. Find the S/N in dB required for error-free transmission if the BW = 10 KHz.
d. Find the BW required for error-free transmission if the S/N is 20 dB.

Solution:
a. Information rate
H = log2 (256) = 8 bits/sample
Nyquist rate = 2fm(max) = 2 (4 KHz) = 8 KHz
Actual rate = 1.25 x RNYQUIST = 1.25 (8 KHz) = 10 kilosamples/sec
Information rate = H x RACTUAL = 8 bits/sample x 10x10^3 samples/sec = 80 kbps

b. Channel limitation due to noise:
C = BW log2 (1 + S/N) = 10 KHz log2 (1 + 100) = 66.58 kbps
RINFO - C = 80 kbps - 66.58 kbps = 13.42 kbps
Since RINFO exceeds C, the source cannot be transmitted without errors.

c. S/N required for error-free transmission
Channel capacity ≥ info rate: C ≥ RINFO
80 kbps ≥ 10 KHz log2 (1 + S/N)
S/N ≥ 255, or 24.1 dB

d. BW required for error-free transmission
Channel capacity ≥ info rate: C ≥ RINFO
80 kbps ≥ BW log2 (1 + 100)
BW ≥ 12.02 KHz
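The capacity checks in parts b to d can be reproduced numerically. A minimal sketch (the function name is ours):

```python
import math

def shannon_capacity(bw_hz: float, snr: float) -> float:
    """C = BW * log2(1 + S/N), with S/N given as a linear ratio."""
    return bw_hz * math.log2(1 + snr)

# Part b: capacity of a 10 KHz channel at 20 dB (S/N = 100)
C = shannon_capacity(10e3, 100)       # about 66.6 kbps, less than the 80 kbps needed
# Part c: S/N required for C = 80 kbps in 10 KHz
snr_req = 2 ** (80e3 / 10e3) - 1      # 255, i.e. about 24.1 dB
# Part d: BW required for C = 80 kbps at S/N = 100
bw_req = 80e3 / math.log2(1 + 100)    # about 12.02 KHz

print(C, snr_req, 10 * math.log10(snr_req), bw_req)
```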
This means that to provide error-free transmission, the bandwidth must be at least 12.02 KHz or the S/N ratio must be greater than 24.1 dB.

F. DIGITAL SIGNAL LINE ENCODING FORMATS
Line encoding is the method used for converting a binary information sequence into a digital signal in a digital communication system.
1. Types of Signalling
i. Unipolar Signalling
Binary 1 is represented by a high level and binary 0 by a zero level.
ii. Polar Signalling
Binary 1s and 0s are represented by equal positive and negative levels.
iii. Bipolar (Pseudoternary) Signalling
Binary 1s are represented by alternately positive and negative values. Binary 0 is represented by a zero level.
iv. Manchester Signalling (Split-phase Encoding)
Each binary 1 is represented by a positive half-bit-period pulse followed by a negative half-bit-period pulse. Similarly, a binary 0 is represented by a negative half-bit-period pulse followed by a positive half-bit-period pulse.
G. DEFINITION OF DIGITAL ENCODING FORMATS

1. Non-Return to Zero (NRZ)
i. Non-Return to Zero-Level (NRZ-L)
Where L denotes positive logic level assignment.
1 = High level
0 = Low level
ii. Non-Return to Zero-Mark (NRZ-M)
Where M denotes inversion on mark.
1 = Transition at beginning of interval
0 = No transition
iii. Non-Return to Zero-Space (NRZ-S)
Where S denotes inversion on space, using negative logic.
1 = No transition
0 = Transition at beginning of interval

2. Return to Zero (RZ)
1 = Transition from High to Low in middle of interval
0 = Low level

3. Biphase
i. Biphase-Level (Manchester)
1 = Transition from High to Low in middle of interval
0 = Transition from Low to High in middle of interval
ii. Biphase-Mark
Always a transition at beginning of interval
1 = Transition in middle of interval
0 = No transition in middle of interval
iii. Biphase-Space
Always a transition at beginning of interval
1 = No transition in middle of interval
0 = Transition in middle of interval

4. Differential Manchester
Always a transition in middle of interval
1 = No transition at beginning of interval
0 = Transition at beginning of interval

5. Delay Modulation (Miller)
1 = Transition in middle of interval
0 = No transition if followed by a 1; transition at end of interval if followed by a 0

6. Bipolar-AMI (Alternate Mark Inversion)
1 = Pulse in first half of bit interval, alternating polarity pulse to pulse
0 = No pulse
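Three of the rules above can be sketched in a few lines of Python (helper names are ours; levels are normalized to +1/0/-1, with one entry per bit for NRZ-L and AMI and two half-bit entries per bit for Manchester):

```python
def nrz_l(bits):
    """NRZ-L: 1 = high level, 0 = low level."""
    return [1 if b else -1 for b in bits]

def manchester(bits):
    """Biphase-L: 1 = high-to-low transition mid-bit, 0 = low-to-high."""
    out = []
    for b in bits:
        out += [1, -1] if b else [-1, 1]
    return out

def bipolar_ami(bits):
    """Bipolar-AMI: 1s are pulses of alternating polarity, 0 = no pulse."""
    out, polarity = [], 1
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity
        else:
            out.append(0)
    return out

print(nrz_l([1, 0, 1, 1]))        # [1, -1, 1, 1]
print(manchester([1, 0]))         # [1, -1, -1, 1]
print(bipolar_ami([1, 0, 1, 1]))  # [1, 0, -1, 1]
```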
The term bipolar has two conflicting definitions:
In the space communication industry, polar NRZ is sometimes called bipolar NRZ, or simply bipolar. In the telephone industry, the term bipolar denotes pseudoternary signalling, as in T1 bipolar RZ signalling.

So many names...
Polar NRZ is also called NRZ-L. Bipolar NRZ is called NRZ-M. Negative-logic bipolar NRZ is called NRZ-S. Bipolar RZ is also called BPRZ, RZ-AMI, BPRZ-AMI, AMI or simply bipolar. Manchester NRZ is also called Manchester code, or Biphase-L, for biphase with normal logic level. Biphase-M is used for encoding SMPTE time-code data for recording on videotapes.
BW Efficiency of Popular Line Codes

Code Type   Signalling    Bandwidth   Spectral Efficiency (bps/Hz)
Unipolar    NRZ           fb/2        2
Unipolar    RZ            fb          1
Polar       NRZ           fb/2        2
Polar       RZ            fb          1
Bipolar     Manchester    fb          1
Bipolar     AMI           fb/2        2

H. SIGNAL ELEMENT VS. DATA ELEMENT
Relation between Bit Rate and Baud Rate

fb = n x fB

where:
n = number of data elements carried per signal element
fb = bit rate in bps
fB = baud rate in Baud
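The relation above is a one-line computation. A sketch (the function name is ours):

```python
def baud_rate(bit_rate_bps: float, data_elements_per_signal_element: int) -> float:
    """fB = fb / n, where n data elements ride on one signal element."""
    return bit_rate_bps / data_elements_per_signal_element

# 4 data elements per signal element at 100 kbps, as in the sample problem that follows
print(baud_rate(100e3, 4))  # 25000.0 Baud
```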
Sample Problem:
A signal is carrying data in which 4 data elements are encoded as one signal element. If the bit rate is 100 kbps, what is the average value of the baud rate?

Solution:
fB = fb/n = (100 kbps)/4 = 25 kBaud

I. DIGITAL MODULATION
1. Amplitude Shift Keying (ASK)
Digital amplitude modulation is simply double-sideband, full-carrier amplitude modulation where the input modulating signal is a binary waveform.
a.k.a. continuous-wave modulation, On-Off Keying (OOK)
Implementation of binary ASK
2. Frequency Shift Keying Frequency Shift Keying is a form of constant-amplitude angle modulation similar to conventional frequency modulation except that the modulating signal is a binary signal that varies between two discrete voltage levels.
Implementation of binary FSK
i. Bandwidth Considerations

BW = 2 (fb + ∆f)

ii. FSK Receiver
Noncoherent FSK Demodulator
Coherent FSK Demodulator
iii. Minimum Shift Keying (MSK)
With MSK, the mark and space frequencies are selected such that they are separated from the center frequency by an exact odd multiple of one-half of the bit rate.

fm - fs = ∆f = n x (fb/2)

where: n = positive odd integer
Sample Problem:
Calculate the frequency shift (deviation) between mark and space for the GSM cellular radio system, which uses Gaussian MSK (GMSK) with a transmission rate of 270.833 kbps.

Solution:
GMSK is a special case of FSK where n = 1:
fm - fs = (270.833 kbps)/2 = 135.4165 KHz

3. Phase Shift Keying (PSK)
Phase Shift Keying is a form of angle-modulated, constant-amplitude digital modulation similar to conventional phase modulation, except that with PSK the input is a binary digital signal and a limited number of output phases is possible.
a. Binary Phase Shift Keying (BPSK)
With BPSK, two output phases are possible for a single carrier frequency.

Implementation of binary PSK
Phasor and Constellation Diagram
Truth Table and Minimum Nyquist Bandwidth

Binary Input   Output Phase
Logic 0        180°
Logic 1        0°

fN = fb

Sample Problem:
Determine the minimum Nyquist bandwidth and the Baud rate for a BPSK modulator with a carrier frequency of 70 MHz and an input bit rate of 10 Mbps.
Solution:
Nyquist BW: fN = fb = 10 MHz
Baud rate: fB = 10 MBaud
M-ary Encoding
M-ary is a term derived from the word binary. M is simply a digit that represents the number of conditions or combinations possible for a given number of binary variables.

N = log2 (M)

N:  1   2   3   4   5
M:  2   4   8   16  32

Where:
N = number of bits per symbol
M = number of output conditions or symbols possible with N bits
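N = log2(M) is exact in Python for powers of two. A sketch (the function name is ours):

```python
import math

def bits_per_symbol(m: int) -> int:
    """N = log2(M): bits needed to label M output conditions."""
    return int(math.log2(m))

for m in (2, 4, 8, 16, 32, 256):
    print(m, "->", bits_per_symbol(m))
```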
Sample Problem:
How many bits are needed to address 256 different level combinations?

Solution:
N = log2 (M) = log2 (256) = [log10 256]/[log10 2] = 8 bits

b. Quaternary Phase Shift Keying (QPSK)
QPSK is a form of angle-modulated, constant-amplitude digital modulation where four output phases are possible (M = 4).
Implementation of Quaternary PSK
Phasor and Constellation Diagram
Truth Table and Minimum Nyquist Bandwidth

Binary Input (Q I)   QPSK Output Phase
0 0                  -135°
0 1                  -45°
1 0                  +135°
1 1                  +45°

fN = fb/2

Sample Problem:
Calculate the minimum double-sided Nyquist BW and the Baud for a QPSK modulator with an input data rate equal to 40 Mbps and a carrier frequency of 110 MHz.
Solution:
Nyquist BW (QPSK): fN = fb/2 = 40 Mbps/2 = 20 MHz
Baud rate: fB = fb/2 = 40 Mbps/2 = 20 MBaud
c. Eight Phase Shift Keying (8-PSK)
8-PSK is another form of angle-modulated, constant-amplitude digital modulation where eight output phases are possible (M = 8).

Phasor and Constellation Diagram
Truth Table and Minimum Nyquist Bandwidth

Binary Input (Q I C)   8-PSK Output Phase
0 0 0                  -112.5°
0 0 1                  -157.5°
0 1 0                  -67.5°
0 1 1                  -22.5°
1 0 0                  +112.5°
1 0 1                  +157.5°
1 1 0                  +67.5°
1 1 1                  +22.5°

fN = fb/3
Sample Problem:
Calculate the minimum double-sided Nyquist BW and the Baud for an 8-PSK modulator with an input data rate equal to 25 Mbps and a carrier frequency of 45 MHz.

Solution:
Nyquist BW (8-PSK): fN = fb/3 = 25 Mbps/3 = 8.33 MHz
Baud rate: fB = fb/3 = 25 Mbps/3 = 8.33 MBaud
d. Sixteen Phase Shift Keying (16-PSK)
16-PSK is another form of angle-modulated, constant-amplitude digital modulation where sixteen output phases are possible (N = 4 and M = 16).
Phasor and Constellation Diagram
Truth Table and Minimum Nyquist Bandwidth

Bit Code   Phase      Bit Code   Phase
0 0 0 0    11.25°     1 0 0 0    191.25°
0 0 0 1    33.75°     1 0 0 1    213.75°
0 0 1 0    56.25°     1 0 1 0    236.25°
0 0 1 1    78.75°     1 0 1 1    258.75°
0 1 0 0    101.25°    1 1 0 0    281.25°
0 1 0 1    123.75°    1 1 0 1    303.75°
0 1 1 0    146.25°    1 1 1 0    326.25°
0 1 1 1    168.75°    1 1 1 1    348.75°

fN = fb/4

4. Quadrature Amplitude Modulation (QAM)
QAM is a form of digital modulation where the digital information is contained in both the amplitude and phase of the transmitted carrier.

a. Eight Quadrature Amplitude Modulation (8-QAM)
8-QAM is an M-ary encoding technique where M = 8. Two amplitudes and four phases are used to give 8 different symbols.
Phasor and Constellation Diagram
b. Sixteen Quadrature Amplitude Modulation (16-QAM)
16-QAM is an M-ary encoding technique where M = 16. Among the possibilities:
3 amplitudes and 12 phases are used to give 16 different symbols
4 amplitudes and 8 phases are used to give 16 different symbols
2 amplitudes and 8 phases are used to give 16 different symbols
Phasor and Constellation Diagram
J. SPECTRAL EFFICIENCY OF DIGITAL SYSTEMS

Spectral efficiency (bandwidth efficiency), or information density, is an indication of how efficiently a certain modulation scheme utilizes its bandwidth.

In terms of channel capacity and bandwidth: η = C/BW
In terms of signal-to-noise ratio: η = log2 (1 + S/N)
In terms of bit rate and minimum Nyquist BW (Baud rate): η = fb/fB
Sample Problem:
Determine the bandwidth efficiency for the following modulation schemes:
a. BPSK, fb = 15 Mbps
b. QPSK, fb = 20 Mbps
c. 8-PSK, fb = 28 Mbps
d. 8-QAM, fb = 30 Mbps
e. 16-PSK, fb = 40 Mbps
f. 16-QAM, fb = 42 Mbps

Solution: η = fb/BW, where BW is the minimum Nyquist bandwidth.
a. BPSK: BW = fb = 15 MHz; η = 15 Mbps/15 MHz = 1 bps/Hz
b. QPSK: BW = fb/2 = 10 MHz; η = 20 Mbps/10 MHz = 2 bps/Hz
c. 8-PSK: BW = fb/3 = 9.33 MHz; η = 28 Mbps/9.33 MHz = 3 bps/Hz
d. 8-QAM: BW = fb/3 = 10 MHz; η = 30 Mbps/10 MHz = 3 bps/Hz
e. 16-PSK: BW = fb/4 = 10 MHz; η = 40 Mbps/10 MHz = 4 bps/Hz
f. 16-QAM: BW = fb/4 = 10.5 MHz; η = 42 Mbps/10.5 MHz = 4 bps/Hz
Summary of Various Digital Modulation Systems

System   Bits Encoded per Symbol   Minimum Nyquist BW (Baud Rate)   Spectral Efficiency (bps/Hz)
BPSK     1                         fb                               1
QPSK     2                         fb/2                             2
8-PSK    3                         fb/3                             3
8-QAM    3                         fb/3                             3
16-PSK   4                         fb/4                             4
16-QAM   4                         fb/4                             4
32-QAM   5                         fb/5                             5
64-QAM   6                         fb/6                             6
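Every row of the summary table follows from N = log2(M). A sketch that regenerates a few rows (the function name is ours):

```python
import math

def mary_row(m: int, bit_rate_bps: float):
    """Return (bits/symbol, minimum Nyquist BW in Hz, spectral efficiency in bps/Hz)."""
    n = int(math.log2(m))          # bits encoded per symbol
    return n, bit_rate_bps / n, n  # BW = fb/N; efficiency = N bps/Hz

for name, m, fb in [("QPSK", 4, 40e6), ("8-PSK", 8, 25e6), ("16-QAM", 16, 42e6)]:
    n, bw, eff = mary_row(m, fb)
    print(f"{name}: {n} bits/symbol, BW = {bw/1e6:.2f} MHz, {eff} bps/Hz")
```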
K. ERROR PROBABILITIES FOR DIGITAL COMMUNICATION SYSTEMS

1. Coherent Systems with Additive White Gaussian Noise (AWGN) Channels

Pe = Q(√(2Eb/No))
2. Non-Coherent Systems with Additive White Gaussian Noise (AWGN) Channels
Pe = 0.5e^-(Eb/2No)

Where:
Eb = energy per bit in J/bit
No = noise power density in W/Hz
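The non-coherent formula is easy to evaluate. A sketch using the carrier power, bit rate, bandwidth and noise power from the sample problem that follows (the function name is ours):

```python
import math

def pe_noncoherent_fsk(eb: float, no: float) -> float:
    """Pe = 0.5 * exp(-Eb / (2*No)) for non-coherent FSK in AWGN."""
    return 0.5 * math.exp(-eb / (2 * no))

eb = 1e-13 / 30e3   # Eb = carrier power / bit rate, in J/bit
no = 1e-14 / 60e3   # No = noise power / bandwidth, in W/Hz
print(pe_noncoherent_fsk(eb, no))  # ~ 2.27e-5
```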
Sample Problem:
Calculate the probability of error for a non-coherent FSK system if the carrier power is 10^-13 W, the bit rate is 30 kbps, the BW is 60 KHz and the noise power is 10^-14 W.

Solution:
Eb = Po/fb = [10^-13 W]/30 kbps = 3.33 attojoules/bit
No = N/BW = 10^-14 W/60 KHz = 166.67x10^-21 W/Hz
Pe = 0.5e^-(Eb/2No) = 0.5e^-[3.33x10^-18 / (2 x 166.67x10^-21)] = 0.5e^-10 = 2.27x10^-5
This probability of error means that about 23 bits are expected to be corrupted (in error) for every 1 million bits transmitted.

L. ERROR DETECTION

1. Redundancy
Redundancy involves transmitting each character twice. If the same character is not received twice in succession, a transmission error has occurred.

2. Echoplex
Echoplex involves the receiving device echoing the received data back to the transmitting device. The transmitting operator can view the data as received and echoed, making corrections as appropriate.

3. Exact-Count Encoding
The number of 1s in each character is the same, and therefore a simple count of the number of 1s received in each character can determine whether a transmission error has occurred.

4. Parity Checking
Parity checking is by far the most commonly used method for error detection, as it is used in asynchronous devices such as PCs. Parity involves the transmitting terminal's appending one or more parity bits to the data set in order to create odd parity or even parity.

Dimensions of Parity Checking
i. Vertical Redundancy Checking (VRC)
VRC entails the appending of a parity bit at the end of each transmitted character or value to create an odd or even total mathematical bit value.
ii. Longitudinal Redundancy Checking (LRC) or Block Check Character (BCC)
LRC adds another level of reliability, as data is viewed in a block or data set, as though the receiving device were viewing the data set in matrix format. Also known as a checksum, the LRC is sent as an extra character at the end of each data block.

5. Cyclic Redundancy Checking (CRC)
CRC validates transmission of a set of data, formatted in a block or frame, through the use of a unique mathematical polynomial known to both transmitter and receiver. The result of that calculation is appended to the block, frame or text as either a 16- or 32-bit value.

i. CRC Encoding Procedure
1. Multiply i(x) by x^(n-k) (puts zeros in the (n-k) low-order positions).
2. Divide x^(n-k) i(x) by g(x):
   x^(n-k) i(x) = g(x) q(x) + r(x)
3. Add the remainder r(x) to x^(n-k) i(x) (puts the check bits in the (n-k) low-order positions):
   b(x) = x^(n-k) i(x) + r(x)

where:
n = number of bits in a codeword
k = number of information bits
q(x) = quotient
r(x) = remainder
g(x) = generator polynomial
i(x) = information polynomial
b(x) = transmitted codeword polynomial

ii. Standard CRC Polynomial Codes

Name       Used In
CRC-8      ATM header error check
CRC-10     ATM CRC
CRC-16     Bisync
CCITT-16   HDLC, XMODEM, V.41
CCITT-32   IEEE 802, V.32
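The three encoding steps can be sketched bitwise in Python, with integers standing in for polynomials (MSB first). The generator here, g(x) = x^3 + x + 1, is a toy example for illustration, not one of the standard polynomials in the table:

```python
def crc_remainder(data: int, data_bits: int, gen: int, gen_bits: int) -> int:
    """Steps 1-2: append n-k zeros (multiply by x^(n-k)), then take the remainder mod g(x)."""
    nk = gen_bits - 1          # n - k check bits
    reg = data << nk           # x^(n-k) * i(x)
    for i in range(data_bits + nk - 1, nk - 1, -1):
        if reg & (1 << i):
            reg ^= gen << (i - nk)  # XOR = polynomial subtraction, g(x) aligned at bit i
    return reg                 # r(x), held in the low n-k bits

def crc_encode(data: int, data_bits: int, gen: int, gen_bits: int) -> int:
    """Step 3: b(x) = x^(n-k) i(x) + r(x), the codeword with check bits appended."""
    nk = gen_bits - 1
    return (data << nk) | crc_remainder(data, data_bits, gen, gen_bits)

codeword = crc_encode(0b1101, 4, 0b1011, 4)  # i(x) = x^3+x^2+1, g(x) = x^3+x+1
print(f"{codeword:07b}")  # 1101001 -> check bits 001 appended
```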
M. ERROR CORRECTION

1. Symbol Substitution
With symbol substitution, if a character is received in error, the receiving device substitutes a unique error symbol for it rather than reverting to a higher level of error correction or displaying the incorrect character.
2. Retransmission (ARQ)
Retransmission, as the name implies, is resending a message when it is received in error; the receiving terminal automatically calls for retransmission of the entire message.

3. Forward Error Correction (FEC)
FEC involves the addition of redundant information embedded in the data set so that the receiving device can detect errors and correct them without requiring a retransmission. The most commonly employed technique is the Hamming code.

Hamming Distance
The number of bit positions in which two codewords differ is called the Hamming distance.
2^n ≥ m + n + 1

where:
n = number of Hamming bits
m = number of bits in the data character
ECE Board Exam: APRIL 2003 How many Hamming Bits would be added to a data block containing 128 bits?
Solution:
Number of Hamming bits: 2^n ≥ m + n + 1
For n = 6: 2^6 = 64; m + n + 1 = 128 + 6 + 1 = 135; 64 < 135 (not satisfied)
For n = 7: 2^7 = 128; m + n + 1 = 128 + 7 + 1 = 136; 128 < 136 (not satisfied)
For n = 8: 2^8 = 256; m + n + 1 = 128 + 8 + 1 = 137; 256 ≥ 137 (satisfied)
Answer: Hamming bits = 8
Sample Problem:
Calculate the Hamming distance required to detect and to correct 3 single-bit errors that occur during transmission. Also compute the number of Hamming bits for a 23-bit data string.

Solution:
Required Hamming distance for error detection: Hd = d + 1 = 3 + 1 = 4
Required Hamming distance for error correction: Hd = 2d + 1 = (2 x 3) + 1 = 7
Number of Hamming bits: 2^n ≥ m + n + 1
For n = 4: 2^4 = 16; m + n + 1 = 23 + 4 + 1 = 28; 16 < 28 (not satisfied)
For n = 5: 2^5 = 32; m + n + 1 = 23 + 5 + 1 = 29; 32 ≥ 29 (satisfied)
Answer: Hamming bits = 5
To detect d single-bit errors, you need a Hamming distance of d + 1, because with such a code there is no way that d single-bit errors can change a valid codeword into another valid codeword. Similarly, to correct d single-bit errors, you need a distance 2d + 1 code, because then the legal codewords are so far apart that even with d changes, the original codeword is still closer than any other codeword. Hamming codes can only correct single-bit errors; however, there is a trick that can be used to permit Hamming codes to correct burst errors.
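Both the Hamming-bit bound and the distance rules above reduce to a few lines of Python (function names are ours):

```python
def hamming_bits(m: int) -> int:
    """Smallest n satisfying 2^n >= m + n + 1."""
    n = 0
    while 2 ** n < m + n + 1:
        n += 1
    return n

def distance_to_detect(d: int) -> int:
    """Hamming distance needed to detect d single-bit errors."""
    return d + 1

def distance_to_correct(d: int) -> int:
    """Hamming distance needed to correct d single-bit errors."""
    return 2 * d + 1

print(hamming_bits(128))       # 8, as in the board exam problem
print(hamming_bits(23))        # 5, as in the sample problem
print(distance_to_detect(3))   # 4
print(distance_to_correct(3))  # 7
```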