A statement of Shannon's noisy channel coding theorem.

It is highly recommended that the information presented in Mutual Information and in Typical Sequences be reviewed before proceeding with this document. An introductory module on the theorem is available at Noisy Channel Theorems.

Shannon's noisy channel coding theorem

The capacity of a discrete memoryless channel is given by

C = \max_{p_X(x)} I(X;Y)

where I(X;Y) is the mutual information between the channel input X and the output Y. If the transmission rate R is less than C, then for any \epsilon > 0 there exists a code with block length n large enough whose error probability is less than \epsilon. If R > C, the error probability of any code with any block length is bounded away from zero.

If we have a binary symmetric channel with crossover probability 0.1, then the capacity is C \approx 0.5 bits per transmission. Therefore, it is possible to send 0.4 bits per channel use through the channel reliably. This means that we can take 400 information bits and map them into a code of length 1000 bits. Then the whole code can be transmitted over the channel. One hundred of those bits may be detected incorrectly, but the 400 information bits may be decoded correctly.
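The 0.5 figure quoted above follows from the standard formula for binary symmetric channel capacity, C = 1 - H(p), where H is the binary entropy function. A minimal sketch in Python (the formula is standard; the function and variable names are ours):

```python
# Capacity of a binary symmetric channel: C = 1 - H(p),
# where H(p) is the binary entropy function in bits.
from math import log2

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.1
C = 1 - binary_entropy(p)   # roughly 0.53 bits per channel use
R = 400 / 1000              # 400 information bits in a 1000-bit codeword

print(f"C = {C:.3f} bits/use, R = {R}, reliable transmission possible: {R < C}")
```

Since R = 0.4 < C, the theorem guarantees that codes with arbitrarily small error probability exist at this rate.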


Before we consider continuous-time additive white Gaussian channels, let's concentrate on discrete-time Gaussian channels

Y_i = X_i + \eta_i

where the X_i's are information-bearing random variables and \eta_i is a Gaussian random variable with variance \sigma^2. The input X_i's are constrained to have power less than P:

\frac{1}{n} \sum_{i=1}^{n} X_i^2 \leq P

Consider an output block of size n:

\mathbf{Y} = \mathbf{X} + \boldsymbol{\eta}

For large n, by the Law of Large Numbers,

\frac{1}{n} \sum_{i=1}^{n} \eta_i^2 = \frac{1}{n} \sum_{i=1}^{n} (y_i - x_i)^2 \approx \sigma^2

This indicates that with large probability, as n approaches infinity, \mathbf{Y} will be located in an n-dimensional sphere of radius \sqrt{n \sigma^2} centered about \mathbf{X}, since \| \mathbf{y} - \mathbf{x} \|^2 \leq n \sigma^2.

On the other hand, since the X_i's are power constrained and the \eta_i's and X_i's are independent,

\frac{1}{n} \sum_{i=1}^{n} y_i^2 \leq P + \sigma^2

\| \mathbf{Y} \| \leq \sqrt{n (P + \sigma^2)}

This means \mathbf{Y} is in a sphere of radius \sqrt{n (P + \sigma^2)} centered around the origin.
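The two concentration claims above can be checked numerically: drawing a Gaussian input with power P and independent Gaussian noise with variance \sigma^2, the per-sample noise power concentrates near \sigma^2 and the per-sample output power near P + \sigma^2. A sketch (the values of P and \sigma^2 are arbitrary illustrative choices, not from the text):

```python
# Numerical check of the two sphere radii via the Law of Large Numbers.
# P = 4 and sigma2 = 1 are illustrative values chosen for this sketch.
import math
import random

random.seed(0)
n = 100_000
P, sigma2 = 4.0, 1.0

x = [random.gauss(0, math.sqrt(P)) for _ in range(n)]        # input, power P
eta = [random.gauss(0, math.sqrt(sigma2)) for _ in range(n)]  # noise, variance sigma2
y = [xi + ei for xi, ei in zip(x, eta)]                       # channel output

noise_power = sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / n
output_power = sum(yi ** 2 for yi in y) / n

print(noise_power)   # close to sigma2: Y lies near the sphere about X
print(output_power)  # close to P + sigma2: Y lies near the origin-centered sphere
```

As n grows, both empirical averages approach their limits, which is exactly why the sphere-packing picture becomes accurate for large block lengths.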

How many \mathbf{X}'s can we transmit to have nonoverlapping \mathbf{Y} spheres in the output domain? The question is how many spheres of radius \sqrt{n \sigma^2} fit in a sphere of radius \sqrt{n (P + \sigma^2)}.

M = \frac{\left( \sqrt{n (P + \sigma^2)} \right)^n}{\left( \sqrt{n \sigma^2} \right)^n} = \left( 1 + \frac{P}{\sigma^2} \right)^{n/2}

How many bits of information can one send in n uses of the channel?

\log_2 M = \frac{n}{2} \log_2 \left( 1 + \frac{P}{\sigma^2} \right)
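The sphere-counting argument can be made concrete with a short computation. Dividing \log_2 M by n recovers the per-use capacity of the next section; n, P, and \sigma^2 below are illustrative values, not from the text:

```python
# Sphere-packing count: M = (1 + P/sigma2)**(n/2) nonoverlapping noise
# spheres, hence log2(M) = (n/2) * log2(1 + P/sigma2) bits in n channel uses.
from math import log2

n, P, sigma2 = 100, 4.0, 1.0   # illustrative block length, power, noise variance

bits_total = (n / 2) * log2(1 + P / sigma2)   # bits carried by n uses
bits_per_use = bits_total / n                 # (1/2) * log2(1 + P/sigma2)

print(bits_total, bits_per_use)
```

Note that bits_per_use is independent of n: the block length controls reliability, not the achievable rate.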


The capacity of a discrete-time Gaussian channel is C = \frac{1}{2} \log_2 \left( 1 + \frac{P}{\sigma^2} \right) bits per channel use.

When the channel is a continuous-time, bandlimited, additive white Gaussian channel with noise power spectral density \frac{N_0}{2}, input power constraint P, and bandwidth W, the system can be sampled at the Nyquist rate to provide power per sample P and noise power

\sigma^2 = \int_{-W}^{W} \frac{N_0}{2} \, df = W N_0

The channel capacity is \frac{1}{2} \log_2 \left( 1 + \frac{P}{N_0 W} \right) bits per transmission. Since the sampling rate is 2W, then

C = 2W \cdot \frac{1}{2} \log_2 \left( 1 + \frac{P}{N_0 W} \right) \text{ bits/transmission} \times \text{transmissions/sec}

C = W \log_2 \left( 1 + \frac{P}{N_0 W} \right) \text{ bits/sec}
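The bandlimited capacity formula C = W \log_2(1 + P/(N_0 W)) is easy to wrap as a helper; a sketch with illustrative parameter values (W, P, and N_0 below are assumptions chosen so that the signal-to-noise ratio P/(N_0 W) equals 100, not values from the text):

```python
# Shannon capacity of a bandlimited AWGN channel: C = W * log2(1 + P/(N0*W)).
from math import log2

def awgn_capacity(W, P, N0):
    """Capacity in bits/sec for bandwidth W (Hz), power P, noise PSD N0/2."""
    return W * log2(1 + P / (N0 * W))

# Illustrative values: W = 1 MHz, P = 100 W, N0 = 1e-6 W/Hz, so SNR = 100.
print(awgn_capacity(W=1e6, P=100.0, N0=1e-6))
```

Doubling W does not double capacity, because the noise power N_0 W grows with bandwidth and lowers the SNR inside the logarithm.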

The capacity of the voice band of a telephone channel can be determined using the Gaussian model. The bandwidth is 3000 Hz and the signal-to-noise ratio is often 30 dB (a factor of 1000). Therefore,

C = 3000 \log_2 (1 + 1000) \approx 30{,}000 \text{ bits/sec}

One should not expect to design modems faster than 30 kbit/s using this model of telephone channels. It is also interesting to note that, since the signal-to-noise ratio is large, we expect to transmit about 10 bits/second/Hertz across telephone channels.
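The telephone-channel numbers above follow directly from the bits/sec formula; a quick check in Python (the 3000 Hz bandwidth and 30 dB SNR are the values given in the text):

```python
# Voice-band telephone channel estimate: W = 3000 Hz, SNR = 30 dB.
from math import log2

W = 3000.0                    # bandwidth in Hz
snr = 10 ** (30 / 10)         # 30 dB expressed as a power ratio: 1000

C = W * log2(1 + snr)         # capacity in bits/sec
spectral_eff = C / W          # bits/sec per Hz of bandwidth

print(C)             # roughly 30,000 bits/sec
print(spectral_eff)  # roughly 10 bits/sec/Hz
```

The spectral efficiency of about 10 bits/sec/Hz is where the large SNR shows up: \log_2(1 + 1000) \approx 10.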






Source:  OpenStax, Digital communication systems. OpenStax CNX. Jan 22, 2004 Download for free at http://cnx.org/content/col10134/1.3
