Shannon’s Limit, Nyquist Limit, Bit Stuffing and Hamming Code

So, I remembered that I took a networking class back at university, and now that I’m learning some of those things again, I want to share them with all of you.

Why? Because I feel I didn’t get much out of it back at university, and just before going to sleep today it all came back to me; suddenly I understand these things better than I did years ago.

So it’s about Shannon’s Limit, Nyquist Limit, Bit Stuffing and Hamming Code.

Attention:
To my frequent readers who are not familiar with the OSI layers, especially the Physical and Data-Link Layers, or non-technical readers who always have some fun reading my posts: the following post will be very technical, and you can skip it anytime you want. However, I’ll try to explain things as simply as possible, and you can probably still get something out of it. Just be sure you’re okay with this before you read ahead.

Let’s go with the first one. Shannon’s Limit (or Shannon’s Capacity).

Shannon’s Capacity

Shannon’s Capacity, together with the Nyquist Limit, determines how much data we can send over a particular link/connection. You can see the precise explanation on Wikipedia.

So, let’s start with a little math. Let’s say we have a bandwidth (B), a signal strength (S), and a noise strength (N). We call S/N the Signal-to-Noise Ratio (SNR). Hence, Shannon’s Capacity (C) can be computed using the formula below:

C = B log2(1 + S/N) bits/sec

So, for example, if you have a bandwidth B = 10 MHz with a signal-to-noise ratio (SNR) of 11.76 dB (about 15 as a plain ratio), then Shannon’s Capacity will be equal to 10 MHz × log2(1 + 15) = 40 Mbps.
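Here’s a minimal Python sketch of this calculation (the function name shannon_capacity is mine, not from any course material); it converts the SNR from dB to a plain ratio and applies the formula:

import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Channel capacity C = B * log2(1 + S/N), with the SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)        # convert dB to a plain ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# The example above: B = 10 MHz, SNR = 11.76 dB  ->  roughly 40 Mbps
print(shannon_capacity(10e6, 11.76) / 1e6)  # ~40.0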

Nyquist Limit

The Nyquist Limit, along with Shannon’s Capacity, is used to determine how much data we can send over a particular link/connection. See the technical explanation on Wikipedia.

To compute the amount of data that can be sent within a second over a noiseless channel, we can use the Nyquist Limit.

Now into the math. Assume that we know the bandwidth (B) and the number of signal levels (V). The amount of data that can be sent within a second is then:

R = 2B log2(V)

where R is the transmission rate.

So, if a 3 MHz bandwidth with an eight-level digital signal is used, we get a transmission rate of 18,000,000 bits per second (2 × 3,000,000 × log2 8 = 2 × 3,000,000 × 3).
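And the same kind of minimal Python sketch for the Nyquist formula (again, the function name is mine):

import math

def nyquist_rate(bandwidth_hz, levels):
    """Maximum data rate R = 2 * B * log2(V) for a noiseless channel."""
    return 2 * bandwidth_hz * math.log2(levels)

# The example above: B = 3 MHz, V = 8 levels  ->  18,000,000 bits/sec
print(nyquist_rate(3e6, 8))   # 18000000.0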

Bit Stuffing

Bit Stuffing is the insertion of non-information bits into data. It serves several purposes, but it is commonly used to keep the data from being mistaken for control (flag) sequences and to help the receiver stay synchronized. Bit Stuffing works by inserting a ‘0’ bit after every run of five consecutive ‘1’s.

For example, let’s say we have a data:

1110110001011111100011001

After bit stuffing (a ‘0’ inserted right after the first run of five ‘1’s), we get the data as follows:

1110110001011111 0 100011001

You should notice that there is an additional 0 in the middle of it (the one with a space on each side), right after five ‘1’s in a row. That’s the bit stuffing. The receiver simply removes any 0 that follows five consecutive ‘1’s to recover the original data.
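Here’s a minimal Python sketch of this rule (the function name bit_stuff is mine): it inserts a ‘0’ after every run of five consecutive ‘1’s, which reproduces the example above:

def bit_stuff(bits):
    out = []
    run = 0                      # length of the current run of '1's
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:         # five '1's in a row: stuff a '0'
                out.append("0")
                run = 0
        else:
            run = 0
    return "".join(out)

print(bit_stuff("1110110001011111100011001"))
# -> 11101100010111110100011001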

Hamming Code

Hamming Code is commonly used to detect errors while transmitting data and to correct them. It can detect up to two-bit errors, or correct one-bit errors without detecting uncorrected errors. Here’s how it works:

  1. n = 2^k - k - 1, where n is the number of data bits and k is the number of check bits. So if n = 4 then k = 3.
  2. All bit positions that are powers of 2 hold parity bits, so we put the check bits in positions p that are powers of 2, starting with position 1.
  3. The check bit in position p is the parity of all positions whose binary representation has the p bit set.

The parity bits follow these rules (from Wikipedia):

  1. Parity bit 1 covers all bit positions which have the least significant bit set: 1 (the parity bit itself), 3, 5, 7, 9, etc.
  2. Parity bit 2 covers all bit positions which have the second least significant bit set: 2 (the parity bit itself), 3, 6, 7, 10, etc.
  3. Parity bit 4 covers all bit positions which have the third least significant bit set: 4 – 7, 12 – 15, 20 – 23, etc.
  4. Parity bit 8 covers all bit positions which have the fourth least significant bit set: 8 – 15, 24 – 31, etc.
  5. In general, each parity bit covers all bit positions where the bitwise AND of the parity position and the bit position is non-zero (see the sketch after this list).
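To make rule 5 concrete, here is a tiny Python sketch (names are mine) that lists which positions each parity bit covers in a 7-bit codeword, using the bitwise AND rule:

def covered_positions(parity_pos, total_bits):
    return [pos for pos in range(1, total_bits + 1) if pos & parity_pos]

for p in (1, 2, 4):
    print(p, covered_positions(p, 7))
# 1 [1, 3, 5, 7]
# 2 [2, 3, 6, 7]
# 4 [4, 5, 6, 7]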

Now’s the fun part. For example, say you have the data 1001 with 3 check bits. We can write the bit positions in binary, so instead of writing 1, 2, 3, 4, 5, etc., we write 1, 10, 11, 100, 101, etc. Since we have 3 check bits, and the parity bits sit at positions that are powers of 2 (the positions whose binary form has exactly one ‘1’), positions 1, 2, and 4 are the parity bits.

Let’s write down in 7 bits:

x x 1 x 0 0 1
_ _ _ _ _ _ _
1 2 3 4 5 6 7

As you can see, we reserve bits 1, 2, and 4 as x. Now we compute the parity bits in positions 1, 2, and 4.

Parity bit 1: covers positions 1, 3, 5, 7, hence x = 1 + 0 + 1 = 0 (the sums here are modulo 2).
Parity bit 2: covers positions 2, 3, 6, 7, hence x = 1 + 0 + 1 = 0.
Parity bit 4: covers positions 4, 5, 6, 7, hence x = 0 + 0 + 1 = 1.

So, the 7 bits transmitted are 0011001.
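Here’s a minimal Python sketch of the same procedure, assuming the Hamming(7,4) layout used above (data bits in positions 3, 5, 6, 7; parity bits in positions 1, 2, 4). The function names are mine, and the second function shows the one-bit correction mentioned earlier, using the syndrome as the error position:

def hamming74_encode(data):
    """Encode 4 data bits (e.g. '1001') into a 7-bit Hamming codeword."""
    code = [0] * 8                          # indices 1..7 used, index 0 unused
    for pos, bit in zip((3, 5, 6, 7), data):
        code[pos] = int(bit)
    for p in (1, 2, 4):                     # parity bit p covers positions with p & pos != 0
        code[p] = sum(code[pos] for pos in range(1, 8) if pos & p and pos != p) % 2
    return "".join(str(b) for b in code[1:])

def hamming74_correct(word):
    """Locate and flip a single-bit error: the syndrome gives the error position."""
    bits = [0] + [int(b) for b in word]
    syndrome = 0
    for p in (1, 2, 4):
        if sum(bits[pos] for pos in range(1, 8) if pos & p) % 2:
            syndrome += p
    if syndrome:                            # non-zero syndrome: flip the indicated bit
        bits[syndrome] ^= 1
    return "".join(str(b) for b in bits[1:])

print(hamming74_encode("1001"))     # -> 0011001, as worked out above
print(hamming74_correct("0011011")) # bit 6 flipped in transit -> 0011001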

Pretty easy, right? ;)

Credits:
Thanks to David Wetherall from the University of Washington, who nailed these topics down and made everything quite easy to understand.
