Binary Error Correcting Codes - PowerPoint PPT Presentation

1
Binary Error Correcting Codes
2
Different Types of Coding
  1. For efficiency, the topic is called Data compression.
  2. For accuracy, the topic is called Error correction coding.
  3. For security, the topic is called Cryptology.

3
Repetition Code
  • The easiest way to increase accuracy is repetition. For instance, we can repeat the message 3 times.
  • 1011 → 111000111111
  • This can correct 1 (isolated) error in each block of three consecutive bits sent (see the sketch after this list).
  • However, this method is not practical:
  • the transmission rate is only 33.33%,
  • most errors are not isolated but occur in bursts,
  • normally (we hope that) the channel is not so noisy that we need to give up so much efficiency.
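A minimal sketch of this scheme (the names rep3_encode and rep3_decode are illustrative, not from the slides): each bit is sent three times and each received block of three is decoded by majority vote, which corrects any single flipped bit per block.

```python
# A sketch of the threefold repetition code (names are illustrative).
def rep3_encode(bits):
    """Repeat each bit three times: 1011 -> 111000111111."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Majority vote over each consecutive block of three bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = rep3_encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] ^= 1                         # one isolated error in the 2nd block
assert rep3_decode(sent) == message  # the error is corrected
```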

4
Error Correcting Codes
Our goal is therefore to find a coding scheme such that the transmission rate (or efficiency) is very high and the error correction rate is not too small (e.g. > 1%). This requires a lot more knowledge of mathematics and some ingenuity.
5
Binary Arithmetic
Binary digits are 0 and 1.
Addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 1 = 0
Subtraction: same as addition (i.e. a - b = a + b)
Multiplication: 0 · 0 = 0, 0 · 1 = 0, 1 · 1 = 1
Notation: {0, 1}^n is the set of all n-tuples of 0s and 1s. This actually forms a vector space over {0, 1} (i.e. the only scalars are 0 and 1).
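As a quick illustration (not from the original slides): addition of binary digits is the XOR operation, multiplication is AND, and vectors in {0, 1}^n are added component-wise.

```python
# Binary (mod-2) arithmetic from this slide, written as bit operations.
def add(a, b):                 # 0+0=0, 0+1=1, 1+1=0 -- also subtraction
    return a ^ b               # XOR

def mul(a, b):                 # 0*0=0, 0*1=0, 1*1=1
    return a & b               # AND

def vec_add(u, v):             # vectors in {0,1}^n add component-wise
    return [a ^ b for a, b in zip(u, v)]

assert add(1, 1) == 0 and mul(1, 1) == 1
assert vec_add([1, 0, 1], [1, 1, 0]) == [0, 1, 1]
```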
6
Binary Codewords
A binary codeword is a string of binary digits, or more precisely an n-tuple of binary digits. e.g. 00110111 is an 8-tuple codeword. A block code is a collection of codewords all with the same length (all codes in this presentation are block codes). A binary (block) code C of length n is a collection of binary codewords all of length n, and it is called linear if it is a subspace of {0, 1}^n. In other words, (0, …, 0) is in C and the sum of two codewords is also a codeword.
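A small sketch of this definition, assuming a hypothetical helper is_linear that simply tests the two conditions; the even-weight code of length 3 serves as a positive example.

```python
from itertools import combinations

# A check of the two linearity conditions (is_linear is a hypothetical helper).
def is_linear(code, n):
    words = {tuple(w) for w in code}
    if tuple([0] * n) not in words:          # must contain (0, ..., 0)
        return False
    return all(tuple(a ^ b for a, b in zip(u, v)) in words
               for u, v in combinations(words, 2))   # closed under mod-2 sums

# The even-weight code of length 3 is linear; adding 111 breaks closure.
even = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
assert is_linear(even, 3)
assert not is_linear(even + [(1, 1, 1)], 3)
```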
7
Advantages of Linear Codes - easy to encode and decode.
Encoding is a linear transformation from {0, 1}^k to {0, 1}^n, so the encoding can be done by a matrix multiplication: the k-bit message is multiplied by a k × n generator matrix, using the binary arithmetic above, to produce the n-bit codeword (an example is sketched below).
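The sketch below illustrates encoding by matrix multiplication. The generator matrix is an assumption chosen for illustration (a systematic form of the [7, 4, 3] Hamming code discussed later); the matrices on the original slide are not reproduced in this transcript.

```python
# Encoding as matrix multiplication over {0, 1}. The generator matrix below
# is an assumed example (a systematic [7, 4, 3] Hamming code); the matrices
# shown on the original slide are not in this transcript.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(message):
    """Codeword = message (1 x k) times G (k x n), all arithmetic mod 2."""
    return [sum(m * g for m, g in zip(message, column)) % 2
            for column in zip(*G)]

assert encode([1, 0, 1, 1]) == [1, 0, 1, 1, 0, 1, 0]
```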
8
Advantages of Linear Codes - easy to encode and decode.
Since the transformation is from a lower-dimensional space to a higher-dimensional one, it will never be onto. In fact, it would be useless if it were onto: we need the extra room to accommodate the errors.
(Diagram: the encoding process)
9
Advantages of Linear Codes - easy to encode
and decode.
Decoding is not a linear transformation, but part of it can be done by a matrix multiplication. The idea is called nearest neighbor decoding.
For example, if you see "beoutiful", you will decode it as "beautiful" because this is the closest match.
10
Advantages of Linear Codes - easy to encode
and decode.
Suppose that a codeword w was sent out, and an error e was added to it during transmission. Hence the vector u = w + e was received.
To recover the codeword w, we construct a parity check matrix G such that Gu = 0 if and only if u is a codeword.
(Diagram: transmission through a noisy environment)
11
Advantages of Linear Codes - easy to encode
and decode.
If Gu ≠ 0, then u is not a codeword and must be of the form u = w + e, where w is a codeword and e is an error.
12
Advantages of Linear Codes - easy to encode
and decode.
In this case Gu = Gw + Ge = Ge (since Gw = 0).
Unfortunately G is not invertible, so we cannot use the inverse of G to recover e. In other words, there are many different possible e's that give the same result. However, if we use the nearest neighbor decoding scheme, we can choose the smallest error e in the collection. Once this e is found by looking up a table, we decode w = u + e.
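A sketch of this table-lookup decoding for the [7, 4, 3] Hamming code (the matrix and names are illustrative assumptions; the slide writes the parity check matrix as G, but it is written H below to keep it apart from the generator matrix of the encoding sketch). For a single flipped bit, the syndrome equals the corresponding column of H, so the table maps each nonzero syndrome to the smallest error that produces it.

```python
# Table-lookup decoding for the [7, 4, 3] Hamming code encoded earlier.
# The slide calls the parity check matrix G; it is written H here to avoid
# confusion with the generator matrix of the encoding sketch.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(u):
    """H u (mod 2): the zero vector exactly when u is a codeword."""
    return tuple(sum(h * x for h, x in zip(row, u)) % 2 for row in H)

# Lookup table: each nonzero syndrome -> position of the single flipped bit
# (for one error, the syndrome equals the corresponding column of H).
error_position = {col: i for i, col in enumerate(zip(*H))}

def correct(u):
    s = syndrome(u)
    if any(s):                       # u is not a codeword: undo the error
        u = list(u)
        u[error_position[s]] ^= 1    # w = u + e, addition being mod 2
    return u

received = [1, 0, 1, 1, 0, 1, 1]     # codeword 1011010 with its last bit flipped
assert correct(received) == [1, 0, 1, 1, 0, 1, 0]
```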
13
Hamming Codes
The earliest (and simplest) solution was
given by Richard W. Hamming who was born in
Chicago on Feb 11, 1915. He graduated from the U
of Chicago with a B.S. degree in mathematics. In
1939, he received an M.A. degree in mathematics
from the U of Nebraska, and in 1942, a Ph.D. in
mathematics from the U of Illinois. During
the latter part of WWII, Hamming was at Los
Alamos, where he was involved in computing atomic
bomb designs. In 1946, he joined Bell Lab where
he worked in math, computing, engineering, and
science. When Hamming arrived at Bell Lab,
the Model V computer there had over 9000 relays
and over 50 pieces of Teletype apparatus. It
occupied about 1000 sq ft of floor space and
weighed about 10 tons. (In computing power, it
equaled some of today's hand-held calculators.)
14
Hamming Codes
The input was entered into the machine via
a punched paper tape, which had two holes per
row. Each row was read as a unit. The sensing
relays would prevent further computation if more
or fewer than two holes were detected in a given
row. Similar checks were used in nearly every
step of a computation. If such a check
failed when the operating personnel were not
present, the problem had to be rerun. This
inefficiency led Hamming to investigate the
possibility of automatic error correction. Many
years later, he said to an interviewer:
"Two weekends in a row I came in and found that all my stuff had been dumped and nothing was done. I was really aroused and annoyed because I wanted those answers and two weekends had been lost. And so I said, 'Damn it, if the machine can detect an error, why can't it locate the position and correct it?'"
15
Hamming Codes
In 1950, Hamming published his famous paper
on error-correcting codes, resulting in the use
of Hamming codes in modern computers and a new
branch of information theory. Hamming died at the
age of 82 in 1998.
Source: Adapted from T. M. Thompson, From Error-Correcting Codes Through Sphere Packings to Simple Groups.
16
Hamming Codes
  • The Hamming [7, 4, 3]-code is a linear code in which:
  • Every codeword has length 7
  • Only the first 4 digits represent the original signal (the remaining 3 are check digits)
  • It can correct only one error within the codeword and can detect up to 2 errors.

The transmission rate in this case is 4/7 ≈ 57%, and the error correction rate is 1/7 ≈ 14%.
We also have Hamming [15, 11, 3]-codes, [31, 26, 3]-codes, etc. (the 3 means that any two different codewords are at least 3 bits apart).
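A brute-force check, reusing the illustrative generator matrix from the earlier encoding sketch, that the minimum distance of that [7, 4] code really is 3, which is exactly what allows it to correct 1 error and detect 2.

```python
from itertools import product

# Brute-force check of the "3" in [7, 4, 3], using the generator matrix
# from the earlier encoding sketch (illustrative, not from the slides).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(m):
    return tuple(sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*G))

codewords = {encode(m) for m in product([0, 1], repeat=4)}     # 2^4 = 16 words
d = min(sum(a != b for a, b in zip(u, v))
        for u in codewords for v in codewords if u != v)
assert d == 3              # any two distinct codewords differ in >= 3 bits
assert (d - 1) // 2 == 1   # so one error can be corrected ...
assert d - 1 == 2          # ... and up to two errors detected
```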
17
Reed-Muller Codes
  • The R(5) Reed-Muller code was used by Mariner 9 to transmit black-and-white photos of Mars in 1972.
  • It is a [32, 6, 16] linear code:
  • Every codeword has length 32 bits
  • Only 6 bits are original information
  • It can correct up to 7 bits of error in a codeword.
  • Since only 6 bits are useful, it can only describe 2^6 = 64 different shades of gray.
  • Transmission rate is 6/32 ≈ 18.8%; error correction rate is 7/32 ≈ 22%.

18
Golay Codes
In the period from 1979 through 1981, the Voyager spacecraft took color photographs of Jupiter and Saturn. Each pixel required a 12-bit string to represent the 4096 possible shades of color. The source information was encoded using a particular binary [24, 12, 8]-code, known as the Golay code (introduced in 1948). Each codeword has 24 bits, with 12 bits of information, and can correct up to 3 errors in each codeword. Transmission rate is 12/24 = 50%; error correction rate is 3/24 = 12.5%.
19
Golay Codes
  • Golay codes are linear block-based error correcting codes with a wide range of applications, but they are particularly suitable for applications where short codeword length and low latency are important, such as:
  • Real-time audio and video communications
  • Packet data communications
  • Mobile and personal communications
  • Radio communications.

20
Reed-Solomon Codes
  • Not really a binary code, because it works in a finite field of large size such as GF(256). However, we can use it to produce a binary code by transforming each element of the field into a binary string.
  • The commercially used version for CDs, DVDs, cellphones, etc. is the [255, 223, 33]-code, in which:
  • Every codeword is a 255-byte string, hence 2040 bits.
  • In each codeword, 223 bytes are from the original message (the others are check digits).
  • It can correct up to 16 incorrect bytes (i.e. as few as 16 wrong bits, one in each of the 16 bytes, in the worst case, and up to 16 × 8 = 128 wrong bits in the best case).

21
Reed-Solomon Codes
  • Reed-Solomon codes are block-based error correcting codes with a wide range of applications, including:
  • Error correction for storage devices (e.g. Compact Disc, DVD, etc.) and for barcodes
  • Mobile and personal communications
  • Digital communications in noise-prone environments
  • For the [255, 223, 33]-code, the transmission rate is 223/255 ≈ 87.45% and the error correction rate is 16/255 ≈ 6.27% (see the sketch below).
  • It is, however, slower to encode and decode than the Golay codes.
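A tiny sketch of the rate arithmetic quoted above for the [255, 223, 33] code (variable names are illustrative): the distance 33 gives t = (33 - 1)/2 = 16 correctable bytes per codeword.

```python
# Rate arithmetic for the [255, 223, 33] Reed-Solomon code (names illustrative).
n, k, d = 255, 223, 33
t = (d - 1) // 2                            # correctable byte errors per codeword
assert t == 16
print(f"codeword size      {n * 8} bits")   # 2040
print(f"transmission rate  {k / n:.2%}")    # ~87.45%
print(f"correction rate    {t / n:.2%}")    # ~6.27%
```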

Source: http://www.4i2i.com/reed_solomon_codes.htm