Title: Code and Decoder Design of LDPC Codes for Gbps Systems
Slide 1: Code and Decoder Design of LDPC Codes for Gbps Systems
- Jeremy Thorpe
- Presented to Microsoft Research, 2002.11.25
Slide 2: Talk Overview
- Introduction (13 slides)
- Wiring Complexity ( 9 slides)
- Logic Complexity (7 slides)
Slide 3: Reliable Communication over Unreliable Channels
- The channel is the means by which information is communicated from sender to receiver.
- The sender chooses X.
- The channel generates Y from the conditional probability distribution P(Y|X).
- The receiver observes Y.
[Diagram: X -> channel -> Y, governed by P(Y|X)]
Slide 4: Shannon's Channel Coding Theorem
- Using the channel n times, we can communicate k bits of information with probability of error as small as we like, as long as the rate R = k/n is less than C and n is large enough. C is a number (the capacity) that characterizes the channel.
- The same is impossible if R > C.
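As a concrete illustration of the capacity C mentioned above, here is a short sketch computing the capacity of a binary symmetric channel (the BSC is my choice of example; the talk does not name a specific channel here):

```python
from math import log2

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p:
    C = 1 - H(p), where H is the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1 - h

# A BSC with ~11% bit flips has capacity just above 1/2, so rate-1/2 codes
# can in principle communicate reliably over it.
print(bsc_capacity(0.11))
```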
Slide 5: The Coding Strategy
- The encoder chooses the m-th codeword in codebook C and transmits it across the channel.
- The decoder observes the channel output y and generates an estimate of m based on knowledge of the codebook C and the channel statistics.
[Diagram: Encoder -> Channel -> Decoder]
Slide 6: Linear Codes
- A linear code C can be defined in terms of either a generator matrix or a parity-check matrix.
- Generator matrix G (k x n)
- Parity-check matrix H ((n-k) x n)
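The two descriptions are linked by G H^T = 0 (mod 2): every codeword x = mG satisfies Hx^T = 0. A minimal sketch, using the (7,4) Hamming code purely as an illustrative example (not a code from the talk):

```python
import numpy as np

# Hypothetical example: the (n=7, k=4) Hamming code in systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # k x n generator
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])   # (n-k) x n parity-check

# Encode a message and check that the parity-check equations are satisfied.
m = np.array([1, 0, 1, 1])
x = m @ G % 2
print(H @ x % 2)   # all zeros: x is a codeword
```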
Slide 7: Regular LDPC Codes
- LDPC codes are linear codes defined in terms of H.
- The number of ones in each column of H is a fixed number dv.
- The number of ones in each row of H is a fixed number dc.
- Typical parameters for regular LDPC codes are (dv, dc) = (3, 6).
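One simple way to build such a matrix is Gallager's construction: stack dv column-permuted copies of a base layer in which each row checks dc consecutive bits. This sketch is my illustration of the regularity constraint, not the construction used in the talk:

```python
import numpy as np

def regular_ldpc_H(n, dv, dc, rng):
    """Gallager-style sketch of a regular parity-check matrix: every column
    has weight dv and every row has weight dc.  Assumes dc divides n.
    (Random permutations may create short cycles; see the later slides.)"""
    rows_per_layer = n // dc
    layer = np.zeros((rows_per_layer, n), dtype=int)
    for r in range(rows_per_layer):
        layer[r, r * dc:(r + 1) * dc] = 1       # one check per dc consecutive bits
    layers = [layer[:, rng.permutation(n)] for _ in range(dv)]
    return np.vstack(layers)

rng = np.random.default_rng(0)
H = regular_ldpc_H(12, 3, 6, rng)
print(H.sum(axis=0))   # every column weight is 3
print(H.sum(axis=1))   # every row weight is 6
```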
Slide 8: Graph Representation of LDPC Codes
- H is represented by a bipartite graph.
- Nodes v (degree dv) on the left represent variables.
- Nodes c (degree dc) on the right represent equations.
[Diagram: variable nodes on the left, check nodes on the right]
Slide 9: Message-Passing Decoding of LDPC Codes
- Message-passing (or belief propagation) decoding is a low-complexity algorithm which approximately answers the question "what is the most likely x given y?"
- MP recursively defines messages mv,c(i) and mc,v(i) between each variable node v and each adjacent check node c, for iterations i = 0, 1, ...
Slide 10: Two Types of Messages...
- Likelihood ratio
  - For y1, ..., yn independent conditionally on x
- Probability difference
  - For x1, ..., xn independent
Slide 11: ...Related by the Bilinear Transform
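The transform on this slide was shown as an equation that did not survive the transcript. Under the usual conventions (likelihood ratio λ = p0/p1, probability difference δ = p0 − p1 with p0 + p1 = 1, both my assumptions), the two message types are related by the bilinear map δ = (λ − 1)/(λ + 1), which is its own inverse up to the same form:

```python
def lr_to_pd(lam):
    """Bilinear transform: likelihood ratio λ = p0/p1 -> probability
    difference δ = p0 - p1 (assuming p0 + p1 = 1)."""
    return (lam - 1) / (lam + 1)

def pd_to_lr(delta):
    """Inverse map; note it has the same bilinear form."""
    return (1 + delta) / (1 - delta)

lam = 3.0                 # e.g. p0 = 0.75, p1 = 0.25
delta = lr_to_pd(lam)
print(delta)              # 0.5
print(pd_to_lr(delta))    # 3.0, round trip recovers the likelihood ratio
```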
Slide 12: Variable-to-Check Messages
- On any iteration i, the message from v to c is (equation not captured in transcript).
[Diagram: variable node v sending its message to check node c]
Slide 13: Check-to-Variable Messages
- On any iteration, the message from c to v is (equation not captured in transcript).
[Diagram: check node c sending its message to variable node v]
Slide 14: Decision Rule
- After sufficiently many iterations, return the likelihood ratio (equation not captured in transcript).
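The update and decision rules on slides 12-14 were shown as equations that did not survive the transcript. The following is a compact sketch of the standard rules in the log-likelihood-ratio domain (a common formulation; the exact message form used in the talk is an assumption here): a variable sends its channel LLR plus all incoming check messages except the one on that edge, a check sends the tanh-rule combination of the other incoming variable messages, and the decision is the sign of the total LLR.

```python
import numpy as np

def decode_bp(H, llr, iters=20):
    """Belief-propagation sketch over the bipartite graph of H (LLR domain).
      variable -> check:  m_vc = llr_v + sum of m_cv from the *other* checks
      check -> variable:  m_cv = 2*arctanh( prod of tanh(m_vc/2) over other vars )
      decision:           bit_v = 1 if total LLR < 0 (positive LLR -> bit 0)
    Assumes no message is exactly zero (so the divide-out trick is valid)."""
    mvc = H * llr                                   # init with channel LLRs
    for _ in range(iters):
        t = np.where(H == 1, np.tanh(mvc / 2), 1.0)
        prod = t.prod(axis=1, keepdims=True)
        # exclude each edge's own contribution by dividing it out
        ratio = np.clip(prod / np.where(H == 1, t, 1.0), -0.999999, 0.999999)
        mcv = H * 2 * np.arctanh(ratio)
        total = llr + mcv.sum(axis=0)               # decision statistic per bit
        mvc = H * (total - mcv)                     # extrinsic: remove own edge
    return (total < 0).astype(int)

# Demo on a small illustrative parity-check matrix (not a code from the talk):
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-2.0, 2, 2, 2, 2, 2, 2])   # all-zero codeword, first bit flipped
print(decode_bp(H, llr))                   # decoder corrects the single error
```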
Slide 15: Theorem about the MP Algorithm
- If the algorithm stops after r iterations, then it returns the maximum a posteriori probability estimate of xv given y within radius r of v.
- However, this holds only if the variables within radius r of v are constrained only by the equations within radius r of v.
[Diagram: neighborhood of radius r around variable node v]
Slide 16: Wiring Complexity
Slide 17: Physical Implementation (VLSI)
- We have seen that the MP decoding algorithm for LDPC codes is defined in terms of a graph.
  - Nodes perform local computation.
  - Edges carry messages from v to c, and from c to v.
- Instantiate this graph on a chip!
  - Edges -> wires
  - Nodes -> logic units
Slide 18: Complexity vs. Performance
- Longer codes provide
  - More efficient use of the channel (e.g. less power used over the AWGN channel)
  - Faster throughput for fixed technology and decoding parameters (number of iterations)
- Longer codes demand
  - More logic resources
  - Far more wiring resources
Slide 19: The Wiring Problem
- The number of edges in the graph grows like the number of nodes, n.
- The average length of the edges in a randomly placed graph also grows, like sqrt(n).
[Figure: a random graph]
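The sqrt(n) scaling is easy to see empirically: with unit chip area per node, the layout is a sqrt(n) x sqrt(n) region, and the distance between two random points in it scales with the side length. A quick Monte Carlo sketch (my illustration, not from the talk):

```python
import math
import random

def mean_random_edge_length(n, trials=20000, seed=1):
    """Place n nodes on a sqrt(n) x sqrt(n) grid (unit area per node) and
    measure the average Euclidean length of a randomly chosen edge."""
    rng = random.Random(seed)
    side = int(math.isqrt(n))
    pts = [(x, y) for x in range(side) for y in range(side)]
    total = 0.0
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        total += math.hypot(x1 - x2, y1 - y2)
    return total / trials

# Quadrupling n should roughly double the average wire length (ratio near 2).
print(mean_random_edge_length(400) / mean_random_edge_length(100))
```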
Slide 20: Graph Construction?
- Idea: find a construction that has low wire-length and maintains good performance...
- Drawback: it is difficult to construct any graph that matches the performance of a random graph.
Slide 21: A Better Solution
- Use an algorithm which generates a graph at random, but with a preference for
  - Short edge length
  - Quantities related to code performance
Slide 22: Conventional Graph Wisdom
- Short loops give rise to dependent messages (which are assumed to be independent) after a small number of iterations, and should be avoided.
Slide 23: Simulated Annealing!
- Simulated annealing approximately minimizes an energy function over a solution space.
- It requires a good way to traverse the solution space.
Slide 24: Generating LDPC Graphs with Simulated Annealing
- Define an energy function with two components:
  - Wire length
  - Loopiness
- Traverse the space by picking two edges at random and swapping their endpoints (move not fully captured in transcript).
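A minimal sketch of this idea, under my assumptions about the details: the move swaps the check endpoints of two random edges (which conveniently preserves every node's degree), the energy is wire length only (the loopiness term from the slide is omitted for brevity), and acceptance follows the standard Metropolis rule with a linear cooling schedule.

```python
import math
import random

def anneal_edges(edges, pos_v, pos_c, steps=5000, T0=2.0, seed=0):
    """Annealing sketch: propose swapping (v1,c1),(v2,c2) -> (v1,c2),(v2,c1)
    and accept via the Metropolis rule.  Energy = total wire length.
    (Parallel edges are not ruled out here; a real run would penalize them.)"""
    rng = random.Random(seed)

    def length(v, c):
        (vx, vy), (cx, cy) = pos_v[v], pos_c[c]
        return math.hypot(vx - cx, vy - cy)

    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-9          # linear cooling schedule
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (v1, c1), (v2, c2) = edges[i], edges[j]
        old = length(v1, c1) + length(v2, c2)
        new = length(v1, c2) + length(v2, c1)
        # accept improvements always, worsenings with probability exp(-dE/T)
        if new <= old or rng.random() < math.exp((old - new) / T):
            edges[i], edges[j] = (v1, c2), (v2, c1)
    return edges
```

Because each swap leaves all variable and check degrees unchanged, the annealer explores exactly the space of regular graphs the code construction requires.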
Slide 25: Results of Simulated Annealing
- The graph on the right has nearly identical performance to the one shown previously.
[Figure: a graph generated by simulated annealing]
Slide 26: Logic Complexity
Slide 27: Complexity of the Classical Algorithm
- The original algorithm defines messages in terms of arithmetic operations over real numbers.
- However, this implies floating-point addition, multiplication, and even division!
Slide 28: A Modified Algorithm
- We define a modified algorithm in which all messages are the logarithms of their values in the original scheme.
- The channel message λ is similarly replaced by its logarithm.
Slide 29: Quantization
- We have replaced a product by a sum, but now we have a transcendental function f.
- However, if we quantize the messages, we can pre-compute f for all values!
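As a sketch of this lookup-table idea: taking f to be Gallager's function f(x) = -log(tanh(x/2)) (a common choice for the log-domain check update; the talk's exact f is an assumption), a 4-bit quantizer lets us replace every run-time evaluation with one of 16 precomputed values. The range and bin width below are also illustrative assumptions:

```python
import math

def f(x):
    """Assumed transcendental function: Gallager's f(x) = -log(tanh(x/2))."""
    return -math.log(math.tanh(x / 2))

# A hypothetical 4-bit quantizer: 16 uniform bins over the range (0, 8].
BITS, RANGE = 4, 8.0
STEP = RANGE / (1 << BITS)
TABLE = [f(STEP * (q + 0.5)) for q in range(1 << BITS)]   # f at each bin midpoint

def f_quantized(x):
    """Look f up on the quantized grid instead of evaluating it at run time."""
    q = min(int(x / STEP), (1 << BITS) - 1)               # saturate at the top bin
    return TABLE[q]

print(f(1.0), f_quantized(1.0))   # exact value vs. its table approximation
```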
Slide 30: Quantized MP Performance
- The graph on the following page shows the bit error rate for a regular (3, 6) code of length n = 10,000, using between 2 and 4 bits of quantization.
- (Some error floors are predicted by density evolution; some are not.)
Slide 31: (BER plot; no transcript)
Slide 32: Quantization Tradeoffs
- A quantizer is characterized by its range and granularity.
- For fixed channel quantization:
  - A finely granulated quantizer (Q1) performs well at low SNR.
  - However, the quantizer must be broadened (Q2) to avoid saturation and the resulting error floor.
[Figure: quantizers Q1 and Q2]
Slide 33: Conclusion
- There is a tradeoff between logic complexity and performance.
- Nearly optimal performance (within 0.1 dB, about a 1.03x factor in power) is achievable with 4-bit messages.
- More work is needed to avoid error floors due to quantization.