Addition Circuits (Part I)

1
Addition Circuits (Part I)
  • Lecture 08

2
Number Representations
A number is an abstract concept: it exists in your
head. The images that we see on paper are not the
numbers themselves but rather representations of
the numbers.
3
Positional Systems
The Digit Set is chosen to contain consecutive
numbers.
4
Fixed Radix Positional System
Each position is weighted by consecutive powers
of the radix, e.g., r = 5, and thus the radix is
fixed.
If the Digit Set contains exactly r consecutive
elements then the system is called non-redundant.
If the Digit Set contains more than r consecutive
elements then the system is called redundant.
A number represented in a redundant system may
have more than one representation.
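To make the redundancy concrete, here is a minimal sketch (my own illustration, not from the slides), assuming radix r = 2 with the redundant digit set {0, 1, 2}:

```python
# Evaluating digit strings in a fixed-radix positional system.
# With r = 2 and the redundant digit set {0, 1, 2}, one value can have
# several different representations.

def value(digits, r=2):
    """Interpret digits (most significant first) with weights r**i."""
    v = 0
    for d in digits:
        v = v * r + d
    return v

# Two different digit strings that both represent 6:
print(value([1, 1, 0]))   # 1*4 + 1*2 + 0*1 = 6
print(value([1, 0, 2]))   # 1*4 + 0*2 + 2*1 = 6
```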
5
Adding Unsigned Integers
6
Bit Serial Adders
The Full Adder has a time delay, tFA, for signals
to propagate from its inputs to its outputs.
The carry-storage flip-flop has a setup time,
ts. We cannot clock this circuit with a period
shorter than (tFA + ts). We need k cycles to add
two k-bit numbers, so the total time for a
complete addition is proportional to the number
of bits in the operands.
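A behavioural sketch of this scheme (my own illustration; the function and signal names are assumptions, not from the slides): one full adder is reused for k cycles, with the carry held between cycles as the flip-flop would hold it.

```python
# Behavioural model of a bit-serial adder: one full adder reused for k
# clock cycles; the `carry` variable plays the role of the carry flip-flop.

def full_adder(x, y, cin):
    s = x ^ y ^ cin
    cout = (x & y) | (x & cin) | (y & cin)
    return s, cout

def bit_serial_add(x_bits, y_bits):
    """x_bits, y_bits: bits listed least-significant first; one cycle per bit."""
    carry = 0                                # carry flip-flop, initially cleared
    sum_bits = []
    for x, y in zip(x_bits, y_bits):         # k cycles for k-bit operands
        s, carry = full_adder(x, y, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 5 + 6 with 4-bit operands, LSB first: 0101 + 0110 = 1011
print(bit_serial_add([1, 0, 1, 0], [0, 1, 1, 0]))   # ([1, 1, 0, 1], 0) -> 11
```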
7
Ripple Adders
The bit-serial adder performs a sequential
computation. We can improve the speed by
performing the steps in parallel. This increases
the size of the circuit, but size is of lesser
importance than speed.
The combinational circuit between the two
registers requires time for the sum bits to
settle to their correct values. This time depends
upon the actual values of the operands; however,
if we clock the registers at a fixed rate we must
assume the worst-case time delay.
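The sketch below (again my own illustration) models the ripple adder stage by stage and also reports how far a carry actually travelled for a particular pair of operands, which is the operand-dependent settling time mentioned above; a fixed clock must still budget for the worst case of k stages.

```python
# Ripple-carry adder modelled stage by stage.  Alongside the sum it returns
# the highest stage that received a carry-in of 1, i.e. how far the carry
# actually rippled for these particular operands.

def ripple_add(x, y, k):
    carry = 0
    result = 0
    last_carry_stage = 0                     # 0 means no carry was generated
    for i in range(k):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        s = xi ^ yi ^ carry
        carry = (xi & yi) | (xi & carry) | (yi & carry)
        result |= s << i
        if carry:
            last_carry_stage = i + 1         # this carry feeds stage i + 1
    return result, last_carry_stage

print(ripple_add(0b0001, 0b0111, 4))   # (8, 3): the carry ripples up to stage 3
print(ripple_add(0b1000, 0b0001, 4))   # (9, 0): no carries are produced at all
```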
8
Signed Integers
How do you represent positive and negative
integers?
  • There are several common methods:
  • Sign/Magnitude Representation
  • Complement Representation
  • Biased Representation

Sign/Magnitude representations in binary are
simple. The MSB represents the sign of the
number: 1 means negative and 0 means positive.
The remaining bits are interpreted as unsigned
binary. (Fixed Radix, r = 2 with digit set
{0, 1})
Biased representations allow coding of negative
numbers by adding a constant called the BIAS and
then coding the positive result as though it were
unsigned binary.
Ex: in excess-13 code the number -10 would be
coded as (-10 + 13) = (0011)₂; in excess-128 code
the number -47 would be coded as (-47 + 128) =
(001010001)₂.
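A sketch of excess-K coding that reproduces the two examples above (the bit widths, 4 and 9, are simply taken from the digit strings on the slide):

```python
# Biased (excess-K) coding: add the bias, then write the result as plain
# unsigned binary.  Widths here are chosen only to match the slide's examples.

def excess_encode(value, bias, width):
    coded = value + bias
    assert 0 <= coded < 2 ** width, "value out of range for this bias/width"
    return format(coded, f"0{width}b")

def excess_decode(bits, bias):
    return int(bits, 2) - bias

print(excess_encode(-10, 13, 4))     # '0011'       (-10 + 13 = 3)
print(excess_encode(-47, 128, 9))    # '001010001'  (-47 + 128 = 81)
print(excess_decode("0011", 13))     # -10
```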
9
Complement Representations
Complement representations are characterized by a
large number called the complementation constant,
M. A negative number, -x, is represented by the
positive number M - x; the positive numbers are
not changed.
Suppose you wish to code the numbers in the range
[-N, P]. We need to find the smallest allowable
complementation constant: we need M - N ≥ P + 1.
Note that [-200, 799] → [0, 999] and [-500, 499]
→ [0, 999], both with M = 1000.
There are some special choices for the
complementation constant. Suppose the final
unsigned number has k digits; then we have the
case M = r^k. Ex: consider 4-digit base-10
numbers. The unsigned codes range over [0, 9999].
We can code the range [-5000, 4999] by choosing
M = 10^4. (Note that we also have N = P + 1.)
In binary this case translates to 2's complement.
A k-bit number has 2^k different patterns.
Half of these patterns represent positive numbers
and the other half represent negative numbers
(zero is considered positive). We can interpret
this method as though the MSB has a negative
weight.
b_{k-1}·(-1)·2^(k-1) + b_{k-2}·2^(k-2) + … + b_1·2^1 + b_0·2^0
Example: 11001 → -16 + 8 + 1 = -7
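A short sketch of this negative-weight reading of a two's-complement string (the function name is my own):

```python
# Interpret a two's-complement bit string by giving the MSB the weight
# -(2**(k-1)) and every remaining bit its ordinary unsigned weight.

def twos_complement_value(bits):
    k = len(bits)
    value = -int(bits[0]) * 2 ** (k - 1)     # MSB carries the negative weight
    for i, b in enumerate(bits[1:]):         # the rest are plain unsigned bits
        value += int(b) * 2 ** (k - 2 - i)
    return value

print(twos_complement_value("11001"))   # -16 + 8 + 1 = -7
print(twos_complement_value("01001"))   #        8 + 1 =  9
```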
10
Rational Fractions
We can code numbers that have fractional parts.
In this case we use the fixed-radix method with a
'radix point'. We continue writing digits to the
right of the radix point and interpret these
digits as being weighted by negative powers of
the radix (fractions).
In binary this means that positions to the right
of the binary-point are weighted by ½, ¼, …
There are infinitely many rational numbers
between 0 and 1. Clearly we cannot code all of
these numbers with a finite number of bits.
(How do you write 1/3 in base 10?)
1101.1011 = 8 + 4 + 1 + ½ + 1/8 + 1/16
The signals in electronic circuits can only hold
the values 1 and 0. The signals cannot hold a
symbol for the binary-point. It is the
responsibility of the circuit designer to know
the location of the binary-point and design the
circuit accordingly.
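A sketch of that "the designer knows where the point is" idea: the register only holds a bit pattern, and the designer scales it by an agreed number of fraction bits (the names below are my own).

```python
# The hardware stores only the bits; treating four of them as fraction
# bits is a design-time convention, applied here by dividing by 2**frac_bits.

from fractions import Fraction

def fixed_point_value(bits, frac_bits):
    """Read an unsigned bit string with `frac_bits` bits after the binary point."""
    return Fraction(int(bits, 2), 2 ** frac_bits)

v = fixed_point_value("11011011", frac_bits=4)    # the pattern 1101.1011
print(v, float(v))   # 219/16 13.6875 = 8 + 4 + 1 + 1/2 + 1/8 + 1/16
```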
11
Carry Chains
Returning to circuits that add numbers: these
circuits form the basis for most calculations,
and thus it is necessary that they perform as
fast as possible.
Analysis of the previous circuits shows that the
main issue in fast addition is the propagation of
the carry digits. The leftmost sum digit cannot
be calculated until the previous carry is
calculated. This in turn requires knowledge of
the previous carry, etc.
  • Let us define some new symbols.
  • Generate: g_i = 1 iff x_i + y_i ≥ r
  • Propagate: p_i = 1 iff x_i + y_i = r - 1
  • Annihilate: a_i = 1 iff x_i + y_i < r - 1

The digits of the two operands are known at the
beginning of the calculation, so we can calculate
g_i, p_i and a_i immediately for each position.
Given a pair of operands, we can find the
positions that generate a carry and propagate it
to the left until it is finally annihilated.
These runs are called carry chains.
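For the binary case (r = 2) this is easy to sketch in code (my own illustration; the chain-listing convention is an assumption): classify every position as generate, propagate or annihilate, then trace each carry chain from the generate that starts it to the position where it dies.

```python
# Radix-2 carry chains: g where x_i + y_i >= 2, p where x_i + y_i = 1,
# a where x_i + y_i = 0.  A chain starts at a generate, runs left through
# propagates, and ends where it is annihilated or absorbed by a new generate.

def classify(x, y, k):
    kinds = []
    for i in range(k):                          # position 0 is the LSB
        s = ((x >> i) & 1) + ((y >> i) & 1)
        kinds.append('g' if s >= 2 else 'p' if s == 1 else 'a')
    return kinds

def carry_chains(kinds):
    chains, start = [], None
    for i, kind in enumerate(kinds):
        if start is not None and kind != 'p':
            chains.append((start, i))           # the carry from `start` dies here
            start = None
        if kind == 'g':
            start = i
    if start is not None:
        chains.append((start, len(kinds)))      # the carry leaves as carry-out
    return chains

kinds = classify(0b0011, 0b0101, 4)             # 3 + 5
print(kinds)                 # ['g', 'p', 'p', 'a']
print(carry_chains(kinds))   # [(0, 3)]: generated at bit 0, annihilated at bit 3
```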
12
Carry Chains
Imagine every possible pair of 16-bit numbers
that can be added together. There are 2^32 such
pairs. Which pairs produce the longest
carry chain? Which pairs have no carry chains?
If we consider the carry-in to be a generate and
the carry-out to be an annihilate, the longest
chain occurs when all bit positions are
propagates. There are 2^16 such pairs. Notice
that a long chain is not that probable.
We can improve the statistical performance of
Adders by designing circuits that monitor the
carry chains and detect when the chains have
completed their propagation. These circuits are
only useful in Asynchronous circuits.
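A quick empirical sketch of that claim (my own experiment, not from the slides): add many random 16-bit pairs and record the longest chain each time, where "length" counts the positions a generated carry passes through.

```python
# Estimate how long the longest carry chain tends to be when adding random
# 16-bit operands.  Long chains turn out to be rare, which is why detecting
# chain completion can beat the worst-case assumption on average.

import random

def longest_chain(x, y, k=16):
    longest = current = 0
    for i in range(k):
        s = ((x >> i) & 1) + ((y >> i) & 1)
        if s >= 2:                    # generate: a new chain starts here
            current = 1
        elif s == 1 and current:      # propagate: the active chain grows
            current += 1
        else:                         # annihilate (or idle): chain ends
            current = 0
        longest = max(longest, current)
    return longest

random.seed(0)
samples = [longest_chain(random.getrandbits(16), random.getrandbits(16))
           for _ in range(100_000)]
print(max(samples), sum(samples) / len(samples))   # longest seen, average length
```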
13
Carry Networks
Let us consider the ripple adder in detail.
The top sub-circuit computes the Generate and
Propagate signals directly from the operands.
The bottom sub-circuit computes the sum bits from
the Propagate signals and the carries.
The middle sub-circuit is the Carry Network. It
computes the carries from the Generate and
Propagate signals. We can clearly see that this
is a multilevel AND/OR network and thus is not
very fast.
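Behaviourally, the carry network evaluates the recurrence c[i+1] = g[i] OR (p[i] AND c[i]); a small sketch (my own, with made-up example operands), followed by the sum bits s[i] = p[i] XOR c[i] computed by the bottom sub-circuit:

```python
# The carry network's job: from the generate and propagate bits, produce all
# carries via  c[i+1] = g[i] | (p[i] & c[i]).  Evaluating this serially is
# exactly the multilevel AND/OR ripple structure, hence the slow worst case.

def carries(g, p, c0=0):
    """g, p: LSB-first generate/propagate bits; returns c[0..k]."""
    c = [c0]
    for gi, pi in zip(g, p):
        c.append(gi | (pi & c[-1]))
    return c

# Example operands x = 0b0111 and y = 0b0101, listed LSB first:
g = [1, 0, 1, 0]                 # g[i] = x[i] & y[i]
p = [0, 1, 0, 0]                 # p[i] = x[i] ^ y[i]
c = carries(g, p)
print(c)                         # [0, 1, 1, 1, 0]
s = [pi ^ ci for pi, ci in zip(p, c)]
print(s)                         # [0, 0, 1, 1] -> 0b1100 = 12 = 7 + 5
```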
14
Carry-Select Networks
  • The most important concept underlying techniques
    for improving computational speed is trying to
    compute sections simultaneously. This is known
    as parallel computation. There are two basic ways
    to view parallel computation (a carry-select
    sketch follows this list).
  • Separate the data into sections and
    simultaneously process the sections.
  • Separate the process into independent sections
    and compute these sections simultaneously.
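As promised above, here is a carry-select sketch (an illustration of the first view; the structure and parameter names are my assumptions, not taken from the slide): split the operands into a low and a high section, add the high section twice, once for each possible carry-in, and select the correct result once the low section's carry-out is known.

```python
# Carry-select sketch: add the high section for both possible carry-ins in
# parallel with the low section, then select using the low carry-out.

def add_section(x, y, cin, width):
    total = x + y + cin
    return total & ((1 << width) - 1), total >> width    # (sum bits, carry out)

def carry_select_add(x, y, width=8, split=4):
    low_mask = (1 << split) - 1
    x_lo, y_lo = x & low_mask, y & low_mask
    x_hi, y_hi = x >> split, y >> split

    lo_sum, lo_carry = add_section(x_lo, y_lo, 0, split)
    hi_if_0 = add_section(x_hi, y_hi, 0, width - split)   # speculative high sums,
    hi_if_1 = add_section(x_hi, y_hi, 1, width - split)   # computed simultaneously

    hi_sum, hi_carry = hi_if_1 if lo_carry else hi_if_0   # the "select" step
    return (hi_sum << split) | lo_sum, hi_carry

print(carry_select_add(0b10011011, 0b01101110))   # (9, 1): 155 + 110 = 265
```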