Chapter 6. Arithmetic
1
Chapter 6. Arithmetic
2
Outline
  • A basic operation in all digital computers is the
    addition or subtraction of two numbers.
  • ALU: AND, OR, NOT, XOR
  • Unsigned/signed numbers
  • Addition/subtraction
  • Multiplication
  • Division
  • Floating-point operations

3
Adders
4
Addition of Unsigned Numbers: Half Adder
5
Addition and Subtraction of Signed Numbers
Truth table for one stage of binary addition (inputs x_i, y_i, and carry-in c_i; outputs sum s_i and carry-out c_i+1):

x_i  y_i  c_i  |  s_i  c_i+1
 0    0    0   |   0     0
 0    0    1   |   1     0
 0    1    0   |   1     0
 0    1    1   |   0     1
 1    0    0   |   1     0
 1    0    1   |   0     1
 1    1    0   |   0     1
 1    1    1   |   1     1

s_i = x_i XOR y_i XOR c_i
c_i+1 = y_i c_i + x_i c_i + x_i y_i

Example: X = 0111 (7) plus Y = 0110 (6); the carry-out of each stage becomes the carry-in of the next stage, and Z = X + Y = 1101 (13).

Figure 6.1. Logic specification for a stage of binary addition.
6
Addition and Subtraction of Signed Numbers
  • A full adder (FA)

[Figure: gate-level logic for a single full-adder (FA) stage; it forms s_i from x_i, y_i, and c_i, and c_i+1 from the AND/OR carry network.]
(a) Logic for a single stage
7
Addition and Subtraction of Signed Numbers
  • n-bit ripple-carry adder
  • Overflow?

[Figure: an n-bit ripple-carry adder built from a cascade of full adders; the carry-out of each stage feeds the carry-in of the next, from the least significant bit (LSB) position to the most significant bit (MSB) position, producing s_n-1 ... s_1 s_0 and the final carry-out c_n.]
(b) An n-bit ripple-carry adder
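As an illustration (not part of the original slides), a minimal Python sketch of a full-adder stage and an n-bit ripple-carry adder built from it; the function names and the bit-list representation are assumptions made for this sketch.

def full_adder(x, y, c):
    """One stage: returns (sum bit, carry-out)."""
    s = x ^ y ^ c
    c_out = (x & y) | (x & c) | (y & c)
    return s, c_out

def ripple_carry_add(x_bits, y_bits, c0=0):
    """Add two n-bit numbers given as bit lists, LSB first.
    Returns (sum bits LSB first, carry-out of the last stage)."""
    c = c0
    s_bits = []
    for x, y in zip(x_bits, y_bits):
        s, c = full_adder(x, y, c)
        s_bits.append(s)
    return s_bits, c

# Example from Figure 6.1: 0111 (7) + 0110 (6) = 1101 (13)
s, c_out = ripple_carry_add([1, 1, 1, 0], [0, 1, 1, 0])
print(s[::-1], c_out)   # [1, 1, 0, 1] 0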
8
Addition and Subtraction of Signed Numbers
  • kn-bit ripple-carry adder

[Figure: a cascade of k n-bit adders forming a kn-bit ripple-carry adder; the carry-out of each n-bit adder block feeds the carry-in of the next, producing s_kn-1 ... s_0 and the final carry-out c_kn.]
(c) Cascade of k n-bit adders
Figure 6.2. Logic for addition of binary vectors.
9
Addition and Subtraction of Signed Numbers
  • Addition/subtraction logic unit

10
Make Addition Faster
11
Ripple-Carry Adder (RCA)
  • Straightforward design
  • Simple circuit structure
  • Easy to understand
  • Most power-efficient
  • Slowest (long critical path)

12
Adders
  • We can view addition in terms of generate, Gi,
    and propagate, Pi.

13
Carry-lookahead Logic
Carry generate: G_i = A_i B_i (a carry must be generated when A_i = B_i = 1).
Carry propagate: P_i = A_i XOR B_i (the carry-in will equal the carry-out here).
Sum and carry can be re-expressed in terms of generate/propagate/C_i:
  S_i = A_i XOR B_i XOR C_i = P_i XOR C_i
  C_i+1 = A_i B_i + A_i C_i + B_i C_i
        = A_i B_i + C_i (A_i + B_i)
        = A_i B_i + C_i (A_i XOR B_i)
        = G_i + C_i P_i
14
Carry-lookahead Logic
Re-express the carry logic as follows:
  C_1 = G_0 + P_0 C_0
  C_2 = G_1 + P_1 C_1 = G_1 + P_1 G_0 + P_1 P_0 C_0
  C_3 = G_2 + P_2 C_2 = G_2 + P_2 G_1 + P_2 P_1 G_0 + P_2 P_1 P_0 C_0
  C_4 = G_3 + P_3 C_3 = G_3 + P_3 G_2 + P_3 P_2 G_1 + P_3 P_2 P_1 G_0 + P_3 P_2 P_1 P_0 C_0
Each of the carry equations can be implemented in a two-level logic network.
The variables are the adder inputs and the carry-in to stage 0!
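For illustration (not from the slides), a small Python sketch of a 4-bit carry-lookahead adder that forms the G_i and P_i terms and computes every carry directly from the expanded equations above:

def cla_4bit(a_bits, b_bits, c0=0):
    """4-bit carry-lookahead adder; inputs are bit lists, LSB first.
    All carries are computed directly from G_i, P_i, and c0."""
    g = [a & b for a, b in zip(a_bits, b_bits)]        # generate terms
    p = [a ^ b for a, b in zip(a_bits, b_bits)]        # propagate terms

    c = [c0, 0, 0, 0, 0]
    c[1] = g[0] | (p[0] & c[0])
    c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0])
    c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c[0])
    c[4] = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
            | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c[0]))

    s = [p[i] ^ c[i] for i in range(4)]                # S_i = P_i XOR C_i
    return s, c[4]

# 0111 (7) + 0110 (6) = 1101 (13), carry-out 0
print(cla_4bit([1, 1, 1, 0], [0, 1, 1, 0]))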
15
Carry-lookahead Implementation
Adder with Propagate and Generate Outputs
Increasingly complex logic
16
Carry-lookahead Logic
Cascaded Carry Lookahead
Carry-lookahead logic generates the individual carries, so the sums are computed much faster.
17
Carry-lookahead Logic
18
Carry-lookahead Logic
4-bit adders with internal carry lookahead; a second-level carry-lookahead unit extends the lookahead to 16 bits.
Group propagate: P = P_3 P_2 P_1 P_0
Group generate:  G = G_3 + G_2 P_3 + G_1 P_3 P_2 + G_0 P_3 P_2 P_1
19
Unsigned Multiplication
20
Manual Multiplication Algorithm
21
Array Multiplication
[Figure: array multiplier for 4-bit operands; the partial products PP0 through PP4 are accumulated row by row to form the product bits p_7, p_6, ..., p_0.]
22
(No Transcript)
23
Another Version of 4x4 Array Multiplier
24
Array Multiplication
  • What is the critical path (worst-case signal
    propagation delay path)?
  • Assuming that there are two gate delays from the
    inputs to the outputs of a full-adder block, the
    path has a total of 6(n-1)-1 gate delays,
    including the initial AND gate delay in all
    cells, for the n x n array.
  • Any advantages/disadvantages?

25
Sequential Circuit Binary Multiplier
[Figure: sequential shift-and-add multiplier. Register A (initially 0) and register Q (holding the multiplier) shift right together; register M holds the multiplicand; an n-bit adder, an add/no-add control MUX, a carry flip-flop C, and a control sequencer complete the datapath. In each cycle, M is added to A if the current multiplier bit q_0 is 1 (no add otherwise), and then C, A, and Q are shifted right one position.
(a) Register configuration. (b) Multiplication example: M = 1101 (13) times Q = 1011 (11); after four add/shift cycles the product 10001111 (143) is held in A and Q.]
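A minimal Python sketch (not from the slides) of the shift-and-add procedure these registers implement, for n-bit unsigned operands; the variable names mirror registers A, C, M, and Q only loosely.

def sequential_multiply(m, q, n=4):
    """Unsigned shift-and-add multiplication of two n-bit numbers.
    A and Q together accumulate the 2n-bit product."""
    a, c = 0, 0                      # register A and carry flip-flop C
    for _ in range(n):
        if q & 1:                    # multiplier bit q0 decides add / no-add
            total = a + m
            c = (total >> n) & 1     # carry out of the n-bit adder
            a = total & ((1 << n) - 1)
        # shift C, A, Q right one position
        q = (q >> 1) | ((a & 1) << (n - 1))
        a = (a >> 1) | (c << (n - 1))
        c = 0
    return (a << n) | q              # product in A (high) and Q (low)

print(sequential_multiply(0b1101, 0b1011))   # 13 * 11 = 143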
26
Signed Multiplication
27
Signed Multiplication
  • Considering 2's-complement signed operands, what
    will happen to (-13) x (+11) if we follow the
    same method as for unsigned multiplication?

[Figure: multiplying the negative multiplicand (-13) by (+11) with the unsigned method works only if each shifted version of the multiplicand is sign-extended (shown in blue in the original figure) out to the full product width; the result is (-143).]
Figure 6.8. Sign extension of negative
multiplicand.
28
Signed Multiplication
  • For a negative multiplier, a straightforward
    solution is to form the 2's complement of both
    the multiplier and the multiplicand and proceed
    as in the case of a positive multiplier.
  • This is possible because complementation of both
    operands does not change the value or the sign of
    the product.
  • A technique that works equally well for both
    negative and positive multipliers: the Booth
    algorithm.

29
Booth Algorithm
  • Consider a multiplication in which the multiplier
    is positive: 0011110. How many appropriately
    shifted versions of the multiplicand are added in
    the standard procedure?

[Figure: with the positive multiplier 0011110, the standard procedure adds four appropriately shifted versions of the multiplicand, one for each 1 bit in the multiplier.]
30
Booth Algorithm
  • Since 0011110 = 0100000 - 0000010, what will
    happen if we use the expression on the right
    instead?

[Figure: using 0100000 - 0000010, the same product is obtained with only two operations: the multiplicand shifted left five positions is added (the +1), and the 2's complement of the multiplicand shifted left one position is added (the -1).]
31
Booth Algorithm
  • In general, in the Booth scheme, -1 times the
    shifted multiplicand is selected when moving from
    0 to 1, and +1 times the shifted multiplicand is
    selected when moving from 1 to 0, as the
    multiplier is scanned from right to left.

[Figure: example of Booth recoding of a multiplier; scanning from right to left, each position is recoded as 0, +1, or -1 from the pair of adjacent multiplier bits.]
Figure 6.10. Booth recoding of a multiplier.
32
Booth Algorithm
[Figure: Booth multiplication with the negative multiplier (-6): (+13) x (-6) = (-78). The multiplier 11010 is recoded as 0 -1 +1 -1 0, and the correspondingly shifted (and sign-extended) versions of the multiplicand are summed to give -78.]
Figure 6.11. Booth multiplication with a negative
multiplier.
33
Booth Algorithm
Multiplier bit i | Multiplier bit i-1 | Version of multiplicand selected by bit i
       0         |         0          |   0 x M
       0         |         1          |  +1 x M
       1         |         0          |  -1 x M
       1         |         1          |   0 x M

Figure 6.12. Booth multiplier recoding table.
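A small Python sketch (an illustration, not taken from the slides) of Booth recoding and the resulting multiplication for 2's-complement operands; the selector at each position is (previous bit - current bit), i.e. -1 on a 0-to-1 transition and +1 on a 1-to-0 transition when scanning from right to left.

def booth_recode(multiplier_bits):
    """Booth-recode a 2's-complement multiplier given as a bit list, LSB first.
    Returns a list of selectors in {-1, 0, +1}, one per bit position."""
    recoded, prev = [], 0                # implied 0 to the right of the LSB
    for bit in multiplier_bits:
        recoded.append(prev - bit)       # 0->1 gives -1, 1->0 gives +1
        prev = bit
    return recoded

def booth_multiply(m, q, n):
    """Multiply two n-bit 2's-complement integers m and q via Booth recoding."""
    q_bits = [(q >> i) & 1 for i in range(n)]
    product = 0
    for i, sel in enumerate(booth_recode(q_bits)):
        product += sel * (m << i)        # add/subtract the shifted multiplicand
    return product

print(booth_multiply(13, -6, 5))         # -78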
34
Booth Algorithm
  • Best case: a long string of 1s (the recoding
    skips over them)
  • Worst case: alternating 0s and 1s

[Figure: a worst-case multiplier of alternating 0s and 1s produces a nonzero (+1 or -1) selector at every position; an ordinary multiplier produces a mix of 0, +1, and -1 selectors; a good multiplier with long blocks of 1s produces only a few nonzero selectors.]
35
Fast Multiplication
36
Bit-Pair Recoding of Multipliers
  • Bit-pair recoding halves the maximum number of
    summands (versions of the multiplicand).

[Figure: the sign-extended multiplier, with an implied 0 to the right of the LSB, is first Booth-recoded; pairs of Booth selectors are then combined into single selectors from {0, +1, -1, +2, -2}, one per bit pair.]
(a) Example of bit-pair recoding derived from Booth recoding
37
Bit-Pair Recoding of Multipliers
Multiplier bit pair (i+1, i) | Multiplier bit on the right (i-1) | Multiplicand selected at position i
          0 0                |                0                  |   0 x M
          0 0                |                1                  |  +1 x M
          0 1                |                0                  |  +1 x M
          0 1                |                1                  |  +2 x M
          1 0                |                0                  |  -2 x M
          1 0                |                1                  |  -1 x M
          1 1                |                0                  |  -1 x M
          1 1                |                1                  |   0 x M

(b) Table of multiplicand selection decisions
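A small Python sketch (illustration only, not from the slides) of bit-pair recoding using the selection table above; the selector for the bit triple (b_i+1, b_i, b_i-1) equals -2*b_i+1 + b_i + b_i-1.

def bit_pair_recode(bits):
    """bits: 2's-complement multiplier, LSB first, even length.
    Returns one selector from {-2, -1, 0, +1, +2} per bit pair."""
    selectors = []
    prev = 0                                   # implied 0 right of the LSB
    for k in range(0, len(bits), 2):
        low, high = bits[k], bits[k + 1]
        selectors.append(-2 * high + low + prev)
        prev = high
    return selectors

def bit_pair_multiply(m, q, n):
    """Multiply n-bit 2's-complement m and q using n/2 summands."""
    bits = [(q >> i) & 1 for i in range(n)]
    return sum(sel * (m << (2 * k))
               for k, sel in enumerate(bit_pair_recode(bits)))

print(bit_pair_multiply(13, -6, 6))            # -78 with only 3 summands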
38
Bit-Pair Recoding of Multipliers
[Figure: (+13) x (-6) = (-78) computed with bit-pair recoding. The multiplier 111010 (-6) is recoded into the selectors 0, -1, -2, so only two nonzero summands (the -2 x M and -1 x M versions of the multiplicand, appropriately shifted and sign-extended) are added to obtain -78.]
Figure 6.15. Multiplication requiring only n/2
summands.
39
Carry-Save Addition of Summands
40
Carry-Save Addition of Summands
[Figure: the array of partial products PP0 through PP4 whose summation, using carry-save addition, forms the product bits p_7, p_6, ..., p_0.]
41
Carry-Save Addition of Summands
  • CSA speeds up the addition process.

42
Carry-Save Addition of Summands
Figure 6.16. Ripple-carry and carry-save arrays
for the multiplication operation M x Q = P for
4-bit operands.
43
Carry-Save Addition of Summands
  • The delay through the carry-save array is
    somewhat less than the delay through the
    ripple-carry array, because the S and C vector
    outputs from each row are produced in parallel in
    one full-adder delay.
  • Considering the addition of many summands, we can
    (see the sketch below):
  • Group the summands in threes and perform
    carry-save addition on each of these groups in
    parallel to generate a set of S and C vectors in
    one full-adder delay
  • Group all of the S and C vectors into threes, and
    perform carry-save addition on them, generating a
    further set of S and C vectors in one more
    full-adder delay
  • Continue with this process until there are only
    two vectors remaining
  • They can be added in an RCA or CLA to produce the
    desired product

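A minimal Python sketch (not from the slides) of this reduction, treating each summand as a whole integer: groups of three are replaced by their carry-save S and C vectors until only two remain, which are then added conventionally.

def carry_save(a, b, c):
    """One level of carry-save addition on whole integers (bitwise):
    returns the sum vector S and the carry vector C (already shifted)."""
    s = a ^ b ^ c                        # per-bit sum, no carry propagation
    carry = (a & b) | (a & c) | (b & c)  # per-bit carry
    return s, carry << 1

def csa_sum(summands):
    """Reduce a list of summands to two vectors with a CSA tree,
    then add the last two with an ordinary carry-propagate addition."""
    vectors = list(summands)
    while len(vectors) > 2:
        next_level = []
        while len(vectors) >= 3:
            next_level.extend(carry_save(vectors.pop(), vectors.pop(), vectors.pop()))
        next_level.extend(vectors)       # 0, 1, or 2 leftovers pass through
        vectors = next_level
    return sum(vectors)                  # final RCA/CLA step

# Figure 6.17 example: six summands, each a shifted copy of 45, i.e. 45 * 63
print(csa_sum([45 << i for i in range(6)]))   # 2835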
44
Carry-Save Addition of Summands
[Figure: six summands A, B, C, D, E, F, each a shifted copy of M = 100111 (45), arising from the multiplier Q = 111111 (63); their sum is the product (2,835).]
Figure 6.17. A multiplication example used to
illustrate carry-save addition as shown in Figure
6.18.
45
[Figure: the six summands from Figure 6.17 reduced with carry-save addition. A, B, and C are combined into S1 and C1, and D, E, and F into S2 and C2, in one full-adder delay; the resulting S and C vectors are grouped in threes and reduced again until only two vectors remain, which are added with a conventional adder to give the product (2,835).]
Figure 6.18. The multiplication example from
Figure 6.17 performed using carry-save addition.
46
Carry-Save Addition of Summands
Figure 6.19. Schematic representation of the
carry-save addition operations in Figure 6.18.
47
Carry-Save Addition of Summands
  • When the number of summands is large, the time
    saved is proportionally much greater.
  • Some omitted issues
  • Sign-extension
  • Computation width of the final CLA/RCA
  • Bit-pair recoding

48
Integer Division
49
Manual Division
[Figure: longhand division examples. Decimal: 274 / 21 = 13, remainder 1. Binary: 100010010 / 10101 = 1101, remainder 1.]
Figure 6.20. Longhand division examples.
50
Longhand Division Steps
  • Position the divisor appropriately with respect
    to the dividend and perform a subtraction.
  • If the remainder is zero or positive, a quotient
    bit of 1 is determined, the remainder is extended
    by another bit of the dividend, the divisor is
    repositioned, and another subtraction is
    performed.
  • If the remainder is negative, a quotient bit of 0
    is determined, the dividend is restored by adding
    back the divisor, and the divisor is repositioned
    for another subtraction.

51
Circuit Arrangement
[Figure: circuit arrangement for binary division. Registers A and Q (holding the dividend, and later the quotient) shift left together; register M holds the divisor; an (n+1)-bit adder adds or subtracts M from A under a control sequencer, and the quotient bit is set into q_0 at each step.]
Figure 6.21. Circuit arrangement for binary
division.
52
Restoring Division
  • Shift A and Q left one binary position
  • Subtract M from A, and place the answer back in A
  • If the sign of A is 1, set q_0 to 0 and add M back
    to A (restore A); otherwise, set q_0 to 1
  • Repeat these steps n times (a sketch of this loop
    follows below)

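A minimal Python sketch (not from the slides) of the restoring-division loop above for n-bit unsigned operands:

def restoring_divide(dividend, divisor, n):
    """n-bit unsigned restoring division.
    Returns (quotient, remainder); a plays the role of register A."""
    a, q, m = 0, dividend, divisor
    for _ in range(n):
        # shift A and Q left one position, moving the MSB of Q into A
        a = (a << 1) | ((q >> (n - 1)) & 1)
        q = (q << 1) & ((1 << n) - 1)
        a -= m                           # trial subtraction
        if a < 0:                        # "sign of A is 1"
            a += m                       # restore A, q0 stays 0
        else:
            q |= 1                       # q0 = 1
    return q, a

print(restoring_divide(8, 3, 4))         # (2, 2): 8 / 3 = 2 remainder 2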
53
Examples
[Figure: a restoring-division example. In each of the four cycles, A and Q are shifted left and M is subtracted from A; when the result is negative, q_0 is set to 0 and A is restored by adding M back, and when it is positive, q_0 is set to 1. After the final cycle the quotient is in Q and the remainder is in A.]
Figure 6.22. A restoring-division example.
54
Nonrestoring Division
  • Avoid the need for restoring A after an
    unsuccessful subtraction.
  • Any idea?
  • Step 1 (repeat n times):
  • If the sign of A is 0, shift A and Q left one bit
    position and subtract M from A; otherwise, shift
    A and Q left and add M to A.
  • Now, if the sign of A is 0, set q_0 to 1;
    otherwise, set q_0 to 0.
  • Step 2: If the sign of A is 1, add M to A (a
    sketch of this procedure follows below).

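A corresponding Python sketch (not from the slides) of nonrestoring division; at most one corrective addition is needed at the very end:

def nonrestoring_divide(dividend, divisor, n):
    """n-bit unsigned nonrestoring division following the two steps above.
    Returns (quotient, remainder)."""
    a, q, m = 0, dividend, divisor
    for _ in range(n):
        msb_q = (q >> (n - 1)) & 1
        q = (q << 1) & ((1 << n) - 1)
        if a >= 0:                       # sign of A is 0
            a = ((a << 1) | msb_q) - m   # shift left, then subtract M
        else:                            # sign of A is 1
            a = ((a << 1) | msb_q) + m   # shift left, then add M
        if a >= 0:
            q |= 1                       # q0 = 1
    if a < 0:                            # Step 2: final corrective addition
        a += m
    return q, a

print(nonrestoring_divide(8, 3, 4))      # (2, 2)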
55
Examples
[Figure: a nonrestoring-division example. In each cycle A and Q are shifted left and M is subtracted from or added to A depending on the sign of A; q_0 is then set from the new sign of A. A final addition of M restores the remainder when A ends up negative, leaving the quotient in Q and the remainder in A.]
Figure 6.23. A nonrestoring-division example.
56
Floating-Point Numbers and Operations
57
Floating-Point Numbers
  • So far we have dealt with fixed-point numbers
    (what are they?), and have considered them as
    integers.
  • Floating-point numbers: the binary point is just
    to the right of the sign bit.
  • Where the range of F is
  • The position of the binary point is variable and
    is automatically adjusted as computation proceeds.

58
Floating-Point Numbers
  • What is needed to represent a floating-point
    decimal number?
  • Sign
  • Mantissa (the significant digits)
  • Exponent to an implied base (scale factor)
  • Normalized: the decimal point is placed to the
    right of the first (nonzero) significant digit.

59
IEEE Standard for Floating-Point Numbers
  • Think about this number (all digits are decimal):
    +/- X1.X2X3X4X5X6X7 x 10^(+/- Y1Y2)
  • It is possible to approximate this mantissa
    precision and scale-factor range in a binary
    representation that occupies 32 bits: a 24-bit
    mantissa (including 1 sign bit for the signed
    number) and an 8-bit exponent.
  • Instead of the signed exponent, E, the value
    actually stored in the exponent field is an
    unsigned integer E' = E + 127, the so-called
    excess-127 format.

60
IEEE Standard
(a) Single precision: 32 bits = S (1-bit sign of the number: 0 signifies +, 1 signifies -), E' (8-bit exponent in excess-127 representation), M (23-bit mantissa fraction).
    Value represented = +/- 1.M x 2^(E' - 127)
(b) Example of a single-precision number: S = 0, E' = 00101000 (40), M = 001010...0;
    value represented = +1.001010...0 x 2^-87   (since 40 - 127 = -87)
(c) Double precision: 64 bits = S (sign), E' (11-bit excess-1023 exponent), M (52-bit mantissa fraction).
    Value represented = +/- 1.M x 2^(E' - 1023)
Figure 6.24. IEEE standard floating-point formats.
61
IEEE Standard
  • For the excess-127 format, 0 <= E' <= 255.
    However, 0 and 255 are used to represent special
    values, so actually 1 <= E' <= 254. That means
    -126 <= E <= 127.
  • Single precision uses 32 bits. The scale-factor
    range is from 2^-126 to 2^+127.
  • Double precision uses 64 bits. The scale-factor
    range is from 2^-1022 to 2^+1023 (see the sketch
    below).

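As an illustration (not from the slides), a short Python sketch that unpacks an IEEE 754 single-precision value into its sign, excess-127 exponent, and 23-bit mantissa fields and reconstructs the value of a normal number:

import struct

def decode_single(x):
    """Unpack an IEEE 754 single-precision number into (S, E', M) and
    reconstruct its value as (-1)^S * 1.M * 2^(E' - 127) (normal numbers only)."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    s = bits >> 31                        # sign bit
    e = (bits >> 23) & 0xFF               # excess-127 exponent E'
    m = bits & 0x7FFFFF                   # 23-bit mantissa fraction
    value = (-1) ** s * (1 + m / 2**23) * 2.0 ** (e - 127)
    return s, e, m, value

print(decode_single(-6.5))   # (1, 129, 5242880, -6.5): E = 129 - 127 = 2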
62
Two Aspects
  • If a number is not normalized, it can always be
    put in normalized form by shifting the fraction
    and adjusting the exponent.

(a) Unnormalized value: the excess-127 exponent field is 10001000 (136, so the true exponent is 136 - 127 = 9) and the fraction is 0010110...0; there is no implicit 1 to the left of the binary point, so the value represented is 0.0010110...0 x 2^9.
(b) Normalized version: shifting the fraction left three positions and reducing the exponent gives 1.0110...0 x 2^6, stored with exponent field 6 + 127 = 133 = 10000101.
Figure 6.25. Floating-point normalization in IEEE
single-precision format.
63
Two Aspects
  • As computations proceed, a number that does not
    fall in the representable range of normal numbers
    might be generated.
  • It requires an exponent less than -126
    (underflow) or greater than 127 (overflow). Both
    are exceptions that need to be considered.

64
Special Values
  • The end values 0 and 255 of E' are used to
    represent special values.
  • When E' = 0 and M = 0, the value exact 0 is
    represented. (+/- 0)
  • When E' = 255 and M = 0, the value infinity is
    represented. (+/- infinity)
  • When E' = 0 and M != 0, denormal numbers are
    represented. The value is +/- 0.M x 2^-126.
  • When E' = 255 and M != 0, the representation is
    Not a Number (NaN).

65
Exceptions
  • A processor must set exception flags if any of
    the following occur in performing operations:
    underflow, overflow, divide by zero, inexact,
    invalid.
  • When an exception occurs, the result is set to a
    special value.

66
Arithmetic Operations on Floating-Point Numbers
  • Add/Subtract rule (see the sketch after this list)
  • Choose the number with the smaller exponent and
    shift its mantissa right a number of steps equal
    to the difference in exponents.
  • Set the exponent of the result equal to the
    larger exponent.
  • Perform addition/subtraction on the mantissas and
    determine the sign of the result.
  • Normalize the resulting value, if necessary.
  • Multiply rule
  • Add the exponents and subtract 127.
  • Multiply the mantissas and determine the sign of
    the result.
  • Normalize the resulting value, if necessary.
  • Divide rule
  • Subtract the exponents and add 127.
  • Divide the mantissas and determine the sign of
    the result.
  • Normalize the resulting value, if necessary.

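A simplified Python sketch (illustration only; it ignores rounding, guard bits, and special values) of the add/subtract rule, with operands held as (sign, exponent, mantissa) triples and the mantissa scaled by 2^23:

def fp_add(a, b):
    """Add two floating-point numbers given as (sign, exponent, mantissa)
    with the mantissa an integer such that 1.0 <= mantissa / SCALE < 2.0.
    Simplified: no rounding, guard bits, or special values."""
    SCALE = 1 << 23                              # implicit 23-bit fraction
    (sa, ea, ma), (sb, eb, mb) = a, b
    if ea < eb:                                  # make operand A the larger exponent
        (sa, ea, ma), (sb, eb, mb) = (sb, eb, mb), (sa, ea, ma)
    mb >>= (ea - eb)                             # align the smaller operand
    m = (-ma if sa else ma) + (-mb if sb else mb)
    sign, m, e = (1 if m < 0 else 0), abs(m), ea
    while m and m < SCALE:                       # normalize left
        m <<= 1; e -= 1
    while m >= 2 * SCALE:                        # normalize right
        m >>= 1; e += 1
    return sign, e, m

# 1.5 * 2^3 (= 12) plus 1.0 * 2^1 (= 2) -> 1.75 * 2^3 (= 14)
print(fp_add((0, 3, 3 << 22), (0, 1, 1 << 23)))  # (0, 3, 14680064)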
67
Guard Bits and Truncation
  • During the intermediate steps, it is important to
    retain extra bits, often called guard bits, to
    yield the maximum accuracy in the final results.
  • Removing the guard bits in generating a final
    result requires truncation of the extended
    mantissa. How?

68
Guard Bits and Truncation
  • Chopping: biased; error range 0 to 1 at the LSB
    position.
  • Von Neumann rounding (if any of the bits to be
    removed are 1, the LSB of the retained bits is
    set to 1): unbiased; error range -1 to +1 at the
    LSB.
  • Why is unbiased rounding better when many
    operands are involved?
  • Rounding (a 1 is added to the LSB position of the
    bits to be retained if there is a 1 in the MSB
    position of the bits being removed): unbiased;
    error range -1/2 to +1/2 at the LSB.
  • Round to the nearest number, or to the nearest
    even number in case of a tie (for example, in a
    tie 0.b-1b-20100 rounds down to 0.b-1b-20 and
    0.b-1b-21100 rounds up to 0.b-1b-21 + 0.001).
  • Best accuracy
  • Most difficult to implement

Chopping example: the 6-bit fractions 0.b-1b-2b-3000 through 0.b-1b-2b-3111 are all truncated to 0.b-1b-2b-3.
Von Neumann rounding example: all 6-bit fractions with b-4b-5b-6 not equal to 000 are truncated to 0.b-1b-21. (All three schemes are illustrated by the sketch below.)
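A small Python sketch (not from the slides) contrasting the three schemes on a fraction stored as an integer with g guard bits to be removed:

def chop(frac, g):
    """Drop the g guard bits (biased; error of 0..1 at the retained LSB)."""
    return frac >> g

def von_neumann(frac, g):
    """If any discarded bit is 1, force the retained LSB to 1 (unbiased)."""
    kept = frac >> g
    return kept | 1 if frac & ((1 << g) - 1) else kept

def round_nearest_even(frac, g):
    """Round to nearest; on a tie, round to the nearest even value."""
    kept, removed, half = frac >> g, frac & ((1 << g) - 1), 1 << (g - 1)
    if removed > half or (removed == half and kept & 1):
        kept += 1
    return kept

# 6-bit fraction 0.101101 reduced to 3 retained bits (g = 3 guard bits)
x = 0b101101
print(chop(x, 3), von_neumann(x, 3), round_nearest_even(x, 3))  # 5 5 6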
69
Implementing Floating-Point Operations
  • Hardware/software
  • In most general-purpose processors,
    floating-point operations are available at the
    machine-instruction level, implemented in
    hardware.
  • In high-performance processors, a significant
    portion of the chip area is assigned to
    floating-point operations.
  • Addition/subtraction circuitry

70
[Figure: floating-point addition-subtraction unit for 32-bit operands. An 8-bit subtractor computes E'_A - E'_B; its sign controls a SWAP network so that the mantissa of the number with the smaller exponent goes to a SHIFTER, which shifts it n = |E'_A - E'_B| bit positions to the right. A combinational control network uses the signs S_A and S_B and the Add/Subtract command to set the mantissa adder/subtractor and the sign of the result. A leading-zeros detector drives the normalize-and-round stage, and a second 8-bit subtractor adjusts the exponent to produce E'_X; the 32-bit result is S_R, E'_R, M_R.]
Figure 6.26. Floating-point addition-subtraction
unit.
71
Requirements for Homework6
  • 5.6. (a) 3 credits
  • 5.6. (b)
  • Draw a figure to show how program words are
    mapped onto the cache blocks (4)
  • Sequence of reads from the main memory blocks
    into cache blocks (4)
  • Total time for reading the blocks from the main
    memory into the cache (4)
  • Executing the program out of the cache:
  • Outer loop excluding inner loop (4)
  • Inner loop (4)
  • End section of program (4)
  • Total execution time (3)
  • Due: in class on Oct. 18

72
Hints for Homework6
  • Assume that consecutive addresses refer to
    consecutive words. The cycle time is for one word.
  • Assume this problem does not use load-through,
    which means that when a read miss occurs, the
    block of words that contains the requested word
    is copied from the main memory into the cache;
    after the entire block is loaded into the cache,
    the particular word requested is forwarded to the
    processor.
  • Total time for reading the blocks from the main
    memory into the cache = the number of reads x 128 x 10
  • Executing the program out of the cache:
  • MEM word size for instructions x loopNum x 1
  • Outer loop excluding inner loop: (outer loop word
    size - inner loop word size) x 10 x 1
  • Inner loop: inner loop word size x 20 x 10 x 1
  • MEM word size from MEM 23 to 1200 is 1200 - 22
  • MEM word size from MEM 1200 to 1500 (end) is
    1500 - 1200

73
Homework 7
  1. Addition and Subtraction of Signed Numbers 5-9,
    Oct. 20 (Barret, Felix, Washington)
  2. Carry-lookahead Addition 11-18, Oct. 20 (Kyle
    White, Jose Jo)
  3. Unsigned Multiplication 20-25, Oct. 20 (Tannet
    Garrett, Garth Gergerich, Gabriel Graderson)
  4. Signed Multiplication 26-28 (Shen)
  5. Booth Alg. 29-34, Oct. 25 (Ashraf Hajiyer)
  6. Fast Multiplication
  7. Bit-Pair Recoding of Multipliers 36-38, Oct.
    25(Alex, Suzanne, Scott)
  8. Carry-Save Addition of Summands 39-47, Oct. 25
    (Jason, Jordan, Chris)
  9. Integer Division
  10. Restoring Division 49-52, Oct. 27 (Kyle,
    Brandan, Alex Shipman)
  11. Nonrestoring Division 53-55, Oct. 27 (Zach, Eric,
    Chase)

Each presentation is limited to 15 minutes
including 2 minutes for questions
74
Exercise for Oct.23
  • Read Booth's algorithm and Bit-Pair Recoding in
    the textbook (6.4.1, 6.5.1)
  • Calculate the 2's-complement multiplication
    (+4) x (-7) using Booth's algorithm and Bit-Pair
    Recoding. (Booth's algorithm and Bit-Pair
    Recoding will be introduced on Oct. 25.)
  • You don't need to hand in this exercise