Chapter Three
Transcript and Presenter's Notes
1
Chapter Three
2
Numbers
  • Bits are just bits (no inherent meaning):
    conventions define the relationship between bits and numbers
  • Binary numbers (base 2):
    0000 0001 0010 0011 0100 0101 0110 0111 1000 1001...
    decimal: 0...2^n - 1
  • Of course it gets more complicated:
    numbers are finite (overflow)
    fractions and real numbers
    negative numbers (e.g., no MIPS subi instruction, but addi can add a negative number)
  • How do we represent negative numbers? i.e., which bit patterns will represent which numbers?

3
Possible Representations
  • Three choices for 3-bit numbers:

      Sign Magnitude   One's Complement   Two's Complement
      000 = +0         000 = +0           000 =  0
      001 = +1         001 = +1           001 = +1
      010 = +2         010 = +2           010 = +2
      011 = +3         011 = +3           011 = +3
      100 = -0         100 = -3           100 = -4
      101 = -1         101 = -2           101 = -3
      110 = -2         110 = -1           110 = -2
      111 = -3         111 = -0           111 = -1
  • Issues balance, number of zeros, ease of
    operations
  • Which one is best? Why?

4
MIPS
  • 32 bit signed numbers:
      0000 0000 0000 0000 0000 0000 0000 0000two =              0ten
      0000 0000 0000 0000 0000 0000 0000 0001two =             +1ten
      0000 0000 0000 0000 0000 0000 0000 0010two =             +2ten
      ...
      0111 1111 1111 1111 1111 1111 1111 1110two = +2,147,483,646ten
      0111 1111 1111 1111 1111 1111 1111 1111two = +2,147,483,647ten
      1000 0000 0000 0000 0000 0000 0000 0000two = -2,147,483,648ten
      1000 0000 0000 0000 0000 0000 0000 0001two = -2,147,483,647ten
      1000 0000 0000 0000 0000 0000 0000 0010two = -2,147,483,646ten
      ...
      1111 1111 1111 1111 1111 1111 1111 1101two =             -3ten
      1111 1111 1111 1111 1111 1111 1111 1110two =             -2ten
      1111 1111 1111 1111 1111 1111 1111 1111two =             -1ten

5
Two's Complement Operations
  • Negating a two's complement number: invert all bits and add 1
  • remember: "negate" and "invert" are quite different!
  • Converting n-bit numbers into numbers with more than n bits:
  • MIPS 16-bit immediate gets converted to 32 bits for arithmetic
  • copy the most significant bit (the sign bit) into the other bits:
      0010 -> 0000 0010
      1010 -> 1111 1010
  • "sign extension" (lbu vs. lb); both operations are sketched in code below

6
Addition Subtraction
  • Just like in grade school (carry/borrow 1s):

        0111       0111       0110
      + 0110     - 0110     - 0101

  • Two's complement operations are easy
  • subtraction using addition of negative numbers:

        0111
      + 1010

  • Overflow (result too large for finite computer word):
  • e.g., adding two n-bit numbers does not yield an n-bit number:

        0111
      + 0001
      = 1000

    note that the term "overflow" is somewhat misleading: it does not mean a carry "overflowed"

7
Detecting Overflow
  • No overflow when adding a positive and a negative
    number
  • No overflow when signs are the same for
    subtraction
  • Overflow occurs when the value affects the sign
  • overflow when adding two positives yields a
    negative
  • or, adding two negatives gives a positive
  • or, subtract a negative from a positive and get a negative
  • or, subtract a positive from a negative and get a positive
    (these rules are sketched in code below)
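A sketch of these rules in C, assuming 32-bit operands and using unsigned (wraparound) arithmetic to avoid undefined behavior on overflow:

    #include <stdbool.h>
    #include <stdint.h>

    /* Signed overflow detection using only sign comparisons, per the rules above. */
    bool add_overflows(int32_t a, int32_t b) {
        int32_t sum = (int32_t)((uint32_t)a + (uint32_t)b);   /* wraparound add */
        /* adding operands of the same sign overflows iff the result's sign differs */
        return (a >= 0) == (b >= 0) && (sum >= 0) != (a >= 0);
    }

    bool sub_overflows(int32_t a, int32_t b) {
        int32_t diff = (int32_t)((uint32_t)a - (uint32_t)b);  /* wraparound subtract */
        /* subtracting operands of different signs overflows iff the result's sign differs from a's */
        return (a >= 0) != (b >= 0) && (diff >= 0) != (a >= 0);
    }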

8
Effects of Overflow
  • An exception (interrupt) occurs
  • Control jumps to predefined address for exception
  • Interrupted address is saved for possible
    resumption
  • Details based on software system / language
  • example: flight control vs. homework assignment
  • Don't always want to detect overflow: new MIPS instructions addu, addiu, subu
    note: addiu still sign-extends!
    note: sltu, sltiu for unsigned comparisons

9
Multiplication
  • More complicated than addition
  • accomplished via shifting and addition
  • More time and more area
  • Let's look at 3 versions based on a grade-school algorithm (sketched in code below):

        0010  (multiplicand)
      x 1011  (multiplier)

  • Negative numbers: convert and multiply
  • there are better techniques, we won't look at them
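A C sketch of the grade-school algorithm (unsigned only; negative operands would be converted first, as the slide says):

    #include <stdint.h>

    /* Examine multiplier bits right to left; for each 1 bit, add the
       correspondingly shifted copy of the multiplicand to the product. */
    uint32_t shift_add_multiply(uint32_t multiplicand, uint32_t multiplier) {
        uint64_t product = 0;
        uint64_t mcand = multiplicand;     /* shifted left one place per step */
        for (int i = 0; i < 32; i++) {
            if (multiplier & 1)            /* current multiplier bit set? */
                product += mcand;
            mcand <<= 1;                   /* shift multiplicand left */
            multiplier >>= 1;              /* shift multiplier right */
        }
        return (uint32_t)product;          /* e.g., 0010 x 1011 = 0001 0110 */
    }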

10
Multiplication Implementation
[Figure: multiplication hardware, showing the datapath and control]
11
Final Version
  • Multiplier starts in right half of product

[Figure: final version of the multiplication hardware; the callout asks "What goes here?"]
12
Floating Point (a brief look)
  • We need a way to represent:
  • numbers with fractions, e.g., 3.1416
  • very small numbers, e.g., .000000001
  • very large numbers, e.g., 3.15576 x 10^9
  • Representation:
  • sign, exponent, significand: (-1)^sign x significand x 2^exponent
  • more bits for significand gives more accuracy
  • more bits for exponent increases range
  • IEEE 754 floating point standard
  • single precision: 8-bit exponent, 23-bit significand
  • double precision: 11-bit exponent, 52-bit significand

13
IEEE 754 floating-point standard
  • Leading 1 bit of significand is implicit
  • Exponent is biased to make sorting easier
  • all 0s is smallest exponent, all 1s is largest
  • bias of 127 for single precision and 1023 for double precision
  • summary: (-1)^sign x (1 + significand) x 2^(exponent - bias)
    (decoded in code below)
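A C sketch that pulls apart the single-precision fields and rebuilds the value with the summary formula; it ignores the special cases (zero, denormals, infinity, NaN):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = -6.25f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);            /* reinterpret the 32 bits */

        uint32_t sign     = bits >> 31;            /* 1 bit */
        int      exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127 */
        uint32_t fraction = bits & 0x7FFFFF;       /* 23-bit significand field */

        double significand = 1.0 + fraction / 8388608.0;   /* 1 + fraction / 2^23 */
        double value = (sign ? -1.0 : 1.0) * ldexp(significand, exponent - 127);
        printf("%g = (-1)^%u x %g x 2^%d\n", value, sign, significand, exponent - 127);
        return 0;
    }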

14
Floating point addition

15
Floating Point Complexities
  • Operations are somewhat more complicated (see
    text)
  • In addition to overflow we can have underflow
  • Accuracy can be a big problem
  • IEEE 754 keeps two extra bits, guard and round
  • four rounding modes
  • positive divided by zero yields "infinity"
  • zero divided by zero yields "not a number"
  • other complexities
  • Implementing the standard can be tricky
  • Not using the standard can be even worse
  • see text for description of 80x86 and Pentium bug!

16
Chapter Three Summary
  • Computer arithmetic is constrained by limited
    precision
  • Bit patterns have no inherent meaning but
    standards do exist
  • two's complement
  • IEEE 754 floating point
  • Computer instructions determine meaning of the
    bit patterns
  • Performance and accuracy are important so there
    are many complexities in real machines
  • Algorithm choice is important and may lead to
    hardware optimizations for both space and time
    (e.g., multiplication)
  • You may want to look back (Section 3.10 is great
    reading!)

17
Let's Build a Processor
  • Almost ready to move into chapter 5 and start building a processor
  • First, let's review Boolean Logic and build the ALU we'll need (material from Appendix B)

18
Review: Boolean Algebra & Gates
  • Problem: Consider a logic function with three inputs: A, B, and C.
    Output D is true if at least one input is true
    Output E is true if exactly two inputs are true
    Output F is true only if all three inputs are true
  • Show the truth table for these three functions.
  • Show the Boolean equations for these three functions.
  • Show an implementation consisting of inverters, AND, and OR gates.
    (a quick enumeration sketch for checking answers follows)
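One way to check your truth table: a C sketch that enumerates all eight input combinations per the definitions above (it gives only the table, not the equations or gates):

    #include <stdio.h>

    int main(void) {
        printf(" A B C | D E F\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++) {
                    int count = a + b + c;      /* number of true inputs */
                    int d = count >= 1;         /* at least one input true */
                    int e = count == 2;         /* exactly two inputs true */
                    int f = count == 3;         /* all three inputs true */
                    printf(" %d %d %d | %d %d %d\n", a, b, c, d, e, f);
                }
        return 0;
    }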

19
An ALU (arithmetic logic unit)
  • Let's build an ALU to support the andi and ori
    instructions
  • we'll just build a 1-bit ALU, and use 32 of them
  • Possible Implementation (sum-of-products):

[Figure: sum-of-products gate implementation with inputs a and b]
20
Review The Multiplexor
  • Selects one of the inputs to be the output,
    based on a control input
  • Let's build our ALU using a MUX

note: we call this a 2-input mux even though it has 3 inputs!

[Figure: 2-input multiplexor selecting between data inputs 0 and 1]
21
Different Implementations
  • Not easy to decide the "best" way to build something
  • Don't want too many inputs to a single gate
  • Don't want to have to go through too many gates
  • for our purposes, ease of comprehension is important
  • Let's look at a 1-bit ALU for addition:
      cout = a·b + a·cin + b·cin
      sum  = a xor b xor cin
  • How could we build a 1-bit ALU for add, and, and or? (see the sketch below)
  • How could we build a 32-bit ALU?
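A C sketch of both questions: a 1-bit ALU built from the adder equations above plus AND/OR, and a 32-bit ALU made by chaining 32 of them through the carries:

    /* 1-bit ALU: op 0 = and, 1 = or, 2 = add. All values are single bits. */
    int alu1(int op, int a, int b, int cin, int *cout) {
        *cout = (a & b) | (a & cin) | (b & cin);   /* cout = a·b + a·cin + b·cin */
        switch (op) {
            case 0:  return a & b;                 /* and */
            case 1:  return a | b;                 /* or  */
            default: return a ^ b ^ cin;           /* sum = a xor b xor cin */
        }
    }

    /* 32-bit ALU: 32 one-bit ALUs, carry rippling from bit to bit. */
    unsigned alu32(int op, unsigned a, unsigned b) {
        unsigned result = 0;
        int carry = 0;
        for (int i = 0; i < 32; i++) {
            int cout;
            int bit = alu1(op, (a >> i) & 1, (b >> i) & 1, carry, &cout);
            result |= (unsigned)bit << i;
            carry = cout;
        }
        return result;
    }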
22
Lecture 5, Oct. 9, 2007
  • Class Notes
  • Homework 1: due today, in class
  • Homework 2: posted on website today at 3PM
  • Project/Reading: 4 papers to read, due next Tuesday
  • Preferably look at the readings for Thursday so we can discuss
  • Project: Interconnection Network / Router Design
  • Design/improve upon a router designed for connecting chip multiprocessors

23
Lecture 5
24
Building a 32-bit ALU
25
What about subtraction (a - b)?
  • Two's complement approach: just negate b and add.
  • How do we negate?
  • A very clever solution (sketched in code below)
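A self-contained C sketch of the trick: invert each bit of b (the "Binvert" control line) and set the low-order carry-in to 1, so the same adder computes a + ~b + 1 = a - b:

    #include <stdint.h>

    /* Ripple-carry add with an explicit initial carry-in. */
    uint32_t ripple_add(uint32_t a, uint32_t b, int cin) {
        uint32_t result = 0;
        for (int i = 0; i < 32; i++) {
            int abit = (a >> i) & 1, bbit = (b >> i) & 1;
            result |= (uint32_t)(abit ^ bbit ^ cin) << i;
            cin = (abit & bbit) | (abit & cin) | (bbit & cin);
        }
        return result;
    }

    uint32_t subtract(uint32_t a, uint32_t b) {
        return ripple_add(a, ~b, 1);   /* Binvert = 1, CarryIn = 1 */
    }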

26
Adding a NOR function
  • Can also choose to invert a. How do we get (a NOR b)?

27
Tailoring the ALU to the MIPS
  • Need to support the set-on-less-than instruction (slt)
  • remember: slt is an arithmetic instruction
  • produces a 1 if rs < rt and 0 otherwise
  • use subtraction: (a - b) < 0 implies a < b
  • Need to support test for equality (beq $t5, $t6, $t7)
  • use subtraction: (a - b) = 0 implies a = b

28
Supporting slt
  • Can we figure out the idea?

[Figure: 1-bit ALU used for all other bits, and the variant ALU used for the most significant bit]
29
Supporting slt
30
Test for equality
  • Notice control lines:
      0000 = and
      0001 = or
      0010 = add
      0110 = subtract
      0111 = slt
      1100 = NOR
  • Note: zero is a 1 when the result is zero!
    (the full ALU is sketched in code below)
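A C sketch of the finished ALU as a lookup on those 4-bit control values, with the Zero output used by beq:

    #include <stdint.h>

    uint32_t alu(unsigned ctrl, uint32_t a, uint32_t b, int *zero) {
        uint32_t result;
        switch (ctrl) {
            case 0x0: result = a & b; break;                     /* 0000 and */
            case 0x1: result = a | b; break;                     /* 0001 or */
            case 0x2: result = a + b; break;                     /* 0010 add */
            case 0x6: result = a - b; break;                     /* 0110 subtract */
            case 0x7: result = (int32_t)a < (int32_t)b; break;   /* 0111 slt */
            case 0xC: result = ~(a | b); break;                  /* 1100 NOR */
            default:  result = 0; break;                         /* unused encodings */
        }
        *zero = (result == 0);   /* zero is 1 when the result is zero */
        return result;
    }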

31
Conclusion
  • We can build an ALU to support the MIPS
    instruction set
  • key idea: use a multiplexor to select the output we want
  • we can efficiently perform subtraction using two's complement
  • we can replicate a 1-bit ALU to produce a 32-bit ALU
  • Important points about hardware:
  • all of the gates are always working
  • the speed of a gate is affected by the number of inputs to the gate
  • the speed of a circuit is affected by the number of gates in series (on the "critical path" or the "deepest level of logic")
  • Our primary focus is comprehension; however,
  • Clever changes to organization can improve performance (similar to using better algorithms in software)
  • We saw this in multiplication; let's look at addition now

32
ALU Summary
  • We can build an ALU to support MIPS addition
  • Our focus is on comprehension, not performance
  • Real processors use more sophisticated techniques
    for arithmetic
  • Where performance is not critical, hardware
    description languages allow designers to
    completely automate the creation of hardware!

33
  • DONE

34
Chapter Five
35
The Processor: Datapath & Control
  • We're ready to look at an implementation of the MIPS
  • Simplified to contain only:
  • memory-reference instructions: lw, sw
  • arithmetic-logical instructions: add, sub, and, or, slt
  • control flow instructions: beq, j
  • Generic Implementation:
  • use the program counter (PC) to supply the instruction address
  • get the instruction from memory
  • read registers
  • use the instruction to decide exactly what to do
  • All instructions use the ALU after reading the registers.
    Why? memory-reference? arithmetic? control flow?

36
More Implementation Details
  • Abstract / Simplified View: two types of functional units
  • elements that operate on data values (combinational)
  • elements that contain state (sequential)

37
Five Execution Steps
  • Instruction Fetch
  • Instruction Decode and Register Fetch
  • Execution, Memory Address Computation, or Branch
    Completion
  • Memory Access or R-type instruction completion
  • Write-back step
    INSTRUCTIONS TAKE FROM 3 TO 5 CYCLES!

38
State Elements
  • Unclocked vs. Clocked
  • Clocks used in synchronous logic
  • when should an element that contains state be
    updated?

[Figure: clock waveform illustrating the cycle time]
39
An unclocked state element
  • The set-reset latch
  • output depends on present inputs and also on past
    inputs

40
Latches and Flip-flops
  • Output is equal to the stored value inside the
    element (don't need to ask for permission to
    look at the value)
  • Change of state (value) is based on the clock
  • Latches: whenever the inputs change, and the clock is asserted
  • Flip-flops: state changes only on a clock edge (edge-triggered methodology)

"Logically true" could mean electrically low.
A clocking methodology defines when signals can be read and written; we wouldn't want to read a signal at the same time it was being written.
41
D-latch
  • Two inputs:
  • the data value to be stored (D)
  • the clock signal (C), indicating when to read & store D
  • Two outputs:
  • the value of the internal state (Q) and its complement

42
D flip-flop
  • Output changes only on the clock edge

43
Our Implementation
  • An edge-triggered methodology
  • Typical execution
  • read contents of some state elements,
  • send values through some combinational logic
  • write results to one or more state elements

44
Register File
  • Built using D flip-flops

Do you understand? What is the Mux above?
45
Abstraction
  • Make sure you understand the abstractions!
  • Sometimes it is easy to think you do, when you don't

46
Register File
  • Note: we still use the real clock to determine when to write

47
Simple Implementation
  • Include the functional units we need for each
    instruction

Why do we need this stuff?
48
Building the Datapath
  • Use multiplexors to stitch them together

49
DONE
50
Control
  • Selecting the operations to perform (ALU,
    read/write, etc.)
  • Controlling the flow of data (multiplexor inputs)
  • Information comes from the 32 bits of the
    instruction
  • Example: add $8, $17, $18
    Instruction Format:

      000000   10001   10010   01000   00000   100000
        op       rs      rt      rd    shamt    funct

  • ALU's operation based on instruction type and function code

51
Control
  • e.g., what should the ALU do with this instruction?
  • Example: lw $1, 100($2)

       35       2       1           100
       op      rs      rt      16 bit offset

  • ALU control input:
      0000  AND
      0001  OR
      0010  add
      0110  subtract
      0111  set-on-less-than
      1100  NOR
  • Why is the code for subtract 0110 and not 0011?

52
Control
  • Must describe hardware to compute 4-bit ALU control input
  • given instruction type:
      00 = lw, sw
      01 = beq
      10 = arithmetic
  • function code for arithmetic
  • Describe it using a truth table (can turn into gates):
    (a code sketch of the same truth table follows)
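The same truth table as a C function, using the standard MIPS funct encodings from the textbook:

    /* ALUOp: 00 = lw/sw, 01 = beq, 10 = arithmetic (R-type). Returns the
       4-bit ALU control value used on the earlier slides. */
    unsigned alu_control(unsigned aluop, unsigned funct) {
        if (aluop == 0) return 0x2;    /* lw/sw: add to form the address */
        if (aluop == 1) return 0x6;    /* beq: subtract to compare */
        switch (funct) {               /* R-type: decode the function code */
            case 0x20: return 0x2;     /* add: 100000 */
            case 0x22: return 0x6;     /* sub: 100010 */
            case 0x24: return 0x0;     /* and: 100100 */
            case 0x25: return 0x1;     /* or:  100101 */
            case 0x2A: return 0x7;     /* slt: 101010 */
            default:   return 0xF;     /* undefined in this sketch */
        }
    }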

53

54
Control
  • Simple combinational logic (truth tables)

55
Our Simple Control Structure
  • All of the logic is combinational
  • We wait for everything to settle down, and for the right thing to be done
  • ALU might not produce right answer right away
  • we use write signals along with clock to
    determine when to write
  • Cycle time determined by length of the longest
    path

We are ignoring some details like setup and hold
times
56
Single Cycle Implementation
  • Calculate cycle time assuming negligible delays except:
  • memory (200ps), ALU and adders (100ps), register file access (50ps)
    (a worked answer follows)
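A worked sketch of the answer, assuming the critical path is a load: instruction memory (200ps) + register read (50ps) + ALU (100ps) + data memory (200ps) + register write (50ps) = 600ps, so the single-cycle clock can be no shorter than 600ps.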

57
Where we are headed
  • Single Cycle Problems
  • what if we had a more complicated instruction
    like floating point?
  • wasteful of area
  • One Solution
  • use a smaller cycle time
  • have different instructions take different
    numbers of cycles
  • a multicycle datapath

58
Multicycle Approach
  • We will be reusing functional units
  • ALU used to compute address and to increment PC
  • Memory used for instruction and data
  • Our control signals will not be determined directly by the instruction
  • e.g., what should the ALU do for a "subtract" instruction?
  • We'll use a finite state machine for control

59
Multicycle Approach
  • Break up the instructions into steps, each step
    takes a cycle
  • balance the amount of work to be done
  • restrict each cycle to use only one major
    functional unit
  • At the end of a cycle
  • store values for use in later cycles (easiest
    thing to do)
  • introduce additional internal registers

60
Instructions from ISA perspective
  • Consider each instruction from the perspective of the ISA.
  • Example:
  • The add instruction changes a register.
  • Register specified by bits 15:11 of the instruction.
  • Instruction specified by the PC.
  • New value is the sum ("op") of two registers.
  • Registers specified by bits 25:21 and 20:16 of the instruction:
      Reg[Memory[PC][15:11]] <= Reg[Memory[PC][25:21]] op Reg[Memory[PC][20:16]]
  • In order to accomplish this we must break up the instruction. (kind of like introducing variables when programming)

61
Breaking down an instruction
  • ISA definition of arithmetic:
      Reg[Memory[PC][15:11]] <= Reg[Memory[PC][25:21]] op Reg[Memory[PC][20:16]]
  • Could break down to:
  • IR <= Memory[PC]
  • A <= Reg[IR[25:21]]
  • B <= Reg[IR[20:16]]
  • ALUOut <= A op B
  • Reg[IR[15:11]] <= ALUOut
  • We forgot an important part of the definition of arithmetic!
  • PC <= PC + 4
    (this breakdown is sketched as code below)
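The same breakdown as a C sketch, with hypothetical mem[] and reg[] arrays standing in for memory and the register file; in hardware, each register transfer (or compatible group of them) becomes one clock cycle:

    #include <stdint.h>

    uint32_t mem[1024], reg[32], PC, IR, A, B, ALUOut;

    void add_multicycle(void) {
        IR = mem[PC / 4];                 /* IR <= Memory[PC] */
        PC = PC + 4;                      /* PC <= PC + 4 */
        A = reg[(IR >> 21) & 0x1F];       /* A <= Reg[IR[25:21]] */
        B = reg[(IR >> 16) & 0x1F];       /* B <= Reg[IR[20:16]] */
        ALUOut = A + B;                   /* ALUOut <= A op B (op = add here) */
        reg[(IR >> 11) & 0x1F] = ALUOut;  /* Reg[IR[15:11]] <= ALUOut */
    }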

62
Idea behind multicycle approach
  • We define each instruction from the ISA perspective (do this!)
  • Break it down into steps following our rule that data flows through at most one major functional unit (e.g., balance work across steps)
  • Introduce new registers as needed (e.g., A, B, ALUOut, MDR, etc.)
  • Finally, try and pack as much work into each step (avoid unnecessary cycles) while also trying to share steps where possible (minimizes control, helps to simplify the solution)
  • Result: our book's multicycle implementation!

63
Five Execution Steps
  • Instruction Fetch
  • Instruction Decode and Register Fetch
  • Execution, Memory Address Computation, or Branch
    Completion
  • Memory Access or R-type instruction completion
  • Write-back step
    INSTRUCTIONS TAKE FROM 3 TO 5 CYCLES!

64
Done for Real
65
Step 1 Instruction Fetch
  • Use PC to get instruction and put it in the Instruction Register.
  • Increment the PC by 4 and put the result back in the PC.
  • Can be described succinctly using RTL ("Register-Transfer Language"):
      IR <= Memory[PC]
      PC <= PC + 4
    Can we figure out the values of the control signals?
    What is the advantage of updating the PC now?

66
Step 2 Instruction Decode and Register Fetch
  • Read registers rs and rt in case we need them
  • Compute the branch address in case the instruction is a branch
  • RTL:
      A <= Reg[IR[25:21]]
      B <= Reg[IR[20:16]]
      ALUOut <= PC + (sign-extend(IR[15:0]) << 2)
  • We aren't setting any control lines based on the instruction type (we are busy "decoding" it in our control logic)

67
Step 3 (instruction dependent)
  • ALU is performing one of three functions, based on instruction type
  • Memory Reference:
      ALUOut <= A + sign-extend(IR[15:0])
  • R-type:
      ALUOut <= A op B
  • Branch:
      if (A == B) PC <= ALUOut

68
Step 4 (R-type or memory-access)
  • Loads and stores access memory:
      MDR <= Memory[ALUOut]
      or
      Memory[ALUOut] <= B
  • R-type instructions finish:
      Reg[IR[15:11]] <= ALUOut
    The write actually takes place at the end of the cycle on the edge

69
Write-back step
  • Reg[IR[20:16]] <= MDR
  • Which instruction needs this?

70
Summary
71
Simple Questions
  • How many cycles will it take to execute this code?

        lw   $t2, 0($t3)
        lw   $t3, 4($t3)
        beq  $t2, $t3, Label    # assume branch not taken
        add  $t5, $t2, $t3
        sw   $t5, 8($t3)
      Label: ...

  • What is going on during the 8th cycle of execution?
  • In what cycle does the actual addition of $t2 and $t3 take place?

72
[Figure-only slide; no text]
73
Review finite state machines
  • Finite state machines:
  • a set of states and
  • next state function (determined by current state and the input)
  • output function (determined by current state and possibly input)
  • We'll use a Moore machine (output based only on current state)

74
Review finite state machines
  • Example (B.37): A friend would like you to build an "electronic eye" for use as a fake security device. The device consists of three lights lined up in a row, controlled by the outputs Left, Middle, and Right, which, if asserted, indicate that a light should be on. Only one light is on at a time, and the light moves from left to right and then from right to left, thus scaring away thieves who believe that the device is monitoring their activity. Draw the graphical representation for the finite state machine used to specify the electronic eye. Note that the rate of the eye's movement will be controlled by the clock speed (which should not be too great) and that there are essentially no inputs. (one possible machine is sketched in code below)
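A C sketch of one possible answer: a four-state Moore machine. The middle light needs two states so the machine remembers which direction it is sweeping; outputs depend only on the current state:

    #include <stdio.h>

    typedef enum { LEFT, MID_RIGHTWARD, RIGHT, MID_LEFTWARD } State;

    /* Moore outputs: a function of the current state alone. */
    void outputs(State s, int *left, int *middle, int *right) {
        *left   = (s == LEFT);
        *middle = (s == MID_RIGHTWARD || s == MID_LEFTWARD);
        *right  = (s == RIGHT);
    }

    /* Next-state function: there are no inputs, so it depends only on the state. */
    State next(State s) {
        switch (s) {
            case LEFT:          return MID_RIGHTWARD;
            case MID_RIGHTWARD: return RIGHT;
            case RIGHT:         return MID_LEFTWARD;
            default:            return LEFT;
        }
    }

    int main(void) {
        State s = LEFT;
        for (int tick = 0; tick < 8; tick++) {   /* simulate 8 clock ticks */
            int l, m, r;
            outputs(s, &l, &m, &r);
            printf("Left=%d Middle=%d Right=%d\n", l, m, r);
            s = next(s);
        }
        return 0;
    }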

75
Implementing the Control
  • Value of control signals is dependent upon:
  • what instruction is being executed
  • which step is being performed
  • Use the information we've accumulated to specify a finite state machine
  • specify the finite state machine graphically, or
  • use microprogramming
  • Implementation can be derived from specification
76
Graphical Specification of FSM
  • Note:
  • "don't care" if not mentioned
  • asserted if name only
  • otherwise exact value
  • How many state bits will we need?

77
Finite State Machine for Control
  • Implementation

78
PLA Implementation
  • If I picked a horizontal or vertical line, could you explain it?

79
ROM Implementation
  • ROM "Read Only Memory"
  • values of memory locations are fixed ahead of
    time
  • A ROM can be used to implement a truth table
  • if the address is m-bits, we can address 2m
    entries in the ROM.
  • our outputs are the bits of data that the address
    points to.m is the "height", and n is
    the "width"

0 0 0 0 0 1 1 0 0 1 1 1 0 0 0 1 0 1 1 0 0 0 1 1 1
0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 1 1 0 0 1 1
0 1 1 1 0 1 1 1
80
ROM Implementation
  • How many inputs are there?
    6 bits for opcode, 4 bits for state = 10 address lines
    (i.e., 2^10 = 1024 different addresses)
  • How many outputs are there?
    16 datapath-control outputs, 4 state bits = 20 outputs
  • ROM is 2^10 x 20 = 20K bits (and a rather unusual size)
  • Rather wasteful, since for lots of the entries the outputs are the same
    i.e., opcode is often ignored

81
ROM vs PLA
  • Break up the table into two parts:
      4 state bits tell you the 16 outputs: 2^4 x 16 bits of ROM
      10 bits tell you the 4 next-state bits: 2^10 x 4 bits of ROM
      Total: 4.3K bits of ROM
  • PLA is much smaller:
      can share product terms
      only need entries that produce an active output
      can take into account don't cares
  • Size is (#inputs x #product-terms) + (#outputs x #product-terms)
      For this example: (10 x 17) + (20 x 17) = 510 PLA cells
  • PLA cells are usually about the size of a ROM cell (slightly bigger)

82
Another Implementation Style
  • Complex instructions: the "next state" is often current state + 1

83
Details
84
Microprogramming
  • What are the "microinstructions"?

85
Microprogramming
  • A specification methodology
  • appropriate if hundreds of opcodes, modes,
    cycles, etc.
  • signals specified symbolically using
    microinstructions
  • Will two implementations of the same architecture
    have the same microcode?
  • What would a microassembler do?

86
Microinstruction format
87
Maximally vs. Minimally Encoded
  • No encoding:
  • 1 bit for each datapath operation
  • faster, requires more memory (logic)
  • used for the VAX 780: an astonishing 400K of memory!
  • Lots of encoding:
  • send the microinstructions through logic to get control signals
  • uses less memory, slower
  • Historical context of CISC:
  • too much logic to put on a single chip with everything else
  • use a ROM (or even RAM) to hold the microcode
  • it's easy to add new instructions

88
Microcode Trade-offs
  • Distinction between specification and
    implementation is sometimes blurred
  • Specification Advantages
  • Easy to design and write
  • Design architecture and microcode in parallel
  • Implementation (off-chip ROM) Advantages
  • Easy to change since values are in memory
  • Can emulate other architectures
  • Can make use of internal registers
  • Implementation Disadvantages: SLOWER, now that
  • control is implemented on the same chip as the processor
  • ROM is no longer faster than RAM
  • there is no need to go back and make changes

89
Historical Perspective
  • In the 60s and 70s microprogramming was very
    important for implementing machines
  • This led to more sophisticated ISAs and the VAX
  • In the 80s RISC processors based on pipelining
    became popular
  • Pipelining the microinstructions is also
    possible!
  • Implementations of IA-32 architecture processors
    since 486 use
  • hardwired control for simpler instructions
    (few cycles, FSM control implemented using PLA
    or random logic)
  • microcoded control for more complex
    instructions (large numbers of cycles, central
    control store)
  • The IA-64 architecture uses a RISC-style ISA and
    can be implemented without a large central
    control store

90
Pentium 4
  • Pipelining is important (last IA-32 without it
    was 80386 in 1985)
  • Pipelining is used for the simple instructions favored by compilers:
    "Simply put, a high performance implementation needs to ensure that the simple instructions execute quickly, and that the burden of the complexities of the instruction set penalize the complex, less frequently used, instructions."

[Figure: Pentium 4 die photo, with regions labeled by the chapters that cover them (Chapter 6, Chapter 7)]
91
Pentium 4
  • Somewhere in all that control we must handle
    complex instructions
  • Processor executes simple microinstructions, 70
    bits wide (hardwired)
  • 120 control lines for integer datapath (400 for
    floating point)
  • If an instruction requires more than 4
    microinstructions to implement, control from
    microcode ROM (8000 microinstructions)
  • It's complicated!

92
Chapter 5 Summary
  • If we understand the instructions, we can build a simple processor!
  • If instructions take different amounts of time,
    multi-cycle is better
  • Datapath implemented using
  • Combinational logic for arithmetic
  • State holding elements to remember bits
  • Control implemented using
  • Combinational logic for single-cycle
    implementation
  • Finite state machine for multi-cycle
    implementation

93
Chapter Six
94
Pipelining
  • Improve performance by increasing instruction
    throughput
  • Ideal speedup is number of stages in
    the pipeline. Do we achieve this?

Note: timing assumptions changed for this example
95
Pipelining
  • What makes it easy?
  • all instructions are the same length
  • just a few instruction formats
  • memory operands appear only in loads and stores
  • What makes it hard?
  • structural hazards: suppose we had only one memory
  • control hazards: need to worry about branch instructions
  • data hazards: an instruction depends on a previous instruction
  • We'll build a simple pipeline and look at these issues
  • We'll talk about modern processors and what really makes it hard:
  • exception handling
  • trying to improve performance with out-of-order execution, etc.

96
Basic Idea
  • What do we need to add to actually split the
    datapath into stages?

97
Pipelined Datapath
  • Can you find a problem even if
    there are no dependencies? What instructions
    can we execute to manifest the problem?

98
Corrected Datapath
99
Graphically Representing Pipelines
  • Can help with answering questions like
  • how many cycles does it take to execute this
    code?
  • what is the ALU doing during cycle 4?
  • use this representation to help understand
    datapaths

100
Pipeline Control
101
Pipeline control
  • We have 5 stages. What needs to be controlled in
    each stage?
  • Instruction Fetch and PC Increment
  • Instruction Decode / Register Fetch
  • Execution
  • Memory Stage
  • Write Back
  • How would control be handled in an automobile
    plant?
  • a fancy control center telling everyone what to
    do?
  • should we use a finite state machine?

102
Pipeline Control
  • Pass control signals along just like the data

103
Datapath with Control
104
Dependencies
  • Problem with starting next instruction before
    first is finished
  • dependencies that go backward in time are data
    hazards

105
Software Solution
  • Have compiler guarantee no hazards
  • Where do we insert the "nops"?

        sub $2,  $1,  $3
        and $12, $2,  $5
        or  $13, $6,  $2
        add $14, $2,  $2
        sw  $15, 100($2)

  • Problem: this really slows us down!

106
Forwarding
  • Use temporary results, don't wait for them to be written
  • register file forwarding to handle read/write to same register
  • ALU forwarding

107
Forwarding
  • The main idea (some details not shown)

108
Can't always forward
  • Load word can still cause a hazard
  • an instruction tries to read a register following
    a load instruction that writes to the same
    register.
  • Thus, we need a hazard detection unit to stall
    the load instruction

109
Stalling
  • We can stall the pipeline by keeping an
    instruction in the same stage

110
Hazard Detection Unit
  • Stall by letting an instruction that won't write anything go forward

111
Branch Hazards
  • When we decide to branch, other instructions are
    in the pipeline!
  • We are predicting branch not taken
  • need to add hardware for flushing instructions if
    we are wrong

112
Flushing Instructions

Note: we've also moved the branch decision to the ID stage
113
Branches
  • If the branch is taken, we have a penalty of one
    cycle
  • For our simple design, this is reasonable
  • With deeper pipelines, penalty increases and
    static branch prediction drastically hurts
    performance
  • Solution dynamic branch prediction

A 2-bit prediction scheme
114
Branch Prediction
  • Sophisticated Techniques:
  • A "branch target buffer" to help us look up the destination
  • Correlating predictors that base prediction on global behavior and recently executed branches (e.g., prediction for a specific branch instruction based on what happened in previous branches)
  • Tournament predictors that use different types of prediction strategies and keep track of which one is performing best
  • A "branch delay slot" which the compiler tries to fill with a useful instruction (make the one-cycle delay part of the ISA)
  • Branch prediction is especially important because it enables other more advanced pipelining techniques to be effective!
  • Modern processors predict correctly 95% of the time!
    (the 2-bit scheme from the previous slide is sketched in code below)
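A C sketch of the 2-bit saturating-counter scheme: a prediction must be wrong twice before it flips. Counter values 0-1 predict not taken, 2-3 predict taken:

    #include <stdbool.h>

    typedef struct { unsigned counter; } Predictor;   /* ranges over 0..3 */

    bool predict_taken(const Predictor *p) {
        return p->counter >= 2;
    }

    void train(Predictor *p, bool taken) {
        if (taken  && p->counter < 3) p->counter++;   /* saturate at 3 */
        if (!taken && p->counter > 0) p->counter--;   /* saturate at 0 */
    }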

115
Improving Performance
  • Try and avoid stalls! E.g., reorder these instructions:

        lw $t0, 0($t1)
        lw $t2, 4($t1)
        sw $t2, 0($t1)
        sw $t0, 4($t1)

  • Dynamic Pipeline Scheduling
  • Hardware chooses which instructions to execute next
  • Will execute instructions out of order (e.g., doesn't wait for a dependency to be resolved, but rather keeps going!)
  • Speculates on branches and keeps the pipeline full (may need to roll back if a prediction is incorrect)
  • Trying to exploit instruction-level parallelism

116
Advanced Pipelining
  • Increase the depth of the pipeline
  • Start more than one instruction each cycle (multiple issue)
  • Loop unrolling to expose more ILP (better scheduling)
  • "Superscalar" processors
  • DEC Alpha 21264: 9-stage pipeline, 6-instruction issue
  • All modern processors are superscalar and issue multiple instructions, usually with some limitations (e.g., different "pipes")
  • VLIW: very long instruction word, static multiple issue (relies more on compiler technology)
  • This class has given you the background you need to learn more!

117
Chapter 6 Summary
  • Pipelining does not improve latency, but does
    improve throughput

118
Chapter Seven
119
Memories Review
  • SRAM
  • value is stored on a pair of inverting gates
  • very fast but takes up more space than DRAM (4 to
    6 transistors)
  • DRAM
  • value is stored as a charge on a capacitor (must be refreshed)
  • very small but slower than SRAM (factor of 5 to
    10)

120
Exploiting Memory Hierarchy
  • Users want large and fast memories! (2004 figures)
    SRAM access times are .5 - 5 ns, at a cost of $4000 to $10,000 per GB
    DRAM access times are 50 - 70 ns, at a cost of $100 to $200 per GB
    Disk access times are 5 to 20 million ns, at a cost of $.50 to $2 per GB
  • Try and give it to them anyway
  • build a memory hierarchy
121
Locality
  • A principle that makes having a memory hierarchy a good idea
  • If an item is referenced:
    temporal locality: it will tend to be referenced again soon
    spatial locality: nearby items will tend to be referenced soon
  • Why does code have locality?
  • Our initial focus: two levels (upper, lower)
  • block: minimum unit of data
  • hit: data requested is in the upper level
  • miss: data requested is not in the upper level
122
Cache
  • Two issues
  • How do we know if a data item is in the cache?
  • If it is, how do we find it?
  • Our first example
  • block size is one word of data
  • "direct mapped"

For each item of data at the lower level, there
is exactly one location in the cache where it
might be. e.g., lots of items at the lower level
share locations in the upper level
123
Direct Mapped Cache
  • Mapping: address is modulo the number of blocks in the cache
    (see the index/tag sketch below)
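A C sketch of the direct-mapped lookup, assuming one-word (4-byte) blocks and a hypothetical power-of-two number of blocks: the index is the block address modulo the number of blocks, and the tag is the remaining upper bits:

    #include <stdint.h>

    #define NUM_BLOCKS 1024   /* hypothetical cache size */

    uint32_t cache_index(uint32_t addr) {
        uint32_t block_addr = addr / 4;       /* word (block) address */
        return block_addr % NUM_BLOCKS;       /* address modulo number of blocks */
    }

    uint32_t cache_tag(uint32_t addr) {
        return (addr / 4) / NUM_BLOCKS;       /* upper bits identify which block is cached */
    }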

124
Direct Mapped Cache
  • For MIPS
  • What kind of locality are we taking
    advantage of?

125
Direct Mapped Cache
  • Taking advantage of spatial locality

126
Hits vs. Misses
  • Read hits
  • this is what we want!
  • Read misses
  • stall the CPU, fetch block from memory, deliver
    to cache, restart
  • Write hits
  • can replace data in cache and memory
    (write-through)
  • write the data only into the cache (write-back
    the cache later)
  • Write misses
  • read the entire block into the cache, then write
    the word

127
Hardware Issues
  • Make reading multiple words easier by using banks
    of memory
  • It can get a lot more complicated...

128
Performance
  • Increasing the block size tends to decrease miss
    rate
  • Use split caches because there is more spatial
    locality in code

129
Performance
  • Simplified model:
      execution time = (execution cycles + stall cycles) x cycle time
      stall cycles = # of instructions x miss ratio x miss penalty
  • Two ways of improving performance:
  • decreasing the miss ratio
  • decreasing the miss penalty
  • What happens if we increase block size?

130
Decreasing miss ratio with associativity
  • Compared to direct mapped, give a series of
    references that
  • results in a lower miss ratio using a 2-way set
    associative cache
  • results in a higher miss ratio using a 2-way set
    associative cache
  • assuming we use the least recently used
    replacement strategy

131
An implementation
132
Performance
133
Decreasing miss penalty with multilevel caches
  • Add a second-level cache:
  • often primary cache is on the same chip as the processor
  • use SRAMs to add another cache above primary memory (DRAM)
  • miss penalty goes down if data is in the 2nd-level cache
  • Example (worked below):
  • CPI of 1.0 on a 5 GHz machine with a 5% miss rate, 100ns DRAM access
  • Adding a 2nd-level cache with 5ns access time decreases the miss rate to .5%
  • Using multilevel caches:
  • try and optimize the hit time on the 1st-level cache
  • try and optimize the miss rate on the 2nd-level cache
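A sketch of the arithmetic, assuming a miss costs a full DRAM access: at 5 GHz a cycle is 0.2 ns, so the 100 ns DRAM access is 500 cycles, and CPI = 1.0 + 5% x 500 = 26. With the 5 ns L2 (25 cycles), CPI = 1.0 + 5% x 25 + .5% x 500 = 1.0 + 1.25 + 2.5 = 4.75, roughly a 5.5x speedup.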

134
Cache Complexities
  • Not always easy to understand implications of
    caches

[Figures: theoretical vs. observed behavior of Radix sort vs. Quicksort]
135
Cache Complexities
  • Here is why:
  • memory system performance is often a critical factor
  • multilevel caches and pipelined processors make it harder to predict outcomes
  • compiler optimizations to increase locality sometimes hurt ILP
  • Difficult to predict the best algorithm: need experimental data

136
Virtual Memory
  • Main memory can act as a cache for the secondary
    storage (disk)
  • Advantages
  • illusion of having more physical memory
  • program relocation
  • protection

137
Pages virtual memory blocks
  • Page faults: the data is not in memory, retrieve it from disk
  • huge miss penalty, thus pages should be fairly large (e.g., 4KB)
  • reducing page faults is important (LRU is worth the price)
  • can handle the faults in software instead of hardware
  • using write-through is too expensive, so we use write-back

138
Page Tables
139
Page Tables

140
Making Address Translation Fast
  • A cache for address translations: the translation lookaside buffer (TLB)
    (a lookup sketch follows)

Typical values: 16-512 entries, miss rate = .01% - 1%, miss penalty = 10 - 100 cycles
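A C sketch of translation with 4KB pages: the virtual page number indexes a hypothetical flat page table, and a small direct-mapped TLB caches recent translations so most lookups skip the table walk:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_BITS 12                   /* 4KB pages */
    #define TLB_SIZE  64

    typedef struct { bool valid; uint32_t vpn, ppn; } TlbEntry;

    TlbEntry tlb[TLB_SIZE];
    uint32_t page_table[1 << 20];          /* VPN -> PPN for 32-bit addresses */

    uint32_t translate(uint32_t vaddr) {
        uint32_t vpn = vaddr >> PAGE_BITS;
        uint32_t offset = vaddr & ((1 << PAGE_BITS) - 1);
        TlbEntry *e = &tlb[vpn % TLB_SIZE];        /* direct-mapped TLB */
        if (!(e->valid && e->vpn == vpn)) {        /* TLB miss: walk the page table */
            e->valid = true;
            e->vpn = vpn;
            e->ppn = page_table[vpn];
        }
        return (e->ppn << PAGE_BITS) | offset;     /* physical address */
    }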
141
TLBs and caches
142
TLBs and Caches
143
Modern Systems

144
Modern Systems
  • Things are getting complicated!

145
Some Issues
  • Processor speeds continue to increase very fast: much faster than either DRAM or disk access times
  • Design challenge: dealing with this growing disparity
  • Prefetching? 3rd-level caches and more? Memory design?

146
Chapters 8 & 9
  • (partial coverage)

147
Interfacing Processors and Peripherals
  • I/O Design affected by many factors (expandability, resilience)
  • Performance:
      access latency
      throughput
      connection between devices and the system
      the memory hierarchy
      the operating system
  • A variety of different users (e.g., banks, supercomputers, engineers)

148
I/O
  • Important but neglected:
    "The difficulties in assessing and designing I/O systems have often relegated I/O to second class status"
    "courses in every aspect of computing, from programming to computer architecture often ignore I/O or give it scanty coverage"
    "textbooks leave the subject to near the end, making it easier for students and instructors to skip it!"
  • GUILTY!
    we won't be looking at I/O in much detail
    be sure and read Chapter 8 in its entirety
    you should probably take a networking class!

149
I/O Devices
  • Very diverse devices:
      behavior (i.e., input vs. output)
      partner (who is at the other end?)
      data rate

150
I/O Example Disk Drives
  • To access data:
      seek: position head over the proper track (3 to 14 ms avg.)
      rotational latency: wait for desired sector (half a rotation on average: .5 / RPM)
      transfer: grab the data (one or more sectors), 30 to 80 MB/sec
    (example arithmetic below)
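A sketch of the rotational-latency arithmetic for an assumed 10,000 RPM drive: 0.5 rotation / (10,000/60 rotations per second) = 0.003 s = 3 ms on average.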

151
I/O Example Buses
  • Shared communication link (one or more wires)
  • Difficult design:
      may be a bottleneck
      length of the bus
      number of devices
      tradeoffs (buffers for higher bandwidth increase latency)
      support for many different devices
      cost
  • Types of buses:
      processor-memory (short, high speed, custom design)
      backplane (high speed, often standardized, e.g., PCI)
      I/O (lengthy, different devices, e.g., USB, Firewire)
  • Synchronous vs. Asynchronous:
      synchronous: use a clock and a synchronous protocol; fast and small, but every device must operate at the same rate, and clock skew requires the bus to be short
      asynchronous: don't use a clock; instead, use handshaking

152
I/O Bus Standards
  • Today we have two dominant bus standards

153
Other important issues
  • Bus Arbitration:
      daisy-chain arbitration (not very fair)
      centralized arbitration (requires an arbiter), e.g., PCI
      collision detection, e.g., Ethernet
  • Operating system:
      polling
      interrupts
      direct memory access (DMA)
  • Performance Analysis techniques:
      queuing theory
      simulation
      analysis, i.e., find the weakest link (see "I/O System Design")
  • Many new developments

154
Pentium 4
  • I/O Options

155
Fallacies and Pitfalls
  • Fallacy: the rated mean time to failure of disks is 1,200,000 hours, so disks practically never fail.
  • Fallacy: magnetic disk storage is on its last legs and will be replaced.
  • Fallacy: a 100 MB/sec bus can transfer 100 MB/sec.
  • Pitfall: moving functions from the CPU to the I/O processor, expecting to improve performance without analysis.

156
Multiprocessors
  • Idea: create powerful computers by connecting many smaller ones
      good news: works for timesharing (better than a supercomputer)
      bad news: it's really hard to write good concurrent programs
      many commercial failures

157
Questions
  • How do parallel processors share data? single
    address space (SMP vs. NUMA) message passing
  • How do parallel processors coordinate?
    synchronization (locks, semaphores) built into
    send / receive primitives operating system
    protocols
  • How are they implemented? connected by a
    single bus connected by a network

158
Supercomputers
[Figure: plot of the top 500 supercomputer sites over a decade]
159
Using multiple processors an old idea
  • Some SIMD designs:
  • Costs for the Illiac IV escalated from $8 million in 1966 to $32 million in 1972, despite completion of only 1/4 of the machine. It took three more years before it was operational!
  • For better or worse, computer architects are not easily discouraged
  • Lots of interesting designs and ideas, lots of failures, few successes

160
Topologies
161
Clusters
  • Constructed from whole computers
  • Independent, scalable networks
  • Strengths
  • Many applications amenable to loosely coupled
    machines
  • Exploit local area networks
  • Cost effective / Easy to expand
  • Weaknesses
  • Administration costs not necessarily lower
  • Connected using I/O bus
  • Highly available due to separation of memories
  • In theory, we should be able to do better

162
Google
  • Serves an average of 1000 queries per second
  • Google uses 6,000 processors and 12,000 disks
  • Two sites in Silicon Valley, two in Virginia
  • Each site connected to the internet using OC48 (2488 Mbit/sec)
  • Reliability:
  • on an average day, 20 machines need to be rebooted (software error)
  • 2% of the machines are replaced each year
  • In some sense, simple ideas well executed. Better (and cheaper) than other approaches involving increased complexity
163
Concluding Remarks
  • Evolution vs. Revolution: "More often the expense of innovation comes from being too disruptive to computer users."
  • "Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers."