CSE 520 Advanced Computer Architecture Lec 3: Role of Performance and Tracking Technology
1
CSE 520 Advanced Computer Architecture Lec 3: Role of Performance and Tracking Technology
  • Sandeep K. S. Gupta
  • School of Computing and Informatics
  • Arizona State University

Based on Slides by David Patterson and M. Younis
2
In response to Q on GPUs
  • GPGPU: General-purpose computing using Graphics Processors
  • See http://en.wikipedia.org/wiki/GPGPU
  • Note the architectural differences between general-purpose processors and GPUs
  • GPUs are designed to exploit Data Parallelism in stream processing modes
  • See the tutorial http://www.gpgpu.org/s2005/
  • Survey paper: A Survey of General-Purpose Computation on Graphics Hardware, J. Owens et al. http://www.blackwell-synergy.com/doi/pdf/10.1111/j.1467-8659.2007.01012.x?cookieSet=1

3
Crossroads: Conventional Wisdom in Comp. Arch
  • Old Conventional Wisdom: Power is free, transistors expensive
  • New Conventional Wisdom: "Power wall" - power expensive, Xtors free (can put more on chip than can afford to turn on)
  • Old CW: Sufficiently increase Instruction Level Parallelism via compilers, innovation (out-of-order, speculation, VLIW, ...)
  • New CW: "ILP wall" - law of diminishing returns on more HW for ILP
  • Old CW: Multiplies are slow, memory access is fast
  • New CW: "Memory wall" - memory slow, multiplies fast (200 clock cycles to DRAM memory, 4 clocks for multiply)
  • Old CW: Uniprocessor performance 2X / 1.5 yrs
  • New CW: Power Wall + ILP Wall + Memory Wall = Brick Wall
  • Uniprocessor performance now 2X / 5(?) yrs
  • ⇒ Sea change in chip design: multiple "cores" (2X processors per chip / 2 years)
  • More, simpler processors are more power efficient

4
Agenda
  • Review from Last Class
  • Tracking Technology
  • Quantifying Power Consumption

5
Computer Architecture is Design and Analysis
  • Architecture is an iterative process
  • Searching the space of possible designs
  • At all levels of computer systems

(Figure: iterative design loop - Creativity produces good, mediocre, and bad ideas, which are filtered by Cost / Performance Analysis.)
6
What Computer Architecture Brings to the Table
  • Other fields often borrow ideas from architecture
  • Quantitative Principles of Design
  • Take Advantage of Parallelism
  • Principle of Locality
  • Focus on the Common Case
  • Amdahl's Law
  • The Processor Performance Equation
  • Careful, quantitative comparisons
  • Define, quantify, and summarize relative performance
  • Define and quantify relative cost
  • Define and quantify dependability
  • Define and quantify power
  • Culture of anticipating and exploiting advances in technology
  • Culture of well-defined interfaces that are carefully implemented and thoroughly checked

7
Chapter 1: Fundamentals of Computer Design
  • Quantitative Principles of Design
  • Technology Trends: Culture of tracking, anticipating and exploiting advances in technology
  • Careful, quantitative comparisons
  • Define, quantify, and summarize relative performance
  • Define and quantify relative cost
  • Define and quantify dependability
  • Define and quantify power

8
4) Amdahl's Law
Best you could ever hope to do:
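\[
\text{Speedup}_{\text{max}} = \frac{1}{1 - \text{Fraction}_{\text{enhanced}}}
\]

In its general form, with Fraction_enhanced the portion of execution time that can use an enhancement and Speedup_enhanced the speedup of that portion:

\[
\text{Speedup}_{\text{overall}} = \frac{1}{(1 - \text{Fraction}_{\text{enhanced}}) + \dfrac{\text{Fraction}_{\text{enhanced}}}{\text{Speedup}_{\text{enhanced}}}}
\]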
9
Amdahl's Law example
  • New CPU 10X faster
  • I/O bound server, so 60% of time is spent waiting for I/O (speedup worked out below)
  • Apparently, it's human nature to be attracted by "10X faster" vs. keeping in perspective that it's just 1.6X faster
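Only the 40% of time not spent waiting for I/O benefits from the 10X faster CPU, so Amdahl's Law gives:

\[
\text{Speedup}_{\text{overall}} = \frac{1}{0.6 + \dfrac{0.4}{10}} = \frac{1}{0.64} \approx 1.56
\]

which is the roughly 1.6X figure above.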

10
5) Processor performance equation

CPU time = Inst Count × CPI × Cycle time

                  Inst Count    CPI    Clock Rate
  Program             X
  Compiler            X         (X)
  Inst. Set           X          X
  Organization                   X         X
  Technology                               X

11
And in conclusion ...
  • Computer Architecture >> instruction sets
  • Computer Architecture skill sets are applicable to other programming endeavors
  • Computer systems are at the crossroads from sequential to parallel computing
  • Salvation requires innovation in many fields, including computer architecture
  • Quantitative Fundamental Principles
  • Take Advantage of Parallelism
  • Principle of Locality
  • Focus on the Common Case
  • Amdahl's Law
  • The Processor Performance Equation
  • Read Chapter 1, then Appendix A

12
The Role of Performance
  • Hardware performance is a key to the
    effectiveness of the entire system
  • Performance has to be measured and compared to
    evaluate various design and technological
    approaches
  • To optimize the performance, major affecting
    factors have to be known
  • For different types of applications, different
    performance metrics may be appropriate and
    different aspects of a computer system may be
    most significant
  • Instruction set use and implementation, memory hierarchy, and I/O handling are among the factors that affect performance

13
Computer Engineering Methodology
(Figure: design methodology cycle, driven by Technology Trends.)
Performance and cost are the main metrics for evaluating design quality
Slide is courtesy of Dave Patterson
14
Defining Performance
  • Performance means different things to different people, therefore its assessment is subtle
  • Analogy from the airline industry:
  • How do we measure performance for a passenger airplane?
  • Cruising speed (how fast it gets to the destination)
  • Flight range (how far it can reach)
  • Passenger capacity (how many passengers it can carry)
  • All of the above

Criteria of performance evaluation differ among users and designers
15
Performance Metrics
  • Response (execution) time
  • The time between the start and the completion of a task
  • Measures user perception of system speed
  • Common in reactive and time-critical systems, single-user computers, etc.
  • Throughput
  • The total number of tasks done in a given time
  • Most relevant to batch processing (billing, credit card processing, etc.)
  • Mainly used for input/output systems (disk access, printers, etc.)
  • Examples:
  • Replacing the processor of a computer with a faster version enhances BOTH response time and throughput
  • Adding additional processors to a system that uses multiple processors for separate tasks (e.g. handling an airline reservation system) improves ONLY throughput

Decreasing response time always improves throughput
16
Response-time Metric
  • Maximizing performance means minimizing response (execution) time
  • Performance of processor P1 is better than that of P2 if, for a given workload L, P1 takes less time to execute L than P2 does
  • Relative performance captures the performance ratio of processor P1 compared to P2 for the same workload:
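Since performance is the reciprocal of execution time:

\[
\frac{\text{Performance}_{P1}}{\text{Performance}_{P2}}
= \frac{\text{Execution time}_{P2}}{\text{Execution time}_{P1}} = n
\]

i.e., P1 is n times faster than P2.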

17
Designer's Performance Metrics
  • Users and designers measure performance using different metrics
  • Designers look at the bottom line of program execution
  • To enhance hardware performance, designers focus on reducing the clock cycle time and the number of cycles per program
  • Many techniques that decrease the number of clock cycles also increase the clock cycle time or the average number of cycles per instruction (CPI)

18
Example
A program runs in 10 seconds on computer A, which has a 400 MHz clock. We would like to design a faster computer B that could run the program in 6 seconds. The designer has determined that a substantial increase in the clock speed is possible; however, it would cause computer B to require 1.2 times as many clock cycles as computer A. What should be the clock rate of computer B?
19
Example
To get the clock rate of the faster computer, we use the same formula, CPU time = CPU clock cycles / Clock rate, for both machines:
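\[
\text{CPU clock cycles}_A = \text{CPU time}_A \times \text{Clock rate}_A = 10\ \text{s} \times 400 \times 10^{6}\ \text{cycles/s} = 4 \times 10^{9}\ \text{cycles}
\]

\[
\text{Clock rate}_B = \frac{1.2 \times \text{CPU clock cycles}_A}{\text{CPU time}_B} = \frac{1.2 \times 4 \times 10^{9}\ \text{cycles}}{6\ \text{s}} = 800\ \text{MHz}
\]

So computer B must run at 800 MHz, twice the clock rate of computer A, to finish the program in 6 seconds.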
20
Calculation of CPU Time

CPU time = Instruction count × CPI × Clock cycle time

or, equivalently,

CPU time = (Instruction count × CPI) / Clock rate
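The same relation in a few lines of Python, as a minimal sketch (the function name and example numbers are illustrative, not taken from the slides):

    # CPU time = instruction count * CPI * clock cycle time
    #          = instruction count * CPI / clock rate
    def cpu_time(instruction_count, cpi, cycle_time=None, clock_rate=None):
        if cycle_time is None:
            cycle_time = 1.0 / clock_rate  # cycle time is the inverse of the clock rate
        return instruction_count * cpi * cycle_time

    # Illustrative numbers: 2e9 instructions, average CPI 2.0, 400 MHz clock -> 10 seconds
    print(cpu_time(2e9, 2.0, clock_rate=400e6))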
21
CPU Time (Cont.)
  • CPU execution time can be measured by running the program
  • The clock cycle time is usually published by the manufacturer
  • Measuring the CPI and instruction count is not trivial
  • Instruction counts can be measured by software profiling, by using an architecture simulator, or by using hardware counters on some architectures
  • The CPI depends on many factors, including processor structure, memory system, the mix of instruction types, and the implementation of these instructions
  • Designers sometimes use the following formula:

\[
\text{CPU clock cycles} = \sum_{i=1}^{n} \mathrm{CPI}_i \times C_i
\]

where C_i is the count of instructions of class i that are executed, CPI_i is the average number of cycles per instruction for that instruction class, and n is the number of different instruction classes
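The per-class formula maps directly onto code; here is a minimal Python sketch (the helper names are illustrative; the data comes from the code-sequence example on the later slides):

    # CPU clock cycles = sum over classes of CPI_i * C_i; average CPI = cycles / instruction count
    def clock_cycles(cpi_by_class, counts_by_class):
        return sum(cpi_by_class[c] * counts_by_class[c] for c in counts_by_class)

    def average_cpi(cpi_by_class, counts_by_class):
        return clock_cycles(cpi_by_class, counts_by_class) / sum(counts_by_class.values())

    cpi = {"A": 1, "B": 2, "C": 3}
    seq1 = {"A": 2, "B": 1, "C": 2}
    seq2 = {"A": 4, "B": 1, "C": 1}
    print(clock_cycles(cpi, seq1), average_cpi(cpi, seq1))  # 10 cycles, CPI 2.0
    print(clock_cycles(cpi, seq2), average_cpi(cpi, seq2))  # 9 cycles, CPI 1.5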
22
Example: Processor Performance Equation
  • Suppose two implementations of the same ISA:
  • M/c A: clock cycle time = 1 ns, CPI = 2.0 (for some program)
  • M/c B: clock cycle time = 2 ns, CPI = 1.2 (for the same program)
  • Which M/c is faster?

23
Example: Processor Performance Equation (Cont.)
  • Suppose two implementations of the same ISA:
  • M/c A: clock cycle time = 1 ns, CPI = 2.0 (for some program)
  • M/c B: clock cycle time = 2 ns, CPI = 1.2 (for the same program)
  • Which M/c is faster?
  • Answer:
  • Fact: each m/c executes the same number of instructions, N
  • Number of processor clock cycles per M/c:
  • M/c A: N × 2.0 = 2N
  • M/c B: N × 1.2 = 1.2N
  • CPU time for each M/c:
  • M/c A: (2N) × 1 ns = 2N ns
  • M/c B: (1.2N) × 2 ns = 2.4N ns
  • (CPU Perf. A)/(CPU Perf. B) = (CPU time B)/(CPU time A) = (2.4N ns)/(2N ns) = 1.2
  • Conclusion: M/c A is 1.2 times faster than M/c B for this program.

24
Comparing Code Segments
A compiler designer is trying to decide between two code sequences for a particular machine. The hardware designers have supplied the following facts:

  Instruction class     CPI for this class
  A                     1
  B                     2
  C                     3

For a particular high-level language statement, the compiler writer is considering two code sequences that require the following instruction counts:

                     Instruction counts (per class)
  Code sequence      A     B     C
  1                  2     1     2
  2                  4     1     1

Which code sequence executes the most instructions? Which will be faster? What is the CPI for each sequence?
25
Comparing Code Segments (Cont.)
Which code sequence executes the most instructions? Which will be faster? What is the CPI for each sequence?

Answer: Sequence 1 executes 2 + 1 + 2 = 5 instructions; Sequence 2 executes 4 + 1 + 1 = 6 instructions
26
Comparing Code Segments (Cont.)
Using the formula:

Sequence 1: CPU clock cycles = (2 × 1) + (1 × 2) + (2 × 3) = 10 cycles
Sequence 2: CPU clock cycles = (4 × 1) + (1 × 2) + (1 × 3) = 9 cycles
  • Therefore, Sequence 2 is faster although it executes more instructions

Sequence 1: CPI = 10/5 = 2
Sequence 2: CPI = 9/6 = 1.5
  • Since Sequence 2 takes fewer overall clock cycles but has more instructions, it must have a lower CPI
27
Can Hardware-Independent Metrics Predict Performance?
  • The Burroughs B5500 machine was designed specifically for Algol 60 programs
  • Although the CDC 6600's programs are over 3 times as big as those of the B5500, the CDC machine runs them almost 6 times faster
  • Code size cannot be used as an indicator of performance

28
Using MIPS as a Performance Metric
  • MIPS stands for Million Instructions Per Second and is one of the simplest metrics; it is valid only in a limited context (see the formula below)
  • There are three problems with MIPS:
  • MIPS specifies the instruction execution rate but does not take into account the capabilities of the instructions
  • Computers do not have a single MIPS rating, as MIPS varies between programs on the same computer
  • MIPS can vary inversely with performance (see the next example)

The use of MIPS is simple but may lead to wrong
conclusions.
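As a rate metric, MIPS is defined as:

\[
\text{MIPS} = \frac{\text{Instruction count}}{\text{Execution time} \times 10^{6}}
= \frac{\text{Clock rate}}{\text{CPI} \times 10^{6}}
\]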
29
Example: Misleading Results Using MIPS
Consider a machine with the following three instruction classes and CPI:

  Instruction class     CPI for this class
  A                     1
  B                     2
  C                     3

Now suppose we measure the code for the same program from two different compilers and obtain the following data:

                     Instruction counts (in billions)
  Code from          A     B     C
  Compiler 1         5     1     1
  Compiler 2         10    1     1

Assume that the machine's clock rate is 500 MHz. Which code sequence will execute faster according to MIPS? According to execution time?
30
Example: Misleading Results Using MIPS (Cont.)
The machine's clock rate is 500 MHz. Which code sequence will execute faster according to MIPS? According to execution time?

Answer:
Using the formula:

Sequence 1: CPU clock cycles = (5 × 1 + 1 × 2 + 1 × 3) × 10^9 = 10 × 10^9 cycles
Sequence 2: CPU clock cycles = (10 × 1 + 1 × 2 + 1 × 3) × 10^9 = 15 × 10^9 cycles
31
Example: Misleading Results Using MIPS (Cont.)
Sequence 1: Execution time = (10 × 10^9) / (500 × 10^6) = 20 seconds
Sequence 2: Execution time = (15 × 10^9) / (500 × 10^6) = 30 seconds

Therefore compiler 1 generates a faster program. Although compiler 2 has a higher MIPS rating, the code generated by compiler 1 runs faster.
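The MIPS ratings follow from the instruction counts used in the cycle calculation (7 × 10^9 and 12 × 10^9 instructions, respectively):

\[
\text{MIPS}_1 = \frac{7 \times 10^{9}}{20 \times 10^{6}} = 350
\qquad
\text{MIPS}_2 = \frac{12 \times 10^{9}}{30 \times 10^{6}} = 400
\]

so sequence 2 looks faster by the MIPS metric even though it takes 50% longer to run.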
32
Performance Metrics - Summary
  • Maximizing performance means
  • minimizing response (execution) time

Figure is courtesy of Dave Patterson
33
Chapter 1: Fundamentals of Computer Design
  • Technology Trends: Culture of tracking, anticipating and exploiting advances in technology
  • Careful, quantitative comparisons
  • Define and quantify power
  • Define and quantify dependability
  • Define, quantify, and summarize relative performance
  • Define and quantify relative cost

34
Moore's Law: 2X transistors / year
  • "Cramming More Components onto Integrated Circuits"
  • Gordon Moore, Electronics, 1965
  • # of transistors per cost-effective integrated circuit doubles every N months (12 ≤ N ≤ 24)

35
Tracking Technology Performance Trends
  • Drill down into 4 technologies:
  • Disks
  • Memory
  • Network
  • Processors
  • Compare 1980 Archaic (Nostalgic) vs. 2000 Modern (Newfangled)
  • Performance Milestones in each technology
  • Compare Bandwidth vs. Latency improvements in performance over time
  • Bandwidth: number of events per unit time
  • E.g., Mbits/second over a network, MBytes/second from disk
  • Latency: elapsed time for a single event
  • E.g., one-way network delay in microseconds, average disk access time in milliseconds

36
Disks: Archaic (Nostalgic) v. Modern (Newfangled)
  • Seagate 373453, 2003
  • 15000 RPM (4X)
  • 73.4 GBytes (2500X)
  • Tracks/Inch: 64,000 (80X)
  • Bits/Inch: 533,000 (60X)
  • Four 2.5" platters (in 3.5" form factor)
  • Bandwidth: 86 MBytes/sec (140X)
  • Latency: 5.7 ms (8X)
  • Cache: 8 MBytes
  • CDC Wren I, 1983
  • 3600 RPM
  • 0.03 GBytes capacity
  • Tracks/Inch: 800
  • Bits/Inch: 9,550
  • Three 5.25" platters
  • Bandwidth: 0.6 MBytes/sec
  • Latency: 48.3 ms
  • Cache: none

37
Latency Lags Bandwidth (for last 20 years)
  • Performance Milestones:
  • Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)

(Latency = simple operation w/o contention; BW = best-case)
38
Memory: Archaic (Nostalgic) v. Modern (Newfangled)
  • 2000: Double Data Rate Synchronous (clocked) DRAM
  • 256.00 Mbits/chip (4000X)
  • 256,000,000 xtors, 204 mm²
  • 64-bit data bus per DIMM, 66 pins/chip (4X)
  • 1600 MBytes/sec (120X)
  • Latency: 52 ns (4X)
  • Block transfers (page mode)
  • 1980: DRAM (asynchronous)
  • 0.06 Mbits/chip
  • 64,000 xtors, 35 mm²
  • 16-bit data bus per module, 16 pins/chip
  • 13 MBytes/sec
  • Latency: 225 ns
  • (no block transfer)

39
Latency Lags Bandwidth (last 20 years)
  • Performance Milestones:
  • Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
  • Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)

(Latency = simple operation w/o contention; BW = best-case)
40
LANs: Archaic (Nostalgic) v. Modern (Newfangled)
  • Ethernet 802.3
  • Year of Standard: 1978
  • 10 Mbits/s link speed
  • Latency: 3000 µsec
  • Shared media
  • Coaxial cable
  • Ethernet 802.3ae
  • Year of Standard: 2003
  • 10,000 Mbits/s (1000X) link speed
  • Latency: 190 µsec (15X)
  • Switched media
  • Category 5 copper wire

(Figure: coaxial cable cross-section - plastic covering, braided outer conductor, insulator, copper core)
41
Latency Lags Bandwidth (last 20 years)
  • Performance Milestones:
  • Ethernet: 10 Mb, 100 Mb, 1000 Mb, 10000 Mb/s (16x, 1000x)
  • Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
  • Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)

(Latency = simple operation w/o contention; BW = best-case)
42
CPUs: Archaic (Nostalgic) v. Modern (Newfangled)
  • 2001: Intel Pentium 4
  • 1500 MHz (120X)
  • 4500 MIPS (peak) (2250X)
  • Latency: 15 ns (20X)
  • 42,000,000 xtors, 217 mm²
  • 64-bit data bus, 423 pins
  • 3-way superscalar, dynamic translation to RISC operations, superpipelined (22 stages), out-of-order execution
  • On-chip: 8 KB data cache, 96 KB instruction trace cache, 256 KB L2 cache
  • 1982: Intel 80286
  • 12.5 MHz
  • 2 MIPS (peak)
  • Latency: 320 ns
  • 134,000 xtors, 47 mm²
  • 16-bit data bus, 68 pins
  • Microcode interpreter, separate FPU chip
  • (no caches)

43
Latency Lags Bandwidth (last 20 years)
  • Performance Milestones:
  • Processor: 286, 386, 486, Pentium, Pentium Pro, Pentium 4 (21x, 2250x)
  • Ethernet: 10 Mb, 100 Mb, 1000 Mb, 10000 Mb/s (16x, 1000x)
  • Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
  • Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)

44
Rule of Thumb for Latency Lagging BW
  • In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4
  • (and capacity improves faster than bandwidth)
  • Stated alternatively: Bandwidth improves by more than the square of the improvement in latency
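Checking this against the milestone ratios on the preceding slides (bandwidth vs. latency improvement):

\[
143 > 8^{2}\ (\text{disk}), \qquad 120 > 4^{2}\ (\text{memory}), \qquad 1000 > 16^{2}\ (\text{Ethernet}), \qquad 2250 > 21^{2}\ (\text{processor})
\]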

45
Computers in the News
  • "Intel loses market share in own backyard," by Tom Krazit, CNET News.com, 1/18/2006
  • Intel's share of the U.S. retail PC market fell
    by 11 percentage points, from 64.4 percent in the
    fourth quarter of 2004 to 53.3 percent. Current
    Analysis' market share numbers measure U.S.
    retail sales only, and therefore exclude figures
    from Dell, which uses its Web site to sell
    directly to consumers. AMD chips were found in
    52.5 percent of desktop PCs sold in U.S. retail
    stores during that period.
  • We will technically compare AMD Opteron/Athlon
    vs. Intel Pentium 4 later in this course.

46
6 Reasons Latency Lags Bandwidth
  • 1. Moore's Law helps BW more than latency
  • Faster transistors, more transistors, and more pins help bandwidth:
  • MPU transistors: 0.130 vs. 42 M xtors (300X)
  • DRAM transistors: 0.064 vs. 256 M xtors (4000X)
  • MPU pins: 68 vs. 423 pins (6X)
  • DRAM pins: 16 vs. 66 pins (4X)
  • Transistors are smaller and faster, but they communicate over (relatively) longer lines, which limits latency:
  • Feature size: 1.5 to 3 vs. 0.18 micron (8X to 17X)
  • MPU die size: 47 vs. 217 mm² (sqrt of ratio ≈ 2X)
  • DRAM die size: 35 vs. 204 mm² (sqrt of ratio ≈ 2X)

47
6 Reasons Latency Lags Bandwidth (cont'd)
  • 2. Distance limits latency
  • Size of DRAM block → long bit and word lines → most of DRAM access time
  • Speed of light and computers on a network
  • Reasons 1 and 2 explain linear latency vs. square BW?
  • 3. Bandwidth easier to sell (bigger = better)
  • E.g., 10 Gbits/s Ethernet ("10 Gig") vs. 10 µsec latency Ethernet
  • 4400 MB/s DIMM (PC4400) vs. 50 ns latency
  • Even if just marketing, customers are now trained
  • Since bandwidth sells, more resources are thrown at bandwidth, which further tips the balance

48
6 Reasons Latency Lags Bandwidth (cont'd)
  • 4. Latency helps BW, but not vice versa
  • Spinning the disk faster improves both bandwidth and rotational latency:
  • 3600 RPM → 15000 RPM = 4.2X
  • Average rotational latency: 8.3 ms → 2.0 ms
  • Other things being equal, this also helps BW by 4.2X
  • Lower DRAM latency → more accesses/second (higher bandwidth)
  • Higher linear density helps disk BW (and capacity), but not disk latency:
  • 9,550 BPI → 533,000 BPI ≈ 60X in BW

49
6 Reasons Latency Lags Bandwidth (cont'd)
  • 5. Bandwidth hurts latency
  • Queues help bandwidth, hurt latency (queuing theory)
  • Adding chips to widen a memory module increases bandwidth, but the higher fan-out on address lines may increase latency
  • 6. Operating system overhead hurts latency more than bandwidth
  • Long messages amortize overhead; overhead is a bigger part of short messages

50
Summary of Technology Trends
  • For disk, LAN, memory, and microprocessor, bandwidth improves by the square of the latency improvement
  • In the time that bandwidth doubles, latency improves by no more than 1.2X to 1.4X
  • The lag is probably even larger in real systems, as bandwidth gains are multiplied by replicated components:
  • Multiple processors in a cluster or even in a chip
  • Multiple disks in a disk array
  • Multiple memory modules in a large memory
  • Simultaneous communication in a switched LAN
  • HW and SW developers should innovate assuming Latency Lags Bandwidth
  • If everything improves at the same rate, then nothing really changes
  • When rates vary, real innovation is required

51
Outline
  • Technology Trends: Culture of tracking, anticipating and exploiting advances in technology
  • Careful, quantitative comparisons
  • Define and quantify power
  • Define and quantify dependability
  • Define, quantify, and summarize relative performance
  • Define and quantify relative cost

52
Define and quantify power (1/2)
  • For CMOS chips, the traditional dominant energy consumption has been in switching transistors, called dynamic power (see the relations below)
  • For mobile devices, energy is the better metric
  • For a fixed task, slowing the clock rate (frequency switched) reduces power, but not energy
  • Capacitive load: a function of the number of transistors connected to an output and of the technology, which determines the capacitance of wires and transistors
  • Dropping voltage helps both, so supply voltages went from 5V to 1V
  • To save energy and dynamic power, most CPUs now turn off the clock of inactive modules (e.g. floating-point unit)
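The standard CMOS relations behind these bullets:

\[
\text{Power}_{\text{dynamic}} = \tfrac{1}{2} \times \text{Capacitive load} \times \text{Voltage}^{2} \times \text{Frequency switched}
\]

\[
\text{Energy}_{\text{dynamic}} = \text{Capacitive load} \times \text{Voltage}^{2}
\]

Energy has no frequency term, which is why slowing the clock lowers power but not the energy used for a fixed task.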

53
Example of quantifying power
  • Suppose a 15% reduction in voltage results in a 15% reduction in frequency. What is the impact on dynamic power?
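Using the dynamic power relation with both voltage and frequency scaled to 0.85 of their old values:

\[
\frac{\text{Power}_{\text{new}}}{\text{Power}_{\text{old}}}
= \frac{(0.85 \times \text{Voltage})^{2} \times (0.85 \times \text{Frequency})}{\text{Voltage}^{2} \times \text{Frequency}}
= 0.85^{3} \approx 0.61
\]

so dynamic power drops to roughly 60% of its original value.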

54
Define and quantify power (2/2)
  • Because leakage current flows even when a transistor is off, static power is now important too
  • Leakage current increases in processors with smaller transistor sizes
  • Increasing the number of transistors increases power even if they are turned off
  • In 2006, the goal for leakage is 25% of total power consumption; high-performance designs are at 40%
  • Very low power systems even gate the voltage to inactive modules to control loss due to leakage

55
Summary
  • Execution (CPU) time is the only true measure of performance.
  • One must be careful when using other measures such as MIPS.
  • Computer architects (industry) need to be aware of technology trends to design computer architectures that address the various walls.
  • The increasing proportion of static (leakage) current, in comparison to dynamic current, is a cause for concern.
  • One of the motivations for multicore design is to reduce thermal dissipation.
  • Next Class: Dependability, Comparing Relative Performance/Cost