1
Structure of Computer Systems
  • Dr. Zoltan Francisc Baruch
  • Computer Science Department
  • Technical University of Cluj-Napoca

2
Course Information
  • Course grading
  • 25% Laboratory → colloquy
  • 25% Project → written report
  • 50% Exam (15% mid-term, 35% final)
  • Minimum grade for each activity: 5
  • Tests → lecture
  • Course website
  • http://users.utcluj.ro/baruch/en/
  • Teaching → Structure of Computer Systems

3
Bibliography (1)
  • Baruch, Z. F., Structure of Computer Systems,
    U.T.PRES, Cluj-Napoca, 2002, ISBN 973-8335-44-2

4
Bibliography (2)
  • Baruch, Z. F., Structure of Computer Systems (in
    Romanian), Ed. Albastra, Cluj-Napoca, 2005, ISBN
    973-650-143-4

5
Bibliography (3)
  • Baruch, Z. F., Structure of Computer Systems with
    Applications, U.T.PRES, Cluj-Napoca, 2003, ISBN
    973-8335-89-2

6
Contents of the Lecture
  • 1. Introduction
  • 2. Arithmetic-Logic Unit
  • 3. Memory Systems
  • 4. Pipeline Systems
  • 5. RISC Architectures
  • 6. Advanced Architectures

7
1. Introduction
  • Performance Metrics
  • Execution Time
  • CPU Time
  • MIPS
  • MFLOPS
  • Benchmark Programs
  • Amdahl's Law

8
Execution Time (1)
  • Performance of a computer refers to
  • Speed
  • Hardware and software reliability
  • The measure of performance: execution time (tE)
  • Response time (elapsed time): the time required
    to complete a task
  • Includes memory accesses, input/output
    activities, and operating system overhead

9
Execution Time (2)
  • CPU time: the time the CPU is computing
  • Does not include the time waiting for I/O
    operations
  • Does not include the time for running other
    programs
  • Can be divided into
  • User CPU time
  • System CPU time

10
Execution Time (3)
  • Comparing the performance of two different
    computers, e.g., X and Y
  • Computer X is faster than Y when the execution
    time of X is lower than that of Y for a given
    task
  • Computer X is n times faster than Y will mean
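In the usual formulation, "computer X is n times faster than Y" means that the ratio of the execution times equals n:

```latex
n = \frac{t_E(Y)}{t_E(X)}
```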

11
Execution Time (4)
  • Since the execution time tE is the reciprocal of
    the performance P
  • The performance increase (n) will be (see below)
  • Example 1.1
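With P = 1/tE, the performance increase follows as the ratio of performances, which equals the inverse ratio of execution times:

```latex
n = \frac{P_X}{P_Y} = \frac{t_E(Y)}{t_E(X)}
```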

12
1. Introduction
  • Performance Metrics
  • Execution Time
  • CPU Time
  • MIPS
  • MFLOPS
  • Benchmark Programs
  • Amdahl's Law

13
CPU Time (1)
  • CPU time (tCPU) can be expressed as (see below)
  • CCPU: number of CPU clock cycles to execute the
    program
  • tC: clock cycle time
  • Another expression
  • f: clock frequency
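The two expressions referenced on this slide, in their standard form (with CCPU, tC, and f as defined above):

```latex
t_{CPU} = C_{CPU} \cdot t_C = \frac{C_{CPU}}{f}
```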

14
CPU Time (2)
  • The number of instructions executed may also be
    considered → the instruction count (N)
  • The average number of clock cycles per
    instruction (CPI)
  • CPU time can be defined as
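With N the instruction count and CPI the average number of clock cycles per instruction, the standard relations are:

```latex
CPI = \frac{C_{CPU}}{N}, \qquad t_{CPU} = N \cdot CPI \cdot t_C
```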

15
CPU Time (3)
  • or (see below)
  • The total number of CPU clock cycles
  • CPIi: number of clock cycles for instruction i
  • Ii: number of times instruction i is executed
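The two relations referenced above, written out (summing over the n instruction types of the instruction set):

```latex
t_{CPU} = \frac{N \cdot CPI}{f}, \qquad
C_{CPU} = \sum_{i=1}^{n} CPI_i \cdot I_i
```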

16
CPU Time (4)
  • CPU time (see below)
  • Overall no. of clock cycles per instruction
  • Fi: frequency of instruction i
  • Example 1.2
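Combining the previous definitions, CPU time and the overall CPI can be written as:

```latex
t_{CPU} = t_C \cdot \sum_{i=1}^{n} CPI_i \cdot I_i, \qquad
CPI = \sum_{i=1}^{n} CPI_i \cdot F_i, \quad F_i = \frac{I_i}{N}
```

A small Python sketch of this calculation with a hypothetical instruction mix (the values are illustrative, not those of Example 1.2):

```python
# Illustrative CPU-time calculation from a hypothetical instruction mix
# (assumed values, not those of Example 1.2).
f = 2.0e9                      # clock frequency: 2 GHz (assumed)
N = 1.0e9                      # instruction count (assumed)
mix = {                        # instruction class: (CPI_i, F_i)
    "ALU":    (1, 0.50),
    "load":   (2, 0.20),
    "store":  (2, 0.10),
    "branch": (3, 0.20),
}
cpi = sum(c * fr for c, fr in mix.values())   # CPI = sum(CPI_i * F_i)
t_cpu = N * cpi / f                           # t_CPU = N * CPI / f
print(f"CPI = {cpi:.2f}, CPU time = {t_cpu:.3f} s")
```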

17
1. Introduction
  • Performance Metrics
  • Execution Time
  • CPU Time
  • MIPS
  • MFLOPS
  • Benchmark Programs
  • Amdahl's Law

18
MIPS (1)
  • The most important measure of performance: the
    execution time of real programs
  • A number of popular measures have been adopted
    for computer performance
  • One of the performance metrics is called MIPS
    (Millions of Instructions Per Second)
  • Indicates how many millions of instructions, on
    average, a computer can execute per second

19
MIPS (2)
  • For a given program, MIPS is defined as
  • N: instruction count
  • Assuming that tE = tCPU,
  • We get
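Using the slide's notation (N = instruction count, tE = tCPU), the standard definition works out to:

```latex
MIPS = \frac{N}{t_E \cdot 10^6}
     = \frac{N}{N \cdot CPI \cdot t_C \cdot 10^6}
     = \frac{f}{CPI \cdot 10^6}
```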

20
MIPS (3)
  • Execution time expressed as a function of MIPS
    (see below)
  • A metric defined similarly: BIPS (Billions of
    Instructions Per Second), or GIPS
  • The advantage of MIPS: it is easy to understand,
    especially by a customer
  • There are problems with using MIPS as a measure
    for comparison
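Inverting the MIPS definition gives execution time as a function of MIPS, as referenced above:

```latex
t_E = \frac{N}{MIPS \cdot 10^6}
```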

21
MIPS (4)
  • MIPS is dependent on the instruction set
  • MIPS varies between programs on the same computer
  • MIPS can vary inversely to performance
  • Example of the last case: a computer with an
    optional floating-point coprocessor
  • Programs using the coprocessor take less time,
    but have a lower MIPS rating
  • Example 1.3
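A minimal Python sketch of the coprocessor effect described above; the figures are assumed for illustration and are not those of Example 1.3:

```python
# Illustrative sketch (hypothetical numbers, not the original Example 1.3):
# with an FP coprocessor a program runs faster, yet its MIPS rating drops,
# because each hardware FP instruction replaces many simple software ones.
f = 100e6                       # clock frequency: 100 MHz (assumed)

# Without coprocessor: FP emulated in software -> many simple instructions.
n_soft, cpi_soft = 120e6, 1.0
t_soft = n_soft * cpi_soft / f
mips_soft = n_soft / (t_soft * 1e6)

# With coprocessor: far fewer instructions, but a higher CPI per instruction.
n_hw, cpi_hw = 20e6, 4.0
t_hw = n_hw * cpi_hw / f
mips_hw = n_hw / (t_hw * 1e6)

print(f"software FP: {t_soft:.2f} s, {mips_soft:.0f} MIPS")
print(f"hardware FP: {t_hw:.2f} s, {mips_hw:.0f} MIPS")
# The coprocessor version finishes sooner (0.80 s vs 1.20 s)
# but reports a lower MIPS rating (25 vs 100).
```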

22
1. Introduction
  • Performance Metrics
  • Execution Time
  • CPU Time
  • MIPS
  • MFLOPS
  • Benchmark Programs
  • Amdahl's Law

23
MFLOPS (1)
  • MIPS is not a good metric for computers that
    perform scientific and engineering computation
  • It is important to measure the number of
    floating-point (FP) operations
  • MFLOPS (Millions of Floating-point Operations Per
    Second), GFLOPS, TFLOPS, PFLOPS
  • The formula
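The formula, in its standard form (NFP = number of floating-point operations executed by the program):

```latex
MFLOPS = \frac{N_{FP}}{t_E \cdot 10^6}
```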

24
MFLOPS (2)
  • NFP: number of FP operations
  • A MFLOPS rating is dependent on the machine and
    on the program
  • Problems related to MFLOPS
  • The set of FP operations is not consistent across
    machines
  • The MFLOPS rating changes depending on
  • The combination of integer and FP operations
  • The combination of fast and slow FP operations

25
MFLOPS (3)
  • The solution to both problems: to use normalized
    FP operations
  • Example for calculating the number of normalized
    FP operations according to the real operations in
    the source code
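For reference, a commonly used weighting (the one popularized by Hennessy and Patterson; the slide's own table may differ) counts:
  • ADD, SUB, COMPARE, MULT: 1 normalized FP operation
  • DIVIDE, SQRT: 4 normalized FP operations
  • EXP, SIN, and similar functions: 8 normalized FP operations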

26
MFLOPS (4)
  • The real FP operations give the native MFLOPS
    rating
  • The normalized FP operations give the normalized
    MFLOPS rating
  • The MIPS and MFLOPS metrics are useful for
    comparing members of the same architectural
    family
  • These metrics are not appropriate for comparing
    computers with different instruction sets

27
MFLOPS (5)
  • However, MFLOPS is used in some benchmark
    programs to evaluate the performance of
    supercomputers
  • Example: the Linpack benchmark
  • Software library for numerical linear algebra
    (vector or matrix) operations
  • HPL (High Performance Linpack): a portable
    implementation of Linpack, used for the TOP500 list

28
MFLOPS (6)
  • TOP500 ranks the 500 most powerful publicly
    known computers in the world
  • http://www.top500.org/
  • The current list was released in June 2008
  • No. 1 in the list is the IBM Roadrunner
  • Installed for the National Nuclear Security
    Administration (Department of Energy, USA) at Los
    Alamos National Laboratory
  • Peak performance: 1.026 PFLOPS

29
MFLOPS (7)
  • The first hybrid supercomputer
  • 6,562 dual-core AMD Opteron chips (1.8 GHz)
  • 12,240 IBM PowerXCell 8i chips (3.2 GHz, 12.8
    GFLOPS)
  • Memory: 98 TB (internal), 2 PB (external)
  • Operating system: Red Hat Linux
  • Power consumption: 3.9 MW → one of the most
    energy-efficient systems, with 376 MFLOPS/W
  • Occupies 278 racks

30
MFLOPS (8)
  • The IBM Roadrunner installed at Los Alamos
    National Laboratory (New Mexico, USA)

31
Questions
  • What are the differences between response time
    and CPU time?
  • How can the CPU time be expressed based on the
    average number of clock cycles per instruction?
  • What are the disadvantages of the MIPS metric?
  • What are the problems related to the MFLOPS
    rating?