P573 Scientific Computing Lecture 1: Introduction (PowerPoint transcript)
1
P573 Scientific Computing, Lecture 1: Introduction
  • Peter Gottschling
  • pgottsch@cs.indiana.edu
  • www.osl.iu.edu/pgottsch/courses/p573-06

Based on slides from UC Berkeley: www.cs.berkeley.edu/demmel/cs267_Spr05
2
Outline
  • Introduction
  • Large important problems require powerful
    computers
  • Why powerful computers must be parallel
    processors
  • Why writing (fast) parallel programs is hard
  • Structure of the course

3
Course Organization
4
Who is in the class?
  • This class is listed as a CS class
  • Normally a mix of CS and other engineering and
    science students
  • This class seems to be about
  • 74% Computer Science
  • 8% Mathematics
  • 6% Informatics
  • 3% Arts and Sciences
  • 3% Astronomy
  • 3% Chemistry
  • 3% Physics
  • We encourage interdisciplinary teams
  • This is the way scientific software is generally
    built

5
Rough Schedule of Topics
  • Introduction
  • Technologies
  • Mathematica when focused on math
  • C to write fast programs
  • Performance Tools
  • PETSc
  • Algorithms
  • Dense Linear Algebra
  • Particle methods
  • Partial Differential Equations (PDEs)
  • Sparse matrices
  • Graph algorithms
  • Homework
  • In groups of 3 students
  • To be checked in under Subversion
  • Final exam
  • Personal engagement in the homework will pay off here

6
What you should get out of the course
  • In-depth understanding of
  • How to solve scientific problems with numerical
    programs
  • How these programs run fast
  • By understanding basics of hardware features
  • Basic understanding of computer accuracy
  • Overview of tools
  • Some important scientific applications and their
    algorithms
  • Performance analysis and tuning

7
Why we need powerful computers
8
Units of Measure in HPC
  • High Performance Computing (HPC) units are
  • Flop: floating point operation
  • Flop/s: floating point operations per second
  • Bytes: size of data (a double precision floating
    point number is 8 bytes)
  • Typical sizes are millions, billions, trillions...
    (decimal vs. binary prefixes are contrasted in the sketch below)
  • Mega:  Mflop/s = 10^6 flop/sec;  Mbyte = 2^20 = 1,048,576 ~ 10^6 bytes
  • Giga:  Gflop/s = 10^9 flop/sec;  Gbyte = 2^30 ~ 10^9 bytes
  • Tera:  Tflop/s = 10^12 flop/sec; Tbyte = 2^40 ~ 10^12 bytes
  • Peta:  Pflop/s = 10^15 flop/sec; Pbyte = 2^50 ~ 10^15 bytes
  • Exa:   Eflop/s = 10^18 flop/sec; Ebyte = 2^60 ~ 10^18 bytes
  • Zetta: Zflop/s = 10^21 flop/sec; Zbyte = 2^70 ~ 10^21 bytes
  • Yotta: Yflop/s = 10^24 flop/sec; Ybyte = 2^80 ~ 10^24 bytes
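
To make the decimal vs. binary distinction concrete, here is a small illustrative C sketch (not part of the original slides) that prints the decimal prefixes next to the corresponding binary powers:

/* Illustrative sketch: decimal (powers of 10) vs. binary (powers of 2)
   meaning of the size prefixes listed above. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const char *prefix[] = { "Mega", "Giga", "Tera", "Peta" };
    for (int i = 0; i < 4; ++i) {
        double decimal = pow(10.0, 6 + 3 * i);   /* 10^6, 10^9, 10^12, 10^15 */
        double binary  = pow(2.0, 20 + 10 * i);  /* 2^20, 2^30, 2^40,  2^50  */
        printf("%-5s %.0f bytes (decimal)  vs.  %.0f bytes (binary)\n",
               prefix[i], decimal, binary);
    }
    return 0;
}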

9
Simulation: The Third Pillar of Science
  • Traditional scientific and engineering paradigm
  • Do theory or paper design.
  • Perform experiments or build system.
  • Limitations
  • Too difficult -- build large wind tunnels.
  • Too expensive -- build a throw-away passenger
    jet.
  • Too slow -- wait for climate or galactic
    evolution.
  • Too dangerous -- weapons, drug design, climate
    experimentation.
  • Computational science paradigm
  • Use high performance computer systems to simulate
    the phenomenon
  • Based on known physical laws and efficient
    numerical methods.

10
Some Particularly Challenging Computations
  • Science
  • Global climate modeling
  • Biology: genomics, protein folding, drug design
  • Astrophysical modeling
  • Computational Chemistry
  • Computational Material Sciences and Nanosciences
  • Engineering
  • Semiconductor design
  • Earthquake and structural modeling
  • Computational fluid dynamics (airplane design)
  • Combustion (engine design)
  • Crash simulation
  • Business
  • Financial and economic modeling
  • Transaction processing, web services and search
    engines
  • Defense
  • Nuclear weapons -- test by simulations
  • Cryptography

11
Economic Impact of HPC
  • Airlines
  • System-wide logistics optimization systems on
    parallel systems.
  • Savings: approx. $100 million per airline per
    year.
  • Automotive design
  • Major automotive companies use large systems
    (500 CPUs) for
  • CAD-CAM, crash testing, structural integrity and
    aerodynamics.
  • One company has a 500-CPU parallel system.
  • Savings: approx. $1 billion per company per year.
  • Semiconductor industry
  • Semiconductor firms use large systems (500 CPUs)
    for
  • device electronics simulation and logic
    validation
  • Savings: approx. $1 billion per company per year.
  • Securities industry
  • Savings: approx. $15 billion per year for U.S.
    home mortgages.

12
$5B World Market in Technical Computing
Source: IDC 2004, from the NRC Future of
Supercomputing Report
13
Global Climate Modeling Problem
  • Problem is to compute
  • f(latitude, longitude, elevation, time) ->
    temperature, pressure, humidity, wind velocity
  • Approach
  • Discretize the domain, e.g., a measurement point
    every 10 km
  • Devise an algorithm to predict weather at time
    t+dt given t
  • Uses
  • Predict major events, e.g., El Niño
  • Use in setting air emissions standards

Source: http://www.epm.ornl.gov/chammp/chammp.html
14
Global Climate Modeling Computation
  • One piece is modeling the fluid flow in the
    atmosphere
  • Solve Navier-Stokes equations
  • Roughly 100 Flops per grid point with a 1-minute
    timestep
  • Computational requirements (checked in the sketch below)
  • To match real time, need 5 x 10^11 flops in 60
    seconds = 8 Gflop/s
  • Weather prediction (7 days in 24 hours) -> 56
    Gflop/s
  • Climate prediction (50 years in 30 days) -> 4.8
    Tflop/s
  • To use in policy negotiations (50 years in 12
    hours) -> 288 Tflop/s
  • To double the grid resolution, computation grows 8x
    to 16x
  • State-of-the-art models require integration of
    atmosphere, ocean, sea-ice and land models, plus
    possibly carbon cycle, geochemistry and more
  • Current models are coarser than this
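
As a rough check of the throughput figures above, here is a hedged back-of-the-envelope C sketch (not a climate code and not from the original slides); its only input is the slide's 5 x 10^11 flops per simulated minute, so the printed rates come out slightly above the slide's rounded values:

/* Back-of-the-envelope sketch: required sustained flop rate =
   (flops per simulated minute) * (simulated minutes per wall-clock second).
   The 5e11 figure is taken from the slide; everything else is assumption. */
#include <stdio.h>

int main(void)
{
    const double flops_per_sim_minute = 5e11;

    struct { const char *name; double sim_days; double wall_days; } cases[] = {
        { "real time (1 day in 1 day)",     1.0,         1.0  },
        { "weather (7 days in 24 hours)",   7.0,         1.0  },
        { "climate (50 years in 30 days)",  50 * 365.25, 30.0 },
        { "policy (50 years in 12 hours)",  50 * 365.25, 0.5  },
    };

    for (int i = 0; i < 4; ++i) {
        double sim_minutes  = cases[i].sim_days  * 24 * 60;
        double wall_seconds = cases[i].wall_days * 24 * 3600;
        double rate = flops_per_sim_minute * sim_minutes / wall_seconds;
        printf("%-32s %10.1f Gflop/s\n", cases[i].name, rate / 1e9);
    }
    return 0;
}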

15
High Resolution Climate Modeling on NERSC-3,
P. Duffy, et al., LLNL
16
Climate Modeling on the Earth Simulator System
  • Development of the ES started in 1997 in order to
    gain a comprehensive understanding of global
    environmental changes such as global warming.
  • Its construction was completed at the end of
    February 2002, and practical operation started
    on March 1, 2002
  • 35.86 Tflop/s (87.5% of the peak performance) was
    achieved in the Linpack benchmark.
  • 26.58 Tflop/s was obtained by a global atmospheric
    circulation code.

17
Astrophysics: Binary Black Hole Dynamics
  • Massive supernova cores collapse to black holes.
  • At the black hole's center, spacetime breaks down.
  • Critical test of theories of gravity: from General
    Relativity to Quantum Gravity.
  • Indirect observation: most galaxies have a black
    hole at their center.
  • Gravity waves show the black hole directly, including
    detailed parameters.
  • Binary black holes are the most powerful sources of
    gravity waves.
  • Simulation is extraordinarily complex: the evolution
    disrupts space-time itself!

18
Heart Simulation
  • Problem is to compute blood flow in the heart
  • Approach
  • Modeled as an elastic structure in an
    incompressible fluid.
  • The immersed boundary method due to Peskin and
    McQueen.
  • 20 years of development in model
  • Many applications other than the heart: blood
    clotting, inner ear, paper making, embryo growth,
    and others
  • Use a regularly spaced mesh (set of points) for
    evaluating the fluid
  • Uses
  • Current model can be used to design artificial
    heart valves
  • Can help in understanding effects of disease (leaky
    valves)
  • Related projects look at the behavior of the
    heart during a heart attack
  • Ultimately real-time clinical work

19
Heart Simulation Calculation
  • This involves solving the Navier-Stokes equations
  • A 64^3 grid was possible on a Cray YMP, but 128^3 is
    required for an accurate model (would have taken 3 years).
  • Done on a Cray C90 -- 100x faster and 100x more
    memory
  • Until recently, limited to vector machines
  • Needs more features
  • Electrical model of the heart, and details of
    muscles, e.g.,
  • Chris Johnson
  • Andrew McCulloch
  • Lungs, circulatory systems

20
Parallel Computing in Data Analysis
  • Finding information amidst large quantities of
    data
  • General themes of sifting through large,
    unstructured data sets
  • Has there been an outbreak of some medical
    condition in a community?
  • Which doctors are most likely involved in
    fraudulent charging to Medicare?
  • When should white socks go on sale?
  • What advertisements should be sent to you?
  • Data collected and stored at enormous speeds
    (Gbyte/hour)
  • remote sensor on a satellite
  • telescope scanning the skies
  • microarrays generating gene expression data
  • scientific simulations generating terabytes of
    data
  • NSA analysis of telecommunications

21
Why powerful computers are parallel
22
Tunnel Vision by Experts
  • "I think there is a world market for maybe five
    computers."
  • Thomas Watson, chairman of IBM, 1943.
  • "There is no reason for any individual to have a
    computer in their home."
  • Ken Olson, president and founder of Digital
    Equipment Corporation, 1977.
  • "640K of memory ought to be enough for
    anybody."
  • Bill Gates, chairman of Microsoft, 1981.

Slide source: Warfield et al.
23
Technology Trends: Microprocessor Capacity
Moore's Law
2X transistors/chip every 1.5 years: called
"Moore's Law"
Gordon Moore (co-founder of Intel) predicted in
1965 that the transistor density of semiconductor
chips would double roughly every 18 months.
Microprocessors have become smaller, denser, and
more powerful.
Slide source: Jack Dongarra
24
Impact of Device Shrinkage
  • What happens when the feature size (transistor
    size) shrinks by a factor of x? (See the sketch below.)
  • Clock rate goes up by x because wires are shorter
  • actually less than x, because of power
    consumption
  • Transistors per unit area go up by x^2
  • Die size also tends to increase
  • typically by another factor of x
  • Raw computing power of the chip goes up by x^4 !
  • of which x^3 is devoted either to parallelism or
    locality
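
A minimal C sketch (an illustration added here, not from the slides) that just multiplies out the scaling factors named above for an example shrink factor x = 2:

/* Scaling argument from the slide: clock ~ x, transistor density ~ x^2,
   die size ~ x, so raw chip performance ~ x * x^2 * x = x^4. */
#include <stdio.h>

int main(void)
{
    double x = 2.0;                         /* example shrink factor         */
    double clock      = x;                  /* shorter wires -> faster clock */
    double density    = x * x;              /* transistors per unit area     */
    double die_growth = x;                  /* die size also tends to grow   */
    double raw_power  = clock * density * die_growth;   /* = x^4 = 16        */
    printf("x = %.1f: clock x%.1f, density x%.1f, die x%.1f, raw power x%.1f\n",
           x, clock, density, die_growth, raw_power);
    return 0;
}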

25
Microprocessor Transistors per Chip
  • Growth in transistors per chip
  • Increase in clock rate

26
But there are limiting forces: increased cost and
difficulty of manufacturing
  • Moore's 2nd law (Rock's law)

Demo of 0.06 micron CMOS
27
More Limits: How fast can a serial computer be?
1 Tflop/s, 1 Tbyte sequential machine
r = 0.3 mm
  • Consider the 1 Tflop/s sequential machine
  • Data must travel some distance, r, to get from
    memory to CPU.
  • To get 1 data element per cycle, this means 10^12
    times per second at the speed of light, c = 3x10^8
    m/s. Thus r < c/10^12 = 0.3 mm. (See the sketch below.)
  • Now put 1 Tbyte of storage in a 0.3 mm x 0.3 mm
    area
  • Each word occupies about 3 square Angstroms
    (10^-20 m^2), or the size of a small atom.
  • No choice but parallelism
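
A short C check of the speed-of-light bound quoted above (an added illustration, not part of the slides):

/* To fetch one operand per cycle at 10^12 cycles/s, data can be at most
   r = c / 10^12 away from the CPU. */
#include <stdio.h>

int main(void)
{
    double c = 3.0e8;                  /* speed of light in m/s       */
    double cycles_per_second = 1.0e12; /* 1 Tflop/s, one fetch/cycle  */
    double r = c / cycles_per_second;  /* maximum distance in meters  */
    printf("r = %.4f mm\n", r * 1000.0);   /* prints r = 0.3000 mm    */
    return 0;
}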

28
Performance on the Linpack Benchmark
(Chart: Gflops over time; source: www.top500.org)
Nov 2004: IBM Blue Gene/L, 70.7 Tflop/s Rmax
29
Why writing (fast) parallel programs is
hard? And why we limit ourselves to sequential
programming in this course ;-)
30
Principles of Parallel Computing
  • Finding enough parallelism (Amdahl's Law)
  • Granularity
  • Locality
  • Load balance
  • Coordination and synchronization
  • Performance modeling

All of these things make parallel programming
even harder than sequential programming.
31
Automatic Parallelism in Modern Machines
  • Bit level parallelism
  • within floating point operations, etc.
  • Instruction level parallelism (ILP)
  • multiple instructions execute per clock cycle
  • Memory system parallelism
  • overlap of memory operations with computation
  • OS parallelism
  • multiple jobs run in parallel on commodity SMPs

Limits to all of these -- for very high
performance, we need the user to identify, schedule and
coordinate parallel tasks
32
Finding Enough Parallelism
  • Suppose only part of an application seems
    parallel
  • Amdahl's law:
  • let s be the fraction of work done sequentially,
    so (1-s) is the fraction parallelizable
  • P = number of processors

Speedup(P) = Time(1)/Time(P)
           <= 1/(s + (1-s)/P) <= 1/s
  • Even if the parallel part speeds up perfectly, the
    overall speedup may be limited by the sequential part
    (tabulated in the sketch below)
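
To make the bound concrete, here is a minimal C sketch (an added illustration, not from the slides) that tabulates the Amdahl bound 1/(s + (1-s)/P) for a few sequential fractions s and processor counts P:

/* Tabulate the Amdahl speedup bound 1/(s + (1-s)/P) and its limit 1/s. */
#include <stdio.h>

int main(void)
{
    double s_values[] = { 0.01, 0.05, 0.10, 0.40 }; /* sequential fractions */
    int    p_values[] = { 10, 100, 1000, 1000000 }; /* processor counts     */

    printf("%6s %9s %10s %10s\n", "s", "P", "speedup", "bound 1/s");
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            double s = s_values[i];
            int    P = p_values[j];
            double speedup = 1.0 / (s + (1.0 - s) / P);
            printf("%6.2f %9d %10.2f %10.1f\n", s, P, speedup, 1.0 / s);
        }
    return 0;
}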

33
Amdahl's Law and how it really was
  • It is true that Amdahl pointed out this
    bottleneck
  • In 1967
  • He was director of the Advanced Computing Systems Lab
    at IBM
  • His observation was that programs spend 40% of
    their time on OS tasks
  • Operating systems are hard to parallelize
  • There are several projects trying, anyway
  • Our observation today is that most parallel programs
    are dominated by computation
  • OS tasks are usually less important in these
    applications
  • Hard-to-parallelize problems are often avoided
  • Only crazy people would, for instance, parallelize
    graph algorithms
  • File input and output can be a serious bottleneck
  • Parallel I/O helps a lot with this
  • Data-intensive computing emphasizes these
    issues

34
Overhead of Parallelism
  • Given enough parallel work, this is the biggest
    barrier to getting desired speedup
  • Parallelism overheads include
  • cost of starting a thread or process
  • cost of communicating shared data
  • cost of synchronizing
  • extra (redundant) computation
  • Each of these can be in the range of milliseconds
    (millions of flops) on some systems
  • Tradeoff: the algorithm needs sufficiently large
    units of work to run fast in parallel (i.e., large
    granularity), but not so large that there is not
    enough parallel work

35
Locality and Parallelism
Conventional Storage Hierarchy
(Diagram: three processors, each with its own cache, L2 cache and L3
cache, connected through potential interconnects to their memories.)
  • Large memories are slow, fast memories are small
  • Storage hierarchies are large and fast on average
  • Parallel processors, collectively, have large, fast
    caches and memories
  • the slow accesses to "remote" data are what we call
    "communication"
  • Algorithm should do most work on local data

36
Processor-DRAM Gap (latency)
(Chart: processor vs. DRAM performance, 1980-2000, log scale. CPU
performance grows ~60%/yr ("Moore's Law"), DRAM only ~7%/yr, so the
processor-memory performance gap grows ~50%/yr.)
37
Load Imbalance
  • Load imbalance is the time that some processors
    in the system are idle due to
  • insufficient parallelism (during that phase)
  • unequal size tasks
  • Examples of the latter
  • adapting to interesting parts of a domain
  • tree-structured computations
  • fundamentally unstructured problems
  • Algorithm needs to balance load

38
Measuring Performance
39
Improving Real Performance
  • Peak performance grows exponentially, à la
    Moore's Law
  • In the 1990s, peak performance increased 100x; in the
    2000s, it will increase 1000x
  • But efficiency (the performance relative to the
    hardware peak) has declined
  • was 40-50% on the vector supercomputers of the 1990s
  • now as little as 5-10% on the parallel supercomputers
    of today
  • Close the gap through ...
  • Mathematical methods and algorithms that achieve
    high performance on a single processor and scale
    to thousands of processors
  • More efficient programming models and tools for
    massively parallel supercomputers

(Chart: peak vs. real performance in Teraflops, 1996-2004, showing a
growing performance gap.)
40
Performance Levels
  • Peak advertised performance (PAP)
  • You can't possibly compute faster than this speed
  • Speed that computer companies guarantee not to
    exceed
  • Speed of the computer without programs and data
  • LINPACK
  • The "hello world" program for parallel computing
  • Solve Ax=b using Gaussian elimination, highly
    tuned
  • Gordon Bell Prize winning applications
    performance
  • The right application/algorithm/platform
    combination plus years of work
  • Average sustained applications performance
  • What one can reasonably expect for standard
    applications
  • When reporting performance results, these levels
    are often confused, even in reviewed publications

41
Performance on the Linpack Benchmark
(Chart: Gflops over time; source: www.top500.org)
Nov 2004: IBM Blue Gene/L, 70.7 Tflop/s Rmax
42
Performance Levels (for example, on NERSC-3)
  • Peak advertised performance (PAP): 5 Tflop/s
  • LINPACK (TPP): 3.05 Tflop/s
  • Gordon Bell Prize winning applications
    performance: 2.46 Tflop/s
  • Materials Science application at SC01
  • Average sustained applications performance: 0.4
    Tflop/s
  • Less than 10% of peak!

43
What Characterizes a Good SC Application?
44
What do fast SC applications require?
  • An efficient algorithm
  • A well-performing implementation
  • A scalable parallelization (if parallel computers
    are used)
  • Which is the most important?

45
Example: Computing π by integration
  • Consider the upper right quarter of a unit circle
  • which has area π/4
  • y is given by trigonometry: y(x) = (1-x^2)^(1/2)
  • π = ∫_0^1 4 (1-x^2)^(1/2) dx

(Figure: the quarter circle y(x) plotted over x.)
46
Example: Computing π by integration
  • Another way
  • arctan(1) = π/4, arctan(0) = 0
  • arctan'(x) = 1 / (1+x^2)
  • Thus π = 4 (arctan(1) - arctan(0)) = ∫_0^1 4 /
    (1+x^2) dx
  • This is a very popular introductory example for
    parallel computing
  • The intervals in the integral can be computed
    separately
  • The only communication is to sum the partial results
  • It scales to any number of processors without
    much loss
  • E.g., for 1 million processors it would take
    1/millionth of the time
  • Let's program both

47
Source program for π by integration
#include <iostream>
#include <math.h>
// Adapted from MPI tutorial
int main(int argc, char* argv[])
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double pi, pi2, h, sum, sum2, x;
    while (1) {
        if (myid == 0) {
            printf("Enter the number of intervals (0 quits) ");
            scanf("%d", &n);
        }
        if (n == 0)
            break;
        else
48
Results
  • > pi
  • Enter the number of intervals (0 quits) 10
  • pi with arctan' is approximately
    3.1424259850010987, Error is 0.0008333314113056
  • pi with circle is approximately
    3.1524114332616446, Error is 0.0108187796718515
  • Enter the number of intervals (0 quits) 100
  • pi with arctan' is approximately
    3.1416009869231254, Error is 0.0000083333333323
  • pi with circle is approximately
    3.1419368579000082, Error is 0.0003442043102151
  • Enter the number of intervals (0 quits) 200
  • pi with arctan' is approximately
    3.1415947369231252, Error is 0.0000020833333321
  • pi with circle is approximately
    3.1417143893448611, Error is 0.0001217357550680
  • Enter the number of intervals (0 quits) 1000
  • pi with arctan' is approximately
    3.1415927369231227, Error is 0.0000000833333296
  • pi with circle is approximately
    3.1416035449129063, Error is 0.0000108913231132
  • Enter the number of intervals (0 quits) 3000
  • pi with arctan' is approximately
    3.1415926628490589, Error is 0.0000000092592658
  • pi with circle is approximately
    3.1415947497204164, Error is 0.0000020961306233

49
Analysis of the Results
  • The arctan integration adds 2 correct digits when
    the number of intervals is increased by a factor of
    10
  • The circle integration is even worse
  • Both converge logarithmically
  • That means the number of correct digits grows only
    with the logarithm of the compute effort
  • Conversely, we need 10 times more time to gain 2
    more digits
  • Or 10 times more processors
  • Because it parallelizes perfectly
  • Ergo: the example is okay as a parallel
    programming exercise but not for serious research

50
Funny C program
  • ?d,emain(b)int a1e4,ca,fafor(b--d/b2-1)
    b?ddb(e?fb2)a,fbd(b2-1)
  • printf(".4d",ed/a,eda,bc-20)
  • Simple program of 121 characters, from Darren
    Smith
  • Prints 1993 digits of π accurately
  • Almost standard-conforming
  • Doesn't really compute π but produces the digit
    sequence in a tricky way
  • How long would the integration program need for
    this?
  • Even with the largest computer in the world?
  • Almost 10^1000 iterations
  • The universe is only 13.7 billion years = 4.32 x 10^17 s
    old (+/- 6 x 10^15 s)
  • The number of atoms is estimated to be between 10^78 and
    10^81

51
Quantitative Guesses on Compute Time
  • A fast implementation can accelerate a program by up to
    1 order of magnitude
  • Occasionally even more
  • Some techniques are very hardware-dependent
  • Other techniques enlarge the programs
    dramatically (esp. with old languages)
  • Parallelization can accelerate by up to 4-5 orders of
    magnitude
  • If one can afford such large systems
  • It also requires more programming effort
  • Algorithmic modifications can even change the
    complexity
  • Thus, the ratio can be arbitrary
  • The example of the π computation was an extreme case
  • Occasionally slightly slower algorithms are
    useful if their better implementability
    compensates for the extra operations
  • In general, fast algorithms are more important
    than efficient programs

52
How NOT to do Scientific Computing
  • Only looking at algorithms
  • Don't bother whether they are implementable on real
    computers
  • Only looking at performance
  • No matter what you compute, as long as you get
    enough operations per second
  • For instance, using dense matrices instead of
    sparse ones can be much faster
  • But if 95% or >99% of the multiplied entries are
    zeros, it is still a waste of time (and memory)
  • It really happens, and people impress with their
    performance (not everybody)
  • Looking only at parallel speed-up
  • Sometimes slow algorithms or implementations are
    used because they have better parallelism
  • Low single-processor performance improves speed-up

53
Summing up compute time
  • We search for the best combination of algorithm,
    performance and parallelism
  • This implies compromises on some of these properties
  • Realistic development costs can imply further
    compromises
  • Accuracy of results may exclude some techniques
  • Even if they are so nicely fast