Using the Common Component Architecture to design parallel scientific codes - PowerPoint PPT Presentation


1
Using the Common Component Architecture to
design parallel scientific codes
  • Jaideep Ray
  • Sandia National Labs, Livermore.

2
Need for components in scientific computing
  • Typically, monolithic codes, hand-tooled by a
    few people
  • Typically, no continuity or programming
    discipline
  • Hence incompetence/carelessness permeates and
    rules
  • Not very extensible.
  • Need modularity for extensibility and damage
    control.

3
Rudimentary forms of components
  • Abstracting functionality and interfaces quite
    common -- libraries like MPI, LAPACK, BLAS etc.
  • Why not a physics library ? (some exist, as a
    matter of fact).
  • Why not a mesh library ? (ditto)
  • And so why not a code assembled entirely out of
    libraries ?
  • Implement libraries as C++ objects with an
    agreed-to interface

4
Rudimentary forms of components (contd)
  • Components! -- as long as these
    functionalities are peers.
  • Thus a simulation code is an assembly of
    components
  • with agreed-to interfaces
  • facilitating swapping similar components in
    and out
  • Well-defined interfaces -> inclusion of diverse
    components (viz. data-mapping, etc.)

5
Why components in the scientific arena ?
  • Components encapsulate a functionality e.g. mesh,
    time-integrator, turbulence model, linear solvers
    ...
  • Functionality can be specified with mathematical
    precision
  • In a given code, functionalities are widely
    different - no question of derived classes - thus
    peer components are a natural model.
  • Interfaces to most components are pretty
    straightforward, except
  • for getting data in and out of components
  • Components provide a means of reusing old code
    (Fortran 66!!).

6
Components: state of the art
  • CORBA, EJB
  • Objects with a certain functionality
  • Standalone, compiled separately
  • Functionality through interfaces
  • Framework
  • To instantiate and assemble a code from
    components
  • Interfaces can proxy for a component on a remote
    machine
  • Requires all components to implement certain
    standardized interfaces for introspection.

7
Requirements for scientific computing
  • Need parallelism, NOT distributed components
  • High single-CPU performance
  • No prescription/model for message passing
  • No one-size-fits-all
  • No need for RMI; SPMD is quite sufficient
  • Low latency between method calls
  • Will sacrifice considerable generality for
    performance.

8
Common Component Architecture Model
  • DOE Labs, Univ. Utah, Univ. Indiana
  • Uses/Provides model
  • Components have interfaces (Ports): ProvidesPorts
    are provided by components for others to use.
  • Components use others' functionality by calling
    methods on their UsesPorts
  • UsesPorts and ProvidesPorts need to be connected
  • Ports CAN proxy for remote components if needed
  • Components instantiated inside a framework
  • Connects Uses/Provides Ports, driven by a script.
  • Components allow for introspection by framework.
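A minimal C++ sketch of the uses/provides idea on this slide; all names here (Port, SolverPort, Framework, etc.) are invented for illustration and are simpler than the actual CCA API:

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy sketch of the CCA uses/provides pattern (hypothetical names).
struct Port { virtual ~Port() = default; };

// A ProvidesPort interface: the abstract class a provider implements.
struct SolverPort : Port {
    virtual double solve(double rhs) = 0;
};

// A component providing SolverPort.
struct LinearSolver : SolverPort {
    double solve(double rhs) override { return rhs / 2.0; }
};

// A toy "framework": connects a port name to a ProvidesPort instance
// and hands it out to components that declared a matching UsesPort.
struct Framework {
    std::map<std::string, Port*> provides;
    void addProvidesPort(const std::string& name, Port* p) {
        provides[name] = p;
    }
    Port* getPort(const std::string& name) { return provides.at(name); }
};
```

A user component would fetch the port by name and call through the abstract interface; swapping LinearSolver for another SolverPort implementation then requires no change on the caller's side.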

9
CCA model for parallel computing
  • Identical frameworks with identical components
    and connections on P processors.
  • Comp. A on proc Q can call methods on Comp. B,
    also on proc Q.
  • Comp. A's on all P procs communicate via MPI.
  • No RMI: Comp. A on proc Q DOES NOT interact with
    Comp. B on proc N.
  • No parallel computing model; the component does
    what's right.
  • 2 such frameworks: Sandia, Utah.

10
Pictorial example
11
CCAFFEINE framework
  • C++ framework; components are C++ objects
  • Just been changed to allow C/C++/F77 etc.
    components.
  • Objects/components implement functionality
    derived from abstract classes (Ports), plus 1
    method to allow introspection by the framework.
  • Components are compiled into shared object
    libraries.

12
CCAFFEINE (contd)
  • Framework driven by a script
  • Loads and instantiates components; connects Uses
    and ProvidesPorts.
  • Components register themselves and their uses and
    provides ports with the framework
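The driving script described above might look roughly like the fragment below. This is a hypothetical illustration only; the command and component names are invented and do not reproduce actual CCAFFEINE script syntax:

```text
# load component classes from shared-object libraries,
# instantiate them, then wire uses ports to provides ports
instantiate AMRMesh mesh
instantiate RKIntegrator integrator
connect integrator MeshPort mesh MeshPort
go
```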

13
A CCA code
14
Summary
  • A lightweight component model for high
    performance computing.
  • A restriction on parallel communication:
  • communication only between a cohort of
    components.
  • No RMI, no distributed computing.
  • Components with physics / chemistry / numerical
    algorithm functionalities.
  • Standardized interfaces: Ports.
  • That's the theory; does it work?

15
Proof of usefulness
  • Real scientific applications
  • Component reuse
  • So: 2 scientific apps.
  • Parallel, scalable, good single-CPU performance
  • A formalism for decomposing a big code into
  • subsystems
  • components.
  • Dirty secrets / restrictions / flexibility.

16
Guidelines regarding apps
  • Hydrodynamics
  • P.D.E
  • Spatial derivatives
  • Finite differences, finite volumes
  • Timescales
  • Length scales

17
Solution strategy
  • Timescales
  • Explicit integration of slow ones
  • Implicit integration of fast ones
  • Strang-splitting
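The strategy above (explicit integration of slow terms, implicit integration of fast terms, joined by Strang-splitting) can be sketched on a toy linear problem dy/dt = a*y + b*y, with a the slow rate and b the fast one. The model problem is invented for illustration; each substep is advanced exactly with an exponential as a stand-in for the real explicit/implicit integrators:

```cpp
#include <cassert>
#include <cmath>

// One Strang-split step for dy/dt = S(y) + F(y):
// half step of the slow term S, full step of the fast term F,
// then another half step of S. For this linear toy problem
// (S = a*y, F = b*y) each substep is advanced exactly.
double strang_step(double y, double a, double b, double dt) {
    y *= std::exp(a * dt / 2.0);  // explicit half step, slow term
    y *= std::exp(b * dt);        // "implicit" full step, fast term
    y *= std::exp(a * dt / 2.0);  // explicit half step, slow term
    return y;
}
```

Because the two linear operators commute, the split is exact for this toy problem; for real reacting-flow terms Strang-splitting is second-order accurate in dt.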

18
Solution strategy (contd)
  • Wide spectrum of length scales
  • Adaptive mesh refinement
  • Structured axis-aligned patches
  • GrACE.
  • Start with a uniform coarse mesh
  • Identify regions needing refinement, collate into
    rectangular patches
  • Impose finer mesh in patches
  • Recurse: a mesh hierarchy.
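The flag-and-collate loop described above can be sketched in 1-D. Everything here (the Patch struct, the error-versus-tolerance test) is a hypothetical simplification of what a structured-AMR library such as GrACE does:

```cpp
#include <cassert>
#include <vector>

// A refined patch: cells [lo, hi) of the parent level.
struct Patch { int lo, hi, level; };

// Flag cells whose error estimate exceeds tol and collate contiguous
// flagged cells into (here: 1-D interval) patches.
void refine(const std::vector<double>& err, double tol,
            int level, int max_level, std::vector<Patch>& out) {
    if (level >= max_level) return;
    int start = -1;
    for (int i = 0; i <= static_cast<int>(err.size()); ++i) {
        bool flagged = i < static_cast<int>(err.size()) && err[i] > tol;
        if (flagged && start < 0) start = i;      // open a new patch
        if (!flagged && start >= 0) {             // close the patch
            out.push_back({start, i, level + 1});
            start = -1;
        }
    }
    // A real code would now estimate the error on each finer patch
    // and recurse, building the mesh hierarchy; omitted here.
}
```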

19
A mesh hierarchy
20
App 1. A reaction-diffusion system.
  • A coarse approx. to a flame.
  • H2-air mixture ignition via 3 hot spots
  • 9 species, 19 reactions, stiff chemistry
  • 1 cm x 1 cm domain, 100x100 coarse mesh, finest
    mesh 12.5 microns.
  • Timescales O(10 ns) to O(10 microseconds)

21
App. 1 - the code
22
So, how much is new code ?
  • The mesh: GrACE
  • Stiff integrator: CVODE, LLNL
  • Chemical rates: old Sandia F77 subroutines
  • Diff. coeffs: based on DRFM, an old Sandia F77
    library
  • The rest:
  • we coded it (me and the gang).

23
Evolution
24
Details
  • H2O2 mass fraction profiles.

25
App. 2: shock-hydrodynamics
  • Shock hydrodynamics
  • Finite volume method (Godunov)

26
Interesting features
  • Shocks and interfaces are sharp discontinuities
  • Need refinement
  • Shock deposits vorticity, a governing quantity
    for turbulence, mixing, ...
  • Insufficient refinement: underpredicted
    vorticity, slower mixing/turbulence.

27
App 2. The code
28
Evolution
29
Convergence
30
Are components slow ?
  • C++ compilers << Fortran compilers
  • Virtual pointer lookup overhead when accessing a
    derived class via a pointer to the base class
  • [I - (Δt/2) J] ΔY = H(Y^n) + G(Y^m);
    used CVODE to solve this system
  • J and G evaluation requires a call to a component
    (chemistry mockup)
  • Δt changed to make convergence harder -> more J
    and G evaluations
  • Results compared to plain C++ calling the CVODE
    library
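My reading of the (garbled) equation on this slide is a linearized implicit update of the form [I - (Δt/2) J] ΔY = rhs, with J and the rates supplied by a chemistry component reached through a virtual Port call. A scalar C++ sketch of that call pattern follows; all names and the mock reaction are invented:

```cpp
#include <cassert>
#include <cmath>

// The chemistry "component", reached through an abstract base class;
// the virtual dispatch models the Port indirection being timed.
struct ChemistryPort {
    virtual double rate(double y) const = 0;      // G(y)
    virtual double jacobian(double y) const = 0;  // dG/dy
    virtual ~ChemistryPort() = default;
};

// Mock chemistry: a single linear reaction G(y) = -10 y.
struct MockChemistry : ChemistryPort {
    double rate(double y) const override { return -10.0 * y; }
    double jacobian(double /*y*/) const override { return -10.0; }
};

// One linearized trapezoidal step: the scalar analogue of solving
// [I - (dt/2) J] dY = dt * G(y_n), where every J and G evaluation
// costs a virtual call into the component.
double implicit_step(const ChemistryPort& chem, double y, double dt) {
    double dY = dt * chem.rate(y) / (1.0 - 0.5 * dt * chem.jacobian(y));
    return y + dY;
}
```

Shrinking dt (as the slide describes) forces more solver iterations, hence more rate/Jacobian calls, which is exactly where any component-call overhead would show up.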

31
Components versus library
32
Scalability
  • Shock-hydro code
  • No refinement
  • 200 x 200 and 350 x 350 meshes
  • Cplant cluster
  • 400 MHz EV5 Alphas
  • 1 Gb/s Myrinet
  • Worst perf.: 73% scaling eff. for 200x200 on 48
    procs

33
Summary
  • Components, code
  • Very different physics/numerics by replacing
    physics components
  • Single-CPU performance not harmed by
    componentization
  • Scalability: no effect
  • Flexible, parallel, etc. etc.
  • Success story?
  • Not so fast ...

34
Pros and cons
  • Cons
  • A set of components solves a PDE subject to a
    particular numerical scheme
  • Numerics decides the main subsystems of the
    component assembly
  • Variation on the main theme is easy
  • Too large a change and you have to recreate a big
    percentage of components
  • Pros
  • Physics components appear at the bottom of the
    hierarchy
  • Changing physics models is easy.
  • Note: adding new physics, if it requires a
    brand-new numerical algorithm, is NOT trivial.
  • So what's a better design to accommodate this?

35
The usual suspects ...
  • Sophia Lefantzi (Comb. Res. Fac.)
  • Habib Najm (Comb. Res. Fac.)
  • Rob Armstrong (High Perf. Comp.)
  • Ben Allan (High Perf. Comp.)
  • Kylene Smith
  • Looking for collaborators
  • CS
  • Phys/Engg/Biophys.