Title: What Programming Paradigms and Algorithms for Petascale Scientific Computing? A Tentative Hierarchical Programming Methodology
Slide 1: What Programming Paradigms and Algorithms for Petascale Scientific Computing? A Tentative Hierarchical Programming Methodology
Serge G. Petiton, June 23rd, 2008
Japan-French Informatics Laboratory (JFIL)
Slide 2: Outline
- Introduction
- Present Petaflops, on the Road to Future Exaflops
- Experimentations, toward models and extrapolations
- Conclusion
Slide 4: Introduction
- The Petaflop frontier was crossed (during the night of May 25-26, per the Top500)
- Sustained Petaflop performance will soon be available on a large number of computers
- As scheduled since the 90s, we did not really face large technological gaps to reach Petaflop computers
- Languages and tools are not so different from those of the first SMPs
- What about languages, tools, and methods for a sustained 10 Petaflops?
- Exaflops will probably require new technological advances and new ecosystems
- On the road toward Exaflops, we will soon face difficult challenges, and we have to anticipate new problems around the 10 Petaflop frontier.
Slide 5: Outline
- Introduction
- Present Petaflops, on the Road to Future Exaflops
- Experimentations, toward models and extrapolations
- Conclusion
Slide 6: Hyper Large Scale Hierarchical Distributed Parallel Architectures
- Many-cores call for new programming paradigms, such as data parallelism,
- Message passing would be efficient for gangs of clusters,
- Workflow and Grid-like programming may be a solution for the highest-level programming (see the sketch after this list),
- Accelerators, vector computing,
- Energy consumption optimization,
- Optical networks,
- Inter- and intra- (chip, cluster, gang, ...) communications,
- Distributed/shared-memory computers on a chip.
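The three programming levels named above can coexist in one program. Below is a minimal, illustrative Python sketch (an assumption of this transcript, not code from the talk): a workflow level that schedules independent components, a message-passing level between processes, and a data-parallel level inside each process using vectorized NumPy.

    import numpy as np
    from multiprocessing import Pipe, Process
    from concurrent.futures import ThreadPoolExecutor

    def node_kernel(x):
        # Data-parallel level: one vectorized operation on a whole array.
        return np.sqrt(x * x + 1.0)

    def gang_worker(conn):
        # Message-passing level: explicit receive and send, as in MPI.
        x = conn.recv()
        conn.send(node_kernel(x))
        conn.close()

    def workflow_task(x):
        # One workflow component: spawn a worker process and exchange messages.
        parent, child = Pipe()
        p = Process(target=gang_worker, args=(child,))
        p.start()
        parent.send(x)
        y = parent.recv()
        p.join()
        return y

    if __name__ == "__main__":
        data = [np.arange(4, dtype=float) + i for i in range(3)]
        # Workflow level: independent components scheduled concurrently.
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(workflow_task, data))
        print(results)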
Slide 7: On the Road from Petaflop toward Exaflop
- Multiple programming and execution paradigms,
- Technological and software challenges: compilers, systems, middleware, schedulers, fault tolerance,
- New applications and numerical methods,
- Arithmetic and elementary functions (multiple and hybrid precision),
- Data distributed on networks and grids,
- Education challenges: we have to educate scientists.
Slide 8: ...and the road will be difficult.
- Multi-level programming paradigms,
- Component technologies,
- Mixed data migration and computing, with large instrument control,
- We have to use end-users' expertise,
- Non-deterministic distributed computing, component dependence graphs,
- Middleware and platform independence,
- Time-to-solution minimization, new metrics,
- We have to allow end-users to propose scheduler assistance and to give advice that anticipates data migration.
Slide 9: Outline
- Introduction
- Present Petaflops, on the Road to Future Exaflops
- Experimentations, toward models and extrapolations
- Conclusion
Slide 10: The YML Language
The front end depends only on the application.
The back end depends on the middleware, e.g. XtremWeb (France), OmniRPC (Japan), and Condor (USA).
http://yml.prism.uvsq.fr/
Slide 11: Components/Tasks Dependency Graph

    par
      compute tache1(..) signal(e1)
    //
      compute tache2(..) migrate matrix(..) signal(e2)
    //
      wait(e1 and e2)
      par
        compute tache3(..) signal(e3)
      //
        compute tache4(..) signal(e4)
      //
        compute tache5(..) control robot(..) signal(e5) visualize mesh()
      end par
    //
      wait(e3 and e4 and e5)
      compute tache6(..)
      compute tache7(..)
    end par
[Figure: the corresponding dependency graph, with begin and end nodes, generic component nodes, graph nodes, and dependence edges; the result is A.]
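A minimal Python sketch of the same signal/wait semantics (an illustration for this transcript, not YML itself) uses one thread per component and threading events for e1 to e5; the names tache1..tache7 mirror the components above.

    import threading

    events = {k: threading.Event() for k in ("e1", "e2", "e3", "e4", "e5")}

    def component(name, waits=(), signals=()):
        def run():
            for e in waits:
                events[e].wait()          # wait(e...) before starting
            print("compute", name)        # stands for compute tacheN(..)
            for e in signals:
                events[e].set()           # signal(e...)
        return threading.Thread(target=run)

    threads = [
        component("tache1", signals=("e1",)),
        component("tache2", signals=("e2",)),   # this branch also migrates the matrix
        # the nested par: all three wait on e1 and e2, as in the branch above
        component("tache3", waits=("e1", "e2"), signals=("e3",)),
        component("tache4", waits=("e1", "e2"), signals=("e4",)),
        component("tache5", waits=("e1", "e2"), signals=("e5",)),  # robot control, mesh view
        component("tache6", waits=("e3", "e4", "e5")),
        component("tache7", waits=("e3", "e4", "e5")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()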
Slide 12: The LAKe Library (Nahid Emad, UVSQ)
Slide 13: YML/LAKe
Slide 14: Block Gauss-Jordan on a 101-processor cluster, Grid 5000; YML versus YML/OmniRPC (with Maxime Hugues (TOTAL and LIFL))
- We optimize the time to solution; several middlewares may be chosen.
[Plot: time versus number of blocks, block size 1500.]
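As a reference for what each YML task computes, here is a minimal NumPy sketch of block Gauss-Jordan inversion (an illustrative reconstruction, not the benchmarked code). It assumes invertible pivot blocks and does no pivoting; in YML, each block operation would become one task of the dependency graph.

    import numpy as np

    def block_gauss_jordan_inverse(A, b):
        # Invert A by Gauss-Jordan elimination on b-by-b blocks,
        # operating on the augmented system [A | I].
        n = A.shape[0]
        p = n // b
        M = np.hstack([A.copy(), np.eye(n)])
        for k in range(p):
            rk = slice(k * b, (k + 1) * b)
            piv_inv = np.linalg.inv(M[rk, rk])     # invert the pivot block
            M[rk, :] = piv_inv @ M[rk, :]          # normalize the pivot row-block
            for i in range(p):
                if i != k:                         # eliminate every other row-block
                    ri = slice(i * b, (i + 1) * b)
                    M[ri, :] -= M[ri, rk] @ M[rk, :]
        return M[:, n:]

    rng = np.random.default_rng(0)
    n, b = 600, 100                                  # 6 x 6 blocks of size 100
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant test matrix
    Ainv = block_gauss_jordan_inverse(A, b)
    print(np.linalg.norm(A @ Ainv - np.eye(n)))      # should be tiny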
Slide 15: Grid 5000, BGJ, 10, 101 nodes, YML versus YML/OmniRPC. Block size: 1500.
Slide 16: BGJ, YML/OmniRPC versus YML. Block size: 1500.
Slide 17: Asynchronous Restarted Iterative Methods on Multi-Node Computers. With Guy Bergère, Zifan Li, and Ye Zhang (LIFL).
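A minimal sketch of the asynchronous restarted pattern (an illustration under stated assumptions, not the Grid 5000 code): two restarted solvers run concurrently on the same matrix and, at each restart, check a non-blocking mailbox for the peer's latest iterate, adopting it when its residual is smaller. Here the inner method is plain power iteration; in the experiments it would be a restarted Krylov method.

    import queue
    import threading
    import numpy as np

    def residual(A, v):
        lam = v @ A @ v                      # Rayleigh quotient (v has unit norm)
        return np.linalg.norm(A @ v - lam * v)

    def restarted_solver(A, v, inbox, outbox, cycles=50, m=20):
        for _ in range(cycles):
            for _ in range(m):               # one restart cycle of the inner method
                v = A @ v
                v /= np.linalg.norm(v)
            try:                             # asynchronous exchange: never block
                w = inbox.get_nowait()
                if residual(A, w) < residual(A, v):
                    v = w                    # restart from the peer's better iterate
            except queue.Empty:
                pass
            if outbox.empty():
                outbox.put(v.copy())
        return v

    rng = np.random.default_rng(1)
    n = 200
    A = np.abs(rng.standard_normal((n, n)))
    A = A + A.T                              # nonnegative symmetric: well-separated dominant eigenvalue
    q1, q2 = queue.Queue(), queue.Queue()
    v0s = [rng.standard_normal(n) for _ in range(2)]
    results = [None, None]

    def run(i, inbox, outbox):
        v = v0s[i] / np.linalg.norm(v0s[i])
        results[i] = restarted_solver(A, v, inbox, outbox)

    t1 = threading.Thread(target=run, args=(0, q1, q2))
    t2 = threading.Thread(target=run, args=(1, q2, q1))
    t1.start(); t2.start(); t1.join(); t2.join()
    print("residuals:", residual(A, results[0]), residual(A, results[1]))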
Slide 18: Convergence on Grid 5000. [Plot: residual norm versus time (seconds).]
Slide 19: One or two distributed sites, same number of processors, communication overlay. [Plot: one site versus two sites.]
Slide 20: Cell/GPU, CEA/DEN, with Christophe Calvin and Jérôme Dubois (CEA/DEN Saclay)
- MINOS/APOLLO3 solver
- Neutronic transport problem
- Power method to compute the dominant eigenvalue (a sketch follows this list)
  - Slow convergence
  - Large number of floating-point operations
- Experimentations on:
  - Bi-Xeon quad-core 2.83 GHz (45 GFlops)
  - Cell blade (CINES, Montpellier) (400 GFlops)
  - GPU Quadro FX 4600 (240 GFlops)
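A minimal NumPy sketch of the power method used here (an illustration, not the MINOS/APOLLO3 solver), run in double and single precision to mimic the CPU versus Cell/GPU accuracy comparison of the next slides.

    import numpy as np

    def power_method(A, tol, maxit=10000):
        v = np.ones(A.shape[0], dtype=A.dtype)
        v /= np.linalg.norm(v)
        lam = 0.0
        for _ in range(maxit):
            w = A @ v
            lam_new = np.linalg.norm(w)      # dominant eigenvalue estimate
            v = w / lam_new
            if abs(lam_new - lam) <= tol * abs(lam_new):
                break
            lam = lam_new
        return lam_new

    rng = np.random.default_rng(2)
    A = np.abs(rng.standard_normal((1000, 1000)))   # nonnegative: the dominant eigenvalue is simple
    lam64 = power_method(A, tol=1e-12)
    lam32 = power_method(A.astype(np.float32), tol=1e-6)
    print(lam64, lam32, "difference:", abs(lam64 - float(lam32)))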
Slide 21: Power Method Performance. [Plot: performance versus matrix size.]
Slide 22: Power Method Arithmetic Accuracy. [Plot: difference between the computed eigenvalues.]
Slide 23: Arnoldi Projection Performance. [Plot: performance versus matrix size.]
Slide 24: Arnoldi Projection Arithmetic Accuracy. [Plot: orthogonalization degradation.]
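A minimal sketch of the Arnoldi projection with modified Gram-Schmidt (an illustration, not the CEA code), measuring the orthogonalization degradation of slide 24 as the loss of orthogonality ||I - V^T V|| of the Krylov basis in double and single precision.

    import numpy as np

    def arnoldi(A, v0, m):
        # Build an orthonormal Krylov basis V and the Hessenberg matrix H
        # such that A @ V[:, :m] = V @ H (in exact arithmetic).
        n = A.shape[0]
        V = np.zeros((n, m + 1), dtype=A.dtype)
        H = np.zeros((m + 1, m), dtype=A.dtype)
        V[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    rng = np.random.default_rng(3)
    A = rng.standard_normal((500, 500))
    for dtype in (np.float64, np.float32):
        V, _ = arnoldi(A.astype(dtype), np.ones(500, dtype=dtype), m=30)
        loss = np.linalg.norm(np.eye(31) - V.T @ V)
        print(dtype.__name__, "orthogonality loss:", loss)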
Slide 25: Outline
- Introduction
- Present Petaflops, on the Road to Future Exaflops
- Experimentations, toward models and extrapolations
- Conclusion
Slide 26: Conclusion
- We plan to extrapolate, from Grid 5000 and our multi-core experimentations, some behaviors of the future hierarchical large petascale computers, using YML for the highest level,
- We need to propose new high-level languages to program large Petaflop computers, able to minimize time to solution and energy consumption, with system and middleware independence,
- Other important codes will still be carefully hand-optimized; MPI will probably be very difficult to dethrone,
- Several programming paradigms, with respect to the different levels, have to be mixed, and the interfaces have to be well specified,
- End-users have to be able to contribute expertise to help middleware management, such as scheduling, and to choose libraries,
- New asynchronous hybrid methods have to be introduced.