Community Software Development with the Astrophysics Simulation Collaboratory



1
Community Software Development with the
Astrophysics Simulation Collaboratory
  • Authors
  • Gregor von Laszewski,
  • Michael Russell,
  • Ian Foster,
  • John Shalf,
  • Gabrielle Allen,
  • Greg Daues,
  • Jason Novotny,
  • Edward Seidel

Presenter: Javier Munoz
Agnostic: Ana Rodriguez
2
Outline
  • Introduce the ASC
  • Cactus Architecture
  • Cactus Math
  • Cactus Scaling Out
  • Gridsphere
  • Agnostic Questions

4
NSF Award Abstract 9979985
  • KDI: An Astrophysics Simulation Collaboratory:
    Enabling Large Scale Simulations in Relativistic
    Astrophysics
  • NSF Org: PHY (Division of Physics, MPS Directorate
    for Mathematical and Physical Sciences)
  • Award Number: 9979985; Award Instrument: Standard Grant
  • Program Manager: Beverly K. Berger
  • Start Date: September 15, 1999; Expires: August 31,
    2004 (estimated); Latest Amendment Date: September 22, 2003
  • Expected Total Amount: $2,200,000.00 (estimated)
  • Investigators: Wai-Mo Suen, wms@wugrav.wustl.edu
    (Principal Investigator); Ian Foster, Edward Seidel,
    Michael L. Norman, Manish Parashar (Co-Principal
    Investigators)
  • Sponsor: Washington University, One Brookings Drive,
    Saint Louis, MO 63130-4899, 314/889-5100
  • NSF Program: 8877 KDI-COMPETITION

5
Astrophysics Simulation Collaboratory (ASC)
  • Astrophysics
  • General Theory of Relativity
  • Simulation
  • Numerical solution of Partial Differential
    Equations
  • Collaboratory
  • Infrastructure to support efforts to solve large
    complex problems by geographically distributed
    participants.

6
Tired of Middleware?
  • The ASC is a complete Application
  • BUT we'll talk about middleware anyway

7
Focus

Application (70%)
Middleware (30%)
8
ASC Purpose
  • Community (VO)
  • Domain-specific components
  • Transparent access
  • Deployment services
  • Collaboration during execution
  • Steering of simulations
  • Multimedia streams

9
ASC Technologies Used
  • Cactus Framework
  • Application Server
  • Grid Tools

10
Cactus Simulations in the ASCportal (Agnostic 4)
www.ascportal.org
11
Outline
  • Introduce the ASC
  • Cactus Architecture (Agnostic 1)
  • Cactus Math
  • Cactus Scaling Out
  • Gridsphere
  • Agnostic Questions

12
Cactus
  • 1995 Original version
  • Paul Walker
  • Joan Masso
  • Edward Seidel
  • John Shalf
  • 1999 Cactus 4.0 Beta 1
  • Tom Goodale
  • Joan Masso
  • Gabrielle Allen
  • Gerd Lanfermann
  • John Shalf.

13
Why Cactus?
  • Parallelization Model
  • Easy to Grid-enable
  • Flexible
  • C and Fortran

14
Cactus
  • Modularity
  • New Equations (physics)
  • Efficient PDE solution (Numerical Analyst)
  • Improved distributed algorithm (CS)

15
Cactus
  • Building Blocks
  • Schedule
  • Driver
  • Flesh
  • Thorns
  • Arrangements
  • Toolkit

www.cactuscode.org
16
Scheduling (workflow)
  • Flesh invokes a driver to process the schedule

www.cactuscode.org
17
Driver
  • Parallelizes the execution
  • Management of grid variables
  • storage
  • distribution
  • communication
  • Distributed memory model: each processor holds a
    section of the global grid
  • Boundaries: physical (outer) or internal (between
    processor sections)
  • Each thorn is presented with a standard
    interface, independent of the driver.

18
Driver
  • PUGH (Parallel Unigrid Grid Hierarchy)
  • MPI
  • Uniform mesh spacing
  • Non-adaptive
  • Automatic grid decomposition (see the sketch below)
  • Manual decomposition
  • Number of processors in each direction
  • Number of grid points on each processor
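As a rough illustration of automatic decomposition along a single direction (a sketch only, not PUGH's actual code), the global grid points can be split as evenly as possible among the processors:

    /* Illustrative sketch, not PUGH's implementation: divide global_n grid
       points as evenly as possible among nprocs processors along one
       direction; ghost zones are not counted here. */
    int local_points(int global_n, int nprocs, int rank)
    {
        int base = global_n / nprocs;  /* every processor gets at least this many */
        int rem  = global_n % nprocs;  /* the first rem processors get one extra point */
        return base + (rank < rem ? 1 : 0);
    }

With manual decomposition, the user instead specifies the processor counts per direction and the grid points per processor in the parameter file.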

19
Flesh
  • In general, thorns overload or register their
    capabilities with the Flesh, agreeing to provide
    a function with the correct interface

20
Thorn Anatomy
  • Cactus Configuration Language
  • Parameter Files (Input)
  • Configuration File
  • Interface
  • Schedule
  • Application Code (sketched below)
  • C
  • Fortran
  • Miscellaneous
  • Documentation
  • Make

www.cactuscode.org
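To make the "Application Code" item concrete, here is a minimal sketch of a C thorn routine. The thorn name MyThorn and the grid variable phi are invented for illustration; the headers and macros are the standard Cactus CCTK ones:

    /* Hypothetical thorn routine; MyThorn and phi are illustrative names only. */
    #include "cctk.h"
    #include "cctk_Arguments.h"
    #include "cctk_Parameters.h"

    void MyThorn_InitialData(CCTK_ARGUMENTS)
    {
        DECLARE_CCTK_ARGUMENTS;   /* grid variables declared in interface.ccl */
        DECLARE_CCTK_PARAMETERS;  /* parameters declared in param.ccl */

        /* Loop over this processor's section of the grid (local sizes in cctk_lsh). */
        for (int k = 0; k < cctk_lsh[2]; k++)
          for (int j = 0; j < cctk_lsh[1]; j++)
            for (int i = 0; i < cctk_lsh[0]; i++)
              phi[CCTK_GFINDEX3D(cctkGH, i, j, k)] = 0.0;
    }

The routine only ever sees its own section of the grid; the driver and the schedule decide where and when it runs.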
21
Thorn Files
  • param.ccl
  • What are the parameters for my thorn?
  • What are their ranges? Are they steerable?
  • What parameters do I need from other thorns?
  • Which of my parameters should be available to
    other thorns?
  • interface.ccl
  • What does my thorn do?
  • What are my thorn's grid variables?
  • What variables do I need from other thorns?
  • What variables am I going to make available to
    other thorns?
  • Timelevels
  • Ghostzones
  • schedule.ccl
  • When and how should my thorn's routines be run?
  • How do my routines fit in with routines from
    other thorns?
  • Which variables should be synchronized on exit?

22
Objectives
  • Introduce the ASC
  • Cactus Architecture
  • Cactus Math
  • Cactus Scaling Out
  • Gridsphere
  • Agnostic Questions

23
Finite Differencing: Infinitesimal vs. Small Delta
http://homepage.univie.ac.at/franz.vesely/cp_tut/nol2h/applets/HO.html
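The idea: the infinitesimal derivatives in the continuum equations are replaced by difference quotients over a small but finite step \Delta x, for example the second-order central differences

    f'(x)  \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2\,\Delta x}, \qquad
    f''(x) \approx \frac{f(x+\Delta x) - 2 f(x) + f(x-\Delta x)}{\Delta x^{2}},

whose truncation error shrinks as \Delta x^{2}.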
24
Finite Differencing: Ghost Zones
  • The grid on each processor has an extra layer of
    grid points (in blue) which are copies of those
    on the neighbor in that direction
  • After the calculation step, the processors
    exchange these ghost zones to synchronize their
    states (see the sketch below)

Cactus 4.0 Users Guide
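A minimal 1D sketch of such an exchange using MPI, assuming one ghost point per side (this is what the driver does on the thorns' behalf; it is not Cactus's actual code):

    #include <mpi.h>

    /* u[0] and u[n+1] are ghost points; u[1..n] are the points this rank owns. */
    void exchange_ghosts(double *u, int n, int rank, int nprocs, MPI_Comm comm)
    {
        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Send my rightmost owned point right; receive my left ghost from the left. */
        MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 0,
                     &u[0], 1, MPI_DOUBLE, left,  0, comm, MPI_STATUS_IGNORE);

        /* Send my leftmost owned point left; receive my right ghost from the right. */
        MPI_Sendrecv(&u[1],   1, MPI_DOUBLE, left,  1,
                     &u[n+1], 1, MPI_DOUBLE, right, 1, comm, MPI_STATUS_IGNORE);
    }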
25
Finite Differencing: Time Levels
  • Similar to ghost zones, but for the time dimension
  • Managed by Cactus, which enables optimizations
  • How many are needed depends on the numerical
    differentiation algorithm
  • Typically three (see the rotation sketch below)
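One reason Cactus-managed time levels pay off: advancing a step can rotate pointers instead of copying whole arrays. A generic sketch of that idea (not Cactus's internal code):

    /* Rotate three time levels: previous <- current <- new, reusing the
       oldest storage for the next step instead of copying arrays. */
    void rotate_timelevels(double **u_new, double **u_cur, double **u_prev)
    {
        double *oldest = *u_prev;  /* storage to be overwritten next step */
        *u_prev = *u_cur;
        *u_cur  = *u_new;
        *u_new  = oldest;
    }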

26
Finite Differencing: Synchronization
  • Cost of parallelization
  • Network characteristics are important
  • Transfer of 12 MBytes per iteration
  • BUT there is room for optimization

27
Objectives
  • Introduce the ASC
  • Cactus Architecture
  • Cactus Math
  • Cactus Scaling Out
  • Gridsphere
  • Agnostic Questions

28
Cactus at Work
  • Members of the Cactus and Globus projects after
    winning a Gordon Bell Prize in high-performance
    computing for the work described in their paper
    "Supporting Efficient Execution in Heterogeneous
    Distributed Computing Environments with Cactus
    and Globus"

29
What did they do?
  • Scaled out
  • Grid-enabled four supercomputers
  • 249 GFlops
  • Efficiently
  • Scaling efficiency (defined below)
  • 88% with 1140 CPUs
  • 63% with 1500 CPUs
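For reference, scaling efficiency is usually the achieved speedup divided by the processor count,

    E(N) = \frac{S(N)}{N} = \frac{T(1)}{N\,T(N)},

so 88% on 1140 CPUs means the run achieved 88% of ideal linear speedup. (The paper may measure against a smaller baseline run rather than a single CPU.)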

30
Scaling Out Finite Differencing Solutions to
PDEs
  • Problem
  • Nodes with different processor types and memory sizes
  • Heterogeneous communication among processors
  • Multiprocessors
  • LAN
  • WAN
  • Bandwidth, TCP, and Latency

31
Scaling Out in a Computational Grid
  • Strategies
  • Irregular data distribution (see the sketch below)
  • Grid-aware communication schedules
  • Redundant computation
  • Protocol tuning
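A hedged sketch of the first strategy, irregular data distribution: give each node a share of the global grid proportional to its relative speed. The function and the speed weights are illustrative, not taken from the paper:

    /* Illustrative weighted 1D decomposition: local_n[p] receives a share of
       global_n proportional to speed[p]; rounding leftovers go to the fastest node. */
    void weighted_partition(int global_n, const double *speed, int nprocs, int *local_n)
    {
        double total = 0.0;
        for (int p = 0; p < nprocs; p++) total += speed[p];

        int assigned = 0;
        for (int p = 0; p < nprocs; p++) {
            local_n[p] = (int)(global_n * speed[p] / total);
            assigned += local_n[p];
        }

        int fastest = 0;
        for (int p = 1; p < nprocs; p++) if (speed[p] > speed[fastest]) fastest = p;
        local_n[fastest] += global_n - assigned;
    }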

32
Where are they now? (Agnostic 5,8)
  • Gabrielle Allen
  • Assistant Director for Computing Applications,
    Center for Computation & Technology
  • Associate Professor, Department of Computer Science
  • Louisiana State University
  • Edward Seidel
  • Director, Center for Computation & Technology
  • Floating Point Systems Professor, Departments of
    Physics and Computer Science
  • Louisiana State University

Visualization of Katrina developed at CCT.
"Application Frameworks for High Performance and Grid Computing", G. Allen, E. Seidel, 2006.
http://www.cct.lsu.edu/gallen/Preprints/CS_Allen06b.pre.pdf
33
Outline
  • Introduce the ASC
  • Cactus Architecture
  • Cactus Math
  • Cactus Scaling Out
  • Gridsphere
  • Agnostic Questions

34
GridSphere
1st Grid Middleware Congress www.GridSphere.org
35
GridSphere (Agnostic 3)
  • Developed by the EU GridLab project
  • About 100,000 lines of code
  • Version 2.0
  • Framework based on
  • Grid Portal Development Kit (GPDK)
  • ASC Web Portal
  • Open source project
  • http://www.gridsphere.org
  • Framework for portlet development
  • Portlet Container

36
Ongoing Collaborations (Agnostic 10)
  • Cactus portal at the Albert Einstein Institute
  • Interface to the Cactus numerical relativity
    application; provides physicists with an interface
    for launching jobs and viewing results
  • Grid portal at the Canadian National Research Council
  • Provides controlled remote access to NMR
    spectroscopy instruments
  • GEON earth sciences portal / CHRONOS portal
  • Manage/visualize/analyze vast amounts of
    geosciences data and large-scale databases
  • P-GRADE portal at SZTAKI (Hungary) and U. of
    Westminster (UK)
  • Creation, execution, and monitoring of complex
    workflows

37
GridSphere
  • Core Portlets
  • Login/Logout
  • Role Based Access Control (RBAC) separating users
    into guests, users, admins, and super users
  • Account Request
  • Account Management
  • User Management
  • Portlet Subscription
  • Local File Manager
  • Notepad
  • Text Messaging

38
Action Portlets
  • Hides branching logic
  • Action and view methods to invoke for events
  • Provides default actionPerformed

39
Personalized Environment
"GridSphere's Grid Portlets: A Grid Portal Development Framework", Jason Novotny, GridSphere and Portlets workshop, March 2005, e-Science Institute
40
Single Sign-On Capabilities
"GridSphere's Grid Portlets: A Grid Portal Development Framework", Jason Novotny, GridSphere and Portlets workshop, March 2005, e-Science Institute
41
Perform File Transfers (Agnostic 2)
"GridSphere's Grid Portlets: A Grid Portal Development Framework", Jason Novotny, GridSphere and Portlets workshop, March 2005, e-Science Institute
42
GridSphere
"GridSphere's Grid Portlets: A Grid Portal Development Framework", Jason Novotny, GridSphere and Portlets workshop, March 2005, e-Science Institute
43
Submit Jobs
"GridSphere's Grid Portlets: A Grid Portal Development Framework", Jason Novotny, GridSphere and Portlets workshop, March 2005, e-Science Institute
44
GridSphere
https://portal.cct.lsu.edu/gridsphere/gridsphere?cid=home
45
Outline
  • Introduce the ASC
  • Cactus Architecture
  • Cactus Math
  • Cactus Scaling Out
  • Gridsphere
  • Agnostic Questions

46
Agnostic Questions
  • The ASC application server uses a relational
    database to maintain the state of sessions. Could
    it be implemented in any other way? Explain.
  • Sure; SQL is a de facto standard. The GridSphere
    container provides a Persistence Manager that
    uses the open-source Castor libraries from Exolab,
    which provide mechanisms for mapping objects to
    SQL and an object query language (OQL).
  • Using Castor, mappings from Java to SQL can be
    generated automatically.

47
Agnostic Questions
  • Can you expand on the MDS browser developed by
    the Java CoG Kit?
  • Currently, MDS4 uses XPath instead of LDAP as its
    query language.
  • Registration is performed via a Web service.
  • The GT4 container has a built-in MDS Index service.
  • Aggregator services are dynamic.
  • A service can become a Grid-wide service index.
  • Source: http://globus.org/toolkit/docs/4.0/key/GT4_Primer_0.6.pdf

48
Agnostic Questions
  • Could the collaboratory framework be implemented
    using technologies other than Java? If so, could
    it still be used in the same way? What would be
    the pros and cons of using Java technologies vs.
    other alternative technologies?
  • Cactus uses C and Fortran, but Web portals are
    mainly being developed using Java, e.g.
  • GridPort
  • GCE-RG
  • GPDK
  • GridSphere

49
Questions?
  • Fallacy?
  • Discipline scientists are typically not experts
    in distributed computing.
  • Cactus developers have backgrounds in mathematics
    and physics.