HPWREN and International Collaborations
1
Introduction to PRAGMA Grid
Cindy Zheng, for the PRAGMA Grid Team
Pacific Rim Application and Grid Middleware Assembly
http://www.pragma-grid.net
http://goc.pragma-grid.net
2
Overview
  • PRAGMA
  • PRAGMA Grid
  • People
  • Hardware
  • Software
  • Operations
  • Education Grid
  • Grid Applications
  • Grid Middleware
  • Application middleware
  • Infrastructure middleware
  • Collaborations/Integrations
  • Grid Interoperations

Heterogeneity, people, collaborations, integrations, lessons learned
3
  • PRAGMA
  • A Practical Collaborative Framework
  • IOIT-VN
  • Strengthen Existing and Establish New
    Collaborations
  • Work with Science Teams to Advance Grid
    Technologies and Improve the Underlying
    Infrastructure
  • In the Pacific Rim and Globally
  • http://www.pragma-grid.net

4
Working Groups Organize Activities
Resources, Biosciences, GEO, Telescience working groups (as of Feb 2009)
5
PRAGMA Working Groups
  • Bioscience
  • Telescience
  • Geo-science
  • Resources and data
  • Grid middleware interoperability
  • Global grid usability and productivity
  • The PRAGMA Grid effort is led by the Resources and Data working group, but relies on collaborations and contributions from all working groups.

6
PRAGMA Grid
AIST, OsakaU, UTsukuba (Japan)
JLU, CNIC, GUCAS, LZU (China)
KISTI (Korea)
ASGC, NCHC (Taiwan)
CUHK, HKU (Hong Kong)
UoHyd (India)
NECTEC, ThaiGrid (Thailand)
HCMUT, HUT, IOIT-HCM (Vietnam)
MIMOS, USM (Malaysia)
IHPC/NGO, NTU (Singapore)
SKU, UI (Indonesia)
ASTI (Philippines)
APAC, QUT, MU (Australia)
BESTGrid (New Zealand)
NCSA, BU, UUtah, SDSC (USA)
UPRM (Puerto Rico)
CICESE, UNAM (Mexico)
CeNAT-ITCR (Costa Rica)
UChile (Chile)
UZH (Switzerland)
29 institutions in 15 countries/regions, 25 compute sites (10 in preparation)
7
PRAGMA Grid Members and Team http://goc.pragma-grid.net/wiki/index.php/Site_status_and_tasks
  • Sites
  • 22 sites from PRAGMA member institutions
  • 7 sites from non-PRAGMA member institutions
  • 25 sites contributed compute clusters
  • Team members
  • 231 and growing
  • 1 management contact per site
  • 1-3 technical support contacts per site
  • 1-4 application drivers per application
  • 1-5 members per middleware development team

8
PRAGMA Grid Compute Resources http://goc.pragma-grid.net/pragma-doc/computegrid.html
9
Characteristics of PRAGMA Grid
  • Grass-roots
  • Voluntary contributions
  • Open (PRAGMA member or not, Pacific Rim or not)
  • Long-term collaborative working experiment
  • Heterogeneous
  • Funding
  • No uniform infrastructure management
  • Variety of sciences and applications
  • Site policies, system and network environments
  • Realistically tough
  • Good for development, collaborations,
    integrations and testing

10
PRAGMA Grid Software Layers http://goc.pragma-grid.net/pragma-doc/userguide/join.html
  • Applications: Phylogenetic, FMO, CSTFT, Savannah, MM5, AMBER, Siesta
  • Application middleware: Ninf-G, Nimrod/G, Mpich-GX
  • Infrastructure middleware: Gfarm, SCMSWeb, MOGAS, CSF
  • Globus (required)
  • Local job scheduler (one required): SGE, PBS, LSF, SQMS

11
PRAGMA Grid Operations
12
Grid Operation http://goc.pragma-grid.net, http://wiki.pragma-grid.net
  • Develop and maintain mutually beneficial and happy relationships among all the people involved
  • Geographies, time zones, languages
  • Funding, chains of command, priorities
  • Mutual benefit, consensus, active leadership
  • Coordinator, site contacts
  • Collaboration tools
  • Mailing lists, VTCs, Skype, semi-annual workshops
  • Grid Operation Center (GOC)
  • Wiki, where all sites and the application and middleware teams collaborate
  • Heterogeneity
  • Tolerate it, overcome it with technology, and take advantage of it
  • Software inventory instead of a software stack
  • Many sub-grids for applications
  • Recommendations instead of requirements
  • Software licenses (grid-wide Amber license)

13
Create New Ways To Operate http://goc.pragma-grid.net, http://wiki.pragma-grid.net
  • No precedent to follow
  • Everyone contributes ideas and suggestions
  • Evolving and improving over time
  • Everyone documents and updates (wiki)
  • Create new procedures
  • New site setup to join PRAGMA Grid
  • http://goc.pragma-grid.net/pragma-doc/userguide/join.html
  • New user/application to run in PRAGMA grid
  • http://goc.pragma-grid.net/pragma-doc/userguide/pragma_user_guide.html
  • Tabulate information
  • Application pages, site pages, resource tables, status pages
  • Publish instructions
  • Software deployment procedures, tools

14
Education Grid
  • PRIME - Pacific Rim Undergraduate Experiences, providing UCSD undergraduate students international, interdisciplinary research internships and cultural experiences, in collaboration with PRAGMA since 2004. http://prime.ucsd.edu
  • PRIUS - Pacific Rim International UniverSity, providing Osaka University students expert lectures and internships abroad, in collaboration with PRAGMA since 2005. http://prius.ics.es.osaka-u.ac.jp/en/index.html
  • MURPA - Monash Undergraduate Research Projects Abroad, providing Monash undergraduate students an 8-week summer international research opportunity at the University of California, San Diego (UCSD).
  • http://www.infotech.monash.edu.au/about/events/2008/murpa.html
  • Sample middleware projects
  • MOGAS
  • Grid security analysis
  • Virtualization
  • Sample applications run in PRAGMA grid
  • Climate modeling
  • Multi-walled carbon nanotube and polyethylene oxide composite computer visualization model
  • Metabolic regulation of ionic currents and pumps in a rabbit ventricular myocyte model
  • Improving binding energy using quantum mechanics
  • Cardiac mechanics modeling
  • H5N1 simulation
  • Shp2 protein tyrosine phosphatase inhibitor simulation for cancer research

15
PRIME Host and Mentor Sites: Research Apprenticeship, Cultural Experience
Doshisha U NIICT Osaka U Japan
UWisc USA
U Zurich Switzerland
CNIC China
UCSD USA
AST NCHC NCREE NMMBA Taiwan
UoHyd India
USM Malaysia
U Auckland U Waikato New Zealand
Monash U Australia
  • 2004-2007: 5 host sites - Osaka, NCHC, Monash, CNIC, NCREE
  • New in 2008: USM, U Auckland, U Waikato; new in 2009: U Hyderabad, National Institute for Information and Communication Technology, Doshisha University, Academia Sinica, NMMBA

16
PRIUS and MURPA, Based on PRIME: New Models for Building Research Capacity and Cultural Awareness
  • Osaka University, 12 Oct 05
  • MURPA, Monash U, 31 Jul 08

17
Application Driven
18
Applications http://goc.pragma-grid.net/wiki/index.php/Applications
  • Climate simulation
  • Savannah/Nimrod (MU, Australia)
  • Quantum mechanics
  • TDDFT, QMMD, FMO/Ninf-G (AIST, Japan)
  • Structure biology
  • Phaser/Nimrod (MU, Australia)
  • Drug analysis
  • EMSAM (PRIME)
  • Bio-medical research
  • Cardiac output (PRIME)
  • Cardiac mechanics (PRIME)
  • Ventricular myocyte model (PRIME)
  • Genomics and meta-genomics
  • Avian Flu Grid/CSF (SDSC/CNIC/JLU/UTsukuba/...)
  • Computational fluid dynamics
  • e-AIRS (KISTI, Korea)
  • Environmental science
  • CSTFT/Ninf-G (UPRM, Puerto Rico)
  • Brazilian Regional Atmospheric Modelling (UChile, Chile)

19
Lessons Learned From Running Applications
  • PRAGMA grid resources enable large-scale computation
  • Its heterogeneous environment is great for
  • Testing
  • Collaborating
  • Integrating
  • Sharing
  • Not easy; more needs to be done
  • Middleware needs improvements
  • Working in a heterogeneous environment
  • Fault tolerance
  • Need user-friendly portals and services
  • Automate and integrate
  • Information collection (grid monitoring, workflow)
  • Decisions and executions (scheduling)
  • Domain-specific, easy user interfaces (portals, CE tools)

20
Grid Middleware
21
Application Middleware
22
Ninf-G http://ninf.apgrid.org
  • Developed by AIST, Japan
  • Based on the GridRPC model (call pattern sketched below)
  • Supports parallel computing
  • Integrated into NMI release 8 (the first non-US software in NMI)
  • Integrated with Rocks
  • GridRPC is an OGF standard
  • 4 applications ran in PRAGMA grid, 2 ran across multiple grids
  • TDDFT (quantum chemistry)
  • QM/MD (quantum mechanics)
  • FMO (molecular dynamics)
  • CSTFT (sensor data analysis)
  • Achieved long runs (50 days)
  • Improved fault-tolerance
  • Simplified deployment procedures
  • Sped up development cycles
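Ninf-G implements the OGF GridRPC model: a client binds a function handle to a remotely deployed executable, then issues synchronous or asynchronous calls through that handle. Ninf-G's real API is C (grpc_function_handle_init, grpc_call, grpc_call_async and related calls); the fragment below is only a minimal Python sketch of the same handle/call pattern, with a local thread pool standing in for remote grid servers, to make the control flow concrete.

```python
# Minimal, self-contained sketch of the GridRPC call pattern that Ninf-G
# provides (handle init -> call / call_async -> result).  This is NOT the
# Ninf-G API; the "remote" function is a local stand-in run in a thread pool.
from concurrent.futures import ThreadPoolExecutor


def simulate_tddft(molecule: str, steps: int) -> str:
    """Stand-in for an executable registered with a remote GridRPC server."""
    return f"TDDFT({molecule}) finished after {steps} steps"


class FunctionHandle:
    """Plays the role of grpc_function_handle_init(): binds a 'remote' function."""

    def __init__(self, executor: ThreadPoolExecutor, func):
        self._executor = executor
        self._func = func

    def call(self, *args):        # cf. grpc_call(): synchronous call
        return self._executor.submit(self._func, *args).result()

    def call_async(self, *args):  # cf. grpc_call_async(): returns a session
        return self._executor.submit(self._func, *args)


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        handle = FunctionHandle(pool, simulate_tddft)
        print(handle.call("O2", 10))                       # blocking call
        # Fan out several "remote" calls, as a Ninf-G client fans work out
        # to grid clusters, then collect the results.
        sessions = [handle.call_async(m, 100) for m in ("H2O", "NH3", "CH4")]
        for s in sessions:
            print(s.result())
```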

23
Nimrod/G http://www.csse.monash.edu.au/~davida/nimrod
  • Developed by Monash University (MU), Australia
  • Supports large-scale parameter sweeps on Grid infrastructure (sweep expansion sketched below)
  • Easy user interface: Nimrod portals at
  • MU, Australia
  • UZurich, Switzerland
  • UCSD, USA
  • 4 applications ran in PRAGMA grid and 1 runs in multiple grids
  • Savannah climate simulation (MU, Australia)
  • GAMESS/APBS (UZurich, Switzerland)
  • Siesta (UZurich, Switzerland)
  • Structure biology (MU, Australia)
  • CeNAT-ITCR, Costa Rica and UNAM, Mexico are starting applications using Nimrod in PRAGMA grid
  • Developed an interface to Unicore
  • Achieved long runs (90 different scenarios of 6 weeks each)
  • Improved fault-tolerance (innovative time_step handling)
  • Enhancements in data and storage handling

[Figure: Nimrod plan file - description of parameters]
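A Nimrod/G experiment is declared in a plan file: the parameters plus the task to run for each combination, from which Nimrod generates and schedules one job per point in the parameter space. The sketch below is not Nimrod's plan-file syntax; it is a hypothetical Python illustration of the same expansion step, with invented parameter names and command line.

```python
# Illustrative parameter-sweep expansion, in the spirit of a Nimrod/G plan
# file: declare parameters, then generate one job description per point in
# the cross product.  Parameter names and the command line are made up.
from itertools import product

parameters = {
    "temperature": [280.0, 300.0, 320.0],
    "wind_speed": [5, 10, 15],
    "scenario": ["baseline", "fire"],
}


def expand_sweep(params: dict) -> list[dict]:
    """Return one job description per combination of parameter values."""
    names = list(params)
    jobs = []
    for values in product(*(params[n] for n in names)):
        binding = dict(zip(names, values))
        jobs.append({
            "binding": binding,
            # The task each job runs, analogous to the plan file's task block.
            "command": "savannah_model " + " ".join(
                f"--{k}={v}" for k, v in binding.items()),
        })
    return jobs


if __name__ == "__main__":
    jobs = expand_sweep(parameters)
    print(f"{len(jobs)} jobs generated")      # 3 * 3 * 2 = 18
    print(jobs[0]["command"])
```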
24
Mpich-GX http://www.moredream.org/mpich.htm
  • Mpich-GX
  • Developed by the Korea Institute of Science and Technology Information (KISTI), Korea
  • Based on MPICH-G2
  • Grid-enabled MPI; supports
  • Private IP
  • Fault tolerance
  • MM5 and WRF
  • CICESE, Mexico
  • Medium-scale atmospheric simulation models
  • Experiment
  • KGrid
  • WRF works well with MPICH-GX
  • MM5 experienced scaling problems with MPICH-GX when using more than 24 processors in a cluster
  • Functionality of the private IP support is usable
  • Performance of the private IP support is reasonable

25
MM5-WRF/Mpich-GX Experiment
[Figure: Hurricane Marty and Santana Winds simulations run with Mpich-GX (private IP and fault-tolerance support) across KGrid, SDSC (USA) and the CICESE Ensenada (México) clusters eolo and pluto]
26
Infrastructure Middleware
27
SCMSWeb http://www.opensce.org/components/SCMSWeb
  • Developed by Kasetsart University and ThaiGrid
  • Web-based real-time grid monitoring system
  • System usage, job/queue status
  • Probes Globus authentication, job submission, GridFTP, Gfarm access, etc. (a simplified probe runner is sketched below)
  • Network bandwidth measurements with Iperf
  • PRAGMA grid geo map
  • Supports Linux and Solaris; good meta-view, easy user interface, excellent user support
  • Developed and tested in PRAGMA grid
  • Deployed at 27 sites, improving scalability and performance
  • Sites helped with porting to ia64 and Solaris
  • Demand pushed fast expansion of functionality
  • More regional/national grids have learned about and adopted it
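Conceptually, each SCMSWeb service probe runs a grid operation (proxy check, test job, file transfer) and records pass or fail for the meta-view. The sketch below is a hypothetical, much-simplified probe runner, not SCMSWeb itself; the pre-WS Globus command lines shown (grid-proxy-info, globus-job-run, globus-url-copy) are standard GT2-era clients, but the exact probes and options SCMSWeb uses may differ, and the host name is a placeholder.

```python
# Hypothetical, much-simplified probe runner in the spirit of SCMSWeb's
# service probes: run one command per service and record PASS/FAIL.  The
# actual SCMSWeb probes and their exact command lines may differ.
import subprocess

HOST = "gatekeeper.example.org"   # placeholder gatekeeper host name

PROBES = {
    "globus_auth": ["grid-proxy-info", "-exists", "-valid", "1:0"],
    "job_submission": ["globus-job-run", HOST, "/bin/true"],
    "gridftp": ["globus-url-copy",
                f"gsiftp://{HOST}/etc/hostname", "file:///tmp/probe.out"],
}


def run_probe(cmd, timeout=60) -> bool:
    """Return True if the probe command exits 0 within the timeout."""
    try:
        return subprocess.run(cmd, capture_output=True,
                              timeout=timeout).returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False


if __name__ == "__main__":
    for name, cmd in PROBES.items():
        status = "PASS" if run_probe(cmd) else "FAIL"
        print(f"{HOST:30s} {name:15s} {status}")
```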

28
SCMSWeb New Features
  • Automatic email alert deployed and operational
  • According to probe failure status
  • 3 times a week
  • Email to site admins and Cindy Zheng
  • Bi-directional bandwidth measurements using iPerf
  • Deployed on 11 systems
  • Need to investigate problems with some sites in
    one direction
  • Need to deploy to all systems
  • Software catalog
  • Implemented for 7 software so far
  • Amber, APBS, AutoDock, NAMD
  • Ninf-G
  • Intel-C, Intel-Fortran
  • Deployed on 11 systems
  • Need to deploy to all systems
  • Add more software as needed

29
MOGAS http://ntu-cg.ntu.edu.sg/pragma/index.jsp
  • Multi-Organization Grid Accounting System (MOGAS)
  • Led by Nanyang Technological University (NTU), funded by the National Grid Office in Singapore
  • Built on the Globus core (GridFTP, GRAM, GSI)
  • Supports GT2/3/4 and SGE, PBS
  • Job/user/cluster/OU/grid-level usage, job logs, metering and charging tools (a usage roll-up is sketched below)
  • Developed and tested in PRAGMA grid
  • Deployed on 14 sites with different GT versions, job schedulers, GRAM scripts and security policies
  • Feedback improved and automated the deployment procedure
  • MOGAS-2 with improved database performance
  • Collaborations and integrations with applications and other middleware teams push the development of an easy database interface
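MOGAS meters usage at the job, user, cluster, OU and grid levels from Globus job logs. As a rough illustration of that kind of multi-level roll-up (not MOGAS's actual log format, schema or code), the sketch below aggregates invented per-job records into per-user and per-cluster CPU-hour totals.

```python
# Rough illustration of multi-level usage roll-up, in the spirit of MOGAS
# accounting.  The record fields below are invented for the example and do
# not reflect MOGAS's real job-log format or database schema.
from collections import defaultdict

job_records = [
    {"user": "alice", "cluster": "sdsc", "cpus": 16, "wall_hours": 4.0},
    {"user": "alice", "cluster": "ntu",  "cpus": 8,  "wall_hours": 2.5},
    {"user": "bob",   "cluster": "sdsc", "cpus": 32, "wall_hours": 1.0},
]


def roll_up(records, key):
    """Sum CPU-hours per value of `key` ('user' or 'cluster')."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r["cpus"] * r["wall_hours"]
    return dict(totals)


if __name__ == "__main__":
    print("per user:   ", roll_up(job_records, "user"))
    print("per cluster:", roll_up(job_records, "cluster"))
```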

30
Gfarm Grid File System http://datafarm.apgrid.org
  • AIST, UTsukuba; open-source development at SourceForge.net
  • Grid file system that federates the storage of each site
  • Meta-server keeps track of file copies and locations
  • Can be mounted from cluster nodes and clients (GfarmFS-FUSE)
  • Parallel I/O, near-site copies for scalable performance
  • Replication for fault tolerance
  • Uses GSI authentication
  • Easy application deployment, file sharing

31
Gfarm version 2 update
  • Version 2.0.0 released on 28 Nov, 2007
  • First release
  • Version 2.1.0 released on 27 May, 2008
  • GSI support
  • File replica management
  • Group management, disk usage report
  • Version 2.1.1 released on 27 Sep, 2008
  • Hardlink support
  • On-demand replication
  • Version 2.2.0 will be released really soon
  • Symbolic link support
  • Hundreds of clients support
  • Directory listing speedup

32
Develop and Test GfarmFS-FUSE in PRAGMA Grid http://goc.pragma-grid.net/wiki/index.php/Resources_and_Data
  • Testing with applications
  • iGAP and Avian Flu Grid (UTsukuba, Japan; UCSD, USA; JLU, China)
  • Huge number of small files
  • High meta-data access overhead
  • Meta-data cache server
  • Dramatic improvements (44 sec -> 3.54 sec)
  • AMBER (USM, Malaysia; UTsukuba, Japan)
  • Remote Gfarm meta-server
  • Meta-server is the bottleneck
  • File sharing permissions, security
  • 2.0 improved performance
  • Used as shared storage only
  • Version 1.x works well in a local or regional grid
  • GeoGrid, Japan
  • CLGrid, Chile
  • Integration
  • SCMSWeb (ThaiGrid, Thailand)
  • Rocks (SDSC, USA; UZH, Switzerland)

33
CSF4 http://goc.pragma-grid.net/wiki/index.php/CSF_server_and_portal
  • Community Scheduler Framework, v4 - a meta-scheduler (a toy dispatch decision is sketched below)
  • Developed by Jilin University, China
  • Grid services hosted in GT4; WSRF-compliant; execution component in Globus Toolkit 4
  • Open source: http://sourceforge.net/projects/gcsf
  • Supports GT2-4, LSF, PBS, SGE, Condor
  • Easy user interface - a portal
  • Testing and collaborating in PRAGMA
  • Testing with application iGAP (UCSD, AIST, KISTI, ...)
  • Collaborate and integrate with Gfarm on data staging (AIST, Japan)
  • Set up a CSF server and portal (SDSC, USA)
  • Collaborate/integrate with SCMSWeb for resource information (ThaiGrid, Thailand)
  • Leverage resources and the global grid testing environment
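As a meta-scheduler, CSF4 accepts a grid job and decides which site's local scheduler (LSF, PBS, SGE, Condor) should run it, using resource information such as what SCMSWeb publishes. The sketch below is only a toy version of that dispatch decision; the site list, load figures and selection policy are invented, and CSF4's real plug-in interface is more elaborate.

```python
# Toy meta-scheduling decision in the spirit of CSF4: choose a site for a
# job based on published resource information, then hand the job to that
# site's local scheduler.  Site data and the selection policy are invented
# for illustration.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    scheduler: str        # "sge", "pbs", "lsf", "condor"
    free_cpus: int
    queue_length: int


SITES = [
    Site("SDSC",     "sge", free_cpus=64,  queue_length=3),
    Site("AIST",     "pbs", free_cpus=16,  queue_length=0),
    Site("ThaiGrid", "sge", free_cpus=128, queue_length=12),
]


def select_site(sites, cpus_needed: int) -> Site:
    """Pick the site with enough free CPUs and the shortest queue."""
    eligible = [s for s in sites if s.free_cpus >= cpus_needed]
    if not eligible:
        raise RuntimeError("no site can satisfy the request")
    return min(eligible, key=lambda s: s.queue_length)


if __name__ == "__main__":
    site = select_site(SITES, cpus_needed=32)
    print(f"dispatching to {site.name} via its {site.scheduler} scheduler")
```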

34
DataTurbine
  • DataTurbine is an open source, Java-based network ring buffer for all sorts of data. You can use memory or disk for the ring and it runs on almost any JVM. (A conceptual ring-buffer sketch follows below.)
  • Sources can have multiple channels with varied types - numeric (e.g. sensors), video, audio, text, binary blobs.
  • Data acquisition (DAQ)
  • National Instruments (NI-DAQ, DAQmx, Compact RIO) via Java proxy
  • Campbell Scientific: file-based, via LoggerNet, up to 1 Hz tested
  • Dataq Instruments (serial connection via C DaqToRbnb)
  • PUCK (Programmable Underwater Connector with Knowledge)
  • Seabird/Seacat
  • Vaisala weather station
  • Video and still cameras
  • Anything with motion JPEG via URL: Axis, Panasonic, etc.
  • Still images via WebDAV, HttpMonitor
  • Accelerometers
  • ADXL202 and Apple laptop
  • Interfaces
  • Primary interface is the Java-based API
  • Other avenues
  • If you're on Windows, there's ActiveX
  • TCP/UDP proxy (some code required)

SAN DIEGO SUPERCOMPUTER CENTER, UCSD
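The core abstraction in DataTurbine is a ring buffer with named channels: new samples displace the oldest once capacity is reached, and sources and sinks attach to channels over the network. Its real API is Java; the fragment below is only a small Python sketch of the ring-buffer-with-channels idea, using bounded deques in a single process.

```python
# Small conceptual sketch of the "ring buffer with named channels" idea
# behind DataTurbine, using bounded deques.  The real DataTurbine is a Java
# server with its own source/sink API; this is not that API.
import time
from collections import deque


class RingBufferServer:
    def __init__(self, capacity_per_channel: int = 1024):
        self._capacity = capacity_per_channel
        self._channels: dict[str, deque] = {}

    def put(self, channel: str, value) -> None:
        """Append a timestamped sample; the oldest sample falls off when full."""
        buf = self._channels.setdefault(channel, deque(maxlen=self._capacity))
        buf.append((time.time(), value))

    def get_latest(self, channel: str, n: int = 1):
        """Return up to the n most recent samples from a channel."""
        buf = self._channels.get(channel, deque())
        return list(buf)[-n:]


if __name__ == "__main__":
    server = RingBufferServer(capacity_per_channel=4)
    for t in range(10):                      # numeric channel, e.g. a sensor
        server.put("lake/temperature", 20.0 + 0.1 * t)
    server.put("camera/still", b"...jpeg bytes...")   # binary channel
    print(server.get_latest("lake/temperature", n=3))
```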
35
Example - NCHC Kenting (Taiwan)
  • Kenting National Park and Yuan-Yang Lake,
    pictures from Fang-Pang Lin and Ebbe Strandell

36
(No Transcript)
37
Rocks-based Virtual Machine
  • Developed by Rocks group in SDSC
  • Based on Xen
  • Decouples OS/application from hardware
  • Increase utilization
  • Increase security
  • Key advantage
  • Solving application requirement conflicts
  • Allow user system-level access
  • Grow and shrink virtual cluster
  • Requirements
  • A lot of memory
  • Will test in PRAGMA grid
  • Can create virtual machines
  • Can create virtual clusters

38
Science, Technologies, Collaborations, Integrations
39
Collaborations With Science and Technology Teams
  • Grid security
  • Naregi (Japan), APGrid, GAMA (SDSC, USA)
  • Grid infrastructure
  • Monitoring - SCMSWeb (ThaiGrid, Thailand)
  • Accounting - MOGAS (NTU Singapore)
  • Metascheduling - Community Scheduler Framework
    (JLU, China)
  • Cyber-environment - CSE-Online (UUtah, USA)
  • Rocks, middleware and VM (SDSC, USA)
  • Ninf-G, SCE, Gfarm, Bio, KRocks, Condor, ...
  • Virtual machine
  • Science, datagrid, sensor, network
  • Biosciences: Avian Flu, portal, ...
  • Gfarm-fuse (AIST, Japan)
  • GEON data network
  • GLEON sensor network
  • OptIPuter
  • High performance networked TDW
  • Telescience

40
Grid Security
  • Trust in PRAGMA grid, http://goc.pragma-grid.net/pragma-doc/certificates.html
  • IGTF distribution
  • Non-IGTF distribution (trust all PRAGMA Grid
    sites)
  • APGrid PMA
  • One of three IGTF founding PMAs
  • Many PRAGMA grid sites are members
  • PRAGMA CA
  • Naregi-CA
  • AIST, UCSD, UChile, UoHyd, UPRM
  • PRAGMA CA (experimental and production)
  • Based on Naregi-CA
  • Catch-all CA for PRAGMA
  • Production CA is IGTF compliant
  • MyProxy and VOMS services
  • APAC
  • Work with GAMA
  • Integrate with Naregi-CA (Naregi, UCSD)
  • Integration with VOMS (AIST)
  • Add a servlet for account management (UChile)
  • Lessons learned
  • Leverage resources, setups and expertise
  • Balance security against ease of access and use
  • Get more user communities involved with grid
    security

41
PRAGMA-CA and VOMS
  • PRAGMA-UCSD CA (https://goc.pragma-grid.net/ca)
  • Built according to the most current APGrid recommendations and guidelines
  • Naregi-CA developed a new version
  • Set up at SDSC
  • Accredited by APGrid PMA in April 2008
  • Included in the IGTF distribution in July 2008
  • VOMS (http://goc.pragma-grid.net/wiki/index.php/VOMRS)
  • Set up a VOMRS server at SDSC
  • Focused on group mapping to local accounts
  • VOMS client side (the DN-to-account mapping idea is sketched below)
  • GUMS: VO group to Unix account mapping
  • PRIMA: interfaces Globus to GUMS
  • Auth-tools: enable individual user account mapping
  • VOMS client: lets the user choose a mapping
  • GSISSH: enables user access with a Globus certificate
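In practical terms, the CA/VOMS/GUMS machinery above ends in a mapping from a certificate distinguished name (or VO group) to a local Unix account. A plain Globus grid-mapfile expresses the simplest, static form of that mapping; the sketch below parses such a file and resolves a DN, with an invented DN and account names (GUMS and the VOMS-aware tools compute this mapping dynamically rather than from a static file).

```python
# Minimal parser for a Globus grid-mapfile, the simplest form of the
# DN -> local account mapping that GUMS/VOMS tooling automates.  The DN and
# account names below are invented for illustration.
GRID_MAPFILE = '''\
"/C=US/O=PRAGMA/OU=SDSC/CN=Example User" pragma01
"/C=JP/O=AIST/CN=Another User" pragma02
'''


def parse_grid_mapfile(text: str) -> dict[str, str]:
    """Return {distinguished name: local account} from grid-mapfile lines."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        dn, account = line.rsplit(None, 1)   # the account is the last token
        mapping[dn.strip('"')] = account
    return mapping


if __name__ == "__main__":
    gridmap = parse_grid_mapfile(GRID_MAPFILE)
    print(gridmap.get("/C=US/O=PRAGMA/OU=SDSC/CN=Example User"))  # pragma01
```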

42
SCMSWeb Collaborations and Integrations
  • Grid Interoperation Now (GIN, OGF) http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  • Worked with PRAGMA grid, TeraGrid, OSG, NorduGrid and EGEE on GIN testbed monitoring (http://goc.pragma-grid.net/cgi-bin/scmsweb/probe.cgi); added probes to handle various grid service configurations/tests
  • Worked with CERN and implemented an XML -> LDIF translator for the GIN geo map, http://maps.google.com/maps?q=http://lfield.home.cern.ch/lfield/gin.kml (a toy map translation is sketched below)
  • Worked with many grid monitoring software developers on a common schema for cross-grid monitoring, http://wiki.pragma-grid.net/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring
  • Software integration and interoperations
  • Rocks SCE roll
  • MOGAS grid accounting
  • Condor, CSF: provide resource info
  • Things being worked on and planned
  • Data federator for grid applications
  • Provide site software information
  • Standardize data extraction and formats
  • Improve data storage with an RDBMS
  • Interoperate with other monitoring software
  • Ganglia support
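The GIN geo-map work above amounts to translating per-site monitoring records into formats other tools can consume (KML for Google Maps, LDIF for BDII). The sketch below is a hedged illustration of the KML half only, not the actual SCMSWeb translator; the site list and coordinates are rough placeholders.

```python
# Illustrative translation of site monitoring records into KML placemarks
# for a geo map, in the spirit of the SCMSWeb -> GIN geo-map work.  The site
# coordinates are rough placeholders; the real translator also emitted LDIF.
from xml.sax.saxutils import escape

sites = [
    {"name": "SDSC, USA",          "lat": 32.88, "lon": -117.24, "status": "OK"},
    {"name": "AIST, Japan",        "lat": 36.06, "lon": 140.13,  "status": "OK"},
    {"name": "ThaiGrid, Thailand", "lat": 13.85, "lon": 100.57,  "status": "DOWN"},
]


def to_kml(records) -> str:
    placemarks = []
    for s in records:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{escape(s['name'])}</name>\n"
            f"    <description>status: {s['status']}</description>\n"
            f"    <Point><coordinates>{s['lon']},{s['lat']},0</coordinates></Point>\n"
            "  </Placemark>"
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
            + "\n".join(placemarks) + "\n</Document>\n</kml>")


if __name__ == "__main__":
    print(to_kml(sites))
```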

43
SCMSWeb-Condor Interface http://goc.pragma-grid.net/wiki/index.php/Condor-PRAGMA_Interoperation
  • SCMSWeb interfaces with Condor-G
  • SCMSWeb provides system information; Condor-G dispatches jobs accordingly
  • Collaboration between PRAGMA and Condor
  • ThaiGrid
  • Project lead
  • Interface development work
  • SDSC
  • Coordination
  • Resource support
  • KISTI
  • Application testing
  • Condor
  • Interface development support
  • Current Status
  • Running on rocks-153.sdsc.edu
  • Successfully tested with application
  • Working on improving performance and
    fault-tolerance
  • Submitted a paper to an IEEE conference

44
Collaborations with OptIPuter, GLIF and CAMERA http://www.optiputer.net
  • OptIPuter (Optical networking, Internet Protocol,
    computer storage, processing and visualization
    technologies)
  • Infrastructure that will tightly couple
    computational resources over parallel optical
    networks using the IP communication mechanism
  • The central architectural element is optical networking, not computers
  • Enables scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks
  • Rocks/SAGE VIS-roll (SDSC)
  • Networked Tile Display Walls (TDW)
  • Low cost
  • For research collaboration
  • For remote education and conferencing
  • Deployed at many PRAGMA grid sites

45
Build a Rocks / SAGE OptIPortal
UZurich
CNIC
NCHC
Osaka U
46
Global Lambda Integrated Facility (GLIF) http://www.glif.is
Visualization courtesy of Bob Patterson, NCSA.
  • Maps to many PRAGMA grid sites
  • PRAGMA grid uses GLIF to solve grid application bandwidth problems

47
Integrate CAMERA and PRAGMA Grid Microbial Metagenomicist User Base
Over 1300 Registered Users From 48 Countries
48
Grid Interoperation
49
Grid Interoperation Now (GIN) http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  • Open Grid Forum and GIN
  • GIN-OPS (led by PRAGMA)
  • GIN testbed (February 2006 - ongoing)
  • One or more clusters from each grid
  • Still part of each production grid
  • Running real science applications
  • Explore interoperation issues
  • Develop solutions
  • Provide insight to the standardization effort
  • Application driven
  • TDDFT/Ninf-G (PRAGMA - AIST, Japan)
  • PRAGMA, TeraGrid, OSG, NorduGrid, EGEE
  • Savannah fire simulation (PRAGMA - Monash University, Australia)
  • PRAGMA, TeraGrid, OSG

50
Grid Interoperation Now (GIN) http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  • Software interface and integration
  • Ninf-G (AIST/PRAGMA) - NorduGrid
  • Nimrod/G (MU-PRIME/PRAGMA) - Unicore
  • SCMSWeb (ThaiGrid/PRAGMA) - Condor (UWisc/OSG)
  • SCMSWeb (ThaiGrid/PRAGMA) - BDII (CERN)
  • VDT (OSG) and Rocks (SDSC/PRAGMA) integration
  • Multi-grid monitoring
  • Led by ThaiGrid/PRAGMA
  • SCMSWeb probe matrix (PRAGMA - ThaiGrid, Thailand)
  • Common schema (http://goc.pragma-grid.net/wiki/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring)
  • PRAGMA: SCMSWeb, MOGAS
  • TeraGrid: Globus GT4.0.1, Ganglia, Nagios
  • EGEE: MonALISA
  • NorduGrid/ARC: NorduGrid/MDS2, NorduGrid Grid Monitor

51
(No Transcript)
52
Peer-grid Interoperation Experiments http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
  • Different from the GIN testbed
  • More resources and support from each grid
  • Either uni-directional or bi-directional application runs
  • Long runs to achieve scientific results
  • OSG <-> PRAGMA (January 2007 - ongoing)
  • How
  • Each grid identifies management, application drivers, resource supporters
  • All participants document application requirements, meetings, issues, solutions, status and results at wiki.pragma-grid.net
  • Resources
  • OSG: FermilabGrid, will add UWisc
  • PRAGMA grid: any sites the application drivers choose to use
  • Applications
  • OSG: GISolve, spatial interpolation (UIowa, USA)
  • PRAGMA
  • FMO/Ninf-G, quantum chemistry (AIST, Japan) - completed
  • Structure biology (MU, Australia) - starting soon

53
OSG-PRAGMA Grid Interoperation Experiments http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
  • More resources and support from each grid, but no special arrangements
  • Application long-run
  • GridFMO/Ninf-G, large-scale quantum chemistry (Tsutomu Ikegami, AIST, Japan)
  • 240 CPUs from OSG and PRAGMA grid, 10 days x 7 calculations
  • Fault-tolerance enabled the long run
  • Meaningful and usable scientific results

54
Grid Interoperation Experiments http://goc.pragma-grid.net/wiki/index.php/Run_Phaser_on_PRAGMA_grid_and_OSG
  • Phaser/Nimrod-G structural biology
  • Large run, July-September
  • 71,000 jobs, each with
  • Wall clock time: a few hours to a few days
  • Memory: 0.2 to 2 GB
  • Total CPU time: 511,000 hours
  • Total CPUs: 1200, across 20 discrete resources
  • Australian enterpriseGrid, Monash, APAC Grid
  • PRAGMA Grid
  • OSG (FermilabGrid, RENCI <VO>)
  • Submitted a paper to an IEEE conference

55
Lessons Learned From Grid Interoperation http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  • Grid interoperation makes large-scale calculations possible
  • Differences among grids provide learning, collaboration and integration opportunities
  • IGTF, VOMS (GIN)
  • Common Software Area (TeraGrid)
  • Ninf-G - NorduGrid
  • Nimrod/G - Unicore
  • SCMSWeb - Condor
  • SCMSWeb - BDII
  • SCMSWeb probe matrix for GIN testbed monitoring
  • Common schema among many grid monitoring software packages
  • VDT (OSG) and Rocks (SDSC/PRAGMA) integration
  • Differences in grid environments are a source of difficulties for users and applications
  • Different user access setup procedures - take extra effort
  • Different job submission protocols
  • GRAM, Sandbox, GridFTP, modified GRAM, ...
  • One-to-one interfaces - are they scalable? Possible standards?
  • Middleware fault tolerance and flexible resource management are important
  • Cope with unfamiliar fault conditions, lack of parallel computation support, ...

56
Collaborate in Publishing Research Results
  • Some published papers in 2007
  • Amaro, RE, Minh DDL, Cheng LS, Lindstrom, WM Jr,
    Olson AJ, Lin JH, Li WW, and McCammon JA.
    Remarkable Loop Flexibility in Avian Influenza N1
    and Its Implications for Antiviral Drug Design.
    J. AM. CHEM. SOC. 2007, 129, 7764-7765 (PRIME)
  • Choi Y, Jung S, Kim D, Lee J, Jeong K, Lim SB, Heo D, Hwang S, and Byeon OH. "Glyco-MGrid: A Collaborative Molecular Simulation Grid for e-Glycomics," in 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 2007. Accepted.
  • Ding Z, Wei W, Luo Y, Ma D, Arzberger PW, and Li
    WW, "Customized Plug-in Modules in Metascheduler
    CSF4 for Life Sciences Applications," New
    Generation Computing, p. In Press, 2007.
  • Ding Z, Wei S, Ma D, and Li WW, "VJM: A Deadlock-Free Resource Co-allocation Model for Cross Domain Parallel Jobs," in HPC Asia 2007, Seoul, Korea, 2007, p. In Press.
  • Görgen K, Lynch H, Abramson D, Beringer J and
    Uotila P. "Savanna fires increase monsoon
    rainfall as simulated using a distributed
    computing environment", to appear, Geophysical
    Research Letters.
  • Ichikawa K, Date S, Krishnan S, Li W, Nakata K, Yonezawa Y, Nakamura H, and Shimojo S, "Opal OP: An extensible Grid-enabling wrapping approach for legacy applications", GCA2007 - Proceedings of the 3rd workshop on Grid Computing Applications, pp. 117-127, Singapore, June 2007a. (PRIUS)
  • Ichikawa K, Date S, and Shimojo S. A Framework
    for Meta-Scheduling WSRF Based Services,
    Proceedings of 2007 IEEE Pacific Rim Conference
    on Communications, Computers and Signal
    Processing (PACRIM 2007), Victoria, Canada, pp.
    481-484, Aug. 2007 b. (PRIUS)
  • Kuwabara S, Ichikawa K, Date S, and Shimojo S. A
    Built-in Application Control Module for SAGE,
    Proceedings of 2007 IEEE Pacific Rim Conference
    on Communications, Computers and Signal
    Processing (PACRIM 2007), Victoria, Canada, pp.
    117-120, Aug. 2007. (PRIUS)
  • Takeda S, Date S, Zhang J, Lee BU, and Shimojo S. "Security Monitoring Extension For MOGAS", GCA2007 - Proceedings of the 3rd workshop on Grid Computing Applications, pp. 128-137, Singapore, June 2007. (PRIUS)
  • Tilak S, Hubbard P, Miller M, and Fountain T, "The Ring Buffer Network Bus (RBNB) DataTurbine Streaming Data Middleware for Environmental Observing Systems," to appear in the Proceedings of e-Science 2007.
  • Zheng C, Katz M, Papadopoulos P, Abramson D, Ayyub S, Enticott C, Garic S, Goscinski W, Arzberger P, Lee B S, Phatanapherom S, Sriprayoonsakul S, Uthayopas P, Tanaka Y, Tanimura Y, Tatebe O. Lessons Learned Through Driving Science Applications in the PRAGMA Grid. Int. J. Web and Grid Services, Vol. 3, No. 3, pp. 287-312, 2007.

57
Summary
  • PRAGMA grid
  • Shared vision lowers resistance to using others' software and testing on others' resources
  • Formed new development collaborations
  • Its size and heterogeneity let us explore issues which a functional grid must resolve
  • Management, resource and software coordination
  • Identity and fault management
  • Scalability and performance
  • Feedback between application and middleware teams helps improve software and promotes software integration
  • A heterogeneous global grid
  • Is realistic and challenging
  • Can be good for middleware development and testing
  • Can be useful for real science
  • Impact
  • Software dissemination (Rocks, Ninf-G, Nimrod, SCMSWeb, Naregi-CA, ...)
  • Helps new national/regional grids (Chile, Vietnam, Hong Kong, ...)
  • The key is people and collaboration

58
A Grass Roots Effort
  • "One of the most important lessons of the Internet is that it grows most successfully where grass roots initiatives are encouraged and enabled. The Internet has historically grown from the bottom up, and this aspect continues to fuel its continued growth in the academic and commercial sectors."
  • Vint Cerf, UN Economic and Social Council, 2000

59
  • PRAGMA is supported by the National Science Foundation (Grant Nos. INT-0216895, INT-0314015, OCI-0627026) and by member institutions
  • PRIME is supported by the National Science
    Foundation under NSF INT 04007508
  • PRAGMA grid is the result of contributions and
    support from all PRAGMA grid team members

Thank You
http://www.pragma-grid.net http://goc.pragma-grid.net http://wiki.pragma-grid.net