1
GRID Interoperability in HEP
Representing the collaboration between CC-IN2P3
and KEK-CRC
Takashi Sasaki, KEK Computing Research Center
NEGST workshop, Tokyo, 2007
This presentation is based on Dominique's
presentation at the FJPPL 2007 workshop
2
GRID in HEP
  • Projects in HEP are truly international
  • A kind of virtual organization has existed for a
    long time
  • MoU-based resource sharing
  • GRID was introduced very smoothly and was
    empowered by HEP
  • The LHC (Large Hadron Collider) at CERN boosted
    the development and usage of GRID technologies
  • e.g. gLite
  • LCG (LHC Computing GRID) and OSG (Open Science
    GRID) are the main middleware used in this field
  • LCG/gLite is HEP-specific

3
The W-LCG grid model
W-LCG is the framework for the operation of LHC
computing
4
W-LCG is based on LCG and OSG interoperability
KEK
This is our hope!
5
GRIDs
  • The Global Grid Forum web site lists 45 major
    different grid organizations
  • W-LCG is federating resources from 3 different
    Grids
  • EGEE (Europe, Asia, Canada)
  • OSG (US)
  • NorduGrid (Nordic countries)
  • NAREGI (Japan ?? )
  • Having one single grid is unrealistic
  • One has to deal with several Grids and provide
    interoperability
  • From the user point of view
  • From the sites' point of view

NAREGI (National Research Grid Initiative) is being
developed in Japan
6
Project presentation
  • This project proposes to work toward Grid
    Interoperability
  • Work will concentrate mainly on
  • EGEE / NAREGI interoperability
  • Crucial for ILC
  • Will become important for LHC
  • NAREGI is a huge effort in Japan, and will
    certainly become a piece of the W-LCG
    organization (I hope)
  • SRB / iRODS data grid (see later)
  • Implementation
  • Development
  • Interoperability with EGEE / NAREGI

NAREGI: 2003-2007, 10 billion yen; extended to
2009
7
People concerned in the project
  • KEK
  • S. Kawabata
  • T. Sasaki
  • G. Iwai
  • K. Murakami
  • Y. Iida
  • CC-IN2P3
  • D. Boutigny
  • S. Reynaud
  • F. Hernandez
  • J.Y. Nief

8
Activities during the first year
  • 2 workshops were held in conjunction with the
    AIL (Associated International Laboratory)
    between France and Japan
  • 3 days in September 2006 in Lyon, with 6 people
    from Japan
  • 4 days in February 2007 at KEK, with 5 people
    from France

9
NAREGI at KEK
  • KEK installed and is maintaining the NAREGI
    middleware

NAREGI is based on Grid standards defined at the
OGF (Open Grid Forum)
  • NAREGI beta-1 released in May 2006 and installed
    at KEK
  • With some help from the NAREGI support team for
    the last stage
  • Compute part (6 nodes)
  • Data grid part (3 nodes)

Tested at KEK with P152 (heavy ions) and Belle
simulation. Experience with the middleware is
being built up.
10
Next steps
  • Release of NAREGI b2 this summer
  • Expectations
  • Easy installation by apt-rpm
  • Stable and Robust middleware
  • Interoperation with EGEE/gLite
  • Job submission, Data exchange, Information
  • Customization of UI / VO portal with NAREGI Web
    service interface
  • Various useful features for application
  • GridFTP-APT: automatic parallel tuning for
    GridFTP
  • GridMPI: MPI jobs linked across sites
  • Bulk job submission
  • Try to install LHC data processing or simulation
    on NAREGI
  • To prove usefulness even in HEP application
  • Release of NAREGI v. 1.0 in April-May 2008
  • Expand NAREGI sites to Asia, the EU and the US,
    collaborating on its complementary features

11
Interoperability between EGEE and NAREGI
  • 2 possible approaches
  • Implement the GIN (Grid Interoperability Now)
    layer in NAREGI
  • Defined by the GIN group from the OGF
  • Short-term solution in order to get the
    Interoperability Now!
  • Pragmatic approach
  • Work with longer term standards defined within
    the OGF
  • Develop a Meta Scheduler compatible with many
    Grid implementations
  • Based on SAGA (Simple API for Grid Applications)
    "Instead of interfacing directly to Grid
    Services, the applications can so access basic
    Grid Capabilities with a simple, consistent and
    stable API"
  • and JSDL (Job Submission Description Language)

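A JSDL job description is plain XML. As a rough illustration of what a submission document looks like (element and namespace names follow the OGF JSDL 1.0 schema; the executable and arguments below are invented placeholders, not anything from this project), it can be built with the Python standard library:

```python
import xml.etree.ElementTree as ET

# Namespace URIs from the OGF JSDL 1.0 specification
JSDL_NS = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX_NS = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

def make_jsdl(executable, arguments):
    """Build a minimal JSDL JobDefinition document (illustrative only)."""
    ET.register_namespace("jsdl", JSDL_NS)
    ET.register_namespace("jsdl-posix", POSIX_NS)
    root = ET.Element(f"{{{JSDL_NS}}}JobDefinition")
    desc = ET.SubElement(root, f"{{{JSDL_NS}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL_NS}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX_NS}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX_NS}}}Executable").text = executable
    for arg in arguments:
        ET.SubElement(posix, f"{{{POSIX_NS}}}Argument").text = arg
    return ET.tostring(root, encoding="unicode")

# Hypothetical job: the path and options are placeholders
print(make_jsdl("/usr/bin/simulate", ["--events", "1000"]))
```

The point of a standard description like this is that any SAGA-aware scheduler can consume it regardless of which Grid executes the job.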
12
GIN
  • Pushed by KEK and others, NAREGI has made
    considerable efforts to implement the GIN layer
  • "Trying to identify islands of interoperation
    between production grids and grow those islands"

Developing an interoperation island with EGEE:
  • Common framework for Authentication and VO
    management
  • Cross job submission between gLite/EGEE and
    NAREGI
  • Data transfer between gLite and Gfarm
  • Grid resource information service around the
    world
From a NAREGI presentation in 02/07
13
An example of interoperability
GIN-data: Data Management and Movement
Interoperability between NAREGI and EGEE at the
data level
Work also on GIN-auth, GIN-jobs, GIN-info,
GIN-ops
[Diagram: on the EGEE side, a gLite client and an
SRM client talk to an LFC metadata server and a
GridFTP server; GridFTP bridges to a Gfarm server,
with each side backed by its own storage]
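The GridFTP leg of such a transfer is typically driven by the `globus-url-copy` client. As a small sketch (the host names and paths are placeholders, not real endpoints), a helper can assemble the third-party-copy command line; `-p` requests parallel data streams:

```python
import subprocess

def gridftp_copy_cmd(src, dst, parallel=4):
    """Assemble a globus-url-copy command line for a GridFTP transfer.
    -p sets the number of parallel data streams."""
    return ["globus-url-copy", "-p", str(parallel), src, dst]

cmd = gridftp_copy_cmd(
    "gsiftp://gfarm-gw.example.jp/gfarm/data/run01.dat",  # Gfarm side (placeholder)
    "gsiftp://se.example.fr/dpm/data/run01.dat",          # EGEE SE side (placeholder)
)
# Running it requires the Globus client tools and a valid GSI proxy:
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```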
14
The SAGA / JSDL approach
  • This approach is being developed at CC-IN2P3
    (Sylvain Reynaud)
  • This interoperability tool is sharing several
    modules with a software layer which is being
    developed at CC-IN2P3 in order to easily
    interface our local batch system to any Grid
    Computing Element

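The plug-in idea behind such a layer can be caricatured as a registry of middleware adapters behind one submission call, dispatched on the URL scheme. Everything below is a hypothetical illustration of the pattern, not JSAGA code:

```python
class JobAdapter:
    """Base class for middleware-specific plug-ins (hypothetical)."""
    scheme = None

    def submit(self, job_desc):
        raise NotImplementedError

class GliteAdapter(JobAdapter):
    """Would translate the job description for a gLite WMS."""
    scheme = "glite"

    def submit(self, job_desc):
        return f"gLite job: {job_desc['executable']}"

class GramAdapter(JobAdapter):
    """Would translate the job description for a WS-GRAM endpoint."""
    scheme = "gram"

    def submit(self, job_desc):
        return f"WS-GRAM job: {job_desc['executable']}"

# One instance per supported middleware, keyed by URL scheme
ADAPTERS = {cls.scheme: cls() for cls in (GliteAdapter, GramAdapter)}

def submit(url, job_desc):
    """Dispatch one job description to the adapter matching the URL scheme."""
    scheme = url.split("://", 1)[0]
    return ADAPTERS[scheme].submit(job_desc)

print(submit("glite://wms.example.org", {"executable": "/bin/date"}))
```

The application only ever calls `submit`; adding a new Grid flavour means adding one adapter, which is the sense in which the layer is "compatible with many Grid implementations".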
15
[Diagram: a job description enters JSAGA, which
dispatches through gLite and Globus plug-ins to a
WMS and to SRM / GridFTP for input data, and on to
gLite-CEs and WS-GRAM endpoints behind a firewall]
16
Extensible via plug-ins
ELIS@ (Enterprise grid with Local Infrastructure
and Services for Applications)
17
Next steps on NAREGI / EGEE interoperability
  • Continue work on both directions GIN and SAGA /
    JSDL
  • Try cross job submission on both Grid middleware
  • Explore data exchange between NAREGI and EGEE

18
The Storage Resource Broker (SRB)
  • SRB is a relatively lightweight data grid
    system developed at SDSC
  • Considerable experience has been gained at KEK,
    SLAC, RAL and CC-IN2P3
  • Heavily used for BaBar data transfers for years
    (up to 5 TB/day)
  • Very interesting solution to store and share
    biomedical data (images)
  • Advantages
  • Easy and fast development of applications
  • Extensibility
  • Reliability
  • Ease of administration

19
Biomedical applications using SRB
[Diagram: DICOM images pushed from the acquisition
system into SRB]

20
From SRB to iRODS
  • iRODS (iRule Oriented Data Systems) is the SRB
    successor
  • CC-IN2P3 and KEK are both involved in iRODS
    developments and tests
  • Should bring many new functionalities

21
From SRB to iRODS
iRule Oriented Data Systems
The definition of rules and micro-services allows
the system to be fully customized and adapted to
the application
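The rule/micro-service idea can be caricatured in a few lines: rules decide which chain of micro-services runs when a given event occurs. All rule and micro-service names below are invented for illustration, not actual iRODS micro-services:

```python
# Registry of micro-services: small named actions the rule engine can chain
# (hypothetical actions, for illustration only).
MICROSERVICES = {
    "checksum": lambda ctx: ctx.setdefault("log", []).append("checksummed"),
    "replicate": lambda ctx: ctx.setdefault("log", []).append("replicated"),
}

# Rules map an event to a chain of micro-services (hypothetical policy:
# on every put, checksum the object, then replicate it).
RULES = {"on_put": ["checksum", "replicate"]}

def fire(event, ctx):
    """Run the micro-service chain configured for this event."""
    for name in RULES.get(event, []):
        MICROSERVICES[name](ctx)
    return ctx

print(fire("on_put", {"object": "/zone/home/user/file.dat"}))
```

Changing site policy then means editing the rule table, not the data-management code, which is what "fully customize the system" refers to.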
22
SRB-DSI
Being developed by Iida-san, SRB-DSI is a software
layer which allows the Globus-based Grid world to
interoperate with SRB
Globus-world data transfer is based on GridFTP;
SRB is based on its own protocol
Crucial for interoperating the LCG and SRB worlds
23
The SRB-DSI architecture
[Diagram: Scommand, edg-gridftp-ls and
globus-url-copy clients reach a GridFTP front end
(rac01); srbServers on rsr01, rls09 and rls10
speak SRB with GSI_AUTH and share an MCAT; the
backends are local POSIX I/O on an external disk
subsystem (rsr01-ufs), HPSS via hsi
(rls09-hpss-hsi) and HPSS via the VFS interface
with hpssfsd (rls10-hpss-vfs)]
  • Each DSI accesses the following SRB resource:
  • gsiftp://rsr01 -> rsr01-ufs
  • gsiftp://rls09 -> rls09-hpss-hsi
  • gsiftp://rls10 -> rls10-hpss-vfs
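The per-host mapping above amounts to a lookup from GridFTP endpoint to the SRB resource backing it. As a toy illustration (host and resource names taken from the slide):

```python
# gsiftp endpoint host -> SRB resource backing it (names from the slide)
DSI_RESOURCE_MAP = {
    "rsr01": "rsr01-ufs",        # local UNIX file system
    "rls09": "rls09-hpss-hsi",   # HPSS accessed via hsi
    "rls10": "rls10-hpss-vfs",   # HPSS accessed via the VFS interface
}

def srb_resource_for(gsiftp_url):
    """Return the SRB resource serving a gsiftp:// URL."""
    host = gsiftp_url.split("://", 1)[1].split("/", 1)[0]
    return DSI_RESOURCE_MAP[host]

print(srb_resource_for("gsiftp://rls09/some/path"))  # -> rls09-hpss-hsi
```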
24
Next step for SRB / LCG interoperability
  • In order to have LCG and SRB fully interoperable
    we need to develop an SRB / SRM interface
  • This will be a common area of work for CC-IN2P3
    and KEK in the near future
  • One of us plans to stay at CC-IN2P3 for 8
    months
  • Then we will explore the possibility of making
    SRB an alternative for LCG storage
  • SRB / iRODS is probably a good candidate to
    store users' files Grid-wide? An idea to be
    explored

25
Networking
  • KEK -> CC-IN2P3
  • default settings: 110 Mbps
  • interface queue length set to 100000 and 8 MB
    window size: 300 Mbps
  • PSPacer: 700 Mbps
  • CC-IN2P3 -> KEK
  • 500 Mbps with best effort
  • Why asymmetric?
  • Traffic pattern??
  • GNET-1 will be tested

[Plots: throughput before and after tuning]
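The jump from the default rate to roughly 300 Mbps with an 8 MB window is about what the bandwidth-delay product predicts. Assuming a KEK-to-Lyon round-trip time of about 250 ms (an assumption for illustration; the slide does not give the RTT), a single TCP stream is capped at window / RTT:

```python
def window_limited_throughput_mbps(window_bytes, rtt_s):
    """TCP throughput ceiling imposed by the window size: window / RTT."""
    return window_bytes * 8 / rtt_s / 1e6

# 8 MB window, ~250 ms KEK <-> CC-IN2P3 RTT (assumed)
print(round(window_limited_throughput_mbps(8 * 1024**2, 0.25)))  # -> 268 Mbps
```

This is close to the observed ~300 Mbps, which is why window tuning (and pacing, for the further step to 700 Mbps) matters on long intercontinental paths.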
26
GRID application
  • HEP analysis
  • Medical applications
  • Many people in HEP are also working in medical
    fields
  • Hadron therapy
  • Medical imaging

27
Summary
  • GRID Interoperability
  • Middleware operability
  • LCG/gLite and NAREGI
  • Interoperability to existing data storages
  • SRB
  • SRB-DSI
  • SRM
  • iRODS
  • SRM
  • Network monitoring
  • Systematic long-term monitoring is necessary