1
The Nordic Grid Infrastructure: A Grid within the Grid
Early Experiences of Bridging National eBorders
Conference xxx - August 2003
Anders Ynnerman
Director, Swedish National Infrastructure for Computing (SNIC)
Linköping University, Sweden
2
Outline of presentation
  • TYPES of GRIDs
  • Some GRID efforts in the Nordic Region
  • Nordic participation in EGEE
  • SweGrid testbed for production
  • NorduGrid - ARC
  • North European Grid
  • Nordic DataGrid Facility
  • Identifying potential problems for Nordic Grid
    collaborations
  • Proposed solutions

3
GRID-Vision
  • Hardware, networks and middleware are used to put
    together a virtual computer resource
  • Users should not have to know where computation
    is taking place or where data is stored
  • Users will work together over disciplinary and
    geographical borders and form virtual
    organizations

The best path to levels of complexity and
confusion never previously reached in the history
of computing
4
Flat GRID: SETI@HOME
[Diagram: a single flat GRID]
5
Collaborative GRID: UK eScience
[Diagram: users and resources connected through a shared GRID]
6
Power plant GRID: DEISA, SweGrid, ...
[Diagram: users connect to a GRID backed by several HPC-centers]
7
Hierarchical GRID: EDG/EGEE
[Diagram: users reach the GRID via a management layer, regional centers and local resources]
8
The EGEE project: Enabling Grids for E-science in Europe
9
EGEE Strategy
  • Leverage current and planned national and
    regional Grid programmes, building on the
    results of existing projects such as DataGrid and
    others
  • Build on the EU research network GÉANT and work closely with relevant industrial Grid developers and NRENs
  • Support Grid computing needs common to the different communities, integrate the computing infrastructures and agree on common access policies
  • Exploit international connections (US and AP)
  • Provide interoperability with other major Grid initiatives such as the US NSF Cyberinfrastructure, establishing a worldwide Grid infrastructure

10
EGEE Partners
  • Leverage national resources in a more effective
    way for broader European benefit
  • 70 leading institutions in 27 countries,
    federated in regional Grids

11
EGEE Activities
  • 48% Services
    SA1 Grid Operations, Support and Management
    SA2 Network Resource Provision
  • 24% Joint Research
    JRA1 Middleware Engineering and Integration
    JRA2 Quality Assurance
    JRA3 Security
    JRA4 Network Services Development
  • 28% Networking
    NA1 Management
    NA2 Dissemination and Outreach
    NA3 User Training and Education
    NA4 Application Identification and Support
    NA5 Policy and International Cooperation
Emphasis in EGEE is on operating a production grid and supporting the end-users
12
EGEE Service Activity (I)
  • 1 Operations Management Centre (OMC)
    - Coordinator for the CICs and ROCs
    - Team to oversee operations: problem resolution, performance targets, etc.
    - Operations Advisory Group to advise on policy issues, etc.
  • 5 Core Infrastructure Centres (CIC)
    - Day-to-day operation management; implement the operational policies defined by the OMC
    - Monitor state, initiate corrective actions; eventual 24x7 operation of the grid infrastructure
    - Provide resource and usage accounting and security incident response coordination; ensure recovery procedures
  • 11 Regional Operations Centres (ROC)
    - Provide front-line support to users and resource centres
    - Support new resource centres joining EGEE in the regions
13
The Northern Region ROC
  • Joint operation between SARA (Netherlands) and the Swedish National Infrastructure for Computing (SNIC)
  • Collaboration body formed in the North European Grid Cluster (NEG)
  • SNIC-ROC is responsible for Sweden, Norway, Finland, Denmark and Estonia
  • Operation
    - Negotiate service level agreements (SLAs) with committed resource centres (RCs) in the Nordic countries and Estonia
    - Deploy and support EGEE grid middleware, including documentation and training
    - Monitor and support 24/7 operation of Grid resources
    - Ensure collaboration with other Grid initiatives in the region: Nordic Data Grid, NorduGrid, SweGrid, NorGrid, CSC, ...
14
EGEE Service Activity (II)
[Table: ramp-up of Resource Centres, Month 1-10 vs Month 15-20]
15
EGEE Middleware Activity
  • Hardening and re-engineering of existing middleware functionality, leveraging the experience of the partners
  • Activity concentrated in a few major centers
  • Key services:
    - Resource Access
    - Data Management (CERN)
    - Information Collection and Accounting (UK)
    - Resource Brokering (Italy)
    - Quality Assurance (France)
    - Grid Security (NEG)
    - Middleware Integration (CERN)
    - Middleware Testing (CERN)

16
EGEE Implementation Plans
  • Initial service will be based on the LCG
    infrastructure (this will be the production
    service, most resources allocated here)
  • CERN experiments and life sciences will provide
    pilot applications
  • Experiments form virtual organizations
  • As the project evolves, more application interfaces will be developed and EGEE will take on its role as a general infrastructure for E-science

17
EGEE Status
  • First meeting held in Cork (300 participants)
  • Project management board appointed
    - Anders Ynnerman representing the Northern Federation
    - Dieter Kranzmüller elected first chair
  • Routines for time reports are being set up
  • ROC recruitment is under way
18
SweGrid production testbed
  • Initiative from
    - All HPC-centers in Sweden
    - IT researchers wanting to do research on Grid technology
    - Users
      - Life sciences
      - Earth sciences
      - Space and astrophysics
      - High energy physics
  • PC-clusters with large storage capacity
  • Built for GRID production
  • Participation in international collaborations
    - LCG
    - EGEE
    - NorduGrid
19
SweGrid subprojects
  • 2.5 MEuro: 6 PC-clusters, 600 CPUs for throughput computing
  • 0.25 MEuro/year: 6 technicians, forming the core team for the Northern EGEE ROC
  • 0.25 MEuro/year:
    - Portals
    - Databases
    - Security
    - Globus Alliance
    - EGEE security
20
SweGrid
  • Total budget 3.6 MEuro
  • 6 GRID nodes, 600 CPUs
    - IA-32, 1 processor/server
    - Intel 875P chipset with 800 MHz FSB and dual memory busses
    - 2.8 GHz Intel P4
    - 2 GByte memory
    - Gigabit Ethernet
  • 12 TByte temporary storage
    - FibreChannel for bandwidth
    - 14 x 146 GByte, 10000 rpm (roughly 2 TByte per node, 12 TByte across the 6 nodes)
  • 120 TByte nearline storage
    - 60 TByte disk
    - 60 TByte tape
  • 1 Gigabit direct connection to SUNET (10 Gbps backbone)
21
SUNET connectivity
[Diagram: the GigaSunet 10 Gbit/s backbone and a typical university POP; the "snowman" topology]
22
Persistent storage on SweGrid?
[Diagram: three options, weighed by bandwidth, availability, size and administration]
23
Observations
  • Global user identity
    - Each SweGrid user must receive a unique X.509 certificate (a sketch of inspecting one follows this list)
    - All centers must agree on a common lowest level of security; this will affect the general security policy of the HPC centers
  • Unified support organization
    - All helpdesk activities and other support need to be coordinated between the centers; users cannot (and should not) decide where their jobs will run, and expect the same level of service at all sites
  • More bandwidth is needed
    - Moving data between the SweGrid nodes before and after job execution will require continuously increasing bandwidth
  • More storage is needed
    - Despite increasing bandwidth, users cannot fetch all data back home; storage for both temporary and permanent data will be needed in close proximity to the processor capacity
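
A minimal sketch of what such a global identity looks like in practice: reading the distinguished name (DN) from a user certificate, here with Python's cryptography package. The file name is illustrative, not from the slides.

  # Inspect a grid user certificate (file name is illustrative).
  from cryptography import x509

  with open("usercert.pem", "rb") as f:
      cert = x509.load_pem_x509_certificate(f.read())
  print(cert.subject.rfc4514_string())         # the user's global DN
  print("valid until:", cert.not_valid_after)  # certificate lifetime

The DN printed here is the identity each site would map to a local account, e.g. via a Globus grid-mapfile.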

24
Bluesmoke@NSC
25
SweGrid status
  • All nodes installed during January 2004
  • Nearline storage systems are currently being installed
  • Extensive use of the resources already
    - Local batch queues
    - GRID queues through the NorduGrid middleware
  • 60 users
  • Contributing to ATLAS Data Challenge 2
    - As a partner in NorduGrid
26
The NorduGrid project
  • Started in January 2001, funded by NorduNet-2
  • Initial goal: deploy DataGrid middleware to run the ATLAS Data Challenge
  • NorduGrid essentials
    - Built on GT-2
    - Replaces some Globus core services and introduces some new ones
    - Grid Manager, GridFTP, User Interface/Broker, Information Model, Monitoring (a job-submission sketch follows this list)
  • Track record
    - Used in the ATLAS DC tests in May 2002
    - Chosen as the middleware for SweGrid
    - Currently being used in ATLAS DC 2
  • Continuation
    - Could be included in the framework of the Nordic Data Grid Facility
    - Middleware renamed to ARC
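
For flavour, a minimal sketch of job submission through the NorduGrid user interface: the job is described in xRSL and handed to the broker, which selects a suitable cluster. This assumes the classic ngsub client and a valid grid proxy; the -f flag and all file names are assumptions, not taken from the slides.

  import subprocess, tempfile

  # A tiny xRSL job description (extended Resource Specification Language).
  xrsl = '''&(executable="run.sh")
   (inputFiles=("input.dat" ""))
   (outputFiles=("result.dat" ""))
   (cpuTime="60 minutes")
   (stdout="out.log")
   (stderr="err.log")'''

  with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
      f.write(xrsl)

  # The broker in the user interface picks a matching cluster
  # ("-f" assumed as the xRSL-file option of the classic client).
  subprocess.run(["ngsub", "-f", f.name], check=True)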

27
The NorduGrid philosophy
  • No single point of failure
  • Resource owners have full control of the contributed resources
  • Installation details should not be dictated
    - Method, OS version, configuration, etc.
  • As little restriction on site configuration as possible
    - Computing nodes should not be required to be on the public network
    - Clusters need not be dedicated
  • NorduGrid software should be able to use the existing system and Globus installation
    - Patched Globus RPMs are provided
  • Start with something simple that works and proceed from there

28
The resources
  • Currently available resources
    - 4 dedicated test clusters (3-4 CPUs)
    - Some junkyard-class second-hand clusters (4 to 80 CPUs)
    - SweGrid
    - A few university production-class facilities (20 to 60 CPUs)
    - Two world-class clusters in Sweden, listed in the Top500 (230 and 440 CPUs)
  • Other resources come and go
    - Canada, Japan: test set-ups
    - CERN, Russia: clients
    - It's open; anybody can join or leave
  • People
    - The core team has grown to 7 persons
    - Local sysadmins are only called upon when users need an upgrade

29
A NorduGrid snapshot
30
Reflections on NorduGrid
  • Bottom-up project driven by an application-motivated group of talented people
  • Middleware adaptation and development has followed a flexible and minimally invasive approach
  • HPC centers are currently connecting large resources since it is good PR for the centers
    - As soon as NorduGrid usage of these resources increases, they will be disconnected; there is no such thing as free cycles!
  • Motivation for resource allocations is missing: no authorization
    - NorduGrid lacks an approved procedure for allocating resources to VOs and individual user groups based on scientific reviews of proposals

31
Challenges
Users Country 2
Users Country 4
Users Country 1
Users Country 3
Current HPC setup
Funding agency country 1
HPC-center
HPC-center
HPC-center
HPC-center
Funding agency country 2
Funding agency country 3
Funding agency country 4
32
Other GRIDs
Users
VO 3
Proposals
VO 1
GRID management
VO 2
Accounting
Authorization
Middleware
MoU
SLAs
33
Nordic Status
  • A large number (too large) of GRID initiatives in the Nordic region
  • HPC centers and GRID initiatives are rapidly merging
  • Strong need for a mechanism for exchange of resources over the borders
  • Very few existing users belong to VOs
    - Most cycles at HPC-centers are used for chemistry
    - How will these users become GRID users?
  • There is a need for authorities that grant resources to projects (VOs)
    - Locally
    - Regionally
    - Nationally
    - Internationally

34
Resource allocation
  • The granularity of the GRID resources reflects the granularity of the funding mechanisms
  • Authorization of usage is based on funding granularity
  • Solutions
    - Flat resource allocations
    - Supernational allocations based on international agreements
    - Hierarchical allocations (DEISA)
  • All solutions require accounting and pricing of resources: the need for a Grid Economy (a minimal accounting sketch follows this list)
  • Several different solutions exist in theory!
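
To make the accounting requirement concrete, a minimal sketch of per-VO allocation bookkeeping. The slides only argue that such accounting is needed; the VO name and the numbers below are hypothetical.

  from dataclasses import dataclass

  @dataclass
  class VOAllocation:
      granted_cpu_hours: float
      used_cpu_hours: float = 0.0

      def charge(self, cpu_hours: float) -> bool:
          # Debit a finished job; refuse it if the grant would be exceeded.
          if self.used_cpu_hours + cpu_hours > self.granted_cpu_hours:
              return False
          self.used_cpu_hours += cpu_hours
          return True

  # A hypothetical grant awarded to a VO after scientific review of a proposal.
  ledger = {"atlas": VOAllocation(granted_cpu_hours=50_000)}
  assert ledger["atlas"].charge(120.0)  # a job that consumed 120 CPU-hours

Pricing would enter as a rate per CPU-hour on top of this bookkeeping.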

35
GRID-Initiatives
[Timeline 2003-2007: NGC, SNIC, SweGrid Testbed (continuation marked "?"), Nordic GRID Facility (NOS-N), NorduGrid, CERN LCG/EGEE]
36
Nordic Possibilities
  • Well-defined region
  • Several different Grid initiatives
    - SweGrid, NorGrid, DanGrid, FinGrid, ...
  • Similar projects
    - Portals
    - National storage
    - National authentication
  • Collaboration between funding agencies already exists
  • Limited number of people involved in all efforts
  • Cultural similarities
    - The Nordic way is to shortcut administration and get down to business?
  • Expansion to the Baltic new member states poses interesting challenges

37
The Nordic DataGrid Facility
  • Based on a working paper by the NOS-N working group
    - The collaborative board for the Nordic research councils
  • Project started 2003-01-01
  • Builds on interest from
    - Biomedical sciences
    - Earth sciences
    - Space and astro sciences
    - High energy physics
  • Gradual build-up and planning
  • Originally planned to be one physical facility
  • Project is currently undergoing changes

38
The Nordic Grid
  • Most likely:
    - NDGF will be built as a Grid of Grids
    - Common services and projects will be identified
    - It will serve as a testbed for Grid economy models
    - NorduGrid software (ARC) development will continue in the framework of NDGF
    - The name will change to NorduGrid?
  • In this way the Nordic countries will have one interface to the outside world, to the benefit of
    - LCG
    - EGEE

39
Conclusions
  • The Nordic countries are early adopters of GRID technology
  • There are several national GRIDs
  • The NorduGrid middleware (ARC) has been successfully deployed
  • A Nordic GRID is currently being planned
    - Nordic-level authentication, authorization and accounting
    - Nordic policy framework
    - Nordic interface to other projects