GridPP Project Elements - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: GridPP Project Elements


1
Tony Doyle (a.doyle@physics.gla.ac.uk)
GridPP Project Elements
UK e-Science All Hands Conference, Sheffield
3 September 2002
2
Outline GridPP Project Elements
  • Who are we?
  • Motivation
  • Overview
  • Project Map
  • CERN
  • DataGrid
  • Applications
  • Infrastructure
  • Interoperability
  • Dissemination
  • Finances
  • Achievements and Issues
  • Summary

3
Who are we?
Nick White /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Nick White member
Roger Jones /O=Grid/O=UKHEP/OU=lancs.ac.uk/CN=Roger Jones member
Sabah Salih /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Sabah Salih member
Santanu Das /O=Grid/O=UKHEP/OU=hep.phy.cam.ac.uk/CN=Santanu Das member
Tony Cass /O=Grid/O=CERN/OU=cern.ch/CN=Tony Cass member
David Kelsey /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=David Kelsey member
Henry Nebrensky /O=Grid/O=UKHEP/OU=brunel.ac.uk/CN=Henry Nebrensky member
Paul Kyberd /O=Grid/O=UKHEP/OU=brunel.ac.uk/CN=Paul Kyberd member
Peter Hobson /O=Grid/O=UKHEP/OU=brunel.ac.uk/CN=Peter R Hobson member
Robin Middleton /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Robin Middleton member
Alexander Holt /O=Grid/O=UKHEP/OU=ph.ed.ac.uk/CN=Alexander Holt member
Alasdair Earl /O=Grid/O=UKHEP/OU=ph.ed.ac.uk/CN=Alasdair Earl member
Akram Khan /O=Grid/O=UKHEP/OU=ph.ed.ac.uk/CN=Akram Khan member
Stephen Burke /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Stephen Burke member
Paul Millar /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Paul Millar member
Andy Parker /O=Grid/O=UKHEP/OU=hep.phy.cam.ac.uk/CN=M.A.Parker member
Neville Harnew /O=Grid/O=UKHEP/OU=physics.ox.ac.uk/CN=Neville Harnew member
Pete Watkins /O=Grid/O=UKHEP/OU=ph.bham.ac.uk/CN=Peter Watkins member
Owen Maroney /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Owen Maroney member
Alex Finch /O=Grid/O=UKHEP/OU=lancs.ac.uk/CN=Alex Finch member
Antony Wilson /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Antony Wilson member
Tim Folkes /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Tim Folkes member
Stan Thompson /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=A. Stan Thompson member
Mark Hayes /O=Grid/O=UKHEP/OU=amtp.cam.ac.uk/CN=Mark Hayes member
Todd Huffman /O=Grid/O=UKHEP/OU=physics.ox.ac.uk/CN=B. Todd Huffman member
Glenn Patrick /O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=G N Patrick member
Pete Gronbech /O=Grid/O=UKHEP/OU=physics.ox.ac.uk/CN=Pete Gronbech member
Nick Brook /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Nick Brook member
Marc Kelly /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Marc Kelly member
Dave Newbold /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Dave Newbold member
Kate Mackay /O=Grid/O=UKHEP/OU=phy.bris.ac.uk/CN=Catherine Mackay member
Girish Patel /O=Grid/O=UKHEP/OU=ph.liv.ac.uk/CN=Girish D. Patel member
David Martin /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=David J. Martin member
Peter Faulkner /O=Grid/O=UKHEP/OU=ph.bham.ac.uk/CN=Peter Faulkner member
David Smith /O=Grid/O=UKHEP/OU=ph.bham.ac.uk/CN=David Smith member
Steve Traylen /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Steve Traylen member
Ruth Dixon del Tufo /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Ruth Dixon del Tufo member
Linda Cornwall /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Linda Cornwall member
/O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Yee-Ting Li member
Paul D. Mealor /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Paul D Mealor member
/O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Paul A Crosby member
David Waters /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=David Waters member
Bob Cranfield /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Bob Cranfield member
Ben West /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Ben West member
Rod Walker /O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Rod Walker member
/O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Philip Lewis member
Dave Colling /O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Dr D J Colling member
Alex Howard /O=Grid/O=UKHEP/OU=hep.ph.ic.ac.uk/CN=Alex Howard member
Roger Barlow /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Roger Barlow member
Joe Foster /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Joe Foster member
Alessandra Forti /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Alessandra Forti member
Peter Clarke /O=Grid/O=UKHEP/OU=hep.ucl.ac.uk/CN=Peter Clarke member
Andrew Sansum /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=Andrew Sansum member
John Gordon /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=John Gordon member
Andrew McNab /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Andrew McNab member
Richard Hughes-Jones /O=Grid/O=UKHEP/OU=hep.man.ac.uk/CN=Richard Hughes-Jones member
Gavin McCance /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Gavin McCance member
Tony Doyle /O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Tony Doyle admin
Alex Martin /O=Grid/O=UKHEP/OU=ph.qmw.ac.uk/CN=A.J.Martin member
Steve Lloyd /O=Grid/O=UKHEP/OU=ph.qmw.ac.uk/CN=S.L.Lloyd admin
John Gordon /O=Grid/O=UKHEP/OU=hepgrid.clrc.ac.uk/CN=John Gordon member
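The entries above are X.509-style distinguished names in the Globus "/O=.../OU=.../CN=..." form used for the GridPP virtual organisation membership list. As a minimal illustration (not GridPP tooling), a Python sketch that splits such a DN string into its attribute/value pairs, assuming only the "/KEY=value" layout shown above:

  # Illustrative sketch: split a Globus-style distinguished name such as
  # "/O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Tony Doyle" into attribute/value pairs.
  # Not GridPP code; it assumes only the "/KEY=value" layout shown above.
  def parse_dn(dn: str) -> list:
      parts = []
      for component in dn.strip("/").split("/"):
          key, _, value = component.partition("=")
          parts.append((key, value))
      return parts

  if __name__ == "__main__":
      dn = "/O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Tony Doyle"
      for key, value in parse_dn(dn):
          print(f"{key:3s} = {value}")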
4
Origin of Mass - Rare Phenomenon
All interactions
9 orders of magnitude!
The HIGGS
5
Matter-Antimatter Asymmetry - Complex Interactions
Design
Simulation
Complexity
Understanding
6
GridPP Overview
£17m 3-year project funded by PPARC through the
e-Science Programme
CERN - LCG (start-up phase) funding for staff
and hardware...
Applications £1.99m
Operations £1.88m
Tier-1/A £3.66m
CERN £5.67m
DataGrid £3.78m
EDG - UK Contributions: Architecture, Testbed-1, Network Monitoring, Certificates, Security, Storage Element, R-GMA, LCFG, MDS deployment, GridSite, SlashGrid, Spitfire
http://www.gridpp.ac.uk
Applications (start-up phase): BaBar, CDF/D0 (SAM), ATLAS/LHCb, CMS, (ALICE), UKQCD
7
GridPP Bridge
Provide architecture and middleware
Future LHC Experiments
Running US Experiments
Build Tier-A/prototype Tier-1 and Tier-2 centres
in the UK and join worldwide effort to develop
middleware for the experiments
Use the Grid with simulated data
Use the Grid with real data
8
GridPP Vision
  • From Web to Grid - Building the next IT
    Revolution
  • Premise
  • The next IT revolution will be the Grid. The
    Grid is a practical solution to the
    data-intensive problems that must be overcome if
    the computing needs of many scientific
    communities and industry are to be fulfilled over
    the next decade.
  • Aim
  • The GridPP Collaboration aims to develop and
    deploy a large-scale science Grid in the UK for
    use by the worldwide particle physics community.

Many challenges... a shared, distributed infrastructure for all applications
9
GridPP Objectives
  • SCALE GridPP will deploy open source Grid
    software (middleware) and hardware infrastructure
    to enable the testing of a prototype of the Grid
    for the LHC of significant scale.
  • INTEGRATION The GridPP project is designed to
    integrate with the existing Particle Physics
    programme within the UK, thus enabling early
    deployment and full testing of Grid technology
    and efficient use of limited resources.
  • DISSEMINATION The project will disseminate the
    GridPP deliverables in the multi-disciplinary
    e-science environment and will seek to build
    collaborations with emerging non-PPARC Grid
    activities both nationally and internationally.
  • UK PHYSICS ANALYSES (LHC) The main aim is to
    provide a computing environment for the UK
    Particle Physics Community capable of meeting the
    challenges posed by the unprecedented data
    requirements of the LHC experiments.
  • UK PHYSICS ANALYSES (OTHER) The process of creating and testing the computing environment for the LHC will naturally provide for the needs of the current generation of highly data-intensive Particle Physics experiments; these will provide a live test environment for GridPP research and development.
  • DATAGRID Open source Grid technology is the
    framework used to develop this capability. Key
    components will be developed as part of the EU
    DataGrid project and elsewhere.
  • LHC COMPUTING GRID The collaboration builds on
    the strong computing traditions of the UK at
    CERN. The CERN working groups will make a major
    contribution to the LCG research and development
    programme.
  • INTEROPERABILITY The proposal is also integrated
    with developments from elsewhere in order to
    ensure the development of a common set of
    principles, protocols and standards that can
    support a wide range of applications.
  • INFRASTRUCTURE Provision is made for facilities
    at CERN (Tier-0), RAL (Tier-1) and use of up to
    four Regional Centres (Tier-2).
  • OTHER FUNDING These centres will provide a focus
    for dissemination to the academic and commercial
    sector and are expected to attract funds from
    elsewhere such that the full programme can be
    realised.

10
GridPP Organisation
11
GridPP Project Map - Elements
12
GridPP Project Map - Metrics and Tasks
Available from the web pages; provides the structure for this talk
13
LHC Computing Challenge
1. CERN
1 TIPS = 25,000 SpecInt95; a PC (1999) = ~15 SpecInt95
PBytes/sec
Online System
100 MBytes/sec
Offline Farm ~20 TIPS
  • One bunch crossing per 25 ns
  • 100 triggers per second
  • Each event is 1 Mbyte

100 MBytes/sec
Tier 0
CERN Computer Centre >20 TIPS
Gbits/sec
or Air Freight
HPSS
Tier 1
RAL Regional Centre
US Regional Centre
French Regional Centre
Italian Regional Centre
HPSS
HPSS
HPSS
HPSS
Tier 2
Tier2 Centre 1 TIPS
Tier2 Centre 1 TIPS
Tier2 Centre 1 TIPS
Gbits/sec
Tier 3
Physicists work on analysis channels. Each institute has ~10 physicists working on one or more channels. Data for these channels should be cached by the institute server.
Institute ~0.25 TIPS
Institute
Institute
Institute
Physics data cache
100 - 1000 Mbits/sec
Tier 4
Workstations
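The numbers in this tier diagram follow from simple arithmetic: 100 triggers per second at roughly 1 MByte per event gives the quoted 100 MBytes/sec into storage. A back-of-the-envelope Python sketch, assuming (purely for illustration) about 10^7 seconds of effective running per year, shows how this reaches the PByte scale quoted elsewhere in the talk:

  # Back-of-the-envelope check of the data rates quoted on this slide.
  # The 1e7 seconds/year of effective running is an assumption for illustration.
  trigger_rate_hz = 100      # ~100 triggers per second (from the slide)
  event_size_mb = 1.0        # each event is ~1 MByte (from the slide)
  seconds_per_year = 1e7     # assumed effective running time per year

  rate_mb_per_s = trigger_rate_hz * event_size_mb
  volume_pb_per_year = rate_mb_per_s * seconds_per_year / 1e9   # MB -> PB

  print(f"Raw rate to storage: {rate_mb_per_s:.0f} MBytes/sec")
  print(f"Yearly raw volume:   ~{volume_pb_per_year:.1f} PBytes")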
14
LHC Computing Grid - High Level Planning
1. CERN
Prototype of Hybrid Event Store (Persistency
Framework)
Hybrid Event Store available for general users
Distributed production using grid services
Full Persistency Framework
applications
Distributed end-user interactive analysis
Grid as a Service
LHC Global Grid TDR
50% prototype (LCG-3) available
LCG-1 reliability and performance targets
First Global Grid Service (LCG-1) available
15
DataGrid Middleware Work Packages
2. DataGrid
  • Collect requirements for middleware
  • Take into account requirements from application
    groups
  • Survey current technology
  • For all middleware
  • Core Services testbed
  • Testbed 0 Globus (no EDG middleware)
  • Grid testbed releases
  • Testbed 1.2 current release
  • WP1 workload
  • Job resource specification and scheduling (see the toy matchmaking sketch below)
  • WP2 data management
  • Data access, migration and replication
  • WP3 grid monitoring services
  • Monitoring infrastructure, directories and presentation tools
  • WP4 fabric management
  • Framework for fabric configuration management and automatic software installation
  • WP5 mass storage management
  • Common interface for Mass Storage
  • WP7 network services
  • Network services and monitoring
  • Talk: GridPP - Developing an Operational Grid, Dave Colling
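For WP1 the essential step is matching a job's stated requirements against the resources published by the information services. The toy Python sketch below is not the EDG resource broker (which works with JDL job descriptions and the information system); the site records and requirement fields are invented purely to illustrate the matchmaking idea:

  # Toy matchmaking sketch in the spirit of WP1 (workload management).
  # Site records and requirement fields are invented for illustration only.
  sites = [
      {"name": "RAL",     "free_cpus": 120, "runtime_env": {"EDG-1.2", "CMS"}},
      {"name": "Glasgow", "free_cpus": 40,  "runtime_env": {"EDG-1.2", "ATLAS"}},
  ]
  job = {"min_cpus": 50, "needs_env": "CMS"}

  def match(job, sites):
      """Return candidate sites that satisfy the job requirements, best first."""
      ok = [s for s in sites
            if s["free_cpus"] >= job["min_cpus"] and job["needs_env"] in s["runtime_env"]]
      return sorted(ok, key=lambda s: s["free_cpus"], reverse=True)

  print([s["name"] for s in match(job, sites)])   # -> ['RAL']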

16
EDG TestBed 1 Status: 30 Aug 2002 17:38
  • Web interface showing status of (400) servers
    at testbed 1 sites
  • Production Centres

17
GridPP Sites in Testbed - Status: 30 Aug 2002 17:38
Project Map
Software releases at each site
18
A CMS Data Grid Job
3. Applications
2003 CMS data grid system vision
  • Demo
  • Current status of the system vision

19
ATLAS/LHCb Architecture
3. Applications
The Gaudi Framework - developed by LHCb
- adopted by ATLAS
(Athena)
20
GANGA: Gaudi ANd Grid Alliance
3. Applications
GANGA
GUI
Collective Resource Grid Services
Histograms Monitoring Results
JobOptions Algorithms
GAUDI Program
Making the Grid Work for the Experiments
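The diagram places GANGA between the physicist's GAUDI/Athena job options and the collective Grid services. The Python sketch below is hypothetical and does not use the real GANGA API; it only illustrates the idea of a front end that bundles job options into a job description and hands it to a submission backend:

  # Hypothetical illustration of a GANGA-like layer: bundle Gaudi-style job
  # options with an application name and pass them to a grid submission hook.
  # None of these class or function names come from the real GANGA code base.
  from dataclasses import dataclass, field

  @dataclass
  class GridJob:
      application: str                       # e.g. a GAUDI/Athena executable
      job_options: dict = field(default_factory=dict)
      input_data: list = field(default_factory=list)

      def submit(self, backend):
          """Hand the job description to a grid backend (stubbed here)."""
          return backend(self)

  def print_backend(job):
      print(f"submitting {job.application} with options {job.job_options}")
      return "job-id-0001"                   # placeholder identifier

  job = GridJob("GAUDI", {"EvtMax": 1000}, input_data=["lfn:/lhcb/example.dst"])
  print(job.submit(print_backend))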
21
Overview of SAM
3. Applications
SAM
Database Server(s) (Central Database)
Name Server
Global Resource Manager(s)
Log server
Shared Globally
Station 1 Servers
Station 3 Servers
Local
Station n Servers
Station 2 Servers
Mass Storage System(s)
Arrows indicate Control and data flow
Shared Locally
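In this picture a station consults the shared database/name server to locate a file, stages it in from the mass storage system, and keeps it in a locally shared cache. A toy Python sketch of that control flow, with invented names (this is not the real SAM interface):

  # Toy illustration of the SAM-style control flow: a station consults the
  # shared catalogue, stages a file in from mass storage, and caches it
  # locally. Names and structures are invented; this is not SAM code.
  catalogue = {"raw_run_42.dat": "mss"}                    # shared name server / DB
  mass_storage = {"raw_run_42.dat": b"...event data..."}   # MSS contents

  class Station:
      def __init__(self, name: str):
          self.name = name
          self.cache = {}                                  # locally shared cache

      def get_file(self, filename: str) -> bytes:
          if filename not in self.cache:                   # not yet delivered here
              assert catalogue[filename] == "mss"          # locate the file
              self.cache[filename] = mass_storage[filename]  # stage it in
          return self.cache[filename]

  station = Station("station-1")
  print(len(station.get_file("raw_run_42.dat")), "bytes cached at", station.name)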
22
Overview of SAM
3. Applications
SAM
SAM and DataGrid using common (lower)
middleware
23
BaBar and the Grid
3. Applications
A running experiment at SLAC (San Francisco), producing many Terabytes of useful data (a 500 TByte Objectivity database).
Computationally intense analysis: 500 physicists spread over 72 institutes in 9 countries, 50 of them in the UK. Scale forces a move from central to distributed computing.
A 1/3-size prototype for the LHC experiments. Already in place, it must respect existing practice: running, and needing solutions today.
24
Experiment Deployment
25
Advertisement
3. Applications
QCDGrid
  • Talk: The Quantum ChromoDynamics Grid, James Perry

26
Tier-0 - CERN
4. Infrastructure
Commodity processors + IBM (mirrored) EIDE disks...
Compute Element (CE), Storage Element (SE), User Interface (UI), Information Node (IN)
Storage systems...
2004 scale: 1,000 CPUs, 0.5 PBytes
27
UK Tier-1 RAL
4. Infrastructure
New computing farm: 4 racks holding 156 dual 1.4GHz Pentium III CPUs. Each box has 1GB of memory, a 40GB internal disk and 100Mb ethernet.
The tape robot was upgraded last year; it uses 60GB STK 9940 tapes, with 45TB current capacity and room to hold 330TB.
50TByte disk-based Mass Storage Unit after RAID 5 overhead. PCs are clustered on network switches with up to 8x1000Mb ethernet out of each rack.
2004 scale: 1,000 CPUs, 0.5 PBytes
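The tape figures above can be cross-checked with a line of arithmetic; the sketch below uses only the numbers quoted on this slide, and the implied slot count is an inference rather than a stated specification:

  # Arithmetic check on the tape figures quoted above: 60 GB STK 9940 tapes,
  # 45 TB current capacity, 330 TB ceiling. The slot count is an inference.
  tape_gb = 60
  current_tb = 45
  max_tb = 330

  tapes_now = current_tb * 1000 / tape_gb
  slots_total = max_tb * 1000 / tape_gb
  print(f"Tapes currently in use: ~{tapes_now:.0f}")              # ~750
  print(f"Implied robot capacity: ~{slots_total:.0f} tape slots")  # ~5500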
28
Network
4. Infrastructure
  • Internal networking is currently a hybrid of
  • 100Mb(ps) to nodes of cpu farms
  • 1Gb to disk servers
  • 1Gb to tape servers
  • UK academic network SuperJANET4
  • 2.5Gb backbone upgrading to 20Gb in 2003
  • EU SJ4 has 2.5Gb interconnect to Geant
  • US New 2.5Gb link to ESnet and Abilene for
    researchers
  • UK involved in networking development
  • internal with Cisco on QoS
  • external with DataTAG
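To put these link speeds in context, a rough Python sketch of how long a 1 TByte dataset takes to move over each of them, ignoring protocol overhead and contention (illustrative only):

  # Transfer time for a 1 TByte dataset over the link speeds quoted above,
  # ignoring protocol overhead and contention.
  links_mbit_s = {"100 Mb/s farm node": 100,
                  "1 Gb/s disk server": 1000,
                  "2.5 Gb/s SuperJANET4 backbone": 2500}

  dataset_bits = 1.0 * 1e12 * 8      # 1 TByte in bits

  for name, mbit_s in links_mbit_s.items():
      hours = dataset_bits / (mbit_s * 1e6) / 3600
      print(f"{name:30s}: ~{hours:.1f} hours")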

29
Regional Centres
Local perspective: consolidate research computing. Optimisation of the number of nodes (up to 4). Relative size dependent on funding dynamics.
SRIF Infrastructure
Global perspective: a very basic Grid skeleton
30
UK Tier-2 ScotGRID
4. Infrastructure
  • ScotGrid Processing nodes at Glasgow
  • 59 IBM X Series 330 dual 1 GHz Pentium III with
    2GB memory
  • 2 IBM X Series 340 dual 1 GHz Pentium III with
    2GB memory and dual ethernet
  • 3 IBM X Series 340 dual 1 GHz Pentium III with 2GB memory and 100 + 1000 Mbit/s ethernet
  • 1TB disk
  • LTO/Ultrium Tape Library
  • Cisco ethernet switches
  • ScotGrid Storage at Edinburgh
  • IBM X Series 370 PIII Xeon with 512 MB memory 32
    x 512 MB RAM
  • 70 x 73.4 GB IBM FC Hot-Swap HDD
  • Griddev testrig at Glasgow
  • 4 x 233 MHz Pentium II
  • BaBar UltraGrid System at Edinburgh
  • 4 UltraSparc 80 machines in a rack 450 MHz CPUs
    in each 4Mb cache, 1 GB memory
  • Fast Ethernet and Myrinet switching
  • CDF equipment at Glasgow
  • 8 x 700 MHz Xeon IBM xSeries 370 4 GB memory 1
    TB disk

2004 scale: 300 CPUs, 0.1 PBytes
31
GridPP Context
5. Interoperability
32
Grid issues - Coordination
  • Technical part is not the only problem
  • Sociological problems? resource sharing
  • Short-term productivity loss but long-term gain
  • Key? communication/coordination between
    people/centres/countries
  • This kind of world-wide close coordination across
    multi-national collaborations has never been done
    in the past
  • We need mechanisms here to make sure that all centres are part of a global plan
  • In spite of different conditions of funding,
    internal planning, timescales etc
  • The Grid organisation mechanisms should be complementary to, not parallel with or in conflict with, the existing experiment organisation
  • LCG-DataGRID-eSC-GridPP
  • BaBar-CDF-D0-ALICE-ATLAS-CMS-LHCb-UKQCD
  • Local perspective: build upon existing strong PP links in the UK to create a single Grid for all experiments

33
Authentication/Authorization
  • Authentication (CA Working Group)
  • 11 national certification authorities
  • policies and procedures → mutual trust
  • users identified by CAs' certificates
  • Authorization (Authorization Working Group)
  • Based on Virtual Organizations (VO).
  • Management tools for LDAP-based membership lists.
  • 61 Virtual Organizations

2. DataGrid
5. Interoperability Built In
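The split described above is that the CA-issued certificate DN establishes who the user is (authentication), while the VO membership list decides what they may do (authorization). A minimal Python sketch of that check, using an invented membership table (the DNs are taken from the member list on slide 3):

  # Illustrative authentication/authorization split: the DN comes from the
  # user's certificate (via the CA); the VO list grants or denies access.
  # The membership table here is invented for illustration.
  vo_members = {
      "/O=Grid/O=UKHEP/OU=ph.gla.ac.uk/CN=Tony Doyle": "admin",
      "/O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Stephen Burke": "member",
  }

  def authorise(dn: str, required_role: str = "member") -> bool:
      """Grant access if the DN appears in the VO list with a sufficient role."""
      role = vo_members.get(dn)
      return role == "admin" or role == required_role

  print(authorise("/O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=Stephen Burke"))  # True
  print(authorise("/O=Grid/O=CERN/OU=cern.ch/CN=Unknown User"))        # False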
34
Current User Base Grid Support Centre
5. Interoperability
  • GridPP (UKHEP) CA uses primitive technology
  • It works but takes effort
  • 201 personal certs issued
  • 119 other certs issued
  • GSC will run a CA for UK e-Science
  • Uses OpenCA; the Registration Authority uses the web
  • We plan to use it
  • Namespace identifies the RA, not the project
  • Authentication, not authorisation
  • UK e-Science
  • Certification
  • Authority
  • Through the GSC we have access to the skills of the CLRC e-Science Centre
  • Use the helpdesk to formalise support later in the rollout

35
Trust Relationships
5. Interoperability
36
Dissemination - Gödel's Theorem?
6. Dissemination
Project Map Elements
37
From Grid to Web using GridSite
6. Dissemination
t0
t1
38
£17m 3-Year Project
7. Finances
  • Five components:
  • Tier-1/A: hardware, CLRC ITD support staff
  • DataGrid: DataGrid posts, CLRC PPD staff
  • Applications: experiments' posts
  • Operations: travel, management, early investment
  • CERN: LCG posts, Tier-0, LTA

39
GridPP Achievements and Issues
  • 1st Year Achievements
  • Complete Project Map
  • Applications Middleware Hardware
  • Fully integrated with EU DataGrid and LCG
    Projects
  • Rapid middleware deployment/testing
  • Integrated US-EU applications development, e.g. BaBar-EDG
  • Roll-out document for all sites in the UK (Core
    Sites, Friendly Testers, User Only).
  • Testbed up and running at 15 sites in the UK
  • Tier-1 Deployment
  • 200 GridPP Certificates issued
  • First significant use of Grid by an external user
    (LISA simulations) in May 2002
  • Web page development (GridSite)
  • Issues for Year 2
  • Status at 30 Aug 2002 17:38 GMT: monitor and improve testbed deployment efficiency from Sep 1
  • Importance of EU-wide development of middleware
  • Integrated Testbed for use/testing by all
    applications
  • Common integration layer between middleware and
    application software
  • Integrated US-EU applications development
  • Tier-1 Grid Production Mode
  • Tier-2 Definitions and Deployment
  • Integrated Tier-1 Tier-2 Testbed
  • Transfer to UK e-Science CA
  • Integration with other UK projects e.g.
    AstroGrid, MyGrid

40
Summary
  • Grid success is fundamental for PP
  • CERN: LCG, Grid as a Service.
  • DataGrid: middleware built upon Globus and Condor-G. Testbed 1 deployed.
  • Applications: complex, need to interface to middleware.
  • LHC analyses: ongoing feedback/development.
  • Other analyses have immediate requirements; integrated using Globus, Condor, EDG tools.
  • Infrastructure: tiered computing down to the physicist's desktop.
  • Scale in the UK? ~1 PByte and ~2,000 distributed CPUs
  • GridPP in Sept 2004
  • Integration ongoing
  • Dissemination
  • Co-operation required with other
    disciplines/industry
  • Finances under control
  • Year 1 was a good starting point. First Grid jobs
    have been submitted..
  • Looking forward to Year 2. Web services ahead..

41
Holistic View - Multi-layered Issues
e-Science
applications
infrastructure
middleware