1
CERN-C-RRB-2004-011
  • The LHC Computing Grid Project
  • Status Report
  • Computing RRB 27 April 2004
  • Les Robertson LCG Project Leader
  • CERN, European Organization for Nuclear Research
  • Geneva, Switzerland
  • les.robertson@cern.ch

2
Outline
  • Supplementary information to the presentation in
    Monday's session
  • The LCG grid service
  • EGEE and Grid3/OSG
  • Milestone chart
  • Requirements for Phase 2
  • Staffing
  • Conclusion

3
Hewlett-Packard to provide Tier-2-like services
for LCG, initially in Puerto Rico
4
GOC
Secure database management via HTTPS / X.509
(Diagram: GOC GridSite front end with a MySQL
back end, monitoring Resource Centre site
information for EDG, LCG-1 and LCG-2 via the
BDII, CE, SE and RB services at each resource
centre (RC))
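The slide shows the GOC database fronted by HTTPS with X.509 client authentication. A minimal client along these lines might look as follows; the endpoint path, query parameter and response format are assumptions for illustration, not the actual GOC interface.

```python
# Sketch of querying a site-information service over HTTPS with an X.509
# client certificate. The /siteinfo path and "site" parameter are
# hypothetical; real GOC endpoints will differ.
import ssl
import urllib.parse
import urllib.request

def build_site_query(base_url: str, site_name: str) -> str:
    """Return the full HTTPS URL for a site-information lookup."""
    query = urllib.parse.urlencode({"site": site_name})
    return f"{base_url.rstrip('/')}/siteinfo?{query}"

def make_x509_context(cert_path: str, key_path: str, ca_bundle: str) -> ssl.SSLContext:
    """TLS context that presents a client X.509 certificate and verifies the server."""
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)
    return ctx

def fetch_site_info(base_url, site_name, cert_path, key_path, ca_bundle):
    """GET site records, authenticating with the certificate/key pair."""
    url = build_site_query(base_url, site_name)
    ctx = make_x509_context(cert_path, key_path, ca_bundle)
    with urllib.request.urlopen(url, context=ctx, timeout=30) as resp:
        return resp.read().decode()
```

Mutual TLS of this kind is what "Secure Database Management via HTTPS / X.509" implies: the server checks the client's grid certificate, and the client checks the server against a trusted CA bundle.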
5
(No Transcript)
6
Global Grid User Support - GridKa
  • Grid-wide service for Grid specific problems
  • Problem tracking and management
  • Grid help desk
  • Interacts with
  • Experiment User Support
  • Expert users
  • Site user support centres (help desks)
  • Grid Operations Centres (RAL, Taipei)
  • Grid infrastructure coordination centre at CERN
  • Started with support for GridKa Users (02 Oct.
    2003)
  • Wider support starting now

7
LCG-2 Support Agreements
Most of these services are covered only
by informal agreements, although
substantial resources may be involved
8
LCG-2 and Next Generation Middleware
(Timeline: LCG-2 continues as the mainline
service while the next generation moves from
prototype to product development)
  • LCG-2 will be the main service for the 2004 data
    challenges
  • This will provide essential experience on
    operating and managing a global grid service
    and will be supported and developed to achieve
    these goals
  • Target is to establish a base (fallback) solution
    for early LHC years
  • The LCG-2 middleware will be maintained and
    developed until the new generation has proven
    itself
  • This puts pressure on the new middleware
    developers to replace and add components rather
    than require a complete change
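The idea of replacing and adding components rather than requiring a complete change can be sketched as a stable interface with swappable implementations. Everything below is an illustrative pattern, not actual LCG middleware; the class and registry names are invented.

```python
# Illustrative sketch: callers code against a stable interface, so a
# next-generation component can replace an LCG-2 one without a wholesale
# change. All names here are hypothetical.
from typing import Callable, Dict

class ReplicaCatalog:
    """Stable interface the rest of the service depends on."""
    def locate(self, lfn: str) -> str:
        raise NotImplementedError

class Lcg2Catalog(ReplicaCatalog):
    """Current-generation implementation."""
    def locate(self, lfn: str) -> str:
        return f"lcg2://{lfn}"

class NextGenCatalog(ReplicaCatalog):
    """Drop-in replacement from the new middleware generation."""
    def locate(self, lfn: str) -> str:
        return f"ng://{lfn}"

_registry: Dict[str, Callable[[], ReplicaCatalog]] = {
    "lcg2": Lcg2Catalog,
    "next-gen": NextGenCatalog,
}

def get_catalog(impl: str = "lcg2") -> ReplicaCatalog:
    """Select an implementation; callers are unchanged when it is swapped."""
    return _registry[impl]()
```

The fallback strategy on the slide maps onto the default: the proven implementation stays selected until the new one has demonstrated itself.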

9
Service Challenges preparing for LHC Data Taking
  • In parallel with the experiments' data
    challenges, we will start a series of Service
    Challenges
  • Learn how to operate the grid reliably for
    extended periods
  • Develop and operate data distribution services
  • Mass store to mass store
  • Network performance, architecture
  • Initially prototyping then pilot services
  • First discussions with regional centre staff at
    HEPiX workshop at end of May
  • Also discussing collaboration with GEANT/NRENs
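The kernel of such a challenge is moving bulk data between stores for an extended period and measuring the sustained rate. A minimal sketch, using file-to-file copying as a stand-in for mass store to mass store:

```python
# Sketch of a bulk-transfer measurement: stream data between two stores
# in chunks and report sustained throughput. Plain files stand in for
# the mass-storage endpoints of a real service challenge.
import time

CHUNK = 1 << 20  # 1 MiB read/write granularity

def transfer(src_path: str, dst_path: str):
    """Copy src to dst in chunks; return (bytes_moved, seconds, MB_per_s)."""
    start = time.monotonic()
    moved = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
            moved += len(chunk)
    elapsed = time.monotonic() - start
    rate = moved / elapsed / 1e6 if elapsed > 0 else float("inf")
    return moved, elapsed, rate
```

Running this in a loop over days, and logging the rate, is the shape of the "operate reliably for extended periods" goal above.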

10
Toward the US Open Science Grid
  • Building partnerships on US Grid infrastructure
    for LHC and other sciences
  • LHC applications are driving this effort; Grid3
    is a great initial step
  • Federate US resources with the LCG, the EGEE and
    other national and international
    Grids
  • US LHC experiment projects, regional centers,
    universities and Grid projects have formulated a
    roadmap towards the Open Science Grid

11
LCG, EGEE and Grid3/OSG
  • LCG must prepare for a long-term operation
  • it must include all the computing resources
    available for LHC in Europe, the US, Asia, ...
  • it must make it as easy as possible for
    experiments to use these resources
  • This will require some form of inter-operability
    between grids that use different technologies,
    standards
  • So, for the foreseeable future, LCG will have a
    coordinating role as well as an operating role
  • EGEE and LCG (operating role) are closely
    integrated by design
  • Operations: same management, shared
    infrastructure
  • Middleware: EGEE starts with the LCG-2 package,
    and we evolve together
  • HEP Applications activity in EGEE - embedded in
    LCG distributed analysis activity
  • Representation in each other's management
    committees
  • We still need to work out how the service is
    governed (policy, strategy, etc.); LCG does this
    with a Grid Deployment Board of national
    representatives and VOs
  • We are actively discussing how we synchronise
    with Grid3/OSG in the US
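One way to read "inter-operability between grids that use different technologies" is a thin adapter layer: experiments submit through one interface regardless of the underlying middleware. The sketch below is purely illustrative; the adapter classes and job identifiers are invented, not real LCG or Grid3 APIs.

```python
# Hypothetical sketch of grid inter-operation via a common job interface:
# each grid gets an adapter, and one job description fans out to all of
# them. Identifiers and class names are invented for illustration.
from abc import ABC, abstractmethod
from typing import Dict

class GridAdapter(ABC):
    @abstractmethod
    def submit(self, job_description: dict) -> str:
        """Submit a job, returning a grid-specific job identifier."""

class Lcg2Adapter(GridAdapter):
    def submit(self, job_description: dict) -> str:
        return "lcg2-job-1"   # stand-in for a real resource-broker call

class Grid3Adapter(GridAdapter):
    def submit(self, job_description: dict) -> str:
        return "grid3-job-1"  # stand-in for a Grid3/OSG submission

def submit_everywhere(job: dict, adapters: Dict[str, GridAdapter]) -> Dict[str, str]:
    """Fan one job description out across federated grids."""
    return {name: adapter.submit(job) for name, adapter in adapters.items()}
```

The coordinating role described above then amounts to maintaining the common interface and the mapping of experiments' requests onto each grid's own technology.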

12
(No Transcript)
13
Phase 2 Planning
14
(Diagram: the tiered LHC computing model, with
Tier-1 centres such as RAL, IN2P3 and FNAL,
Tier-2 centres such as Santiago and Weizmann,
small centres, and desktops/portables)
  • LHC Computing Model (simplified!!)
  • Tier-0: the accelerator centre
  • Filter → raw data
  • Reconstruction → summary data (ESD)
  • Record raw data and ESD
  • Distribute raw and ESD to Tier-1
  • Tier-1
  • Permanent storage and management of raw, ESD,
    calibration data, meta-data, analysis data and
    databases → grid-enabled data service
  • Data-heavy analysis
  • Re-processing raw → ESD
  • National, regional support

(The diagram's Tier-1 centres also include CNAF,
FZK, PIC, ICEPP and BNL; a Tier-1 is online to
the data acquisition process, offering high
availability, managed mass storage, and a
long-term commitment)
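The Tier-0 steps above (filter, reconstruct, record, distribute) can be sketched as a toy pipeline. The event fields and the trigger condition are invented stand-ins; real LHC data flow is of course far richer.

```python
# Toy model of the Tier-0 data flow: filter events to raw data,
# reconstruct raw into ESD summaries, record both, and distribute
# copies to Tier-1 sites. Event content is invented for illustration.
from typing import Dict, List, Tuple

def tier0_pipeline(events: List[dict], tier1_sites: List[str]) -> Tuple[dict, Dict[str, dict]]:
    """Return the recorded archive and the per-Tier-1 shipments."""
    raw = [e for e in events if e["trigger"]]                # filter -> raw data
    esd = [{"id": e["id"], "summary": True} for e in raw]    # reconstruction -> ESD
    archive = {"raw": raw, "esd": esd}                       # record raw and ESD
    shipments = {site: archive for site in tier1_sites}      # distribute to Tier-1s
    return archive, shipments
```

Tier-1 re-processing (raw → ESD) would rerun the reconstruction step against the raw data each Tier-1 holds, which is why the raw copy travels with the ESD in this sketch.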
15
(Diagram as on the previous slide: Tier-1 centres
RAL, IN2P3, FNAL, CNAF and FZK; Tier-2 centres
Santiago and Weizmann; small centres and
desktops/portables)
  • Tier-2
  • Well-managed, grid-enabled disk storage
  • Simulation
  • End-user analysis, batch and interactive
  • High-performance parallel analysis (PROOF)

16
Estimates prepared as input to the MoU Task
Force. Computing models are under active
development.
17
(Chart of estimated network traffic; does not
include traffic between Tier-1s or between
Tier-1s and Tier-2s)
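Behind such traffic estimates is simple arithmetic: a daily data volume implies a sustained line rate, usually padded with headroom for retries and peaks. The numbers below are illustrative only, not the slide's figures.

```python
# Back-of-envelope sustained-bandwidth arithmetic behind traffic
# estimates of this kind. Input volumes and the headroom factor are
# illustrative, not the actual LCG planning numbers.

def required_gbps(terabytes_per_day: float, overhead_factor: float = 2.0) -> float:
    """Average line rate in Gbit/s needed to move a daily volume, with headroom."""
    bits_per_day = terabytes_per_day * 1e12 * 8   # TB -> bits
    return bits_per_day / 86400 / 1e9 * overhead_factor

# e.g. moving 10 TB/day with 2x headroom needs about 1.85 Gbit/s sustained
```

A real estimate would sum such terms per link (Tier-0 to each Tier-1, plus the excluded Tier-1/Tier-2 traffic) rather than use a single volume.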
18
Planning for the Phase 2 Grid
  • The Phase 2 services at least for Tier-0 and
    Tier-1 centres must be in operation by September
    2006
  • Acquisition process for the scale of computing
    required is very long in some centres -- and is
    starting now at CERN
  • → Tier-0 and Tier-1 centres will be well into
    their planning and acquisition process before the
    MoU is signed and the TDR completed (July 2005)

Prototype services → pilot services
19
LCG Staffing for the Common Activities at CERN
(Chart of staffing sources: experiments, EGEE
Phase 1, external funding for LCG Phase 1, and
funding not yet identified)
20
Conclusion
  • The first products of the common applications
    project are in use by the experiments
  • The nucleus of a grid infrastructure for LHC is
    in place
  • Beginning now to learn about data handling,
    operation, management
  • Phase 1 of the project has been possible because
    of your contributions to the common activities at
    CERN as well as your investments in developments
    and services in regional centres.
  • LCG must exploit short-term projects (like
    EGEE), while remembering that LHC has long-term
    requirements
  • Agreement on how to incorporate US grid resources
    is an important goal for the coming months
  • Estimates are available for the capacity and
    networking requirements of Tier-1 and Tier-2
    centres; the numbers and the computing model
    will evolve, but detailed planning for these
    centres can begin
  • Important that regional centres have resources to
    participate in Service Challenges to learn and
    validate their operation over the next two years
  • Funding for the staffing of the common activities
    at CERN remains a major concern for the project