1
Grid Computing Status Report
  • Jeff Templon
  • PDP Group, NIKHEF

NIKHEF Scientific Advisory Committee, 20 May 2005
2
HEP Computing Model
  • Tier-0: measurement center (CERN)
    • Dedicated computers (L2/L3 trigger farms)
    • Archival of raw data
  • Tier-1: data centers
    • Archival of a 2nd copy of raw data
    • Large-scale computing farms (e.g. reprocessing)
    • Spread geographically
    • Strong support
  • Tier-2: user facilities for data analysis / Monte Carlo
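
The tiered model above can be made concrete with a small illustrative sketch. The tier names and roles follow the slide; the Python structure and the "feeds" relation are a hypothetical rendering, not anything from the presentation:

# Illustrative sketch of the LHC tiered computing model described above.
# Tier roles are from the slide; the classes and flow are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    roles: list[str]
    feeds: list["Tier"] = field(default_factory=list)  # downstream tiers

tier2 = Tier("Tier-2 (user facilities)", ["data analysis", "Monte Carlo"])
tier1 = Tier("Tier-1 (data centers)",
             ["archive 2nd copy of raw data",
              "large-scale farms (e.g. reprocessing)"],
             feeds=[tier2])
tier0 = Tier("Tier-0 (CERN)",
             ["measurement center / trigger farms", "archive raw data"],
             feeds=[tier1])

def walk(tier: Tier, depth: int = 0) -> None:
    # Raw data flows downward: Tier-0 -> Tier-1 -> Tier-2.
    print("  " * depth + tier.name + ": " + "; ".join(tier.roles))
    for downstream in tier.feeds:
        walk(downstream, depth + 1)

walk(tier0)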

3
Worldwide HEP Computing Needs
4
Amsterdam Tier-1 for LHC
  • Three experiments: LHCb / ATLAS / ALICE
  • Overall scale determined by estimating funding in NL
  • Contribution to experiments scaled by NIKHEF presence (3:2:1)
  • Resulting NIKHEF share of total Tier-1 needs:
    • LHCb: 23%
    • ATLAS: 11.5%
    • ALICE: 5.75%
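
A minimal sketch of the scaling logic on this slide. Only the 3:2:1 presence weights and the quoted resulting shares come from the slide; the funding figure and the per-experiment Tier-1 totals below are hypothetical placeholders:

# Sketch of the share calculation: split NIKHEF's Tier-1 funding over
# the experiments by presence weights 3:2:1 (slide), then divide each
# slice by that experiment's total Tier-1 need to get NIKHEF's share.
presence = {"LHCb": 3, "ATLAS": 2, "ALICE": 1}                   # from the slide
tier1_need = {"LHCb": 1000, "ATLAS": 4000, "ALICE": 2000}        # hypothetical units
nl_funding = 800                                                 # hypothetical scale

total_weight = sum(presence.values())
for exp, w in presence.items():
    contribution = nl_funding * w / total_weight   # NIKHEF slice for this experiment
    share = contribution / tier1_need[exp]         # share of its total Tier-1 need
    print(f"{exp}: {share:.1%} of its total Tier-1 need")
# The slide quotes the actual results: LHCb 23%, ATLAS 11.5%, ALICE 5.75%.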

5
Amsterdam Tier-1 Numbers
  • Status: GOOD!
  • Basic collaboration with SARA in place
  • Attitude adjustment needed (response time)
  • Appropriate funding line in NCF long-term draft plan
  • Some concern about "me-too" efforts (grids are popular)
  • Community-building (VL-E project)
  • Pull "me-too" people into the same infrastructure

6
Overall Status LHC Computing
  • LCG: a successful service
    • 14,000 CPUs, well-ordered operations, an active community
    • Monte Carlo productions working well (next slide)
  • Data Management: a problem
    • Software never converged in EDG
    • May not be converging in EGEE (same team)
    • Risk losing the HEP community on DM
    • Makes community-forming (generic middleware) difficult: "I'll just build my own, this one stinks."

7
Results of Data Challenge 04
  • Monte Carlo tasks distributed to computers across the world
  • Up to 3,000 simultaneous jobs per experiment
  • 2.2 million CPU-hours (about 250 CPU-years) used in one month
  • Total data volume > 25 TB

For LHCb, NIKHEF delivered 6% of the global total
  • See it in action
  • Backup
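
The figures on this slide are mutually consistent; a quick order-of-magnitude cross-check, using only the numbers quoted above:

# Cross-check of the Data Challenge 04 figures quoted above.
jobs = 3000                   # peak simultaneous jobs per experiment (slide)
hours_per_month = 30 * 24     # roughly 720 hours in a month
cpu_hours = jobs * hours_per_month
print(f"{cpu_hours / 1e6:.1f} million CPU-hours")  # ~2.2 million, as quoted
print(f"{2.2e6 / (365 * 24):.0f} CPU-years")       # ~251, the "250 years" figure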

8
Transport of primary data to Tier-1s
9
The Dutch Contribution
LCG Service Challenge II
10
Local Status
  • Positioned well in LCG / EGEE
  • Present on the blessed Tier-1 list
  • One of the best-run sites
  • One of the first sites (no. 3 in EDG; compare no. 4 in WWW)
  • Membership on:
    • Middleware Design Team (US collaboration here too)
    • Project Technical Forum
    • LCG Grid Applications Group (too bad, almost defunct)
    • Middleware Security Group
    • Etc., etc.
  • D. Groep chairs the world-recognized EUGridPMA
  • K. Bos chairs the LHC Grid Deployment Board

11
Local Status 2
  • NIKHEF grid site: roughly 300 CPUs / 10 terabytes of storage
  • Several distinct components:
    • LCG / VL-E production
    • LCG pre-production
    • EGEE testing
    • VL-E certification
  • Manpower: 8 staff; interviews this week for three more (project funding)

12
PDP Group Activities
  • Middleware (3 FTE)
    • Mostly security: best bang for the buck; local world expert
  • Operations (3 FTE)
    • How does one operate a terascale / kilocomputer site?
    • Knowledge transfer to SARA (they have the support mandate)
    • Contribute regularly to operational middleware
  • Applications (3 FTE)
    • Strong ties to local HEP (ATLAS Rome production, LHCb Physics Performance Report, D0 SAMGrid)
    • Community-forming: LOFAR, KNMI; looking for others

13
Industrial Interest
GANG, hosted @ NIKHEF: IBM, LogicaCMG, Philips, HPC, UvA, SARA, NIKHEF. 16 industrial participants (24 total).