1
  • Service Challenge Phase 4
  • Activity plan and impact on the network
    infrastructure
  • Tiziana Ferrari
  • INFN CNAF
  • CCR, Rome, 20-21 October 2005

2
Service Challenge 3
  • Now in SC3 Service Phase (Sep - Dec 2005)
  • ALICE, CMS and LHCb have all started their
    production
  • ATLAS are preparing for November 1st start
  • Service Challenge 4: May 2006 - Sep 2006

3
SC3 INFN Sites (1/2)
  • CNAF
  • LCG File Catalogue (LFC)
  • File Transport Service (FTS)
  • MyProxy server and BDII
  • Storage
  • CASTOR with SRM interface
  • Interim installations of software components
    from some of the experiments (not currently
    available from LCG): VObox

4
SC3 INFN sites (2/2)
  • Torino (ALICE)
  • FTS, LFC, dCache (LCG 2.6.0)
  • Storage space: 2 TB
  • Milano (ATLAS)
  • FTS, LFC, DPM 1.3.7
  • Storage space: 5.29 TB
  • Pisa (ATLAS/CMS)
  • FTS, PhEDEx, POOL file cat., PubDB, LFC, DPM 1.3.5
  • Storage space: 5 TB available, 5 TB expected
  • Legnaro (CMS)
  • FTS, PhEDEx, POOL file cat., PubDB, DPM 1.3.7
    (1 pool, 80 GB)
  • Storage space: 4 TB
  • Bari (ATLAS/CMS)
  • FTS, PhEDEx, POOL file cat., PubDB, LFC, dCache,
    DPM
  • Storage space: 5 TB available
  • LHCb
  • CNAF

5
INFN Tier-1 SC4 short-term plans (1/2)
  • SC4 storage and computing resources will be
    shared with production
  • Storage
  • Oct 2005: Data disk
  • 50 TB (Castor front-end)
  • WAN → Disk performance: 125 MB/s (demonstrated
    in SC3)
  • Oct 2005: Tape
  • 200 TB (4 9940B + 6 LTO2 drives)
  • Drives shared with production
  • WAN → Tape performance: mean sustained 50 MB/s
    (SC3 throughput phase, July 2005); both WAN rates
    are sanity-checked in the sketch below
  • Computing
  • Oct 2005: min 1200 kSI2K, max 1550 kSI2K (as
    the farm is shared)
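
  A quick sanity check of the two WAN rates above against the planned
  link capacity (2 x 1 Gigabit Ethernet, next slide), as a minimal
  Python sketch; it assumes MB/s means 10^6 bytes/s:

    def mbps_to_gbit(mb_per_s):
        # MB/s (10^6 bytes/s) -> Gb/s (10^9 bits/s)
        return mb_per_s * 8 / 1000

    link_capacity = 2 * 1.0  # Gb/s, 2 x 1 GE links (Oct 2005)
    for name, rate in [("WAN -> Disk", 125), ("WAN -> Tape", 50)]:
        gbit = mbps_to_gbit(rate)
        print(f"{name}: {rate} MB/s = {gbit:.2f} Gb/s "
              f"({gbit / link_capacity:.0%} of the SC links)")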

6
INFN Tier-1 SC4 short-term plans (2/2)
  • Network
  • Oct 2005
  • 2 x 1 Gigabit Ethernet links CNAF ↔ GARR,
    dedicated to SC traffic to/from CERN
  • Future
  • ongoing upgrade to 10 Gigabit Ethernet,
    CNAF ↔ GARR, dedicated to SC
  • Usage of policy routing at the GARR access point
  • Type of connectivity to INFN Tier-2 under
    discussion
  • Backup link Tier-1 ↔ Tier-1 (Karlsruhe) under
    discussion
  • Software
  • Oct 2005
  • SRM/Castor and FTS
  • Farm middleware: LCG 2.6
  • Future
  • dCache and StoRM under evaluation (for disk-only
    SRMs)
  • Possibility to upgrade to CASTOR v2 under
    evaluation (end of year 2005)

7
What target rates for Tier-2s?
  • LHC Computing Grid Technical Design Report
  • Access link at 1 Gb/s by the time LHC starts
  • Traffic 1 Tier-1 ↔ 1 Tier-2 ≈ 10% of traffic
    Tier-0 ↔ Tier-1
  • Estimates of traffic Tier-1 ↔ Tier-2 represent
    an upper limit
  • Tier-2 ↔ Tier-2 replications will lower the
    load on the Tier-1

(WSF = with safety factor)
8
TDR bandwidth estimation
  • LHC Computing Grid Technical Design Report
    (June 2005), http://lcg.web.cern.ch/LCG/tdr/;
    a back-of-the-envelope sketch of the estimate
    follows below
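
  The TDR-style arithmetic, as an illustrative Python sketch: the
  safety factor value and the ~10% ratio from the previous slide are
  assumptions here, not TDR figures (those are in the TDR tables):

    TIER2_SHARE = 0.10    # assumed Tier-1 ↔ Tier-2 / Tier-0 ↔ Tier-1 ratio
    SAFETY_FACTOR = 1.5   # hypothetical; see the TDR for the real values

    def tier2_link_gbit(tier01_mb_per_s):
        # rough and with-safety-factor Tier-2 bandwidth, in Gb/s
        rough = tier01_mb_per_s * TIER2_SHARE
        return rough * 8 / 1000, rough * SAFETY_FACTOR * 8 / 1000

    rough, wsf = tier2_link_gbit(200)   # 200 MB/s: CNAF nominal rate
    print(f"rough {rough:.2f} Gb/s, with safety factor {wsf:.2f} Gb/s")
    # both fit the 1 Gb/s access link the TDR asks for at LHC start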

9
Expected rates (rough / with safety factor)
10
Requirements vs. Tier-2 specifications
11
The Worldwide LHC Computing Grid: Level 1 Service
Milestones
  • 31 Dec 05
  • Tier-0/1 high-performance network operational at
    CERN and 3 Tier-1s.
  • 31 Dec 05
  • 750 MB/s data recording demonstration at CERN:
    data generator → disk → tape sustaining 750 MB/s
    for one week using the CASTOR 2 mass storage
    system.
  • Jan 06 - Feb 06
  • Throughput tests
  • SC4a: 28 Feb 06
  • All required software for baseline services
    deployed and operational at all Tier-1s and at
    least 20 Tier-2 sites
  • Mar 06
  • Tier-0/1 high-performance network operational at
    CERN and 6 Tier-1s, at least 3 via GEANT.
  • SC4b: 30 Apr 06
  • Service Challenge 4 Set-up: set-up complete and
    basic service demonstrated, capable of running
    experiment-supplied packaged test jobs; data
    distribution tested.
  • 30 Apr 06
  • 1.0 GB/s data recording demonstration at CERN:
    data generator → disk → tape sustaining 1.0 GB/s
    for one week using the CASTOR 2 mass storage
    system and the new tape equipment.
  • SC4: 31 May 06, Service Challenge 4
  • Start of stable service phase, including all
    Tier-1s and 40 Tier-2 sites.

12
The Worldwide LHC Computing Grid: Level 1 Service
Milestones
  • The service must be able to support the full
    computing model of each experiment, including
    simulation and end-user batch analysis at Tier-2
    sites.
  • Criteria for successful completion of SC4, by
    end of service phase (end September 2006):
  • 8 Tier-1s and 20 Tier-2s must have demonstrated
    availability better than 90% of the levels
    specified in the WLCG MoU, adjusted for sites
    that do not provide a 24-hour service
  • Success rate of standard application test jobs
    greater than 90% (excluding failures due to the
    applications environment and non-availability of
    sites)
  • Performance and throughput tests complete. The
    performance goal for each Tier-1 is:
  • the nominal data rate that the centre must
    sustain during LHC operation (200 MB/s for CNAF)
  • CERN disk → network → Tier-1 tape.
  • Throughput test goal is to maintain for one week
    an average throughput of 1.6 GB/s from disk at
    CERN to tape at the Tier-1 sites; all Tier-1
    sites must participate (see the sketch after
    this slide).
  • 30 Sept 06
  • 1.6 GB/s data recording demonstration at CERN:
    data generator → disk → tape sustaining 1.6 GB/s
    for one week using the CASTOR mass storage
    system.
  • 30 Sept 06
  • Initial LHC Service in operation: capable of
    handling the full nominal data rate between CERN
    and Tier-1s. The service will be used for
    extended testing of the computing systems of the
    four experiments, for simulation and for
    processing of cosmic-ray data. During the
    following six months each site will build up to
    the full throughput needed for LHC operation,
    which is twice the nominal data rate.
  • 24-hour operational coverage is required at all
    Tier-1 centres from January 2007
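
  A quick check of the SC4 throughput goal quoted above, as a minimal
  Python sketch (decimal units assumed, i.e. 1 GB/s = 1000 MB/s):

    AGGREGATE_GB_S = 1.6      # GB/s, disk at CERN → tape at all Tier-1s
    CNAF_MB_S = 200           # MB/s, nominal rate for CNAF
    WEEK_S = 7 * 24 * 3600    # seconds in one week

    total_tb = AGGREGATE_GB_S * WEEK_S / 1000      # GB → TB
    cnaf_share = CNAF_MB_S / (AGGREGATE_GB_S * 1000)
    print(f"one week at 1.6 GB/s writes ~{total_tb:.0f} TB to Tier-1 tape")
    print(f"CNAF nominal share: {cnaf_share:.1%} of the aggregate")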

13
Milestones in brief
  • Jan 2006
  • throughput tests (rate targets not specified)
  • May 2006
  • high-speed network infrastructure operational
  • By Sep 2006
  • avg throughput of 1.6 GB/s from disk at CERN to
    tape at the Tier-1 sites (nominal rate for LHC
    operation, 200 MB/s for CNAF)
  • Oct 2006 - Mar 2007
  • avg throughput up to twice the nominal rate (see
    the sketch below)
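
  For CNAF, the ramp-up target works out as follows (a sketch; the
  10 GE figure is the planned CNAF ↔ GARR upgrade from slide 6):

    nominal_mb_s = 200                # MB/s, CNAF nominal rate
    full_mb_s = 2 * nominal_mb_s      # twice nominal for LHC operation
    full_gbit = full_mb_s * 8 / 1000  # → Gb/s on the wire
    print(f"{full_mb_s} MB/s = {full_gbit:.1f} Gb/s "
          f"({full_gbit / 10:.0%} of a 10 GE link)")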

14
Target (nominal) data rates for Tier-1 sites and
CERN in SC4