Title: Report from XXII HTASC


1
Report from XXII HTASC
  • Tobias Haas
  • HEPCCC
  • 18 October 2002

2
General Remarks
  • Last HTASC 26/27 September at CERN
  • Upcoming meetings
  • 13/14 March 2003, CERN
  • 5/6 June 2003, possibly outside of CERN (Pisa?)
  • Topics
  • Reports from the Working Groups
  • Final report from the W2K working group by Gian Piero Siroli
  • Other Topics
  • Status of LCG Application Domain (Torre Wenaus)
  • Status of PASTA III Report (David Foster)
  • Round table discussion

3
XXII HTASC Agenda
26 September (Location: 14-4-030)
  • 1400 Welcome (Tobias Haas)
  • Introduction (Tobias Haas)
  • Approval of Minutes, etc. (All)
  • Report from HEPCCC (Tobias Haas)
  • Coffee Break
  • 1530 Win2K Working Group (Michel Jouvin / Gian Piero Siroli)
27 September (Location: SALLE B (61-1-009))
  • 0900 LCG Application Domain (Torre Wenaus)
  • Coffee Break
  • 1030 Discussion and Site Reports, etc.
  • 1200 New PASTA Report (David Foster)
  • 1300 Adjourn
4
HEPCCC AI to HTASC
  • AI 76: HTASC Scientific Secretary
  • Handled in private
  • AI 77: HTASC Membership
  • See following slide
  • AI 83: Security Subgroup summary
  • Given to D. Jacobs
  • AI 87: Capacity, size and cost for European NRENs

5
AI77 HTASC Membership
  • HEPCCC asked us to confirm our membership (HEPCCC
    AI 77)
  • HTASC has 18 regular members, 2 observers, 1
    chairperson
  • Contacted the 18 rECFA representatives for the 18
    countries for confirmation
  • 13 members were confirmed
  • 1 new member was nominated (Christine
    Kourkourmelis nominated Emanuel Floratos for Ion
    Siotis)
  • 2 countries are looking for a new candidate
    (Norway and UK)
  • 2 countries gave no reply.

6
AI87 Size/Cost of NRENs
HTASC is asked to collect capacity, size and cost
figures for the other European NRENs in the same
format as was done at the 6/7 June 2002 meeting
for the Nordic NRENs. In doing so a closer
definition of user should be made (also for the
Nordic countries).
  • Lively and controversial discussion in HTASC.
  • Points of discussion
  • NREN definition?
  • Can the organizations that manage the NRENs be
    identified in different countries?
  • Do these organizations work in similar ways?
  • What are the differences?
  • Can the term "user" be defined reasonably
    accurately?
  • What accuracy could the figures possibly have?
  • What will be the use of the figures obtained?

7
AI87 Size/Cost of NRENs: Conclusion
  • NRENs work in quite different ways in different
    countries.
  • In particular accounting practices are quite
    different
  • Definition of user appears problematic
  • Numbers would have large uncertainties.
  • HTASC does not want to provide numbers which
    could have interesting political use but which
    would have large uncertainties.

8
Report from the Win2K working group
Gian Piero Siroli, Physics Dept., Univ. of
Bologna and INFN
9
Windows 2000 Coordination Subgroup
  • Initial 12 months mandate (spring 2000-spring
    2001) extended until mid 2002
  • Report in April 2001, intermediate status report
    in May 2002
  • Activity:
  • periodic closed meetings (a limited number of HEP
    lab representatives, also via videoconf), with
    roundtable discussions, brainstorming, a few formal
    presentations
  • organization of HEPiX/HEPNT joint meetings
  • Goal: coordination among labs, discussion of
    topics of common interest, follow/drive the W2K
    migration

10
Windows 2000 Coordination Subgroup
  • Topics of discussion
  • Migration: W/NT (major step) to W2K (easier) to XP
  • DNS: interoperability problems in the past with
    non-MS DNS, now solved
  • Application deployment and support: SMS, AD; MSI
    sharing discarded
  • Security: patches, security policies (not publicly
    advertised), antivirus s/w, firewalls
  • Wide-area access across labs (file/resource
    access): web folders, OpenAFS, VPNs
  • Printing services management (CERN Printer
    Wizard) and mailing (Exchange 2000)
  • Win/UNIX integration: work on Kerberos in
    progress
  • Future: interesting features of .NET (tests), web
    services (XML, SOAP), GLOBUS toolkit

11
Windows 2000 Coordination Subgroup
  • Outcome
  • General consensus that regular meetings are a
    very useful forum to share experience and
    exchange information across HEP labs (site
    reports, different strategies in different labs)
  • Not much coordination needed: no W2K HEP-wide
    forest infrastructure; security policies are
    different for each lab (laptops?)
  • Follow future evolution of Windows platforms and
    track HEP Win technology
  • Windows/UNIX integration and interoperability
    (avoid religious wars)
  • Future
  • Close HTASC link? Limited-time mandate
  • Continuing this kind of meeting needs an official
    body (to organize HEPiX/HEPNT)?

12
HTASC Conclusion
HTASC has received the final report of the Win2K
working group. The mandate of the group has been
accomplished and HTASC expresses its gratitude to
the group and in particular to the conveners,
Michel Jouvin and Gian Piero Siroli, for the
excellent work done. The group has raised a
number of issues of a more general nature such as
the integration and security aspects of laptops,
wide-area access to resources and technical
questions related to the ongoing migration to
Win/XP. HTASC encourages these topics to be
discussed further within the framework of the
HEPIX/HEPNT group.
13
Win2K Conclusions (Cont'd)
  • A number of interesting issues were brought up
    which will be followed up
  • Wide-area access to resources
  • VPNs
  • Laptops and other portable devices
  • Across-labs authentication
  • Wandering Physicist Problem (term coined by I.
    Gaines aka people-centric ce)
  • Will pick this up in one of our next meetings.

14
Remark on Videoconferencing Subgroup
  • Currently H. Frese chair, but extremely busy with
    other obligations.
  • Agreed with H. Frese to look for another
    chairperson.
  • Asked two highly qualified people, but both
    declined.
  • Conclusion: Cannot get this group going at the
    moment!

15
A few highlights from the topical presentations
16
The LCG Applications Area
  • Torre Wenaus, BNL/CERN
  • LCG Applications Area Manager
  • http://cern.ch/lcg/peb/applications
  • XXII HTASC Meeting
  • September 27, 2002

17
Outline
  • LCG and Applications Area introduction
  • Project management organization, planning,
    resources
  • Applications area activities
  • Software Process and Infrastructure (SPI)
  • Math libraries
  • Persistency project (POOL)
  • Architecture blueprint RTAG
  • Conclusion

18
LCG Areas of Work
  • Applications Support Coordination
  • Application Software Infrastructure: libraries,
    tools
  • Object persistency, data management tools
  • Common Frameworks: Simulation, Analysis, ...
  • Adaptation of Physics Applications to Grid
    environment
  • Grid tools, Portals
  • Grid Deployment
  • Data Challenges
  • Grid Operations
  • Network Planning
  • Regional Centre Coordination
  • Security: access policy
  • Computing System
  • Physics Data Management
  • Fabric Management
  • Physics Data Storage
  • LAN Management
  • Wide-area Networking
  • Security
  • Internet Services
  • Grid Technology
  • Grid middleware
  • Standard application services layer
  • Inter-project coherence/compatibility

19
LHC Computing Grid Project
  • Early Status Report PASTA III
  • 26 September 2002
  • David Foster, CERN
  • David.foster@cern.ch
  • http://david.web.cern.ch/david/pasta/pasta2002.htm

20
Participants
  • A: Semiconductor Technology
  • Ian Fisk (UCSD), Alessandro Machioro (CERN), Don
    Petravik (Fermilab)
  • B: Secondary Storage
  • Gordon Lee (CERN), Fabien Collin (CERN), Alberto
    Pace (CERN)
  • C: Mass Storage
  • Charles Curran (CERN), Jean-Philippe Baud (CERN)
  • D: Networking Technologies
  • Harvey Newman (Caltech), Olivier Martin (CERN),
    Simon Leinen (Switch)
  • E: Data Management Technologies
  • Andrei Maslennikov (Caspur), Julian
    Bunn (Caltech)
  • F: Storage Management Solutions
  • Michael Ernst (Fermilab), Nick Sinanis (CERN/CMS),
    Martin Gasthuber (DESY)
  • G: High Performance Computing Solutions
  • Bernd Panzer (CERN), Ben Segal (CERN), Arie Van
    Praag (CERN)

21
Current Status
  • Most reports in the final stages.
  • Networking is the last to complete.
  • Some cosmetic treatments needed.
  • A realistic objective is mid-October.

22
Basic System Components- Hardware
  • Memory capacity has increased faster than
    predicted; costs around 0.15/Mbit in 2003 and
    0.05/Mbit in 2006
  • Many improvements in memory systems: 300 MB/sec in
    1999, now in excess of 1.2 GB/sec in 2002.
  • PCI bus bandwidth improved from 130 MB/sec in
    1999 to 500 MB/sec, with 1 GB/sec foreseen.
  • Intel and AMD continue as competitors. The next
    generation AMD (Hammer) permits 32-bit and 64-bit
    code and is expected to be 30% cheaper than
    equivalent Intel 64-bit chips.
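
The per-Mbit prices above translate into per-gigabyte figures with simple arithmetic. A minimal sketch (the currency unit is whatever the slide quotes, and 1 GB = 8192 Mbit is assumed):

```python
# Convert the slide's memory prices (quoted per Mbit) into per-GB costs.
MBIT_PER_GB = 8 * 1024  # 8192 Mbit in one gigabyte

def price_per_gb(price_per_mbit: float) -> float:
    """Memory cost per GB, given a cost per Mbit."""
    return price_per_mbit * MBIT_PER_GB

print(price_per_gb(0.15))  # 2003 figure -> 1228.8 per GB
print(price_per_gb(0.05))  # 2006 figure -> 409.6 per GB
```

The same helper applies to any of the report's per-Mbit price points.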

23
Basic System Components- Processors
  • The 1999 PASTA report was conservative in terms of
    clock speed
  • BUT clock speed is not a good measure, with
    higher-clock-speed CPUs sometimes giving lower
    performance in some cases

SpecInt2000 numbers for high-end CPUs do not
correlate directly with CERN Units: a P4 Xeon rates
824 SI2000 but only 600 CERN Units. Compilers have
not made great advances, but instruction-level
parallelism now gives you 70% usage (in CERN
Units) of quoted performance.
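
The SI2000-to-CERN-Units gap can be turned into a rough conversion. This is purely illustrative: the 824/600 calibration is the single data point quoted on this slide, and the helper name is an invention:

```python
# Illustrative SPECint2000 -> CERN Units conversion, calibrated on the
# single P4 Xeon data point quoted in the slide (824 SI2000, 600 CU).
SI2000_REF = 824.0
CERN_UNITS_REF = 600.0

def si2000_to_cern_units(si2000: float) -> float:
    """Rough CERN-Unit estimate, assuming the P4 Xeon ratio holds."""
    return si2000 * (CERN_UNITS_REF / SI2000_REF)

print(round(si2000_to_cern_units(824)))   # calibration point -> 600
print(round(si2000_to_cern_units(1000)))  # -> 728
```

A single-point calibration like this ignores the compiler and instruction-level-parallelism effects mentioned above, so real ratios vary by CPU generation.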
24
Basic System Components- Processors
Performance evolution and associated cost
evolution for both high-end machines (15K for a
quad processor) and low-end machines (2K for a
dual CPU). Note: the 2002 predictions are revised
down slightly from the 1999 predictions of actual
system performance.
A fairly steep curve leading up to LHC startup,
suggesting that delayed purchases will save money
(fewer CPUs for the same CU performance), as usual.
25
Basic System Components - Some Conclusions
  • No major surprises so far, but:
  • New semiconductor fabs are very expensive,
    squeezing the semiconductor marketplace.
  • MOS technology is pushing again against physical
    limits: gate oxide thickness, junction volumes,
    lithography, power consumption.
  • Architectural designs are not able to efficiently
    use the increasing transistor density (20%
    performance improvement)
  • A significant change in the desktop market
    (machine architecture and form factor) could change
    the economics of the server market.
  • Do we need a new HEP reference application?
  • Industry benchmarks still do not tell the whole
    story, and we are interested in throughput.
  • Seems appropriate with new reconstruction/analysis
    models and code

26
Tapes - 1
  • New-format tape drives (9840, 9940, LTO) are
    being tested.
  • The current installation is 10 STK silos capable
    of taking 800 new-format tape drives. Today tape
    performance is 15 MB/sec, so the theoretical
    aggregate is 12 GB/sec.
  • Cartridge capacities are expected to increase to
    1 TB before LHC startup, but it is market demand,
    not technical limitations, that is the driver.
  • Using tapes as a random access device is a
    problem and will continue to be.
  • Need to consider a much larger, persistent disk
    cache for LHC, reducing tape activity for analysis.
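
The 12 GB/sec aggregate quoted above is just drive count times per-drive rate; a quick check (decimal units, 1 GB = 1000 MB):

```python
# Theoretical aggregate tape bandwidth: number of drives x per-drive rate.
N_DRIVES = 800             # new-format drives the 10 STK silos can take
MB_PER_SEC_PER_DRIVE = 15  # today's per-drive tape performance

aggregate_gb_per_sec = N_DRIVES * MB_PER_SEC_PER_DRIVE / 1000
print(aggregate_gb_per_sec)  # -> 12.0
```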

27
Tapes - 2
  • Current costs are about 50 CHF/slot for a tape
    in the Powderhorn robot.
  • The current tape cartridge (9940A) costs 130 CHF,
    with a slow decrease over time.
  • Media dominates the overall cost, and a move to
    higher-capacity cartridges and tape units
    sometimes requires a complete media change.
  • Current storage costs of 0.6-1.0 CHF/GB in 2000
    could drop to 0.3 CHF/GB in 2005, but this would
    probably require a complete media change.
  • Conclusion: No major challenges for tapes for
    LHC startup, but the architecture should be such
    that they are used better than today (write/read)
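
At the quoted CHF/GB price points, scaling to a large store is straightforward arithmetic. A sketch (the 1 PB size is a hypothetical example, not a figure from the slide; decimal units):

```python
# Media cost of a hypothetical 1 PB tape store at the slide's price points.
GB_PER_PB = 1_000_000  # decimal petabyte

def store_cost_chf(chf_per_gb: float, petabytes: float = 1.0) -> float:
    """Media cost in CHF for a store of the given size."""
    return chf_per_gb * GB_PER_PB * petabytes

print(round(store_cost_chf(1.0)))  # 2000 upper bound -> 1000000 CHF
print(round(store_cost_chf(0.3)))  # 2005 projection  -> 300000 CHF
```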

28
Networking
  • Major cost reductions have taken place in
    wide-area bandwidth costs.
  • 2.5 Gbit was common for providers but not academic
    networks in 1999. Now, 10 Gbit is common for
    providers and 2.5 Gbit common for academic
    networks.
  • Expect 10 Gbit by end 2002. This vastly exceeds
    the target of 622 Mbit by 2005.
  • Wide-area data migration/replication is now
    feasible and affordable.
  • Tests of multiple streams to the US running over
    24 hrs at the full capacity of 2 Gbit/sec were
    successful.
  • Local-area networking is moving to 10 Gbit/sec,
    and this is expected to increase. 10 Gbit/sec NICs
    are under development for end systems.
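
Sustained at the quoted 2 Gbit/sec, such a transfer moves a substantial daily volume; a quick sketch in decimal units:

```python
# Daily data volume of a sustained 2 Gbit/sec wide-area stream.
GBIT_PER_SEC = 2
SECONDS_PER_DAY = 24 * 3600

bits_per_day = GBIT_PER_SEC * 1e9 * SECONDS_PER_DAY
terabytes_per_day = bits_per_day / 8 / 1e12
print(terabytes_per_day)  # -> 21.6
```

Over 20 TB per day per stream is what makes the wide-area replication claim above concrete.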

29
Networking Trends
  • Transitioning from 10Gbit to 20-30 Gbit seems
    likely.
  • MPLS (Multiprotocol Label Switching) has gained
    momentum. It provides secure VPN capability over
    public networks. A possibility for tier-1 center
    connectivity.
  • Lambda networks based on dark fiber are also
    becoming very popular. It is a build-yourself
    network and may also be relevant for the grid and
    center connectivity.

30
Storage - Architecture
  • Possibly the biggest challenge for LHC
  • Storage architecture design
  • Data management. Currently very poor tools and
    facilities for managing data and storage systems.
  • The SAN vs NAS debate is still alive
  • SAN: scalability and availability
  • NAS: cheaper and easier
  • Object storage technologies are appearing
  • Intelligent storage systems able to manage the
    objects they are storing

31
Storage Management
  • Very little movement in the HSM space since the
    last PASTA report.
  • HPSS still for large scale systems
  • A number of mid-range products (make tape look
    like a big disk) but limited scaling possible
  • HEP still a leader in tape and data management
  • CASTOR, Enstore, JASMine
  • Will remain crucial technologies for LHC.
  • Cluster file systems appearing (StorageTank -
    IBM)
  • Provide an unlimited (PB) file system through a
    SAN fabric
  • Scale to many thousands of clients (CPU servers).
  • Need to be interfaced to tape management systems
    (e.g. Castor)

32
Storage - Connectivity
  • The Fibre Channel market is growing at 36%/year
    from now to 2006 (Gartner). This is the current
    technology for SAN implementation.
  • iSCSI or equivalent over gigabit ethernet is an
    alternative (and cheaper) but less performant
    implementation of SAN gaining in popularity.
  • It is expected that gigabit ethernet will become
    a popular transport for storage networks.
  • Infiniband is an initiative that could change the
    landscape of cluster architectures and has much,
    but varying, industry support.
  • Broad adoption could drive costs down
    significantly
  • NAS/SAN models converging

33
Storage Cost
The costs of managing storage and data are the
predominant costs
34
Storage Scenario - Today
35
Storage Scenario - Future
36
Disk Technology
Specialisation and consolidation of disk
manufacturers
37
Disk Technology Trends
  • Capacity is doubling every 18 months
  • The Super Paramagnetic Limit (estimated at
    40 GB/in²) has not been reached. It seems that a
    platter capacity of 2-3 times today's capacity can
    be foreseen.
  • Perpendicular recording aims to extend the
    density to 500-1000 GB/in². Disks of 10-100 times
    today's capacity seem to be possible. The timing
    will be driven by market demand.
  • Rotational speed and seek times are only
    improving slowly, so to match disk size and
    transfer speed, disks become smaller and faster.
    2.5" drives at 23,500 RPM are foreseen for storage
    systems.
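
The 18-month doubling claim implies a simple exponential projection. A sketch; the starting capacity below is a hypothetical example, and only the doubling period comes from the slide:

```python
# Disk capacity projection under an 18-month doubling period.
DOUBLING_PERIOD_YEARS = 1.5

def projected_capacity_gb(start_gb: float, years: float) -> float:
    """Capacity after `years`, doubling every 18 months."""
    return start_gb * 2 ** (years / DOUBLING_PERIOD_YEARS)

# Hypothetical 100 GB drive projected five years ahead:
print(round(projected_capacity_gb(100, 5)))  # -> 1008
```

Roughly a tenfold increase over five years, broadly in line with the "10-100 times today's capacity" range quoted above.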

38
Historical Progress
39
Disk Drive Projections
40
Advanced Storage Roadmap
41
Disk Trends
  • SCSI is still being developed, now at 320 MB/sec
    transfer speed.
  • Serial ATA is expected to dominate the commodity
    disk connectivity market by end 2003: 150 MB/sec
    moving to 300 MB/sec
  • Fibre Channel products are still expensive.
  • DVD solutions still 2-3x as expensive as disks.
    No industry experience managing large DVD
    libraries.

42
Some Overall Conclusions
  • Tape and Network trends match or exceed our
    initial needs.
  • Need to continue to leverage economies of scale
    to drive down long term costs.
  • CPU trends need to be carefully interpreted
  • The need for new performance measures is
    indicated.
  • A change in the desktop market might affect the
    server strategy.
  • Cost of manageability is an issue.
  • Disk trends continue to make a large (multi-PB)
    disk cache technically feasible, but...
  • The true cost of such an object remains unclear,
    given the issues of reliability, manageability
    and the disk fabric chosen (NAS/SAN, iSCSI/FC,
    etc.)
  • File system access for a large disk cache (RFIO,
    StorageTank) is also unclear.
  • More architectural work is needed in the next 2
    years for the processing and handling of LHC
    data.
  • NAS/SAN models are converging, access patterns
    are unclear, and there are many options for system
    interconnects.
  • Openlab?