Transcript and Presenter's Notes

Title: AMS TIM, CERN Jul 23, 2004


1
AMS TIM, CERN Jul 23, 2004
AMS Computing and Ground Centers
Alexei Klimentov, Alexei.Klimentov@cern.ch
2
AMS Computing and Ground Data Centers
  • AMS-02 Ground Centers
  • AMS centers at JSC
  • Ground data transfer
  • Science Operation Center prototype
  • Hardware and Software evaluation
  • Implementation plan
  • AMS/CERN computing and manpower issues
  • MC Production Status
  • AMS-02 MC (2004A)
  • Open questions
  • Plans for Y2005
  • AMS-01 MC

3
AMS-02 Ground Support Systems
  • Payload Operations Control Center (POCC) at CERN
    (first 2-3 months in Houston)
  • CERN Bldg.892 wing A
  • control room, usual source of commands
  • receives Health Status (HS), monitoring and
    science data in real-time
  • receives NASA video
  • voice communication with NASA flight operations
  • Backup Control Station at JSC (TBD)
  • Monitor Station in MIT
  • backup of control room
  • receives Health Status (HS) and monitoring
    data in real-time
  • voice communication with NASA flight
    operations
  • Science Operations Center (SOC) at CERN (first
    2-3 months in Houston)
  • CERN Bldg.892 wing A
  • receives the complete copy of ALL data
  • data processing and science analysis
  • data archiving and distribution to Universities
    and Laboratories
  • Ground Support Computers (GSC) at Marshall Space
    Flight Center
  • receives data from NASA -> buffers -> retransmits
    to the Science Operations Center
  • Regional Centers: Madrid, MIT, Yale, Bologna,
    Milan, Aachen, Karlsruhe, Lyon, Taipei, Nanjing,
    Shanghai, …

4
AMS facilities
NASA facilities
5
(No Transcript)
6
AMS Ground Centers at JSC
  • Requirements for AMS Ground Systems at JSC
  • Define AMS GS HW and SW components
  • Computing facilities
  • ACOP flight
  • AMS pre-flight
  • AMS flight
  • after 3 months
  • Data storage
  • Data transmission
  • Discussed with NASA in Feb 2004

http://ams.cern.ch/Computing/pocc_JSC.pdf
7
AMS-02 Computing facilities at JSC
Center | Location | Function(s) | Computers (Qty)
POCC | Bldg.30, Rm 212 | Commanding, telemetry monitoring, on-line processing | Pentium MS Win (4), Pentium Linux (28), 19'' monitors (19), networking switches (8), terminal console (2), MCC WS (2)
SOC | Bldg.30, Rm 3301 | Data processing, data analysis, data/Web/News servers, data archiving | Pentium Linux (35), IBM LTO tape drives (2), networking switches (10), 17'' color monitors (5), terminal console (2)
Terminal room | tbd | | Notebooks, desktops (100)
AMS CSR | Bldg.30M, Rm 236 | Monitoring | Pentium Linux (2), 19'' color monitor (2), MCC WS (1)
8
AMS Computing at JSC (TBD)
Year | Responsible | Actions
LR-8 months | N.Bornas, P.Dennett, A.Klimentov, A.Lebedev, B.Robichaux, G.Carosi | Set up the basic version of the POCC at JSC; conduct tests with ACOP for commanding and data transmission
LR-6 months | P.Dennett, A.Eline, P.Fisher, A.Klimentov, A.Lebedev, Finns (?) | Set up the POCC basic version at CERN; set up the AMS monitoring station in MIT; conduct commanding and data transmission tests with ACOP/MSFC/JSC
LR | A.Klimentov, B.Robichaux | Set up the POCC flight configuration at JSC
L-2 weeks | V.Choutko, A.Eline, A.Klimentov, B.Robichaux, A.Lebedev, P.Dennett | Set up the SOC flight configuration at JSC; set up the terminal room and AMS CSR; verify commanding and data transmission
L+2 months (tbd) | A.Klimentov | Set up the POCC flight configuration at CERN; move part of the SOC computers from JSC to CERN; set up the SOC flight configuration at CERN
L+3 months (tbd) | A.Klimentov, A.Lebedev, A.Eline, V.Choutko | Activate the AMS POCC at CERN; move all SOC equipment to CERN; set up the AMS POCC basic version at JSC
LR = launch ready date (Sep 2007); L = AMS-02 launch date
9
Data Transmission
High rate data transfer between MSFC (AL) and
POCC/SOC, between POCC and SOC, and between SOC and
the Regional Centers will be of paramount importance
  • Will AMS need a dedicated line to send data from
    MSFC to the ground centers, or can the public
    Internet be used?
  • What software (SW) must be used for bulk data
    transfer, and how reliable is it? (a minimal
    transfer-with-verification sketch follows this list)
  • What data transfer performance can be achieved?
  • G.Carosi, A.Eline, P.Fisher, A.Klimentov
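The reliability question above is, at its core, about moving large files and verifying their integrity end to end. Below is a minimal, illustrative Python sketch of that pattern (chunked copy plus MD5 verification). It is not the amsbbftp tool used in the tests; the file names and paths are hypothetical.

```python
# Illustrative only: chunked file copy with end-to-end checksum verification.
# This is NOT amsbbftp; it just shows the reliability pattern a bulk-transfer
# tool has to provide (hypothetical file names).
import hashlib
import shutil

CHUNK = 8 * 1024 * 1024  # 8 MB read size


def md5sum(path):
    """Compute the MD5 of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(CHUNK), b""):
            h.update(block)
    return h.hexdigest()


def transfer_and_verify(src, dst):
    """Copy src to dst and confirm the checksums match."""
    before = md5sum(src)
    shutil.copyfile(src, dst)  # stand-in for the actual network transfer
    after = md5sum(dst)
    if before != after:
        raise IOError("checksum mismatch for %s" % src)
    return after


if __name__ == "__main__":
    # Tiny self-contained demo: create a sample file, "transfer" it, verify it.
    with open("sample_frame.dat", "wb") as f:
        f.write(b"\x00" * 1024)
    print(transfer_and_verify("sample_frame.dat", "sample_frame.copy"))
```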

10
Global Network Topology
11
(No Transcript)
12
(No Transcript)
13
amsbbftp tests: CERN/MIT and CERN/SEU, Jan/Feb 2003
A.Elin, A.Klimentov, K.Scholberg and J.Gong
14
Data Transmission Tests (conclusions)
  • In its current configuration the Internet provides
    sufficient bandwidth to transmit AMS data from
    MSFC (AL) to the AMS ground centers at a rate
    approaching 9.5 Mbit/sec (see the conversion below)
  • We are able to transfer and store data on a
    high-end PC reliably, with no data loss
  • Data transmission performance is comparable to
    that achieved with network monitoring tools
  • We can transmit data simultaneously to multiple
    sites
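As a sanity check on the numbers above: a sustained 9.5 Mbit/sec corresponds to roughly 100 GB per day, several times the AMS-02 raw data volume of about 24 GB/day quoted later in this talk. A small script makes the conversion explicit.

```python
# Convert the measured sustained rate into a daily volume (decimal units).
rate_mbit_s = 9.5              # measured transfer rate, Mbit/sec
seconds_per_day = 86400

gb_per_day = rate_mbit_s * seconds_per_day / 8 / 1000   # Mbit -> MB -> GB
print("%.1f GB/day" % gb_per_day)                       # ~102.6 GB/day

# AMS-02 raw data transfer quoted later in this talk: ~24 GB/day,
# so the measured rate leaves a headroom factor of roughly 4.
print("headroom factor: %.1f" % (gb_per_day / 24.0))
```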

15
Data and Computation for Physics Analysis
(Diagram: data flow for physics analysis — the detector delivers raw data;
the event filter (selection and reconstruction) and event reconstruction
produce processed data (event summary data, ESD/DST) and event tag data;
event simulation provides the equivalent simulated raw data; batch physics
analysis extracts analysis objects by physics topic, which feed interactive
physics analysis.)
16
Symmetric Multi-Processor (SMP) Model
(Diagram: the experiment feeds an SMP machine backed by terabytes of disks and tape storage.)
17
AMS SOC (Data Production requirements)
A complex system consisting of computing
components including I/O nodes, worker nodes,
data storage and networking switches; it should
perform as a single system. Requirements:
  • Reliability: high (24 h/day, 7 days/week)
  • Performance goal: process data quasi-online
    (with a typical delay < 1 day)
  • Disk space: 12 months of data online
  • Minimal human intervention (automatic data
    handling, job control and book-keeping; see the
    sketch after this list)
  • System stability: months
  • Scalability
  • Price/Performance
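The "minimal human intervention" requirement boils down to an automatic loop that notices new raw data, submits processing jobs, and records their state. The sketch below shows one possible shape for such a loop, assuming a hypothetical submit_job() hook and an SQLite book-keeping table; it is not the actual AMS production software.

```python
# A minimal sketch of automatic job control with book-keeping (illustrative,
# not the AMS production system). Job state is kept in an SQLite table so the
# farm can run for months with little human intervention.
import glob
import sqlite3


def submit_job(raw_file):
    """Hypothetical hook: hand a raw file to the batch system, return a job id."""
    return "job-" + raw_file


def run_once(db, raw_dir="/data/raw"):
    cur = db.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS jobs (
                       raw_file TEXT PRIMARY KEY,
                       job_id   TEXT,
                       status   TEXT)""")
    for raw_file in glob.glob(raw_dir + "/*.dat"):
        # Submit each raw file exactly once; the table is the book-keeping record.
        cur.execute("SELECT 1 FROM jobs WHERE raw_file = ?", (raw_file,))
        if cur.fetchone() is None:
            job_id = submit_job(raw_file)
            cur.execute("INSERT INTO jobs VALUES (?, ?, ?)",
                        (raw_file, job_id, "submitted"))
    db.commit()


if __name__ == "__main__":
    run_once(sqlite3.connect("bookkeeping.db"))
```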

18
Production Farm Hardware Evaluation
Processing node
Processor: Intel PIV 3.4 GHz, HT
Memory: 1 GB
System disk and transient data storage: 400 GB IDE disk
Ethernet cards: 2 x 1 Gbit
Estimated cost: 2500 CHF
Disk server
Processor: dual-CPU Intel Xeon 3.2 GHz
Memory: 2 GB
System disk: SCSI 18 GB, double redundant
Disk storage: 3x10x400 GB or 4x8x400 GB RAID 5 array; effective disk volume 11.6 TB
Ethernet cards: 3 x 1 Gbit
Estimated cost: 33000 CHF (about 2.85 CHF/GB; the arithmetic is shown below)
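The quoted price/performance figure follows directly from the effective volume: 33 000 CHF over 11.6 TB is roughly 2.8 CHF per GB, consistent with the number in the table.

```python
# Reproduce the disk-server price/performance figure from the table above.
cost_chf = 33000.0
effective_tb = 11.6

chf_per_gb = cost_chf / (effective_tb * 1000)   # decimal GB
print("%.2f CHF/GB" % chf_per_gb)               # prints 2.84 CHF/GB,
                                                # i.e. the ~2.85 CHF/GB quoted above
```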
19
AMS-02 Ground Centers. Science Operations Center. Computing Facilities.
(Diagram: SOC building blocks connected by the CERN/AMS Network)
  • Analysis Facilities (Linux cluster): 10-20 dual-processor PCs and 5 PC
    servers for interactive and batch physics analysis
  • Central Data Services: Shared Tape Servers (tape robots, LTO and DLT
    tape drives) and Shared Disk Servers (25 TB of disk on 6 PC-based servers)
  • Batch data processing
  • Links to the AMS Regional Centers
20
AMS Science Operation Center Computing Facilities

Production Farm
(Diagram: SOC production farm and central data services)
  • Production Farm cells (Cell 1 ... Cell 7), each with six PC Linux 3.4 GHz
    processing nodes and a PC Linux server (2x3.4 GHz, RAID 5, 10 TB),
    interconnected by Gigabit (1 Gbit/sec) switches
  • Archiving and Staging (CERN CASTOR)
  • AFS Server
  • Web, News, Production and DB servers
  • Data Server and Disk Servers (PC Linux servers 2x3.4 GHz, RAID 5, 10 TB)
    for AMS data, NASA data, metadata and simulated data (MC Data Server)
  • Analysis Facilities
  • Legend: tested, prototype in production / not tested and no prototype yet
21
AMS-02 Science Operations Center
  • Year 2004
  • MC Production (18 AMS Universities and Labs)
  • SW: data processing, central DB, data mining,
    servers
  • AMS-02 ESD format
  • Networking (A.Eline, Wu Hua, A.Klimentov)
  • Gbit private segment and monitoring SW in
    production since April
  • Disk servers and data processing (V.Choutko,
    A.Eline, A.Klimentov)
  • dual-CPU Xeon 3.06 GHz with 4.5 TB of disk space,
    in production since Jan
  • 2nd server, dual-CPU Xeon 3.2 GHz with 9.5 TB,
    will be installed in Aug (3 CHF/GB)
  • data processing node: single-CPU PIV 3.4 GHz in
    Hyper-Threading mode, in production since Jan
  • Data transfer station (Milano group: M.Boschini,
    D.Grandi, E.Micelotta and A.Eline)
  • Data transfer to/from CERN (used for MC
    production)
  • Station prototype installed in May
  • SW in production since January
  • Status report at the next AMS TIM

22
AMS-02 Science Operations Center
  • Year 2005
  • Q1: SOC infrastructure setup
  • Bldg.892 wing A: false floor, cooling,
    electricity
  • Mar 2005: set up production cell prototype
  • 6 processing nodes + 1 disk server with a private
    Gbit Ethernet
  • LR-24 months (LR = launch ready date): Sep 2005
  • 40% production farm prototype (first bulk
    computer purchase)
  • Database servers
  • Data transmission tests between MSFC (AL) and CERN

23
AMS-02 Computing Facilities
Function | Computer | Qty | Disks (TB) and tapes | Ready (*), LR-months
GSC@MSFC | Intel (AMD) dual-CPU, 2.5 GHz | 3 | 3x0.5 TB RAID array | LR-2
POCC (POCC prototype@JSC) | Intel and AMD, dual-CPU, 2.8 GHz | 45 | 6 TB RAID array | LR
Monitor Station in MIT | Intel and AMD, dual-CPU, 2.8 GHz | 5 | 1 TB RAID array | LR-6
Science Operations Centre:
Production Farm | Intel and AMD, dual-CPU, 2.8 GHz | 50 | 10 TB RAID array | LR-2
Database Servers | dual-CPU 2.8 GHz Intel or Sun SMP | 2 | 0.5 TB | LR-3
Event Storage and Archiving | dual-CPU Intel 2.8 GHz disk servers | 6 | 50 TB RAID array, tape library (250 TB) | LR
Interactive and Batch Analysis | SMP computer (4 GB RAM, 300 SPECint95) or Linux farm | 10 | 1 TB RAID array | LR-1
(*) Ready = operational; bulk of CPU and disk purchasing at LR-9 months
24
People and Tasks (my incomplete list) 1/4
AMS-02 GSC@MSFC
  • Architecture: A.Mujunen, J.Ritakari, P.Fisher, A.Klimentov
  • POIC/GSC SW and HW: A.Mujunen, J.Ritakari
  • GSC/SOC data transmission SW: A.Klimentov, A.Elin
  • GSC installation: MIT, HUT
  • GSC maintenance: MIT

Status: the concept was discussed with MSFC
reps; MSFC/CERN and MSFC/MIT data transmission tests
are done; HUT has no funding for Y2004-2005
25
People and Tasks (my incomplete list) 2/4
AMS-02 POCC
People: P.Fisher, A.Klimentov, M.Pohl, P.Dennett,
A.Lebedev, G.Carosi, V.Choutko. More manpower will
be needed starting at LR-4 months
  • Architecture
  • TReKGate, AMS Cmd Station
  • Commanding SW and concept
  • Voice and Video
  • Monitoring
  • Data validation and online processing
  • HW and SW maintenance
26
People and Tasks (my incomplete list) 3/4
AMS-02 SOC
  • Architecture: V.Choutko, A.Klimentov, M.Pohl
  • Data Processing and Analysis: V.Choutko, A.Klimentov
  • System SW and HEP applications: A.Elin, V.Choutko, A.Klimentov
  • Book-keeping and Database: M.Boschini et al., A.Klimentov
  • HW and SW maintenance: more manpower will be
    needed starting from LR-4 months

Status: SOC prototyping is in progress; SW
debugging during MC production; the implementation
plan and milestones are fulfilled
27
People and Tasks (my incomplete list) 4/4
AMS-02 Regional Centers
  • INFN Italy: P.G. Rancoita et al.
  • IN2P3 France: G.Coignet and C.Goy
  • SEU China: J.Gong
  • Academia Sinica: Z.Ren
  • RWTH Aachen: T.Siedenburg
  • AMS@CERN: M.Pohl, A.Klimentov

Status: a proposal prepared by the INFN groups for
IGS and by J.Gong/A.Klimentov for CGS can be used
by other Universities. Successful tests of
distributed MC production and data transmission
between AMS@CERN and 18 Universities. Data
transmission, book-keeping and process
communication SW (M.Boschini, V.Choutko, A.Elin
and A.Klimentov) has been released.
28
AMS/CERN computing and manpower issues
  • AMS Computing and Networking requirements are
    summarized in a Memo
  • Nov 2005: AMS will provide a detailed SOC and
    POCC implementation plan
  • AMS will continue to use its own computing
    facilities for data processing and analysis, Web
    and News services
  • There is no request to IT for support of AMS
    POCC HW or SW
  • SW/HW first-line expertise will be provided by
    AMS personnel
  • Y2005-2010: AMS will have guaranteed bandwidth
    on the USA/Europe line
  • CERN IT-CS support in case of USA/Europe line
    problems
  • Data storage: AMS-specific requirements will be
    defined on an annual basis
  • CERN support of mail, printing and CERN AFS as for
    the LHC experiments; any license fees will be paid
    by the AMS collaboration according to IT specs
  • IT-DB and IT-CS may be called on for consultancy
    within the limits of available manpower

Starting from LR-12 months the Collaboration
will need more people to run the computing
facilities
29
Year 2004 MC Production
  • Started Jan 15, 2004
  • Central MC Database (a minimal registration-record
    sketch follows this list)
  • Distributed MC Production
  • Central MC storage and archiving
  • Distributed access (under test)
  • SEU Nanjing, IAC Tenerife and CNAF Italy have
    joined production since Apr 2004
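Distributed production only works if every remote center reports what it produced to the central MC database in a uniform way. The sketch below shows one possible shape for such a registration record; the field names and the helper are illustrative, not the actual AMS schema.

```python
# Illustrative dataset-registration record a remote MC center might send to
# the central MC database (hypothetical field names, not the AMS schema).
import hashlib
import json


def make_registration(center, particle, n_events, path):
    """Build a registration record for one produced dataset file."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return {
        "center": center,       # e.g. "CIEMAT", "Yale", "Academia Sinica"
        "particle": particle,   # e.g. "protons"
        "events": n_events,
        "file": path,
        "md5": digest,          # lets the central site verify the transfer
    }


if __name__ == "__main__":
    with open("mc_sample.dat", "wb") as f:
        f.write(b"\x01" * 2048)  # stand-in for real MC output
    print(json.dumps(make_registration("CIEMAT", "protons", 100000,
                                       "mc_sample.dat"), indent=2))
```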

30
Y2004 MC production centers
MC Center | Responsible | GB | % of total
CIEMAT | J.Casuas | 2045 | 24.3
CERN | V.Choutko, A.Eline, A.Klimentov | 1438 | 17.1
Yale | E.Finch | 1268 | 15.1
Academia Sinica | Z.Ren, Y.Lei | 1162 | 13.8
LAPP/Lyon | C.Goy, J.Jacquemier | 825 | 9.8
INFN Milano | M.Boschini, D.Grandi | 528 | 6.2
CNAF, INFN Bologna | D.Casadei | 441 | 5.2
UMD | A.Malinine | 210 | 2.5
EKP, Karlsruhe | V.Zhukov | 202 | 2.4
GAM, Montpellier | J.Bolmont, M.Sapinski | 141 | 1.6
INFN Siena and Perugia, ITEP, LIP, IAC, SEU, KNU | P.Zuccon, P.Maestro, Y.Lyublev, F.Barao, C.Delgado, Ye Wei, J.Shin | 135 | 1.6
31
MC Production Statistics
185 days, 1196 computers, 8.4 TB produced; daily
CPU equivalent of 250 PIII 1 GHz machines
Particle | Million Events | % of Total
protons | 7630 | 99.9
helium | 3750 | 99.6
electrons | 1280 | 99.7
positrons | 1280 | 100
deuterons | 250 | 100
anti-protons | 352.5 | 100
carbon | 291.5 | 97.2
photons | 128 | 100
nuclei (Z = 3-28) | 856.2 | 85
97% of MC production is done; it will finish by the
end of July
URL: pcamss0.cern.ch/mm.html
32
Y2004 MC Production Highlights
  • Data are generated at remote sites, transmitted
    to AMS@CERN and made available for analysis (only
    ~20% of the data was generated at CERN)
  • Transmission, process communication and
    book-keeping programs have been debugged; the
    same approach will be used for AMS-02 data
    handling
  • 185 days of running (97% stability)
  • 18 Universities and Labs
  • 8.4 TB of data produced, stored and archived
  • Peak rate 130 GB/day (12 Mbit/sec), average 55
    GB/day (AMS-02 raw data transfer: 24 GB/day); the
    conversion is spelled out below
  • 1196 computers
  • Daily CPU equivalent: 250 1-GHz CPUs running for
    184 days, 24 h/day
  • Good simulation of AMS-02 data processing and
    analysis
  • Not tested yet:
  • Remote access to CASTOR
  • Access to ESD from personal desktops
  • TBD: AMS-01 MC production, MC production in
    Y2005
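The peak-rate figure above can be checked directly: 130 GB spread over the 86 400 seconds of a day is roughly 12 Mbit/sec, as quoted; the same conversion puts the 55 GB/day average at about 5 Mbit/sec.

```python
# Check the GB/day <-> Mbit/sec conversions quoted in the highlights above.
def gb_per_day_to_mbit_s(gb_per_day):
    return gb_per_day * 1000 * 8 / 86400.0   # GB -> Mbit, spread over one day

print("peak:    %.1f Mbit/s" % gb_per_day_to_mbit_s(130))  # ~12 Mbit/s
print("average: %.1f Mbit/s" % gb_per_day_to_mbit_s(55))   # ~5.1 Mbit/s
```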

33
AMS-01 MC Production
Send requests to vitaly.choutko@cern.ch. A dedicated
meeting will be held in Sep; the target date to start
AMS-01 MC production is October 1st.