High Level Triggering - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: High Level Triggering


1
High Level Triggering
  • Fred Wickens

2
High Level Triggering (HLT)
  • Introduction to triggering and HLT systems
  • What is Triggering
  • What is High Level Triggering
  • Why do we need it
  • Case study of ATLAS HLT ( some comparisons with
    other experiments)
  • Summary

3
Simple trigger for spark chamber set-up
4
Dead time
  • Experiments frozen from trigger to end of readout
  • Trigger rate with no deadtime: R per sec.
  • Dead time per trigger: t sec.
  • For 1 second of live time: 1 + Rt seconds of real time
  • Live time fraction = 1/(1 + Rt)
  • Real trigger rate = R/(1 + Rt) per sec. (see the sketch below)
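A minimal numerical sketch of these formulas (Python, illustrative only; the rate and dead time are invented numbers):

    def live_fraction(rate_hz, deadtime_s):
        # Fraction of time the experiment is live: 1 / (1 + R*t)
        return 1.0 / (1.0 + rate_hz * deadtime_s)

    def observed_rate(rate_hz, deadtime_s):
        # Recorded trigger rate: R / (1 + R*t)
        return rate_hz * live_fraction(rate_hz, deadtime_s)

    # Example: 1 kHz of triggers, 1 ms dead time per trigger
    # -> 50% live time, only 500 Hz actually recorded
    print(live_fraction(1000.0, 1e-3))   # 0.5
    print(observed_rate(1000.0, 1e-3))   # 500.0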

5
Trigger systems 1980s and 90s
  • bigger experiments → more data per event
  • higher luminosities → more triggers per second
  • both led to increased fractional deadtime
  • use multi-level triggers to reduce dead-time
  • first level - fast detectors, fast algorithms
  • higher levels can use data from slower detectors
    and more complex algorithms to obtain better
    event selection/background rejection

6
Trigger systems 1990s and 2000s
  • Dead-time was not the only problem
  • Experiments focussed on rarer processes
  • Need large statistics of these rare events
  • But increasingly difficult to select the
    interesting events
  • DAQ system (and off-line analysis capability)
    under increasing strain - limiting useful event
    statistics
  • This is a major issue at hadron colliders, but is
    also significant at ILC
  • Use the High Level Trigger to reduce the
    requirements for
  • The DAQ system
  • Off-line data storage and off-line analysis

7
Summary of ATLAS Data Flow Rates
  • From detectors: > 10^14 Bytes/sec
  • After Level-1 accept: ~10^11 Bytes/sec
  • Into event builder: ~10^9 Bytes/sec
  • Onto permanent storage: ~10^8 Bytes/sec →
    ~10^15 Bytes/year (reduction factors sketched below)
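The stage-to-stage reduction factors implied by these figures can be read off directly; a small sketch (Python, using only the numbers quoted above):

    rates = {
        "detectors": 1e14,        # Bytes/sec
        "after Level-1": 1e11,
        "event builder": 1e9,
        "storage": 1e8,
    }
    stages = list(rates)
    for prev, cur in zip(stages, stages[1:]):
        print(f"{prev} -> {cur}: reduction x {rates[prev] / rates[cur]:.0f}")
    # each step reduces the data volume by two to three orders of magnitude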

8
TDAQ Comparisons
9
The evolution of DAQ systems
10
Typical architecture 2000
11
Level 1 (Sometimes called Level-0 - LHCb)
  • Time: one to a very few microseconds
  • Standard electronics modules for small systems
  • Dedicated logic for larger systems
  • ASIC - Application Specific Integrated Circuits
  • FPGA - Field Programmable Gate Arrays
  • Reduced granularity and precision
  • calorimeter energy sums
  • tracking by masks
  • Event data stored in front-end electronics (at
    the LHC a pipeline is used, as the collision
    interval is shorter than the Level-1 decision time)

12
Level 2
  • 1) a few tens of microseconds (10-100 µs)
  • hardwired, fixed algorithm, adjustable parameters
  • 2) a few milliseconds (1-100 ms)
  • Dedicated microprocessors, adjustable algorithm
  • 3-D, fine grain calorimetry
  • tracking, matching
  • Topology
  • Different sub-detectors handled in parallel
  • Primitives from each detector may be combined in
    a global trigger processor or passed to next level

13
Level 2 - contd
  • 3) a few milliseconds (10-100 ms) - ca. 2006
  • Processor farm with Linux PCs
  • Partial events received with high-speed network
  • Specialised algorithms
  • Each event allocated to a single processor, large
    farm of processors to handle rate
  • If Level 2 is separate, data from each event are
    stored in many parallel buffers (each dedicated to
    a small part of the detector)

14
Level 3
  • millisecs to seconds
  • processor farm
  • microprocessors/emulators/workstations
  • Now standard server PCs
  • full or partial event reconstruction
  • after event building (collection of all data from
    all detectors)
  • Each event allocated to a single processor, large
    farm of processors to handle rate

15
Summary of Introduction
  • For many physics analyses, aim is to obtain as
    high statistics as possible for a given process
  • We cannot afford to handle or store all of the
    data a detector can produce!
  • What does the trigger do?
  • select the most interesting events from the
    myriad of events seen
  • i.e. obtain better use of the limited output
    bandwidth
  • Throw away less interesting events
  • Keep all of the good events (or as many as
    possible)
  • But note: must get it right - any good events
    thrown away are lost for ever!
  • High level trigger allows much more complex
    selection algorithms

16
Case study of the ATLAS HLT system
  • Concentrate on issues relevant for ATLAS (CMS has
    very similar issues), but try to address some
    more general points

17
Starting points for any HLT system
  • physics programme for the experiment
  • what are you trying to measure
  • accelerator parameters
  • what rates and structures
  • detector and trigger performance
  • what data is available
  • what trigger resources do we have to use it

18
Physics at the LHC
Interesting events are buried in a sea of soft
interactions
B physics
High energy QCD jet production
top physics
Higgs production
19
The LHC and ATLAS/CMS
  • LHC has
  • design luminosity 10^34 cm^-2 s^-1 (in 2008 from
    10^31 - 10^33 ?)
  • bunch separation 25 ns (bunch length ~1 ns)
  • This results in
  • ~23 interactions / bunch crossing
  • ~80 charged particles (mainly soft pions) /
    interaction
  • ~2000 charged particles / bunch crossing
  • Total interaction rate ~10^9 sec^-1
  • b-physics fraction ~10^-3 → ~10^6 sec^-1
  • top-physics fraction ~10^-8 → ~10 sec^-1
  • Higgs fraction ~10^-11 → ~10^-2 sec^-1 (see the rate sketch below)
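Turning the fractions quoted above into event rates is simple arithmetic; a short sketch (Python, numbers taken from this slide only):

    interaction_rate = 1e9                     # total interactions per second
    fractions = {"b-physics": 1e-3, "top": 1e-8, "Higgs": 1e-11}

    for process, frac in fractions.items():
        print(f"{process}: ~{interaction_rate * frac:.0e} events/sec")
    # b-physics: ~1e+06, top: ~1e+01, Higgs: ~1e-02 events/sec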

20
Physics programme
  • Higgs signal extraction important but very
    difficult
  • Also there is lots of other interesting physics
  • B physics and CP violation
  • quarks, gluons and QCD
  • top quarks
  • SUSY
  • new physics
  • Programme will evolve with luminosity, HLT
    capacity and understanding of the detector
  • low luminosity (first 2 years)
  • high PT programme (Higgs etc.)
  • b-physics programme (CP measurements)
  • high luminosity
  • high PT programme (Higgs etc.)
  • searches for new physics

21
Trigger strategy at LHC
  • To avoid being overwhelmed use signatures with
    small backgrounds
  • Leptons
  • High mass resonances
  • Heavy quarks
  • The trigger selection looks for events with
  • Isolated leptons and photons,
  • tau-, central- and forward-jets
  • Events with high ET
  • Events with missing ET

22
Example Physics signatures
Objects | Physics signatures
Electron: 1e > 25, 2e > 15 GeV | Higgs (SM, MSSM), new gauge bosons, extra dimensions, SUSY, W, top
Photon: 1γ > 60, 2γ > 20 GeV | Higgs (SM, MSSM), extra dimensions, SUSY
Muon: 1µ > 20, 2µ > 10 GeV | Higgs (SM, MSSM), new gauge bosons, extra dimensions, SUSY, W, top
Jet: 1j > 360, 3j > 150, 4j > 100 GeV | SUSY, compositeness, resonances
Jet > 60 + ETmiss > 60 GeV | SUSY, leptoquarks
Tau > 30 + ETmiss > 40 GeV | Extended Higgs models, SUSY
23
ARCHITECTURE
Trigger / DAQ
~1 PB/s (equivalent) at 40 MHz
Three logical levels, hierarchical data-flow:
LVL1 - Fastest: only Calo and Mu, hardwired; on-detector electronics, pipelines; ~2 µs
LVL2 - Local: LVL1 refinement + track association; event fragments buffered in parallel; ~10 ms
LVL3 - Full event: offline-type analysis; full event in processor farm; ~1 sec
24
Selected (inclusive) signatures
25
Trigger design - Level-1
  • Level-1
  • sets the context for the HLT
  • reduces triggers to ~75 kHz
  • has a very short time budget
  • a few micro-sec (ATLAS/CMS ~2.5 µs - much used up
    in cable delays!)
  • Detectors used must provide data very promptly,
    must be simple to analyse
  • Coarse grain data from calorimeters
  • Fast parts of muon spectrometer (i.e. not
    precision chambers)
  • NOT precision trackers - too slow, too complex
  • (LHCb does use some simple tracking data from
    their VELO detector to veto events with more than
    1 primary vertex)
  • Proposed FP420 detectors provide data too late

26
ATLAS Level-1 trigger system
  • Calorimeter and muon
  • trigger on inclusive signatures
  • muons
  • em/tau/jet calo clusters, missing and sum ET
  • Hardware trigger
  • Programmable thresholds
  • Selection based on multiplicities and thresholds

27
ATLAS em cluster trigger algorithm
Sliding window algorithm, repeated for each of
~4000 cells (a minimal sketch follows)
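An illustrative sketch of a sliding-window cluster search (Python/NumPy; this is not the ATLAS firmware, and the grid size, 2x2 window and threshold are assumptions made for the example):

    import numpy as np

    def sliding_window_clusters(tower_et, threshold_gev):
        # Sum ET in every 2x2 window of trigger towers and keep windows that
        # are above threshold and are a local maximum (avoids double counting).
        win = (tower_et[:-1, :-1] + tower_et[1:, :-1] +
               tower_et[:-1, 1:] + tower_et[1:, 1:])
        clusters = []
        for i in range(win.shape[0]):
            for j in range(win.shape[1]):
                if win[i, j] < threshold_gev:
                    continue
                i0, i1 = max(i - 1, 0), min(i + 2, win.shape[0])
                j0, j1 = max(j - 1, 0), min(j + 2, win.shape[1])
                if win[i, j] == win[i0:i1, j0:j1].max():
                    clusters.append((i, j, float(win[i, j])))
        return clusters

    towers = np.zeros((50, 64))          # toy eta x phi grid of tower ET (GeV)
    towers[24:26, 31:33] = 8.0           # a fake ~30 GeV electromagnetic deposit
    print(sliding_window_clusters(towers, threshold_gev=20.0))   # [(24, 31, 32.0)]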
28
ATLAS Level 1 Muon trigger
RPC - Trigger Chambers - TGC
Measure muon momentum with very simple tracking
in a few planes of trigger chambers

RPC = Resistive Plate Chambers, TGC = Thin Gap
Chambers, MDT = Monitored Drift Tubes
29
Level-1 Selection
  • The Level-1 trigger - an OR of a large number
    of inclusive signals - is set to match the current
    physics priorities and beam conditions
  • Precision of cuts at Level-1 is generally limited
  • Adjust the overall Level-1 accept rate (and the
    relative frequency of different triggers) by
  • Adjusting thresholds
  • Pre-scaling higher-rate triggers (e.g. only accept
    every 10th trigger of a particular type; see the sketch below)
  • Can be used to include a low rate of calibration
    events
  • Menu can be changed at the start of run
  • Pre-scale factors may change during the course of
    a run
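A toy sketch of an OR-of-signatures menu with pre-scales (Python; the menu format and signature names are invented for illustration, not the ATLAS configuration):

    import itertools

    menu = {                  # signature -> prescale factor
        "EM25i": 1,           # unprescaled physics trigger
        "MU20": 1,
        "EM3": 1000,          # high-rate calibration item, heavily prescaled
    }
    counters = {name: itertools.count(1) for name in menu}

    def level1_accept(fired_signatures):
        # Accept the event if any fired signature survives its prescale
        accepted = []
        for name in fired_signatures:
            if name in menu and next(counters[name]) % menu[name] == 0:
                accepted.append(name)
        return accepted

    print(level1_accept(["EM3"]))           # usually [] - prescaled away
    print(level1_accept(["EM25i", "EM3"]))  # ['EM25i'] (plus EM3 once in 1000)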

30
Example Level-1 Menu for 2×10^33
Level-1 signature | Output Rate (Hz)
EM25i | 12000
2EM15i | 4000
MU20 | 800
2MU6 | 200
J200 | 200
3J90 | 200
4J65 | 200
J60 + XE60 | 400
TAU25i + XE30 | 2000
MU10 + EM15i | 100
Others (pre-scaled, exclusive, monitor, calibration) | 5000
Total | ~25000
31
Trigger design - Level-2
  • Level-2 reduces triggers to ~2 kHz
  • Note CMS does not have a physically separate
    Level-2 trigger, but the HLT processors include a
    first stage of Level-2 algorithms
  • Level-2 trigger has a short time budget
  • ATLAS ~10 ms average
  • Note for Level-1 the time budget is a hard limit
    for every event; for the High Level Trigger it is
    the average that matters, so some events can
    take several times the average, provided they are
    a minority
  • Full detector data is available, but to minimise
    resources needed
  • Limit the data accessed
  • Only unpack detector data when it is needed
  • Use information from Level-1 to guide the process
  • Analysis proceeds in steps with possibility to
    reject event after each step
  • Use custom algorithms

32
Regions of Interest
  • The Level-1 selection is dominated by local
    signatures (i.e. within a Region of Interest - RoI)
  • Based on coarse granularity data from calo and
    mu only
  • Typically, there are 1-2 RoI/event
  • ATLAS uses RoIs to reduce the network bandwidth and
    processing power required (see the sketch after this list)
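A back-of-the-envelope sketch of that saving (Python; the ~2% RoI fraction is an assumed typical value, while the event size and Level-1 rate are taken from later slides):

    event_size_bytes = 1.6e6     # ~1600 fragments of ~1 kByte
    roi_fraction = 0.02          # ~2% of the detector covered by the 1-2 RoIs
    lvl1_rate_hz = 1e5           # ~100 kHz Level-1 accept rate

    full_readout = event_size_bytes * lvl1_rate_hz
    roi_readout = event_size_bytes * roi_fraction * lvl1_rate_hz
    print(f"full read-out: {full_readout / 1e9:.0f} GB/s, "
          f"RoI-based: {roi_readout / 1e9:.1f} GB/s")
    # roughly 160 GB/s versus ~3 GB/s, the same order as the
    # ~150 GB/s vs ~2 GB/s quoted on the Level-2 data-flow slide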

33
Trigger design - Level-2 - contd
  • Processing scheme
  • extract features from sub-detector data in each
    RoI
  • combine features from one RoI into object
  • combine objects to test event topology
  • Precision of Level-2 cuts
  • Emphasis is on very fast algorithms with
    reasonable accuracy
  • Do not include many corrections which may be
    applied off-line
  • Calibrations and alignment available for trigger
    not as precise as ones available for off-line

34
ARCHITECTURE

FE Pipelines ~2.5 µs
HLT

35
CMS Event Building
  • CMS performs Event Building after Level-1
  • This simplifies the architecture, but places much
    higher demands on technology
  • Network traffic ~100 GB/s
  • Use Myrinet instead of GbE for the EB network
  • Plan a number of independent slices with a barrel
    shifter to switch to a new slice at each event
  • Time will tell which philosophy is better

36
Example for Two electron trigger
LVL1 triggers on two isolated e/m clusters with
pT > 20 GeV (possible signature Z → ee)
  • HLT Strategy
  • Validate step-by-step
  • Check intermediate signatures
  • Reject as early as possible

Sequential/modular approach facilitates early
rejection (a schematic sketch follows)
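A schematic sketch of this step-wise chain (Python; the step names, thresholds and event structure are hypothetical, chosen only to illustrate early rejection):

    def two_electron_chain(event, steps):
        # Run the steps in order; reject the event at the first step
        # that leaves fewer than two electron candidates.
        candidates = event["lvl1_rois"]          # seeded by the Level-1 RoIs
        for name, step in steps:
            candidates = step(candidates)
            if len(candidates) < 2:
                return False, name               # early rejection
        return True, "accepted"

    steps = [
        ("calo cluster", lambda rois: [r for r in rois if r["cluster_et"] > 22.0]),
        ("track match",  lambda rois: [r for r in rois if r["has_track"]]),
        ("e/gamma hypo", lambda rois: [r for r in rois if r["shower_shape_ok"]]),
    ]
    event = {"lvl1_rois": [
        {"cluster_et": 28.0, "has_track": True, "shower_shape_ok": True},
        {"cluster_et": 25.0, "has_track": True, "shower_shape_ok": True},
    ]}
    print(two_electron_chain(event, steps))      # (True, 'accepted')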
37
Trigger design - Event Filter / Level-3
  • Event Filter reduces triggers to ~200 Hz
  • Event Filter budget ~1 sec average
  • Full event detector data is available, but to
    minimise resources needed
  • Only unpack detector data when it is needed
  • Use information from Level-2 to guide the process
  • Analysis proceeds in steps with possibility to
    reject event after each step
  • Use optimised off-line algorithms

38
Electron slice at the EF
  • TrigCaloRec (wrapper of CaloRec), then EFCaloHypo
  • EF tracking (wrapper of newTracking), then EFTrackHypo
  • TrigEgammaRec (wrapper of EgammaRec) matches
    electromagnetic clusters with tracks and builds
    egamma objects, then EFEgammaHypo
39
HLT Processing at LHCb
40
Trigger design - HLT strategy
  • Level 2
  • confirm Level 1, some inclusive, some
    semi-inclusive, some simple topology triggers,
    vertex reconstruction (e.g. two particle mass
    cuts to select Zs)
  • Level 3
  • confirm Level 2, more refined topology
    selection, near off-line code

41
Example HLT Menu for 2×10^33
HLT signature | Output Rate (Hz)
e25i | 40
2e15i | < 1
gamma60i | 25
2gamma20i | 2
mu20i | 40
2mu10 | 10
j400 | 10
3j165 | 10
4j110 | 10
j70 + xE70 | 20
tau35i + xE45 | 5
2mu6 with vertex, decay-length and mass cuts (J/psi, psi, B) | 10
Others (pre-scaled, exclusive, monitor, calibration) | 20
Total | ~200
42
Example B-physics Menu for 10^33
  • LVL1
  • MU6 rate ~24 kHz (note there are large
    uncertainties in cross-section)
  • In case of larger rates use MU8 → ~1/2 × rate
  • 2MU6
  • LVL2
  • Run muFast in LVL1 RoI → ~9 kHz
  • Run ID recon. in muFast RoI → mu6 (combined muon
    ID) → ~5 kHz
  • Run TrigDiMuon seeded by mu6 RoI (or MU6)
  • Make exclusive and semi-inclusive selections
    using loose cuts
  • B(mumu), B(mumu)X, J/psi(mumu)
  • Run IDSCAN in Jet RoI, make selection for
    Ds(PhiPi)
  • EF
  • Redo muon reconstruction in LVL2 (LVL1) RoI
  • Redo track reconstruction in Jet RoI
  • Selections for B(mumu), B(mumuK), B(mumuPhi),
    Bs → Ds(PhiPi), etc.

43
LHCb Trigger Menu
44
Matching problem
45
Matching problem (cont.)
  • ideally
  • off-line algorithms select phase space which
    shrink-wraps the physics channel
  • trigger algorithms shrink-wrap the off-line
    selection
  • in practice, this doesn't happen
  • need to match the off-line algorithm selection
  • For this reason many trigger studies quote
    trigger efficiency wrt events which pass off-line
    selection
  • BUT off-line can change algorithm, re-process and
    recalibrate at a later stage
  • SO, make sure on-line algorithm selection is well
    known, controlled and monitored

46
Selection and rejection
  • as selection criteria are tightened
  • background rejection improves
  • BUT event selection efficiency decreases
    (illustrated in the toy sketch below)
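A toy illustration of the trade-off (Python; the signal and background spectra are made-up distributions, not ATLAS simulation):

    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.normal(25.0, 3.0, 10000)        # fake electron ET spectrum (GeV)
    background = rng.exponential(8.0, 10000)     # fake dijet ET spectrum (GeV)

    for cut in (15.0, 20.0, 25.0):
        eff = (signal > cut).mean()
        rej = 1.0 / max((background > cut).mean(), 1e-6)
        print(f"cut {cut:4.1f} GeV: signal efficiency {eff:.2f}, "
              f"background rejection x {rej:.0f}")
    # tightening the cut from 15 to 25 GeV raises the rejection
    # but costs signal efficiency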

47
Selection and rejection
  • Example of a recent ATLAS Event Filter (i.e.
    Level-3) study of the effectiveness of various
    discriminants used to select 25 GeV electrons
    from a background of dijets

48
Other issues for the Trigger
  • Efficiency and Monitoring
  • In general need high trigger efficiency
  • Also for many analyses need a well known
    efficiency
  • Monitor efficiency by various means
  • Overlapping triggers
  • Pre-scaled samples of triggers in tagging mode
    (pass-through)
  • Final detector calibration and alignment
    constants not available immediately - keep as
    up-to-date as possible and allow for the lower
    precision in the trigger cuts when defining
    trigger menus and in subsequent analyses
  • Code used in trigger needs to be very robust -
    few memory leaks, low crash rate, fast
  • Beam conditions and HLT resources will evolve
    over several years (for both ATLAS and CMS)
  • In 2008 luminosity low, but also HLT capacity
    will be < 50% of the full system (funding constraints)

49
Summary
  • High-level triggers allow complex selection
    procedures to be applied as the data is taken
  • Thus allow large numbers of events to be
    accumulated, even in presence of very large
    backgrounds
  • Especially important at LHC - but significant at
    most accelerators
  • The trigger stages - in the ATLAS example
  • Level 1 uses inclusive signatures
  • muons, em/tau/jet calo clusters, missing and sum
    ET
  • Level 2 refines Level 1 selection, adds simple
    topology triggers, vertex reconstruction, etc
  • Level 3 refines Level 2 adds more refined
    topology selection
  • Trigger menus need to be defined, taking into
    account
  • Physics priorities, beam conditions, HLT
    resources
  • Include items for monitoring trigger efficiency
    and calibration
  • Must get it right - any events thrown away are
    lost for ever!

50
Additional Foils
51
(No Transcript)
52
The evolution of DAQ systems
53
ATLAS Detector
54
The ATLAS Sub-Detectors
  • Inner tracker
  • pixels (silicon)
  • (3 layers) precision 3-D points
  • 1.4 × 10^8 channels
  • occupancy ~10^-4
  • silicon strips
  • (4 layers) precision 2-D points
  • 5.2 × 10^6 channels
  • occupancy ~10^-2
  • transition radiation tracker (straw tubes)
  • (40 layers) continuous tracking + electron
    identification
  • 4.2 × 10^5 channels
  • 12-33% occupancy

55
ATLAS Sub-Detectors (cont.)
  • solenoid - inside calorimeters
  • 4 m × 7 m, 1.8 T
  • calorimetry
  • electromagnetic
  • liquid argon (accordion) + lead
  • hadronic
  • scintillator tiles + iron, liquid argon
  • 2.3 × 10^5 channels
  • occupancy 5-15%
  • muon system
  • air-core toroid magnet system
  • trigger - resistive plate and thin gap chambers
  • precision - monitored drift tubes
  • 1.3 × 10^6 channels
  • occupancy 2-7.5%

56
ATLAS event in the tracker
57
ATLAS event - tracker end-view
58
ATLAS event - tracker end-view
59
Trigger functional design
  • Level 1: Input 40 MHz, Accept 75 kHz, Latency 2.5 µs
  • Inclusive triggers based on fast detectors
  • Muon, electron/photon, jet, sum and missing ET
    triggers
  • Coarse(r) granularity, low(er) resolution data
  • Special purpose hardware (FPGAs, ASICs)
  • Level 2: Input 75 (100) kHz, Accept O(1) kHz, Latency ~10 ms
  • Confirm Level 1 and add track information
  • Mainly inclusive but some simple event topology
    triggers
  • Full granularity and resolution available
  • Farm of commercial processors with special
    algorithms
  • Event Filter: Input O(1) kHz, Accept O(100) Hz, Latency ~secs
  • Full event reconstruction
  • Confirm Level 2 topology triggers
  • Farm of commercial processors using near off-line
    code

60
ATLAS Trigger / DAQ Data Flow
[Block diagram of the ATLAS Trigger/DAQ data flow: data from the ATLAS detector pass through the Read-Out Drivers (RODs, in UX15) and 1600 Read-Out Links to ~150 Read-Out Subsystem PCs (ROSs, in USA15). Regions of Interest go via the RoI Builder to the LVL2 Supervisor and the LVL2 farm (second-level trigger), with LVL2 output stored in the pROS. The DataFlow Manager steers the Event Builder Sub-Farm Inputs (SFIs), which feed the Event Filter (EF) farm of dual-socket server PCs and the Sub-Farm Outputs (SFOs) with local storage in SDX1. Accepted events (~200 Hz) go over Gigabit Ethernet to data storage at the CERN computer centre; the Timing Trigger Control (TTC) system distributes the first-level trigger decision, and network switches carry event data requests, requested event data and delete commands.]
61
Events Eye View - step-1
  • At each beam crossing latch data into the detector
    front end
  • After processing, data are put into many parallel
    pipelines - data move along the pipeline at every
    bunch crossing and fall out the far end after ~2.5
    microsecs (a toy model follows)
  • Also send calo + mu trigger data to Level-1
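A toy model of such a pipeline (Python; purely illustrative - a fixed-depth FIFO in which data advance one cell per bunch crossing):

    from collections import deque

    BUNCH_SPACING_NS = 25
    LVL1_LATENCY_NS = 2500
    depth = LVL1_LATENCY_NS // BUNCH_SPACING_NS     # 100 pipeline cells

    pipeline = deque(maxlen=depth)
    for bunch_crossing in range(200):
        pipeline.append(("hit data", bunch_crossing))   # latch new data each crossing
    # A Level-1 Accept for crossing N can only be served while that entry is
    # still inside the deque, i.e. within the last `depth` crossings.
    print(depth, pipeline[0])    # 100 cells; oldest surviving crossing is 100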

62
Events Eye View - step-2
  • The Level-1 Central Trigger Processor combines
    the information from the Muon and Calo triggers
    and when appropriate generates the Level-1 Accept
    (L1A)
  • The L1A is distributed in real-time via the TTC
    system to the detector front-ends to send data
    from the accepted event to the detector RODs
    (Read-Out Drivers)
  • Note must arrive before data has dropped out of
    the pipe-line - hence hard dead-line of 2.5
    micro-secs
  • The TTC system (Timing, Trigger and Control) is a
    CERN system used by all of the LHC experiments.
    It allows very precise real-time distribution
    of small data packets
  • Detector RODs receive data, process and reformat
    it as necessary and send via fibre links to TDAQ
    ROS

63
Events Eye View - Step-3
  • At L1A the different parts of LVL1 also send RoI
    data to the RoI Builder (RoIB), which combines
    the information and sends it as a single packet to a
    Level-2 Supervisor PC
  • The RoIB is implemented as a number of VME boards
    with FPGAs to identify and combine the fragments
    coming from the same event from the different
    parts of Level-1

64
Step-4
ATLAS Level-2 Trigger
Region of Interest Builder (RoIB) passes
formatted information to one of the LVL2
supervisors. LVL2 supervisor selects one of the
processors in the LVL2 farm and sends it the RoI
information. LVL2 processor requests data from
the ROSs as needed (possibly in several steps),
produces an accept or reject and informs the LVL2
supervisor. Result of processing is stored in
pseudo-ROS (pROS) for an accept. This reduces network
traffic to ~2 GB/s, cf. ~150 GB/s if a full
event build were done. The LVL2 supervisor passes the decision to
the DataFlow Manager (which controls Event Building).
[Same data-flow diagram as on the earlier slide; event data for Level-2 are pulled as partial events at ~100 kHz.]
65
Step-5
ATLAS Event Building
For each accepted event the DataFlow Manager
selects a Sub-Farm Input (SFI) and sends it a
request to take care of the building of a
complete event. The SFI sends requests to all
ROSs for the data of the event to be built.
Completion of building is reported to the
DataFlow Manager. For rejected events, and for
events for which event building has completed, the
DataFlow Manager sends "clears" to the ROSs (for
100 - 300 events together). Network traffic for
event building is ~5 GB/s.
[Same diagram; event data after Level-2 are pulled as full events at ~3 kHz.]
66
Step-6
ATLAS Event Filter
A process (EFD) running in each Event Filter farm
node collects each complete event from the SFI
and assigns it to one of a number of Processing
Tasks in that node. The Event Filter uses more
sophisticated algorithms (near or adapted
off-line) and more detailed calibration data to
select events based on the complete event
data. Accepted events are sent to an SFO (Sub-Farm
Output) node to be written to disk.
67
Step-7
ATLAS Data Output
The SFO nodes receive the final accepted events
and write them to disk. The events include
Stream Tags to support multiple simultaneous
files (e.g. express stream, calibration,
b-physics stream, etc). Files are closed when
they reach 2 GB or at the end of a run. Closed files
are finally transmitted via GbE to the CERN
Tier-0 for off-line analysis.
68
ATLAS Trigger / DAQ Data Flow
[Summary diagram of the full Trigger/DAQ data flow, annotated with the rates: event data are pushed from the RODs at ~100 kHz as 1600 fragments of ~1 kByte each; partial events are pulled at ~100 kHz for Level-2 and full events at ~3 kHz for event building; accepted events are stored at ~200 Hz.]
69
HLT Hardware
Part of DAQ/HLT Pre-Series system, with full
LVL2 Farm Rack at right
70
ATLAS TDAQ Barrack Rack Layout
71
UA1 Trigger
  • Level 1: < 4 µs using hardwired processors
  • muon track segment, em showers, jets, ET
  • rate ~30 Hz (reduction factor 10^3 - 10^4)
  • zero deadtime as decision time < bunch separation
  • Level 2: ~7 ms using 68020 CPUs
  • muon tracking using drift time
  • 3-D calorimetry, position detectors
  • rate ~3 Hz (reduction factor ~10)
  • deadtime ~20% (30 Hz × 0.007 s ≈ 0.2; see the check below)
  • front end frozen during Level 2 decision time
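A quick check of the quoted Level-2 dead-time (plain arithmetic, not from the slides):

    level1_rate_hz = 30.0        # Level-1 accept rate
    level2_decision_s = 0.007    # front-end frozen for ~7 ms per accept
    print(f"dead time ~ {level1_rate_hz * level2_decision_s:.0%}")   # ~21%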

72
UA1 Level 1
73
UA1 Level 2 and 3
74
UA1 Trigger (cont).
  • Level 3: ~100 ms using a 3081E farm
  • partial event reconstruction
  • calorimeter and tracking
  • event topology
  • reduction factor ~3
  • deadtime ~10% (3 Hz × 0.03 s ≈ 0.1)
  • time to read data into processor system (~30 ms)

75
LEP (ALEPH)
  • luminosity ~10^31 cm^-2 s^-1
  • bunch separation 22 µs → 45 kHz (4 bunches); 11
    µs → 90 kHz (8 bunches)
  • event rate ~0.1 Hz
  • channels ~10^6
  • read-out rate 1-3 Hz
  • transfer rate ~10 Mbytes/sec

76
ALEPH trigger
  • Level 1
  • 4 µs decision time + 6 µs clear time (< 11 µs)
  • hardwired processors
  • calorimeter energy sums and ITC tracks
  • accept rate 3-30 Hz (5 Hz typ.)
  • zero deadtime as process time < bunch separation

77
ALEPH trigger (cont.)
  • Level 2
  • 60 µs decision plus clear time
  • hardwired LUT processor for TPC data
  • operates on L1 track triggers only
  • accept rate 2-6 Hz (2 Hz typ.)
  • deadtime: 2 bx × 5 Hz (L1) / 45 kHz ≈ 0.02%;
    5 bx × 5 Hz (L1) / 90 kHz ≈ 0.03%

78
ALEPH trigger (cont.)
  • Level 3
  • readout time ~10 ms
  • processing time ~1 s/processor
  • microVAX farm (part reconstructed data)
  • accept rate 1-3 Hz (design rate 1 Hz)
  • deadtime for readout: 10 ms × 2 Hz (L2) ≈ 2%

79
(No Transcript)
80
(No Transcript)