Title: CHEP 2000, February 7 - February 11, 2000, Padova (Italy), Abstract 379
Provided by: chep200
Learn more at: http://chep2000.pd.infn.it
1
The D0 DAQ and its Operational Control
Gennady Briskin, Brown University
G. Briskin, D. Cutts, A. Karachintsev, S. Mattingly, G. Watts, R. Zeller
  • CHEP 2000, February 7 - February 11, 2000,
    Padova (Italy), Abstract 379.

2
The DØ Trigger System
[Diagram: full readout information flows from the detector through the trigger chain: 7 MHz crossing rate into Level 1, 10 kHz into Level 2, 1 kHz into the Level 3/DAQ, and 50-70 Hz out through the Collector/Router (C/R) and RIP to tape output]
3
Run II DAQ Numbers
  • Readout channels
  • Will be 800,000 in Run 2
  • <250 kBytes>/event on average
  • Data rates
  • 60-80 readout crates
  • Many to one: all crates go to one of 64 L3 farm nodes
  • Initial design capacity
  • 1000 Hz
  • 250 MBytes/sec into the DAQ/L3 farm
  • Staged upgrades to more than 3 GB/sec
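The design capacity is just rate times average event size; a minimal self-checking sketch in C++, using only the figures quoted on the slide:

```cpp
#include <cassert>

// Back-of-the-envelope check of the Run II design numbers quoted above;
// all inputs come straight from the slide, nothing here is measured.
constexpr double aggregate_mb_per_s(double event_rate_hz, double event_kb) {
    return event_rate_hz * event_kb / 1000.0; // kB/s -> MB/s
}

constexpr double per_node_rate_hz(double event_rate_hz, int n_nodes) {
    return event_rate_hz / n_nodes;
}

// 1000 Hz * <250 kB>/event = 250 MB/s into the DAQ/L3 farm, as quoted.
static_assert(aggregate_mb_per_s(1000.0, 250.0) == 250.0);
```

Spread over the 64 farm nodes, that aggregate works out to roughly 15.6 Hz and about 3.9 MB/s per node.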

4
L3/DAQ Hardware Glossary
  • VBD: VME Buffer/Driver (in each digitizing crate)
  • VRC: VBD Readout Concentrator
  • SB: Segment Bridge
  • FCI: Fibre Channel Interface
  • ETI: Event Tag Interface
  • DCI: Data Cable Interface
  • ETG: Event Tag Generator
  • MPM: Multiport Memory (in each Level-3 filter node)
5
L3/DAQ Data Path
[Diagram: data collection path: groups of VBDs feed the VRCs (primary FC path, 1 of 8); the L3 data distribution path carries events to MPMs in the Level 3 farm; Levels 1/2 send trigger data to the ETG, which circulates the event tag to the Segment Bridges]
  • direct flow from VBD to MPM
  • each data block flows freely and independently
  • blocks for different events flow simultaneously
    and asynchronously
  • recirculation loops allow maximum use of data
    path bandwidth
  • Segment Bridges use Event Tag data to perform
    complex real-time routing decisions

6
Front End Token Readout Loop
[Diagram: front end crates sit on token readout loops feeding VRC 1 through VRC 8; Primary Fibre Channel Loops 1-8 carry the data to SB 1 through SB 4; the Trigger Framework feeds the ETG, which drives the Event Tag Loop to the SBs; the SBs forward data over Segment Data Cables and connect via Ethernet to the Collector Router]
7
DAQ Architecture
  • Data flow is unidirectional
  • No complex setup communication is required
  • Each packet is sent only once
  • No retransmission if packet error is detected
  • Routing is driven by the contents of the VBD
    header
  • No communication with front end crates
  • Sender does not know or care where data is going
  • Designed for expansion beyond the Run 2 era
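As a rough illustration of header-driven routing: the slide does not give the actual VBD header layout or the SBs' routing algorithm, so the fields and the modulo rule below are invented; only the principle (destination derived entirely from the header, sender unaware of the receiver) comes from the slide.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical VBD header fields, for illustration only.
struct VbdHeader {
    std::uint32_t event_number; // which event this data block belongs to
    std::uint16_t crate_id;     // source readout crate
    std::uint16_t length_words; // block length
};

// A Segment Bridge style decision: the destination is computed from the
// header alone (event tag handling simplified away), so there is no setup
// communication and each packet is sent exactly once.
int route_to_node(const VbdHeader& h, int n_nodes) {
    return static_cast<int>(h.event_number % n_nodes);
}
```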

8
VME Buffer Driver Module
  • Digitized data enters DAQ system via a VBD module
  • Typically 4-6 Kbytes per VBD/event
  • Components
  • VME interface with list processing DMA
  • Dual SRAM buffers
  • External data cable interface with token
    arbitration logic.
  • Performance
  • VME BLK mode DMA at 25-30 MB/sec
  • Data cable output 48 MB/sec.
  • Token arbitration time <10 usec.

9
VBD Readout Concentrator
  • Bridges VBD readout loops to Primary Fibre
    Channel loop
  • Provides readout token generation and general
    front end loop management, e.g. initialization,
    resets, and diagnostics
  • Primary loop interface via full-duplex Fibre
    channel port
  • transmits data to the first Segment Bridge (SB),
    receives recirculated packets from the last SB

10
VBD Readout Concentrator
[Block diagram: each VRC (VRC 1, VRC 2 shown) terminates high-density data cables through terminator/FB transceivers (64/26) into dual-port and buffer memory managed by an i960, feeding a full-duplex Fibre Channel port (FC out / FC in); the host is a PCI/66 Pentium with 128 MB and Ethernet]
11
The Event Tag Generator
  • Receives L2 Accept Trigger Information
  • Correlates Trigger Framework information with
    preprogrammed parameters for Level-3 node
    configuration, and then classifies event as
  • generic: any segment, any node
  • targeted: specific segment/node
  • special function: diagnostic, broadcast, etc.
  • Creates corresponding event tag that instructs
    SBs on how to handle data from that event.
  • Places event tag in primary tag queue and
    transmits it when tag queue is available.
  • Queue fill status is used to provide rate
    limiting feedback to Level 1 triggers
  • L1 disable lines
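The tag-queue feedback can be sketched as below; the high-water threshold and queue contents are assumptions, and only the idea (queue fill status drives the L1 disable lines) comes from the slide.

```cpp
#include <cstddef>
#include <cstdint>
#include <queue>

// Sketch of the ETG's primary tag queue with rate-limiting feedback.
class TagQueue {
public:
    explicit TagQueue(std::size_t high_water) : high_water_(high_water) {}

    void push(std::uint32_t tag) { q_.push(tag); }      // event tag created
    void pop() { if (!q_.empty()) q_.pop(); }           // tag transmitted

    // Fill status provides the rate-limiting feedback to Level 1:
    // when the queue backs up, assert the L1 disable lines.
    bool l1_disable() const { return q_.size() >= high_water_; }

private:
    std::queue<std::uint32_t> q_;
    std::size_t high_water_; // illustrative threshold, not from the slide
};
```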

12
ETG System
  • Links L3/DAQ with the L1/L2 trigger systems
  • Uses trigger information for a given event to
    create an event tag which enables complex data
    block routing in real time

13
Segment Bridge
  • The SB routes data from the VRCs to L3 node(s)
  • Bridge Primary Fibre Channel loop to Level-3 data
    cables
  • Each Level-3 Segment has a Segment Bridge
  • ETG circulates event tag information to each SB.
  • ETG/SB is capable of complex node assignments
  • Node assignment based on event type (calibration,
    etc)
  • Shadow Nodes
  • A second node receives the same event; this
    allows testing of new L3 executables
  • Allows partitioning into several simultaneous
    data taking runs
  • SBs are serially linked together
  • One can expand the number of farm segments

14
Event-Node Assignment
[Diagram: the ETG receives the L2 trigger bits and applies trigger-bit to event-type lookup tables; each SB holds the node readies and a node to event-type lookup table, and turns the event tag's event type (Regular, CAL_CALIB, Regular Shadow 2, COSMIC, ...) into a node assignment]
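The two lookup tables can be sketched minimally as follows; the event-type names follow the slide, while the table contents, the default type, and the first-ready-match policy are assumptions.

```cpp
#include <map>
#include <optional>
#include <string>
#include <utility>
#include <vector>

using EventType = std::string;

// ETG side: L2 trigger bit -> event type, via a preprogrammed table.
EventType classify(int l2_trigger_bit,
                   const std::map<int, EventType>& bit_table) {
    auto it = bit_table.find(l2_trigger_bit);
    // Fall back to a generic event if the bit is not specially configured
    // (the fallback policy is an assumption, not from the slide).
    return it != bit_table.end() ? it->second : "Regular";
}

// SB side: pick the first ready node configured for this event type,
// or decline the event if no suitable node is available.
std::optional<int> assign_node(
    const EventType& type,
    const std::vector<std::pair<int, EventType>>& ready_nodes) {
    for (const auto& [node, node_type] : ready_nodes)
        if (node_type == type) return node;
    return std::nullopt;
}
```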
15
Segment Bridge
  • Receives Event Tag from ETG
  • accepts/declines the event based on
    characteristics of available L3 nodes

16
L3 Farm Nodes
[Diagram: the L3 Node Framework runs on a VME crate with MPMs (48 MB/s each), a Node-VME I/O module, a DMA-capable VME-PCI bridge, shared memory buffers, a control/monitoring/error module, an L3 filter interface module, and a collector router module on 100-BaseT Ethernet]
  • Input rate starts at 1000 Hz
  • Output rate is 50-70 Hz
  • Intel Based SMP System
  • Windows NT
  • As many L3 Filters as possible
  • Prototype hardware in hand

Each node has a dedicated 100 Mbits/s Ethernet link to the Online Collector/Router.
17
L3 Node FrameWork
MPM Reader (takes free buffers from the pool queue, fills the event buffer shared memory):
  • Gets a pointer to an event buffer
  • Configures MPMs for receiving a new event
  • Waits till the complete event arrives in the MPMs
  • Loads the event data into the shared memory buffer
  • Inserts the event pointer into the next queue
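The reader steps can be sketched as a pointer-passing loop; the buffer type, queue types, and the omission of the MPM configure/wait steps are simplifications of the framework described above.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Event buffers live in shared memory; stages exchange pointers through
// queues and the data itself is never copied between stages.
struct EventBuffer {
    std::vector<std::uint8_t> data;
};

struct NodeFramework {
    std::deque<EventBuffer*> pool_queue;       // free buffers
    std::deque<EventBuffer*> validation_queue; // filled, awaiting validation

    // One iteration of the MPM reader loop. MPM configuration and the wait
    // for a complete event are omitted; the payload stands in for MPM data.
    bool read_one_event(const std::vector<std::uint8_t>& mpm_payload) {
        if (pool_queue.empty()) return false;  // no free buffer available
        EventBuffer* buf = pool_queue.front(); // get a buffer pointer
        pool_queue.pop_front();
        buf->data = mpm_payload;               // load event data into buffer
        validation_queue.push_back(buf);       // pass pointer to next queue
        return true;
    }
};
```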

Event Validation (pulls events from the validation queue; validated events pass through the L3 Filter Input Interface and process interface to the L3 Filter Process):
  • Front end crate (FEC) presence validation
  • Checksum validation
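The slide does not say which checksum the data blocks carry; a plain 32-bit additive checksum stands in for it in this sketch of the validation step.

```cpp
#include <cstdint>
#include <vector>

// Stand-in checksum: 32-bit additive sum over the data words
// (the actual algorithm used by the DAQ is not specified in the talk).
std::uint32_t checksum(const std::vector<std::uint32_t>& words) {
    std::uint32_t sum = 0;
    for (std::uint32_t w : words) sum += w; // wraps mod 2^32
    return sum;
}

// Validation: recompute over the payload and compare against the value
// carried with the block.
bool validate_block(const std::vector<std::uint32_t>& payload,
                    std::uint32_t expected) {
    return checksum(payload) == expected;
}
```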

Output stage (filter queue, process interface, output events queue, output pool queue; a command/monitor/error shared memory serves the L3 Monitor and L3 Error interfaces; events leave through the Collector/Router network interface):
  • Determines where this event should be sent
  • Sends the event to a collector/router node

Data to Online Host System
18
The L3 DAQ Control System Goals
  • High speed configuration changes
  • Run start/stop
  • Physics Run/Calibration Run Modes
  • Diagnose problems quickly

19
Environment
  • All nodes will use commodity hardware and a Windows
    OS
  • C++
  • MS Visual Studio 6.0
  • Works well; the IDE is a blessing.
  • Visual Source Safe for source control
  • Very tightly integrated into the MS Visual Studio
    IDE
  • Communication
  • ACE as the cross platform wrapper around TCP/IP
    stack
  • itc package to manage messages and event
    transport with D0 Online system (Fermi CD)
  • DCOM for internal control of DAQ components
  • XML message formatting for internal
    control/configuration of DAQ components

20
The Level 3 DAQ/Trigger system
[Diagram: the Online System and Collector Router sit above the L3 Supervisor and L3 Monitor, which control the SCs, the SB, and the ETG]
21
DAQ Components States
  • Every DAQ component can be in one of 6 states

[State diagram: Boot, Idle, Start Run, Pause Run, Finish Run, Shut-down]
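The six states can be captured as an enum; the slide only names the states, so the transition rules below are guesses at a plausible run-control cycle rather than the actual diagram.

```cpp
// The six states named on the slide.
enum class State { Boot, Idle, StartRun, PauseRun, FinishRun, Shutdown };

// Hypothetical transition rules for a DAQ component (assumed, not from
// the slide): boot into Idle, cycle through run states, shut down from Idle.
bool can_transition(State from, State to) {
    switch (from) {
        case State::Boot:      return to == State::Idle;
        case State::Idle:      return to == State::StartRun ||
                                      to == State::Shutdown;
        case State::StartRun:  return to == State::PauseRun ||
                                      to == State::FinishRun;
        case State::PauseRun:  return to == State::StartRun ||
                                      to == State::FinishRun;
        case State::FinishRun: return to == State::Idle;
        case State::Shutdown:  return false; // terminal
    }
    return false;
}
```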
22
Auto Start System
[Diagram: an Auto Start Server (COM object), backed by a configuration database, a package database, and a web server, lets operators change packages, get status, reboot, etc.; on each client machine an Auto Start Service gets the package list, installs packages, and runs them]
23
The Level 3 DAQ/Supervisor System
  • Translate D0 online system commands into L3
    specific commands
  • Begin Run
  • End Run
  • Run Script ele_high on L2 Trigger bit 2 on 6
    nodes of type REGULAR.
  • Report configuration errors (synchronous)
  • Must track a single online system command as it
    is distributed to DAQ components
  • Maintain DAQ System state
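Slide 19 notes that internal control/configuration messages are XML-formatted. Purely as an illustration of that choice (the actual schema is not shown in the talk, so every element and attribute name here is hypothetical), the "Run Script" command above might travel as something like:

```xml
<!-- Hypothetical shape only: the real message schema is not shown. -->
<l3command id="42" source="COOR">
  <runscript name="ele_high" l2bit="2">
    <nodes count="6" type="REGULAR"/>
  </runscript>
</l3command>
```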

24
L3 Supervisor Interface
[Diagram: COOR in the Online System sends commands and configuration requests to the Supervisor's COOR command interface; the Resource Allocator works from the desired configuration database and the current configuration DB; a Command Generator/Sequencer issues direct commands to the DAQ components]
25
Monitor System
26
VRC Interface
[Diagram: the VRC receives the FCI from the last SB; control and monitoring via DCOM]
27
ETG Interface
[Diagram: ETG control and monitoring via DCOM]
28
Segment Bridge Interface
[Diagram: the SB node takes the FCI from a VRC or the previous SB at 100 MB/s and passes the FCI on to the next SB at 100 MB/s; SB programs on the node control the embedded systems; MPM data cables carry the data out to the farm; the L3 Supervisor and L3 Monitor control the node via DCOM; the node has local disk]
29
Conclusion
  • Natural upgrade of the Run I DAQ
  • Allowed us to put together the framework for the
    L3 farm node with modest amounts of hardware
    effort
  • New DAQ hardware permits a >10x increase in
    bandwidth
  • DAQ control system is designed to be modular
  • makes use of industry standard software

30
The valiant warriors of D0 are strong thanks to a
magic potion prepared by the druid Getafix, and the
recipe is secret (all we know is that it has
lobster and beer)