1
ATLAS calibration and alignment strategy
Richard Hawkings
ATLAS plenary meeting, 18/2/05
  • Steps in ATLAS calibration / alignment
  • Subdetector calibration questionnaire and
    responses
  • Calibration in ROD, HLT, Tier-0 and offline
  • Use of dedicated calibration streams
  • Latency and remote calibration issues
  • Offline calibration
  • Conclusions and open issues
  • Many thanks to the subdetectors for useful
    discussions, and to Fabiola Gianotti for
    collaboration on the questionnaire and
    associated write-up

2
Calibration steps and strategies
  • Overview of calibration (alignment) evolution
  • Calibration preparation before detectors are
    installed
  • Work done in institutes and at CERN – results
    mainly in subdetector databases
  • Calibration during commissioning
  • In-situ electronics calibration, cosmics, single
    beam running
  • Defines the calibration for day 1 – using the
    ATLAS conditions database by this point
  • Calibration from physics data
  • Calibration determined in RODs and HLT
  • Dedicated calibration step before prompt
    reconstruction
  • Continuing offline calibration procedures to
    refine constants for subsequent data reprocessing
    – this process will continue for months and years
    after the data are taken
  • Calibration consumers
  • Online/trigger system (need fast and reliable
    calibration), Tier-0 prompt reconstruction, later
    reconstruction passes (primarily at Tier-1
    centres)
  • What are the subdetectors' plans?
  • Little systematic knowledge at ATLAS level
  • Are they compatible/consistent?
  • Need to know for computing model planning
    (online/offline resources), the DC3
    calibration/alignment closed-loop test and
    ongoing commissioning activities

3
Questionnaire and responses
  • Subdetector calibration/alignment questionnaire
    exercise in Dec 04/Jan 05
  • Main questions
  • Where in the processing chain will calibrations
    be performed?
  • What are the associated CPU power requirements?
  • What dedicated calibration streams are required
    as part of event filter output?
  • Need for a prompt calibration step between
    event filter and prompt reconstruction?
  • Need to stream calibration events from event
    filter to remote institutions?
  • What are the offline processing requirements (for
    final calibrations) – samples needed and access
    to RAW and ESD data?
  • How will requirements evolve between start-up and
    steady-state running?
  • All subdetectors have now responded
  • Level of detail and advancement in
    understanding/planning varies a lot
  • Active discussions going on in many subdetector
    communities
  • Summary of responses in the main areas – for more
    details see the note at
  • http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/project/calib/doc/calib_req.pdf

4
Calibration at ROD level
  • ROD-level calibration performed outside physics
    and during physics runs
  • Data generally processed inside RODs (no event
    building), summary information (histograms,
    generated calibrations) passed to offline and
    database
  • Larger data readout requests for initial
    debugging
  • Several requests for between-run/daily
    calibration tasks of ≤ 1 hour duration
  • SCT/pixel electronics calibration, LAr ramp runs,
    tile laser/charge injection/pedestal, RPC and CSC
    pulsers
  • Some longer tasks of several hours to 1 day
    duration, expected to be performed monthly
  • LAr delay runs, Tilecal cesium source calibration
  • Level 1 trigger calibration (cooperation of e.g.
    calorimeter and level 1), especially during the
    initial startup with beam
  • ROD-level calibration/monitoring during physics
  • Many detectors: dead/noisy channels, pedestals,
    t0 monitoring, efficiencies (a toy per-channel
    summary is sketched below)
  • CPU requirements generally modest (subdetector
    workstations?), anticipate use of some event
    filter resources for partial event building
    outside physics
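
A minimal sketch of the kind of per-channel summary such a ROD-level
monitoring task could produce: pedestal mean/RMS accumulated from raw
samples, plus a dead/noisy flag. The function name and all cut values
are illustrative assumptions, not ATLAS parameters (Python):

    import statistics

    def summarise_channels(samples_by_channel,
                           noise_rms_cut=5.0, dead_rms_cut=0.01):
        """Reduce raw per-channel samples to the small summary a ROD
        would ship offline: pedestal mean, RMS, dead/noisy flag.
        Cut values are illustrative placeholders."""
        summary = {}
        for ch, samples in samples_by_channel.items():
            mean = statistics.fmean(samples)
            rms = statistics.pstdev(samples)
            if rms > noise_rms_cut:
                flag = "noisy"
            elif rms < dead_rms_cut:
                flag = "dead"   # no fluctuation at all: likely unresponsive
            else:
                flag = "ok"
            summary[ch] = {"pedestal": mean, "rms": rms, "flag": flag}
        return summary

    # Example: one healthy channel and one stuck channel
    print(summarise_channels({
        101: [40.1, 39.8, 40.3, 40.0],
        102: [55.0, 55.0, 55.0, 55.0],   # flagged "dead"
    }))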

5
Calibration in the HLT
  • Calibration tasks running in HLT
  • Output of data to dedicated calibration streams,
    little CPU-intensive work (a toy stream-routing
    sketch follows this list)
  • ID has a dedicated stream of high-pT tracks for
    alignment and TRT calibration
  • Processed track, hit and residual data – a
    specialised data format for alignment
  • LAr wants to perform (part of) the Z→ee
    calibration in the event filter
  • Muon system outputs dedicated stream of O(1 kHz)
    precision/trigger hits for t0, autocalibration
    and alignment
  • Hit information in a very restricted road around
    the initial muon track fit
  • Most promising avenue is to extract information
    from LVL2 muon trigger processing
  • Ongoing work to check the implications (CPU,
    event collection) – not originally foreseen
  • Level-1 trigger will use calorimeter pulsers to
    check calo/trigger gains/responses
  • Lots of monitoring will happen at this stage –
    not explicitly asked about in the questionnaire
  • Good place to perform generic data-quality
    monitoring on HLT-accepted events
  • Need to understand the CPU requirements on the
    event filter farm – monitoring is a secondary
    task of the HLT system
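
A toy illustration of stream routing at the event filter: an accepted
event may feed one or more calibration streams (in full or partial
form) on top of its physics stream. The function and all selection
cuts below are hypothetical placeholders, not ATLAS trigger menus
(Python):

    # Toy event-filter stream router; cuts are placeholders.
    def calibration_streams(event):
        streams = []
        if event.get("track_pt", 0.0) > 20.0:       # GeV, assumed cut
            streams.append("id_alignment")          # compact track format
        if event.get("electron_pt", 0.0) > 20.0:
            streams.append("lar_5sample_roi")       # partial readout in ROI
        if event.get("muon_lvl1", False):
            streams.append("muon_lvl2_hits")
        return streams

    evt = {"track_pt": 35.0, "electron_pt": 28.0, "muon_lvl1": False}
    print(calibration_streams(evt))   # ['id_alignment', 'lar_5sample_roi']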

6
Calibration streams from event filter
  • Identifying a detailed list of calibration
    streams requested from event filter
  • Streams with partial readout of single/multiple
    detectors, restricted ROI
  • ID: generic high-pT tracks for alignment (10-100
    Hz, specialised format)
  • LAr: readout of 5-sample calorimeter RAW data in
    an ROI around high-pT electrons
  • 50 Hz, possibly phased out after a few months of
    running
  • High-pT muons identified at LVL1, processed
    through LVL2 (1 kHz, specialised event format)
  • High-pT muons in large/small chamber overlap
    regions for alignment (5 Hz)
  • Isolated high-pT hadrons (from a single-prong τ
    trigger?) for calorimeters and TRT
  • Streams with full event readout – duplicates of
    events in the physics stream
  • Inclusive e/γ with e.g. pT > 20 GeV, dileptons
    and prescaled minimum bias
  • Duplicate and separate these events for fast,
    efficient access by detector calibration experts
    – especially important during first data-taking
  • Streams sum to 40-50 MB/s, i.e. ~15% of the
    data-taking bandwidth (the arithmetic is sketched
    after this list)
  • Many events are identified late in EF processing
    and have to be collected from all event filter
    output nodes – data collection and bookkeeping
    issues
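
The 40-50 MB/s figure is rate × event-size arithmetic. A sketch: the
stream rates are from this slide, but the per-event sizes and the
nominal ~320 MB/s event filter output (200 Hz × 1.6 MB) are assumed
round numbers, not ATLAS figures (Python):

    # Rough bandwidth check: rate (Hz) x event size (MB).
    # Rates from the slide; sizes are assumptions.
    streams = {
        "ID alignment tracks":   (100.0,  0.01),  # compact format
        "LAr e/gamma 5-sample":  (50.0,   0.05),  # partial readout in ROI
        "LVL2 muon hits":        (1000.0, 0.01),  # specialised format
        "muon overlap tracks":   (5.0,    0.01),
        "full-event duplicates": (20.0,   1.6),   # leptons, min bias
    }

    total = sum(rate * size for rate, size in streams.values())
    nominal_output = 200.0 * 1.6   # Hz x MB/event = 320 MB/s, assumed
    print(f"calibration streams: {total:.0f} MB/s = "
          f"{100 * total / nominal_output:.0f}% of nominal output")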

7
Processing requirements at Tier 0
  • The ATLAS computing model allocates 500 kSI2k
    units (≈100 dual 8 GHz CPUs) to calibration
    activities at Tier-0 (13% of total capacity in
    2008)
  • Identified subdetector requests for Tier-0 CPU
    capacity to process special calibration streams
    in preparation for prompt reconstruction
  • SCT and pixel alignment: 50 kSI2k
  • TRT alignment and calibration: 20 kSI2k
  • MDT t0 and autocalibration: 130 kSI2k
  • TGC alignment, calibration and efficiency
    determination: 50 kSI2k
  • RPC level 1 trigger calibration: 10 kSI2k
  • Estimates are very preliminary – nothing from the
    calorimeters yet
  • Identified requests total 260 kSI2k units (but no
    calorimeters yet), cf. the computing model
    allocation of 500 kSI2k (summed in the sketch
    after this list)
  • A reasonable match – most Tier-0 calibration
    resources will be devoted to prompt calibration,
    to allow prompt reconstruction to proceed
  • Also need to assess disk space requirements (240
    TB allocated in computing model)
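
The 260 kSI2k total is simply the sum of the identified requests. A
trivial check against the allocation, using only numbers from this
slide (Python):

    # Identified Tier-0 calibration CPU requests (kSI2k), from the slide.
    requests = {
        "SCT/pixel alignment": 50,
        "TRT alignment and calibration": 20,
        "MDT t0 and autocalibration": 130,
        "TGC alignment/calibration/efficiency": 50,
        "RPC level 1 trigger calibration": 10,
    }
    allocation = 500  # kSI2k in the computing model

    total = sum(requests.values())
    print(f"requested {total} of {allocation} kSI2k "
          f"({100 * total / allocation:.0f}%), calorimeters still to come")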

8
Latency and remote calibration
  • Latency: how long between the end of a fill and
    the start of prompt reconstruction?
  • The computing model document proposed 24 hours:
    time to process the calibration stream, generate
    calibrations and do some verification (including
    a manual check)
  • NB: a separate express line is foreseen for
    faster processing of discovery-type events,
    without waiting for the calibration iteration
  • All subdetectors feel 24 hours is enough
    (provided the processing power is available) – a
    back-of-envelope check is sketched after this
    list
  • Also natural timescale for combining asynchronous
    calibration (e.g. optical alignment)
  • Longer latency can be expected at initial startup
  • Less-automated procedures, process samples over
    and over again
  • Will want to process bulk physics sample multiple
    times
  • Remote calibration
  • No specific requests for streaming calibration
    data to remote institutes
  • But recognition that this might be useful,
    especially if Tier-0 resources or computing
    infrastructure to access CERN are insufficient
  • Plan to do prompt calibration at CERN, but keep
    possibility open (also for monitoring)?
  • Offline calibration (after prompt reconstruction)
    will be much more geographically distributed (as
    for analysis tasks) involving the whole
    collaboration
  • Data distribution and network implications yet to
    be assessed in detail
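
A back-of-envelope check that a 24-hour prompt-calibration window is
plausible. The fill length, stream rate and per-event processing cost
below are assumptions; only the 24 h target and the 500 kSI2k Tier-0
allocation come from these slides (Python):

    # Does the prompt-calibration pass fit in the 24 h window?
    fill_hours = 10.0              # assumed fill length
    stream_rate_hz = 1000.0        # assumed calibration-stream rate
    cpu_per_event_ksi2k_s = 0.5    # assumed processing cost per event
    farm_ksi2k = 500.0             # Tier-0 calibration allocation

    events = fill_hours * 3600 * stream_rate_hz
    wall_hours = events * cpu_per_event_ksi2k_s / farm_ksi2k / 3600
    print(f"{events:.1e} events -> {wall_hours:.1f} h of processing "
          f"(budget: 24 h, minus verification and manual checks)")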

9
Offline calibration
  • Prompt calibration aims at providing constants
    for first-pass reconstruction
  • Subsequent offline calibration steps are needed
    to refine calibration/alignment to extract
    ultimate performance of ATLAS
  • Need more statistics, studies over long time
    periods
  • Generally uses well-known physics channels, e.g.:
  • Inclusive electrons and muons (pT > 20 GeV?)
  • Z, J/ψ, Υ decays to lepton pairs (+ radiative
    photons) and W→ℓν
  • γ/Z + jet and multijet, tt events
  • Need to understand data access patterns,
    especially for RAW data
  • Now starting to see the first definitions of ESD
    and AOD data – what can be done?
  • Can ESD contents be improved to reduce need to
    access RAW?
  • Samples can be accessed from the calibration,
    express and physics streams – what will be most
    efficient for long-term processing?
  • Most offline calibration activities will be based
    around Tier 1/ Tier 2 centres
  • Need to bring calibration constants back to the
    central conditions DB at CERN (a toy
    interval-of-validity store is sketched after this
    list)
  • In general, expect 2-3 months between prompt and
    first re-processing
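
Calibration constants are attached to intervals of validity (runs or
times) in the conditions DB, so each reprocessing pass picks up the
best constants for each run. A minimal sketch of such a store; the
structure is illustrative, not the actual ATLAS conditions-DB schema
(Python):

    from dataclasses import dataclass
    from bisect import bisect_right

    @dataclass
    class CondRecord:
        since_run: int    # first run this payload is valid for
        payload: dict     # e.g. per-channel t0 or alignment constants

    class ConditionsFolder:
        """Toy interval-of-validity store: a record is valid from its
        since_run until the next record's since_run."""
        def __init__(self):
            self._records = []   # kept sorted by since_run

        def store(self, since_run, payload):
            self._records.append(CondRecord(since_run, payload))
            self._records.sort(key=lambda r: r.since_run)

        def lookup(self, run):
            keys = [r.since_run for r in self._records]
            i = bisect_right(keys, run) - 1
            if i < 0:
                raise KeyError(f"no conditions for run {run}")
            return self._records[i].payload

    folder = ConditionsFolder()
    folder.store(1000, {"t0_ns": 2.1})   # prompt calibration
    folder.store(1500, {"t0_ns": 1.8})   # refined offline pass
    print(folder.lookup(1200))           # -> {'t0_ns': 2.1}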

10
Conclusions and open issues
  • Very useful first exercise to understand
    subdetector calibration requirements
  • Resource needs: some CPU for ROD processing, and
    substantial Tier-0 CPU
  • Little requirement for calibration CPU in HLT,
    but remember monitoring!
  • Identified issues to be followed up
  • Calibration streams require partial event
    building (by detector, by ROI) dependent on event
    type – implications for TDAQ dataflow?
  • How do we collect and catalogue small calibration
    streams?
  • Writing of substantial data quantities directly
    from RODs, perhaps with several detectors
    operating together (e.g. LVL1 + calorimeters)
  • Requests to write more data at startup (e.g. LAr
    5 samples) – bandwidth vs. trigger rate
    considerations
  • Calibration stream selections need detailed
    studies
  • Thresholds, rates and purities with the as-built
    detector calibration
  • Single isolated hadron sample needs particular
    study
  • Fast, efficient calibration will be key at ATLAS
    startup
  • Start to exercise calibration plans in
    commissioning and DC3