1
Plaquette testing with Renaissance
The PSI Chip Test-Stand based on PCI
cards, running on Linux platforms
Dario Menasce (INFN Milano), Marcos Turqueti, Lorenzo Uplegger (Fermilab)
2
What is the Renaissance? (apart from the obvious)
  • The Renaissance is a Linux/PCI-based test-stand tool to characterize plaquettes
  • It performs a suite of tests (described later) on some or all chips of a plaquette (it does this in parallel) and saves data in the construction DB (this is not implemented yet)
  • It is fully interactive
  • Tests can be interrupted at any time and restarted later (no need to restart from scratch)
  • Fits to S-curves and some preliminary analysis to assess the quality of the plaquette under test are performed on-line (no need to process intermediate data files off-line)
  • Provision is being put in place to allow the tool to talk directly to a database
    - in read mode, to provide users with a list of plaquettes still to be tested to choose from
    - in write mode, to allow results to be stored away in the construction DB
  • It is based on cheap PCI hardware and Open Source libraries (this allows multiple stations to be set up at low cost/effort, more on this later)

3
Hardware & Software components
  • The system is based on
    - custom-made PCI cards with an on-board programmable FPGA and two memory banks (PTA cards). These cards communicate with the PC.
    - a programmable mezzanine card (PMC, also featuring an FPGA), which allows the system to talk to the PSI chip using the chip's custom protocol.
    - a Token Bit Manager and a custom ADC (Maxim)

Programming the PTA/PMC FPGAs is all that is needed to read out the PSI chip
4
Hardware & Software components
  • The system, originally developed for Windows, has now been ported to Linux.
  • All code is Open Source (no license is needed except for the driver, but we are working on a custom-built solution for this too, as we did for the test-beam DAQ)
  • The source code is in a CVS repository (easy to update as new versions become available over time).
  • All that is needed to set up the software on a station is
    > cvs co renaissance    (fetch from the repository)
    > source setup.csh      (set up the environment)
    > make                  (compile)
    > run
  • Requires the Qt, ROOT and QtRoot libraries plus a PCI driver (currently WinDriver)
  • Support libraries need to be installed only once, but this usually requires an expert.

5
What tests are performed?
We didn't want to reinvent the wheel, so we based
our suite on what people already do using the
excellent COSMO tool (we actually took most of
our inspiration from this), and we specifically
implemented those tests outlined by P. Trüb in
his talk
(http://agenda.cern.ch/askArchive.php?base=agenda&categ=a056463&id=a056463s15t1/transparencies)
In the next few pages I will discuss the suite of
tests that can be performed with the Renaissance,
but let me first highlight a few background
points necessary to understand the environment
in which those tests are performed.
6
A brief preliminary introduction
  • A production test-stand must not only be able to probe and test plaquettes, but has to provide an environment that allows operators to do this in an efficient way, keeping track of what has been done, when, by whom and on which plaquettes.
  • In other words, it must operate as a self-consistent environment that helps (guides) the operator through the delicate, often boring, task of characterization
    - it should provide tools to minimize human errors (e.g. selection of a wrong plaquette ID or of an already tested one, reminding the operator to save intermediate and final results in the DB, etc.)
    - it should be easy to use for repetitive tasks (at most a couple of button presses per test), but should provide backdoors to probe the plaquette in detail in case the need arises (full probing capabilities, à la COSMO)
    - since tests are CPU intensive, it should allow intermediate results to be saved to disk and the test to be restarted from those without redoing parts of the test which have already been done.
    - it should also perform fits online (where required, e.g. S-curves) without resorting to offline processes. This allows for immediate feedback and shortens the test latency time.
    - it should also provide means to store final results in a DB (how to do this is still under a lot of discussion; see the DB considerations later).

7
The test suite: address analog levels
Upon start the Renaissance performs an automatic calibration of the analog levels and produces a set of histograms, one per ROC, for the operator to inspect. This is not really a test, but rather a prerequisite for all subsequent tests (without it, all addressing would be wrong and the tests meaningless). This calibration takes of the order of 2 seconds.
8
Threshold timing
  • This is the first test; it is crucial to determine the optimum value of the charge-injection delay for each given threshold setting. 10 cells are chosen on the diagonal of the ROC and injected 100 times for each pair of threshold/CalDelay values. The optimal value is located along the black line, at the point marked by the blue circle.
  • The black line is a polynomial parametrization of the mean value of the plateau (a sketch of this parametrization follows below).
  • This polynomial is computed for each chip of the plaquette, and the optimal working points are used in all subsequent tests.
  • This test takes about 30 seconds (a small overhead is incurred when testing several chips in parallel).
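As a purely illustrative sketch (not the Renaissance code itself), the parametrization step could look as follows in plain ROOT: the plateau-mean CalDelay measured at a few threshold settings is fitted with a polynomial, and the working point is read off at the chosen threshold. The data points and the chosen threshold below are placeholders.

// Illustrative sketch only: polynomial parametrization of the plateau mean
// (CalDelay vs. threshold); the numerical values are placeholders.
#include "TGraph.h"
#include "TF1.h"
#include <cstdio>

int main() {
  const int n = 6;
  double thr[n]      = { 60, 70, 80, 90, 100, 110 };  // threshold settings scanned
  double calDelay[n] = { 45, 48, 52, 55,  57,  58 };  // mean CalDelay of the plateau

  TGraph g(n, thr, calDelay);
  g.Fit("pol3", "Q");                    // 3rd-order polynomial, quiet mode
  TF1 *poly = g.GetFunction("pol3");

  double chosenThr    = 85;              // hypothetical working threshold
  double optimalDelay = poly->Eval(chosenThr);
  std::printf("CalDelay working point at Vthr=%.0f : %.1f\n", chosenThr, optimalDelay);
  return 0;
}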

9
Threshold timing
10
Optimized threshold region for S-curves
  • At a certain point we will fit s-curves: an s-curve is obtained by varying the charge injection VCal for a fixed threshold. This is done for each cell of each ROC.
    - to obtain a good fit we would like to scan the interesting region with fine granularity
    - but the region is huge (256 bins) and the scan is done on a per-cell basis (time-consuming)
    - we therefore need a pre-estimate of the interesting region for each chip, so as to limit the scan over the whole plaquette to such a range

This is done by scanning 10 cells on the diagonal for any given VThr and VCal (100 injections). We then compute the limits of VCal for that chip at the VThr that was used in the previous test to optimize the VCalDelay. The final range for the whole plaquette is then the overlap of the single-chip ranges, enlarged by 30% (see the sketch below). This test takes about 1 minute.
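A minimal sketch of this range combination (an illustration under the assumptions above, not the actual implementation): the per-chip VCal ranges are intersected and the result is widened by 30%, clamped to the 256-value VCal range.

// Illustrative sketch: intersect per-chip VCal ranges, then enlarge by 30%.
#include <algorithm>
#include <vector>
#include <cstdio>

struct Range { double lo, hi; };

Range plaquetteVcalRange(const std::vector<Range> &chipRanges, double boost = 0.30) {
  Range r = chipRanges.front();
  for (const Range &c : chipRanges) {          // overlap of the single-chip ranges
    r.lo = std::max(r.lo, c.lo);
    r.hi = std::min(r.hi, c.hi);
  }
  double margin = 0.5 * boost * (r.hi - r.lo); // widen symmetrically by 30%
  r.lo = std::max(0.0,   r.lo - margin);
  r.hi = std::min(255.0, r.hi + margin);       // clamp to the 256 VCal bins
  return r;
}

int main() {
  std::vector<Range> chips = { {40, 120}, {50, 130}, {45, 115} };  // placeholder values
  Range r = plaquetteVcalRange(chips);
  std::printf("VCal scan range for the plaquette: [%.0f, %.0f]\n", r.lo, r.hi);
  return 0;
}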
12
Pixel readout
This test takes about 10 seconds
13
Mask bit
This test takes about 10 seconds
14
Bump bonds
Wrong values of VCalDelay are set on purpose, just as a consistency check
This test takes about 10 seconds
15
S-Curve
This test takes of the order of 8 for a 2x5
plaquette (40000 s-curves)
This test is a crucial one: it assesses noise, threshold dispersion and other parameters that provide an overall estimator of the goodness of a ROC (and therefore of a plaquette). Since all those quantities are derived from a fit, it is crucial that the fit strategy is highly robust. It happens, with non-zero frequency, that an s-curve is perfectly acceptable but the fit is unable to converge correctly: the threshold value and noise are then wrongly computed, and this makes the ROC incorrectly appear to be improperly working. This is unacceptable, so we did our best to provide goodness-of-fit estimators to detect those cases. As a general statement, each result derived from a fit must be provided with associated goodness-of-fit estimators (such as the χ2, CL and convergence status returned by Minuit).
Let's see what that means in practice (a sketch of such a fit follows below).
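As an illustration of keeping goodness-of-fit estimators alongside each result, here is a hedged sketch in ROOT (the actual Renaissance fit strategy may differ): an error function is fitted to an efficiency-vs-VCal histogram, and the χ2/DOF, confidence level and Minuit covariance status are stored together with the threshold and noise.

// Illustrative sketch only: s-curve fit with associated goodness-of-fit estimators.
#include "TH1D.h"
#include "TF1.h"
#include "TFitResult.h"
#include "TMath.h"

struct SCurveResult {
  double threshold, noise, chi2PerDof, confLevel;
  int    covStatus;   // Minuit covariance status, 0..3 (see the convergence-status slide)
};

SCurveResult fitSCurve(TH1D *h) {
  // p0 = threshold (50% point), p1 = noise (width of the turn-on)
  TF1 f("scurve", "0.5*(1.+TMath::Erf((x-[0])/(sqrt(2.)*[1])))", 0, 256);
  f.SetParameters(h->GetMean(), 3.);
  TFitResultPtr r = h->Fit(&f, "QNS");   // quiet, do not draw, return the fit result

  SCurveResult out;
  out.threshold  = f.GetParameter(0);
  out.noise      = f.GetParameter(1);
  out.chi2PerDof = (f.GetNDF() > 0) ? f.GetChisquare() / f.GetNDF() : -1.;
  out.confLevel  = TMath::Prob(f.GetChisquare(), f.GetNDF());
  out.covStatus  = r->CovMatrixStatus();
  return out;
}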
16
Example of a wrong fit
Displayed cell
17
Global χ2 distributions (DOF-normalized)
18
Global CL distributions
19
Global fit convergence status
0 - Fit not converged
1 - Approximation only, not accurate
2 - Full matrix, but forced positive-definite
3 - Full accurate covariance matrix
20
Global noise distributions
21
Plaquette global results
22
Chip summary results
23
Threshold dispersion
24
Trimming
25
Internal calibration
26
Gain and pedestal determination
27
Full probing capabilities
28
How is I/O handled? (Input configuration and
output results)
  • The Renaissance is able to perform tests for an entire plaquette by processing all ROCs in parallel. This required us to carefully consider how to manage the I/O.
  • There is a number of preliminary considerations we took into account
    - The program should provide consistency checks to guarantee that the plaquette under test is exactly the one intended by the user. This requires the program to provide the user with a list of plaquettes from which to choose, a list that must indicate which have already been processed and which require further attention (input from the DB, more on this later).
    - It should be possible to interrupt a characterization test at any time and resume later without having to duplicate those tests in the suite that were already performed. The solution we adopted is class persistency (a sketch follows this list).
    - All data used to perform a test, as well as each and every histogram, are made persistent to disk in a format suitable for retrieval (root I/O). The configuration parameter space is saved along as well.
      - the added benefit is that off-line programs can analyze batches of those root files for data analysis
      - this also allows the reproducibility of a test (the parameter space of configuration values is recorded)
      - final characterization values stored in a database have an associated link to this file
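A minimal sketch of this checkpointing idea, assuming plain ROOT objects stand in for the tool's own persistent classes (the file, histogram and configuration names below are illustrative):

// Illustrative sketch: persist a test's histogram and configuration to a root file,
// and read them back to resume without redoing the test.
#include "TFile.h"
#include "TH1D.h"
#include "TNamed.h"

void saveCheckpoint(const char *fileName, TH1D *hist, const char *configText) {
  TFile f(fileName, "UPDATE");                        // keep results of earlier tests
  hist->Write(hist->GetName(), TObject::kOverwrite);  // persist the histogram
  TNamed cfg("configuration", configText);            // configuration parameter space as text
  cfg.Write("configuration", TObject::kOverwrite);
  f.Close();
}

bool loadCheckpoint(const char *fileName, const char *histName, TH1D *&hist) {
  TFile f(fileName, "READ");
  if (f.IsZombie()) return false;                     // nothing saved yet
  hist = dynamic_cast<TH1D*>(f.Get(histName));
  if (hist) hist->SetDirectory(nullptr);              // detach before the file closes
  f.Close();
  return hist != nullptr;
}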

29
Some DB considerations
  • This test-stand does not live in isolation: it needs input from previous wafer tests and must provide results for subsequent tests (e.g. after mounting on a panel)
  • In general it requires (interactive) I/O with a DB.
  • Input
    - what useful information can we gather from the wafer testing already done on the chips?
    - when a user starts testing a plaquette he/she must specify an ID for it; we would like to provide him/her with a list of all plaquettes which still need to be tested or eventually need to be re-tested.

30
Some DB considerations
  • This requires an interactive connection with a DB. Discussion with DB experts is under way to enable us to do this.
  • Output
    - Tables have already been designed in the DB to contain the test results; we are currently debating the best way to feed the results into those tables
    - the customary approach is to write out an XML file and have it processed by another program that feeds the DB.
      - this could be fine for us, but we need to write the XML output and activate the DB feeder procedure automatically when the test ends (a sketch of such an XML writer follows below).
    - another possibility is a direct connection to the DB: when the test is done and the operator is satisfied with the results, by pushing a button a remote connection is established with the DB and the data are sent.
      - while it is easy to perform remote transactions with a DB in C++, it may turn out to be difficult to write the appropriate SQL statements for our DB.
    - We are checking with experts and still have to take a decision.
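For the XML-file option, a minimal sketch using Qt (on which the tool already depends) could look like the following; the element and field names are purely illustrative and not the agreed DB schema:

// Illustrative sketch: write a (hypothetical) plaquette-test results file as XML.
#include <QFile>
#include <QString>
#include <QXmlStreamWriter>

bool writeResultsXml(const QString &path, const QString &plaquetteId,
                     double meanNoise, double thresholdDispersion) {
  QFile file(path);
  if (!file.open(QIODevice::WriteOnly | QIODevice::Text)) return false;

  QXmlStreamWriter xml(&file);
  xml.setAutoFormatting(true);
  xml.writeStartDocument();
  xml.writeStartElement("plaquetteTest");              // element names are placeholders
  xml.writeAttribute("id", plaquetteId);
  xml.writeTextElement("meanNoise", QString::number(meanNoise));
  xml.writeTextElement("thresholdDispersion", QString::number(thresholdDispersion));
  xml.writeEndElement();
  xml.writeEndDocument();
  return true;
}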

31
Things done
  • As of yesterday, V1.0 has been successfully deployed at SiDet and a demo has been given to Alan, Sudhir, Mike and Eric.
  • Except for DB access, everything shown works
  • Known bugs will shortly be fixed
  • Unknown bugs lurk somewhere, waiting to be discovered. We need input from users!
  • The code is in CVS (for historical and practical reasons in the old BTeV repository, but it will be moved to the CMS one once we know where the appropriate location is)

32
To do
  • A lot.
  • Clarify the DB issues. This has an important impact on the test-stand code.
  • Start training people
  • Verify results and make comparisons with COSMO (still our reference for validation)
  • Get much-needed help from experts to assess whether the test suite we propose is all that's needed, and if not, what else is (or what can be done better)
  • Start thinking about scheduled activity: this is an important use-case to improve the workflow of production using the test-stand
  • Anything else we haven't thought of yet (I suspect there are a lot of them)