Identification, Measurement and Decision in Analytical Chemistry

Transcript
1
Identification, Measurement and Decision in
Analytical Chemistry
  • Steve Ellison, LGC, England

2
Introduction
  • What is Identification?
  • Why does it matter?
  • Where does measurement fit in?
  • Quality in identification
  • How sure are you?
  • characterising uncertainty and method performance

3
What is identification?
  • Classification according to specific criteria
  • Above or Below a limit
  • Within Spec.
  • Red
  • Classification into ranges (<2, 2-5, 5-10, >10)
  • Molecular species by NMR, IR, MS..
  • Material or ingredient (Rubber, Fat)
  • Origin or authenticity

4
Why does it matter?
  • Classification contributes to decisions
  • Decisions cost money
  • Incorrect batch rejection incurs reprocessing
    costs
  • Incorrect acceptance risks litigation and loses
    business
  • False positives may generate spurious
    prosecutions
  • Costs are directly related to false
    classification probabilities
  • Know probabilities - optimise cost
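The cost argument on this slide can be sketched numerically. A minimal illustration, with entirely hypothetical per-event costs and false classification probabilities (none of these numbers come from the presentation):

```python
# Hypothetical sketch: expected misclassification cost per item tested.
# Probabilities and per-event costs are illustrative assumptions only.
def expected_cost(p_false_reject, p_false_accept, cost_reject, cost_accept):
    """Expected cost per tested item arising from misclassification alone."""
    return p_false_reject * cost_reject + p_false_accept * cost_accept

# e.g. 2% false rejections at 500 reprocessing cost per batch,
# 0.5% false acceptances at 20000 litigation/lost-business cost
cost = expected_cost(0.02, 0.005, 500.0, 20000.0)
print(cost)  # 110.0
```

Knowing the false classification probabilities lets two test regimes be compared on expected cost, which is the sense in which "know probabilities - optimise cost".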

5
Where does measurement fit in?
  • Measurement contributes to most identifications
  • Comparison with limits
  • Consistency of values (wavelength, mass, sequence
    length)
  • But not all
  • Relative Pattern identification (?)
  • Colour matching by eye
  • Identity parades.

6
Interpretation Against Limits
[Figure: measurement results (a)-(g) shown in various positions relative to a limit]
7
Controlling Identification
  • Good practice guidance
  • Stated criteria
  • Trained staff
  • Controlled and calibrated instruments
  • Traceability!
  • Validated methods
  • etc.

8
How sure are you?
  • Does Measurement Uncertainty apply?
  • If not, what does?

9
Does Measurement Uncertainty Apply?
  • NO
  • at least, not for the classification result

10
Uncertainty and classification
Uncertainty in the measurement result contributes
to uncertainty about classification
11
Uncertainty and classification
  • Uncertainty in the measurement result contributes
    to uncertainty about classification
  • Uncertainties in test conditions lead to
    uncertainty in classification
  • Uncertainties should be controlled to have little
    effect on the test result
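How uncertainty in the measurement result turns into uncertainty about the classification can be illustrated with an assumed normal distribution for the measurand; the result, uncertainty, and limit below are illustrative values, not from the presentation:

```python
# Sketch: a result close to a limit, with standard uncertainty u, leaves a
# real probability that the true value is on the other side of the limit.
# Normal distribution assumed; all numbers are illustrative.
from statistics import NormalDist

result, u, limit = 9.7, 0.5, 10.0   # result, standard uncertainty, upper limit
p_above = 1 - NormalDist(result, u).cdf(limit)
print(round(p_above, 3))  # 0.274
```

Even though the result passes (9.7 < 10.0), there is roughly a 27% chance the true value exceeds the limit, which is why input uncertainties must be controlled to have little effect on the classification.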

12
Characterising uncertainty in identification
  • False response rates
  • What is a false response rate?
  • How is it determined?
  • Alternative expressions of method performance or
    uncertainty

13
False response rates
[2x2 table: actual (+/-) vs observed (+/-) classification; the off-diagonal cells are the false positives and false negatives]
17
False negative rates
  • Fraction of observed negatives which are false
  • Fraction of true positives reading negative (AOAC definition, clinical)
  • Fraction of all results which are incorrectly read as negative (the one that affects costs directly)
18
False response rates: Example

Nortestosterone-in-urine screening method validation data; actual status from the confirmatory test:

             Actual +   Actual -
Observed +      14          1
Observed -       2         15
19
False negative rates: Example
  • Fraction of observed negatives which are false: 2/17 (11%)
  • Fraction of true positives reading negative: 2/16 (13%)
  • Fraction of all results which are incorrectly read as negative: 2/32 (6%)

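A short sketch of the three definitions as calculations on the 2x2 table. The counts below (TP=14, FP=1, FN=2, TN=15) are reconstructed from the quoted fractions 2/17, 2/16 and 2/32, not stated directly in the presentation:

```python
# The three "false negative rate" definitions, computed from a 2x2 table.
# Counts are reconstructed from the quoted fractions in the
# nortestosterone example: TP=14, FP=1, FN=2, TN=15.
tp, fp, fn, tn = 14, 1, 2, 15

fnr_of_observed_negatives = fn / (fn + tn)            # 2/17: of observed negatives
fnr_of_true_positives = fn / (fn + tp)                # 2/16: AOAC (clinical) definition
fnr_of_all_results = fn / (tp + fp + fn + tn)         # 2/32: of all results

print(f"{fnr_of_observed_negatives:.1%}, {fnr_of_true_positives:.1%}, "
      f"{fnr_of_all_results:.1%}")
```

The same count of false negatives (2) yields three noticeably different "rates" depending on the denominator, which is the point of the slide.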
20
False response rates: Example

[2x2 table: observed vs actual results for the EMIT screening test for cocaine in urine; Ferrara et al., J. Anal. Toxicol., 1994, 18, 278]
21
False negative rates: Example
  • Fraction of observed negatives which are false: 7/429 (1.6%)
  • Fraction of true positives reading negative: 7/126 (5.6%)
  • Fraction of all results which are incorrectly read as negative: 7/522 (1.3%)

22
False response rates - how much data?
  • Observed: 7/126 (5.6%)
  • 95% confidence interval (binomial): 1.6% to 9.5%
  • 95% CI width proportional to 1/√n_obs
  • needs a LOT of false responses for precise figures
  • but false responses are rare for good methods
  • Most useful direct studies are worst-case, or near 50% false response levels

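The quoted interval is consistent with a simple normal (Wald) approximation to the binomial; a sketch:

```python
# Normal (Wald) approximation to the 95% binomial confidence interval for
# an observed false response rate of 7/126 (~5.6%). The half-width scales
# as 1/sqrt(n), so precise rates need many observed false responses.
from math import sqrt

x, n = 7, 126
p = x / n
half_width = 1.96 * sqrt(p * (1 - p) / n)
lower, upper = p - half_width, p + half_width
print(f"{lower:.1%} to {upper:.1%}")  # 1.6% to 9.6%
```

This reproduces the quoted 1.6% to 9.5% interval to within rounding; exact (Clopper-Pearson) intervals are somewhat wider for rare events, which only strengthens the slide's point that precise figures need a lot of data.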
23
False responses: Estimation from thresholds
24
False responses: From probabilities
  • Spectroscopic identification study
  • S.L.R. Ellison, S.L. Gregory, Anal. Chim. Acta, 1998, 370, 181
  • Calculated chance FT-IR match probabilities
  • probabilities based on match-binning (hits within a set distance)
  • required the hypergeometric distribution (n matches of m taken from a population)
  • Compared with actual hits on IR database
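The match-binning calculation can be sketched with the hypergeometric distribution. All the bin and peak counts below are illustrative assumptions, not the figures from the cited study:

```python
# Chance-match probability under match-binning: the probability that k of a
# query spectrum's peaks fall, at random, into bins occupied by a reference
# spectrum. Bin and peak counts here are illustrative assumptions.
from math import comb

def hypergeom_pmf(k, M, n, N):
    """P(exactly k matches) drawing N query peaks from M bins, n of them occupied."""
    return comb(n, k) * comb(M - n, N - k) / comb(M, N)

# P(at least 8 of 10 query peaks match by chance), with 200 wavenumber bins,
# 12 of them occupied by the reference spectrum
p_chance = sum(hypergeom_pmf(k, 200, 12, 10) for k in range(8, 11))
print(p_chance)
```

Because the result is so sensitive to the assumed bin occupancy probabilities, small changes in the inputs move the predicted chance-match rate by orders of magnitude, as the next slide notes.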

25
False responses: From probabilities
  • Theoretical predictions very sensitive to probability assumptions
  • 10% changes in p make large differences in predictions
  • Best performance within a factor of 3-10
  • (improved over binomial probabilities by >10^6)
  • Probability information must be excellent for good predictions

26
False response rates from databases
  • Most spectral databases contain one of each
    material
  • most populations do not!
  • Population data must account for sub-populations
  • cf. DNA profiles for racially inhomogeneous
    populations

27
Alternative performance indicators
28
Conclusions
  • Classification needs control to save money
  • INPUT uncertainties need control
  • False response rates and derived measures are
    useful performance indicators
  • Definitions vary and make a (big) difference
  • Sufficient data are hard to get except for
    carefully chosen analyte levels
  • Databases suspect unless built for the purpose
  • Theoretical predictions usable with great care
  • Unwise to expect precise numbers!

29
Best practice
  • Consider costs of false responses
  • Control qualitative test conditions via traceable
    calibration of equipment
  • Check the most critical false response rate
  • preferably both false positive and false negative rates
  • Use worst-case and likely interferent studies
    to show limits of method performance
  • Use APPROPRIATE population data
  • Report with caution
  • particularly on probability estimates
