Transcript and Presenter's Notes

Title: AMBIGUITY


1
UNCERTAINTY - AMBIGUITY - INDECISION
4. How to do an Expert Judgment Study
Roger Cooke, Resources for the Future / Dept. of Mathematics, Delft University of Technology
April 15-16, 2008
2
Procedures Guide: EUR_18820_ProcGuide.pdf
  • TABLE OF CONTENTS
  • PART I Generic Issues
  • 1. What is Uncertainty
  • 2. When and how Should Uncertainty Analysis be
    Performed?
  • 3. Structured Expert Judgment
  • 4. Performance Measures
  • 5. Combinations of Expert Judgments
  • 6. Dependence
  • PART II Procedures
  • 1. Introduction
  • 2. Preparation for Elicitation
  • 3. Elicitation
  • 4. Post-Elicitation
  • APPENDIX I Summary Results of the EC-USNRC
    Uncertainty Study
  • APPENDIX II EC/USNRC project reports
  • APPENDIX III Glossary of terms for Uncertainty
    Analysis
  • APPENDIX V Training material
  • REFERENCES

3
Calibration
  • For each variable Xi, i = 1..n
  • Assess the 5%, 50% and 95% quantiles:  __ai___  __bi___  __ci___
  • Expert believes
  • p1 = Prob(Xi ≤ ai) = 0.05, p2 = Prob(ai < Xi ≤ bi) = 0.45, etc.
  • Let x1..xn be realizations of X1..Xn
  • s1 = #{i : xi ≤ ai} / n, s2 = #{i : ai < xi ≤ bi} / n, etc.
  • Then 2n Σj=1..4 sj ln(sj / pj) is asymptotically Chi-square, 3 df
    (a sketch follows below).
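A minimal sketch of this calibration score, assuming 5/50/95 assessments (ai, bi, ci) and known realizations xi for n seed variables; the function and array names are illustrative, and scipy is used only for the chi-square tail probability.

    import numpy as np
    from scipy.stats import chi2

    def calibration_score(quantile_assessments, realizations):
        """quantile_assessments: shape (n, 3), the expert's 5%, 50%, 95% quantiles per item;
        realizations: the n observed values of the seed variables."""
        p = np.array([0.05, 0.45, 0.45, 0.05])   # expert's interquantile probabilities
        q = np.asarray(quantile_assessments, dtype=float)
        x = np.asarray(realizations, dtype=float)
        n = len(x)
        # Empirical fractions s_j of realizations falling in each interquantile interval
        s = np.array([
            np.mean(x <= q[:, 0]),
            np.mean((x > q[:, 0]) & (x <= q[:, 1])),
            np.mean((x > q[:, 1]) & (x <= q[:, 2])),
            np.mean(x > q[:, 2]),
        ])
        # 2n * sum_j s_j ln(s_j / p_j) is asymptotically chi-square with 3 df
        nonzero = s > 0                          # 0 * ln(0) is taken as 0
        statistic = 2 * n * np.sum(s[nonzero] * np.log(s[nonzero] / p[nonzero]))
        # Calibration score = p-value of this statistic
        return chi2.sf(statistic, df=3)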

4
Why not Triangular?

5
Calibration Score
6
Information score
  • For item i, expert e, fit a density fe,i(x) to the background measure
    λ(x), complying with the expert's quantiles and minimizing information
    wrt the background.

7
  • Compute relative information wrt background
  • I(e,i) = I(fe,i(x) | λ(x)) = Σj=1..4 pj ln(pj / mj)
  • mj is the background measure of interquantile interval j, for item i.
  • Inf score(e) = average information = (1/#items) Σi I(e,i)
    (a sketch follows below)
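A minimal sketch of these two formulas, assuming a uniform background measure on an intrinsic range [lower, upper] that contains the expert's quantiles; the names and the uniform-background choice are illustrative assumptions, not the Procedures Guide's notation.

    import numpy as np

    def information_per_item(quantiles, lower, upper):
        """quantiles: (a, b, c) = the 5%, 50%, 95% assessments for one item;
        [lower, upper]: intrinsic range strictly containing the quantiles."""
        p = np.array([0.05, 0.45, 0.45, 0.05])      # expert's interquantile probabilities
        edges = np.array([lower, *quantiles, upper], dtype=float)
        m = np.diff(edges) / (upper - lower)        # uniform background measure of each interval
        return float(np.sum(p * np.log(p / m)))     # I(e,i) = sum_j p_j ln(p_j / m_j)

    def information_score(all_quantiles, lowers, uppers):
        """Average relative information over all items for one expert."""
        return float(np.mean([information_per_item(q, lo, up)
                              for q, lo, up in zip(all_quantiles, lowers, uppers)]))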

8
Combined score
Significance level α
  • Combined score = Cal(e) × Inf(e) × 1{Cal(e) ≥ α}
  • 1{Cal(e) ≥ α} is 1 if calibration ≥ α, else 0
  • This score is an asymptotically strictly proper scoring rule, i.e. the
    expert maximizes the long-run expected score by, and only by, stating
    the percentiles which (s)he believes.
  • EJshortcourse\sheets\EJCoursenotes-ScoringRules.doc
9
Combining Experts
  • fe,i = expert e's density for variable i.
  • Equal weight decision maker
  • feq(i) = (1/E) Σe=1..E fe,i
  • Performance Based Combinations
  • Global weight decision maker
  • weight proportional to the expert's combined score (with optimization);
    the weight depends on the expert, not the item.
  • fgw(i) = Σe=1..E we fe,i,   Σe=1..E we = 1.
  • Item weight decision maker
  • weight proportional to the product of calibration and information for
    each item (with optimization); the weight depends on the expert and the item.
  • fiw(i) = Σe=1..E we,i fe,i,   Σe=1..E we,i = 1.
  • A sketch of the three decision makers follows below.
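The three decision makers can be sketched as weighted mixtures of the experts' densities, here assumed to be tabulated on a common grid of x-values for one item; the array layout and function names are illustrative and do not reflect Excalibur's internal data model.

    import numpy as np

    def equal_weight_dm(densities):
        """densities: shape (E, n_grid), one row per expert, for a single item."""
        return np.asarray(densities, dtype=float).mean(axis=0)   # feq(i) = (1/E) sum_e fe,i

    def global_weight_dm(densities, combined_scores):
        """Weights proportional to each expert's combined score, same for every item."""
        w = np.asarray(combined_scores, dtype=float)
        w = w / w.sum()                                          # sum_e we = 1
        return w @ np.asarray(densities, dtype=float)            # fgw(i) = sum_e we fe,i

    def item_weight_dm(densities, calibration, item_information):
        """Weights proportional to calibration * information for this particular item."""
        w = np.asarray(calibration, dtype=float) * np.asarray(item_information, dtype=float)
        w = w / w.sum()                                          # sum_e we,i = 1
        return w @ np.asarray(densities, dtype=float)            # fiw(i) = sum_e we,i fe,i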
10
Optimization
  • The significance level α is chosen to optimize the combined score of the DM:
  • fα(i) ∝ Σe=1..E fe,i × Cal(e) × Inf(e) × 1{α ≤ Cal(e)}
  • For each α, compute the DM's calibration × information; choose the α for
    which this product is maximum (see the sketch below).
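A sketch of this optimization loop; build_dm and score_dm are hypothetical stand-ins for the combination and performance-scoring steps above, and only the experts' own calibration scores need to be tried as cut-off values.

    import numpy as np

    def optimize_alpha(cal, inf, build_dm, score_dm):
        """cal, inf: per-expert calibration and information scores."""
        cal = np.asarray(cal, dtype=float)
        inf = np.asarray(inf, dtype=float)
        best_alpha, best_score = None, -np.inf
        for alpha in np.unique(cal):                      # candidate significance levels
            weights = np.where(cal >= alpha, cal * inf, 0.0)
            if weights.sum() == 0:
                continue
            dm = build_dm(weights / weights.sum())        # hypothetical: DM from these weights
            dm_cal, dm_inf = score_dm(dm)                 # hypothetical: score DM on the seed variables
            if dm_cal * dm_inf > best_score:              # maximize calibration x information
                best_alpha, best_score = alpha, dm_cal * dm_inf
        return best_alpha, best_score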

11
Dependence
12
Procedures
  • Pre-Elicitation
  • (1) Definition of case structure
  • (2) Identification of target variables
  • (3) Identification of query variables
  • (4) Identification of performance/seed/calibration
    variables
  • (5) Identification of experts
  • (6) Selection of experts
  • (7) Definition of elicitation format document
  • (8) Dry run exercise
  • (9) Expert training session
  • Elicitation
  • (10) Expert elicitation session
  • Post-Elicitation
  • (11) Combination of expert assessments
  • (12) Discrepancy and robustness analysis
  • (13) Feedback
  • (14) Post-processing analyses
  • (15) Documentation

13
Definition of case structure
  • Which variables are uncertain?
  • Can the uncertainty be quantified by historical
    and/or measurement data?
  • Which (hypothetical) measurements would be used
    to quantify the parameters?

14
Identification of target variables
  • The values of the parameters are uncertain.
  • The uncertainty cannot be quantified with
    historical and/or measurement data.
  • The uncertainty is expected to have a significant
    impact on the uncertainty of one or more
    endpoints of the model.

15
Identify query variables
  • Ask for values of observable or potentially
    observable quantities.
  • Formulate questions in a manner consistent with
    the way in which an expert represents the
    relevant information in his knowledge base.

16
Query vbls ? Target vbls?
17
Elicitation format
  • Conditional on
  • < values of factors in the case structure / assumptions >
  • please give the 5%, 50% and 95% quantiles of your uncertainty in
  • < hypothetical experiment >
  • taking into account that the values of
  • < uncertainty set >
  • are unknown.
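A purely hypothetical example of a completed question in this format (the quantities and conditions are invented for illustration, not taken from any of the studies cited in this course):

    Conditional on < the source term and weather assumptions of the case
    structure >, please give the 5%, 50% and 95% quantiles of your uncertainty
    in < the dry deposition velocity of 1 µm aerosol particles onto grass, as
    it would be measured in a field experiment >, taking into account that the
    values of < wind speed and surface roughness > are unknown.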

18
Choosing Seed Variables

Do NOT use almanac questions!
19
Practical issues
  • The seed variables should sufficiently cover the case structure of the
    elicitation. In particular, when one expert panel must tackle different
    subfields, seed variables must be provided for all subfields.
  • For each panel at least 10 seed variables are needed, preferably more.
  • Seed variables may be, but need not be, identified as such in the
    elicitation.
  • If possible, the analyst should be unaware of the values of the seed
    variables during the elicitation.

20
Identify expert pool
  • ROUND ROBIN METHOD
  • Names of potential experts are generated within the organization. These
    persons are approached and asked:
  • What is your background and knowledge base with regard to the subject?
  • Which other persons are knowledgeable with regard to the subject?
  • The persons named in the first round are approached with the same two
    questions.
  • Step 2 is iterated until (a) no new names appear, or (b) it is judged
    that a sufficiently diverse set of experts has been obtained.

21
Select Experts: at LEAST 4, preferably 6-10
  • Decide and tell them
  • Type of assessment task
  • Remuneration
  • Distribution of study results
  • Use of the expert's name
  • Feedback of expert judgment data

22
Regarding names
  • Expert names and affiliations published in the
    study.
  • All information, including expert names and
    assessments, is available for competent peer
    review, but is NOT for unrestricted distribution.
  • Individual assessments and scoring available for unrestricted
    distribution, identified as Expert 1, 2, 3, etc.
  • Expert rationales, by expert, available for unrestricted distribution.
  • Expert receives feedback on his/her own performance.
  • Further published use of the expert's name requires the expert's
    approval.

23
Preparation of Elicitation Protocol
  • ElicitationProtocol_PM2.5.doc
  • ElicitationProtocol_INVASIVE_SPECIES.doc
  • NUREGCR-6545-Earlyhealth-VOL2.pdf
  • Aspinall Briefing Notes.pdf

24
Dry Run
  • ALWAYS do a dry run
  • Is the case structure document clear?
  • Are the questions clearly formulated?
  • Is the additional information provided with each question appreciated?
  • Is the time required to complete the elicitation too long or too short?

25
Expert training
  • Varies according to need and budget
  • 30 min intro for each expert
  • Half day group meeting to discuss case structure
    and method
  • Two day meeting
  • Case structure
  • Assessment training
  • Format for experts' written rationales

EUR_18820_ProcGuide.pdf appendix V
26
Elicitation
  • Best to have both substantive and normative elicitors
  • The normative elicitor asks questions and probes for reasons
  • The substantive elicitor captures reasoning and answers substantive
    questions
  • Do NOT use remote or electronic elicitation
  • Do NOT exceed 4 hrs.

27
Combining experts judgments
  • ..\EJ-Programs\Excalibur.exe

28
Discrepancy
Run EXCALIBUR with equal weights and Discrepancy. Discrepancy shows how much
the experts differ from the average expert.
29
Robustness
Run Robustness (items) and Robustness (experts) to see how the loss of an item
or an expert would affect the results: is the mean difference wrt the original
DM smaller than the differences between the experts themselves? (A sketch of
the expert-robustness loop follows below.)
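A sketch of the expert-robustness loop, with rebuild_dm and relative_information standing in (as assumptions) for the recombination and comparison steps that Excalibur performs.

    def robustness_on_experts(experts, original_dm, rebuild_dm, relative_information):
        """For each expert, rebuild the DM without that expert and report how far
        the reduced DM is from the original DM."""
        differences = {}
        for e in experts:
            reduced_panel = [other for other in experts if other != e]
            dm_without_e = rebuild_dm(reduced_panel)            # hypothetical recombination step
            differences[e] = relative_information(dm_without_e, original_dm)
        return differences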
30
Feedback
  • The experts must have access to
  • their assessments
  • calibration and information scores
  • weighting factors
  • passages in which their name is used.
  • Conclusions wrt over- or underconfidence
  • Conclusions wrt tendency to over- or
    underestimate.

31
Post-processing: probabilistic inversion
Generic_Prob_Inversion.pdf

32
Write up: EJCoursenotes-ClassicalModel-Boilerplate.doc
  • Introduction
  • what is the purpose
  • why EJ
  • content of this report
  • Background and Methods
  • Experts
  • Variables of interest
  • Seed variables
  • Performance measures and combination
  • Calibration
  • Information
  • Combination
  • Results
  • Tables and graphs
  • Discussion
  • Conclusions / Recommendations

33
FAQs(1)
  • From an expert: "I don't know that."
  • Response: No one knows; if someone knew, we would not need to do an
    expert judgment exercise. We are trying to capture your uncertainty about
    this variable. If you are very uncertain then you should choose very wide
    confidence bounds.
  • From an expert: "I can't assess that unless you give me more
    information."
  • Response: The information given corresponds with the assumptions of the
    study. We are trying to get your uncertainty conditional on the
    assumptions of the study. If you prefer to think of uncertainty
    conditional on other factors, then you must try to unconditionalize and
    fold the uncertainty over these other factors into your assessment.
  • From an expert: "I am not the best expert for that."
  • Response: We don't know who the best experts are. Sometimes the people
    with the most detailed knowledge are not the best at quantifying their
    uncertainty.
  • From an expert: "Does that answer look OK?"
  • Response: You are the expert, not me.
  • From the problem owner: "So you are going to score these experts like
    school children?"
  • Response: If this is not a serious matter for you, then forget it. If it
    is serious, then we must take the quantification of uncertainty
    seriously. Without scoring we can never validate our experts or the
    combination of their assessments.

34
FAQs(2)
  • From the problem owner: "The experts will never stand for it."
  • Response: We've done it many times; the experts actually like it.
  • From the problem owner: "Expert number 4 gave crazy assessments, who was
    that guy?"
  • Response: You are paying for the study, you own the data, and if you
    really want to know I will tell you. But you don't need to know, and
    knowing will not make things easier for you. Reflect first on whether you
    really want to know this.
  • From the problem owner: "How can I give an expert weight zero?"
  • Response: Zero weight does not mean zero value. It simply means that this
    expert's knowledge was already contributed by other experts, and adding
    this expert would only add a bit of noise. The value of unweighted
    experts is seen in the robustness of our answers against loss of experts.
    Everyone understands this when it is properly explained.
  • From the problem owner: "How can I give weight one to a single expert?"
  • Response: By giving all the others weight zero; see the previous
    response.
  • From the problem owner: "I prefer to use the equal weight combination."
  • Response: So long as the calibration of the equal weight combination is
    acceptable, there is no scientific objection to doing this. Our job as
    analysts is to indicate the best combination, according to the
    performance criteria, and to say what other combinations are
    scientifically acceptable.

35
Let's have another break