1
A Comparison of Information Management using
Imprecise Probabilities and Precise Bayesian
Updating of Reliability Estimates
  • Jason Matthew Aughenbaugh, Ph.D.
  • jason_at_arlut.utexas.edu
  • Applied Research Laboratories
  • University of Texas at Austin
  • Jeffrey W. Herrmann, Ph.D.
  • jwh2_at_umd.edu
  • Department of Mechanical Engineering and
    Institute for Systems Research
  • University of Maryland
  • Third International Workshop on Reliable
    Engineering Computing, NSF Workshop on Imprecise
    Probability in Engineering Analysis and Design,
    Savannah, Georgia, February 20-22, 2008.

2
Motivation
  • Need to estimate the reliability of a system
    whose components have uncertain reliability.
  • Which components should we test to reduce
    uncertainty about system reliability?

3
Introduction
Information flow for updating a reliability estimate:
  • Existing information (is it relevant?) provides
    the prior characterization
  • New experiments provide data (is it accurate?)
  • A statistical modeling and updating approach
    combines the prior and the data
  • The result is an updated / posterior
    characterization
4
Statistical Approaches
  • Compare the following approaches
    • (Precise) Bayesian
    • Robust Bayesian
      • sensitivity analysis of the prior
    • Imprecise probabilities
      • the actual true probability is imprecise
      • the imprecise beta model


Different philosophical motivations, but
equivalent mathematics for this problem.
5
Is precise probability sufficient?
  • Problem: two input values are equiprobable
  • Do we know nothing, or do we know they are
    equally likely?
  • Why does it matter?
  • Engineer A states that input values 1 and 2 have
    equal probabilities
  • Engineer B is designing a component that is very
    sensitive to this input
  • Should Engineer B proceed with a costly but
    versatile design, or study the problem further?
  • Case 1: Engineer A had no idea, so stated equal
    probabilities. Further study is worthwhile.
  • Case 2: Engineer A performed substantial
    analysis. Additional study would be wasteful.

6
Moving beyond precise probability
  • Start with well-established principles and
    mathematics
  • Conclude it is insufficient
  • Abandon probability completely?
  • Relax conditions, extend applicability?

Think of sensitivity analysis: how much do
deviations from a precise prior matter?
7
Robust Bayes, Imprecise Beta Model
  • Instead of one prior, consider many (a set)

[Figure: a set of prior cumulative distribution
functions; vertical axis is cumulative probability]
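The slide above illustrates the idea of a prior set. As a concrete illustration, here is a minimal sketch of the imprecise beta model in Walley's (s, t) parameterization, where the prior mean t ranges over an interval and the learning parameter s is fixed; the numerical values are hypothetical and not taken from the presentation.

```python
from scipy import stats

def imprecise_beta_update(s, t_lo, t_hi, n, k):
    """Update an imprecise beta model after observing k failures in n tests.

    Prior set: Beta(s*t, s*(1 - t)) for t in [t_lo, t_hi] (Walley's
    parameterization; t is the prior mean failure probability).
    Returns bounds on the posterior mean and the two extreme posteriors.
    """
    posteriors = []
    for t in (t_lo, t_hi):
        alpha = s * t + k            # updated pseudo-count of failures
        beta = s * (1 - t) + n - k   # updated pseudo-count of successes
        posteriors.append(stats.beta(alpha, beta))
    lower_mean = (s * t_lo + k) / (s + n)
    upper_mean = (s * t_hi + k) / (s + n)
    return lower_mean, upper_mean, posteriors

# Hypothetical example: prior mean between 0.1 and 0.5 with s = 2,
# then 2 failures observed in 12 tests.
lo, hi, _ = imprecise_beta_update(s=2, t_lo=0.1, t_hi=0.5, n=12, k=2)
print(f"posterior mean failure probability in [{lo:.3f}, {hi:.3f}]")
```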
8
Problem Description
  • A simple parallel-series system with some
    existing information (an illustrative system
    model sketch follows below)
  • Assume we can test 12 more components
  • How should these tests be allocated?
  • A single test plan can have different outcomes
  • Compare different scenarios of existing
    information
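The transcript does not reproduce Figure 1, so the exact system layout is not shown here. The sketch below assumes, purely for illustration, three components with A in series with the parallel pair (B, C); the failure probabilities in the example call are hypothetical.

```python
def system_failure_probability(p_a, p_b, p_c):
    """Failure probability of an assumed parallel-series system:
    component A in series with the parallel pair (B, C).
    The actual system is the one shown in Figure 1 of the presentation.
    """
    p_parallel = p_b * p_c  # the parallel pair fails only if both B and C fail
    # the series combination survives only if A and the parallel pair survive
    return 1.0 - (1.0 - p_a) * (1.0 - p_parallel)

print(system_failure_probability(0.1, 0.2, 0.3))  # hypothetical component values
```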

9
Multiple Outcomes of Experiments
  • Precise probability
    • Consider one outcome: test A 12 times, 2 fail
    • Get one new posterior (precise parameters)
    • Consider all possible outcomes of testing A:
      get a new posterior for each possible outcome
      (sets of parameters)
  • Imprecise probability
    • One outcome gives one SET of posteriors
    • Multiple outcomes give a SET of SETS of
      posteriors

How do we measure uncertainty? How do we make
comparisons and decisions?
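In the precise case, each possible outcome of a test plan produces its own posterior. Below is a minimal sketch of that enumeration, assuming a hypothetical Beta prior on component A's failure probability and the 12-test plan mentioned above; in the imprecise case each outcome would instead map to a set of posteriors, one per prior in the prior set.

```python
from scipy import stats

# Hypothetical precise Beta prior on component A's failure probability.
prior_alpha, prior_beta = 1.0, 4.0
n_tests = 12

# One posterior per possible outcome (k failures out of 12 tests).
posteriors = {
    k: stats.beta(prior_alpha + k, prior_beta + n_tests - k)
    for k in range(n_tests + 1)
}
for k, post in posteriors.items():
    print(f"{k:2d} failures -> posterior mean {post.mean():.3f}, "
          f"variance {post.var():.5f}")
```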
10
Metrics of Uncertainty: Precise Distributions
  • Variance-based sensitivity analysis (SVi)
    (Sobol, 1993; Chan et al., 2000)
    • variance of the conditional expectation
      divided by the total variance
    • focuses on the status quo and the next (local)
      piece of information
    • testing a component with a large sensitivity
      index should reduce the variance of the system
      reliability estimate
  • Mean and variance observations
  • Posterior variance
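A minimal Monte Carlo sketch of the variance-based index SVi = Var(E[p_sys | p_i]) / Var(p_sys), reusing the assumed topology from the earlier sketch; both the topology and the Beta priors on the component failure probabilities are hypothetical stand-ins for the presentation's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_failure_probability(p_a, p_b, p_c):
    # Assumed topology: A in series with the parallel pair (B, C).
    return 1.0 - (1.0 - p_a) * (1.0 - p_b * p_c)

# Hypothetical Beta priors on the component failure probabilities.
priors = {"A": (2, 8), "B": (1, 4), "C": (1, 9)}

def sample(name, size):
    a, b = priors[name]
    return rng.beta(a, b, size)

def sensitivity_index(target, n_outer=1000, n_inner=500):
    """Monte Carlo estimate of SV_i = Var(E[p_sys | p_i]) / Var(p_sys)."""
    names = ["A", "B", "C"]
    total_var = system_failure_probability(
        *(sample(n, 50_000) for n in names)).var()
    cond_means = []
    for p_fixed in sample(target, n_outer):
        draws = {n: sample(n, n_inner) for n in names}
        draws[target] = np.full(n_inner, p_fixed)  # hold the target input fixed
        cond_means.append(system_failure_probability(
            draws["A"], draws["B"], draws["C"]).mean())
    return float(np.var(cond_means) / total_var)

for comp in ["A", "B", "C"]:
    print(comp, round(sensitivity_index(comp), 3))
```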

11
Metrics of Uncertainty: Imprecise Distributions
  • Imprecise variance-based sensitivity analysis
    (Hall, 2006)
    • does not consider outcomes; a local metric
  • Mean and variance dispersion
    • Imprecision in the mean
    • Imprecision in the variance
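One reasonable reading of "imprecision in the mean/variance" is the width of the interval those quantities span over the posterior set. A small sketch under that reading, using hypothetical posterior parameter pairs (for example, the extremes returned by the imprecise beta update sketched earlier).

```python
from scipy import stats

# Hypothetical set of posterior Beta(alpha, beta) parameter pairs,
# e.g. the extremes of an imprecise beta model after one test outcome.
posterior_params = [(2.2, 11.8), (3.0, 11.0), (4.0, 10.0)]

means = [stats.beta(a, b).mean() for a, b in posterior_params]
variances = [stats.beta(a, b).var() for a, b in posterior_params]

# Imprecision metrics: widths of the mean and variance intervals.
print(f"mean in [{min(means):.3f}, {max(means):.3f}], "
      f"imprecision {max(means) - min(means):.3f}")
print(f"variance in [{min(variances):.5f}, {max(variances):.5f}], "
      f"imprecision {max(variances) - min(variances):.5f}")
```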

12
Scenarios with Precise Distributions
  • Components have beta distributions for the prior
    distributions of failure probability
  • Scenario 1
    • System failure probability: mean 0.2201,
      variance 0.0203
  • Scenario 2
    • System failure probability: mean 0.1691,
      variance 0.0116

[Figures: Scenario 1 priors and Scenario 2 priors]
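Numbers such as "mean 0.2201, variance 0.0203" come from propagating the component priors through the system model. Below is a sketch of that propagation by Monte Carlo, again using the assumed topology and hypothetical priors rather than the presentation's actual scenario parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def system_failure_probability(p_a, p_b, p_c):
    # Assumed topology: A in series with the parallel pair (B, C).
    return 1.0 - (1.0 - p_a) * (1.0 - p_b * p_c)

# Hypothetical Beta priors on the component failure probabilities.
priors = {"A": (2, 8), "B": (1, 4), "C": (1, 9)}

n = 100_000
p_sys = system_failure_probability(
    rng.beta(*priors["A"], n),
    rng.beta(*priors["B"], n),
    rng.beta(*priors["C"], n),
)
print(f"system failure probability: mean {p_sys.mean():.4f}, "
      f"variance {p_sys.var():.4f}")
```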
13
Scenario 1 Results
  • Variance-based sensitivity analysis
  • Posterior variance

Callouts on the results chart mark the best
worst-case and the best best-case test plans.
14
Scenario 1 Results
15
Scenario 2 Results
  • Variance-based sensitivity analysis
  • Posterior variance

Callouts on the results chart mark the best
worst-case and the best best-case test plans.
16
Scenario 2 Results
17
Scenario 3: Imprecise Distributions
  • Component failure probabilities are modeled using
    imprecise beta distributions
  • System failure probability is an imprecise
    distribution
    • Mean: 0.2201 to 0.4640
    • Variance: 0.0136 to 0.0332
  • Imprecise variance-based sensitivity analysis

Since the failure probability of B is poorly known,
we allow for a range. Scenario 3 is comparable to
precise scenario 1.
18
Posterior Variance Analysis
Smallest variances, and smallest imprecision in
variances.
19
Results for Scenario 3
Sample results: (12, 0, 0), (0, 12, 0), (6, 6, 0)
Convex hull of results: (12, 0, 0), (0, 12, 0),
(6, 6, 0)
Convex hull of results: (0, 0, 12), (6, 0, 6),
(0, 6, 6)
Convex hull of results: (0, 12, 0), (4, 4, 4),
(6, 6, 0), (0, 6, 6)
Each triple lists the number of tests allocated to
components A, B, and C.
20
Scenario 4: Imprecise Distributions
  • Component failure probabilities are modeled using
    imprecise beta distributions
  • System failure probability is also an imprecise
    distribution
    • Mean: 0.1691 to 0.2880
    • Variance: 0.0100 to 0.0173
  • Imprecise variance-based sensitivity analysis

Compared to scenario 3, the failure probability
of C is reduced. This makes it comparable to
precise scenario 2.
21
Results for Scenario 4
Convex hull of results: (0, 0, 12), (6, 0, 6),
(0, 6, 6)
Convex hull of results: (12, 0, 0), (0, 12, 0),
(6, 6, 0)
Convex hull of results: (12, 0, 0), (4, 4, 4),
(0, 6, 6)
22
Discussion / Future Work
  • Multiple sources of uncertainty
  • Existing knowledge
  • Results of future tests
  • How do we prioritize different aspects?
  • Variance or imprecision reduction?
  • Best case, worst case, average case of results?
  • Incorporate economic/utility metrics?
  • Other imprecision/total uncertainty measures?
  • Breadth of p-boxes (Ferson and Tucker, 2006)
  • Aggregate uncertainty and others (Klir and
    Smith, 2001)

23
Summary
  • Shown how to use different statistical approaches
    for evaluating experimental test plans
  • Used direct uncertainty metrics
  • Variance-based sensitivity analysis
  • Precise and imprecise
  • Posterior variance
  • Dispersion of the mean and variance
  • Imprecision in the mean and variance

24
Thank you for your attention.
  • Questions? Comments? Discussion?

This work was supported in part by the Applied
Research Laboratories at UT-Austin, Internal IRD
grant 07-09.
25
SVi
The variance-based sensitivity index for input i:
SVi = Var( E[Y | Xi] ) / Var(Y), the variance of
the conditional expectation divided by the total
variance of the output Y.
26
Formulae


The mathematical model for the reliability of the
system shown in Figure 1 follows.
[Equation images not reproduced in this transcript.]