1
Bayesian Statistics Applied to Reliability: Part 1, Rev. 1
  • Allan Mense, Ph.D., PE, CRE, Principal Engineering Fellow
  • Raytheon Missile Systems
  • Tucson, AZ

2
What is Bayesian Statistics?
  • It is the application of a particular probability rule, or theorem, for understanding the variability of random variables, i.e. statistics.
  • The theorem of interest is called Bayes Theorem and will be discussed in detail in this presentation.
  • Bayes Theorem has applicability to all statistical analyses, not just reliability.

3
Blame It All on This Man
References: The two major texts in this area are Bayesian Reliability Analysis, by Martz & Waller [2], and more recently Bayesian Reliability, by Hamada, Wilson, Reese and Martz [1]. It is worth noting that much of this early work was done at Los Alamos National Lab on missile reliability, and all the above authors work or have worked at LANL [1]. There are also chapters in traditional reliability texts, e.g. Statistical Methods for Reliability Data, Chapter 14, by Meeker and Escobar [4].
Rev. Thomas Bayes (born London 1701, died 1761) had his works, which include the Theorem named after him, read into the British Royal Society proceedings (posthumously) by a colleague in 1763.
4
Bayesian Helps
Bayesian methods have only recently caught on, due to the increased computational speed of modern computers. They were seldom taught to engineers because there were few ways to complete the calculations for any realistic system.
5
Background: Probability
  • To perform statistics one must understand some basic probability.
  • Multiplication rule: For two events A, B the probability of both events occurring is given by
  • P(A and B) = P(A|B)·P(B) = P(B and A) = P(B|A)·P(A) = P(A∩B)
  • There is no implication of time ordering of these events.
  • P(A|B) is called a conditional probability.
  • P(A and B) = P(A)·P(B) if A and B are independent,
  • P(A and B) = 0 if A and B are mutually exclusive.
  • Example: Given 3 subsystems in series, the probability the system does not fail by time T is R_SYS = R1(T)·R2(T)·R3(T) if failures are not correlated.

These rules also work for probability
distributions
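
As a quick numerical illustration of the multiplication rule for a series system, here is a minimal Python sketch (the three subsystem reliabilities are hypothetical values chosen for illustration):

```python
# Series system: all three subsystems must survive to time T.
# Hypothetical reliabilities; valid only if failures are independent.
r1, r2, r3 = 0.95, 0.90, 0.99          # R1(T), R2(T), R3(T)
r_sys = r1 * r2 * r3                   # multiplication rule
print(f"Series system reliability: {r_sys:.3f}")   # ~0.846
```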
6
Background
  • To perform statistics one must understand some
    basic probability
  • Addition rule: For two events A, B the probability of either event A or B or both occurring is given by
  • P(A or B) = P(A) + P(B) - P(A and B) = P(A∪B)
  • There is no implication of time ordering of these events.
  • P(A or B) = P(A) + P(B) - P(A)·P(B) if A and B are independent,
  • P(A or B) = P(A) + P(B) if A and B are mutually exclusive.
  • Example: Given 2 subsystems in parallel, the probability the system does not fail by time T is R_SYS = R1(T) + R2(T) - R1(T)·R2(T) if failures are not correlated.

These rules also work for probability
distributions
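
And the corresponding sketch for the addition rule applied to two redundant (parallel) units, again with hypothetical numbers:

```python
# Parallel system: it survives if unit 1 or unit 2 (or both) survives.
r1, r2 = 0.90, 0.90                    # hypothetical unit reliabilities
r_sys = r1 + r2 - r1 * r2              # addition rule, independent failures
print(f"Parallel system reliability: {r_sys:.2f}")  # 0.99
```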
7
Background
  • To perform statistics one must understand some
    basic probability
  • Bayes Rule: For two events A, B the probability of event A occurring given that event B occurs is given by
  • P(A|B) = P(B|A)·P(A) / P(B)
  • There is no implication of time ordering of these events.
  • P(A|B) = the posterior probability.
  • P(A) = the prior probability.
  • P(B) = P(B|A)·P(A) + P(B|A')·P(A') (A' = complement of A) is called the marginal probability of event B; this expansion is also called the Rule of Total Probability.

These rules also work for probability
distributions
8
Problem 1
  • A = event that missile thruster 1 will not work, P(A) = 0.15
  • B = event that missile thruster 2 will not work, P(B) = 0.15
  • P(A or B or both A and B) = P(A) + P(B) - P(A)·P(B) = 0.15 + 0.15 - (0.15)(0.15) = 0.30 - 0.0225 = 0.2775
  • P(A or B but not both A and B) = P(A) + P(B) - 2·P(A)·P(B) = 0.15 + 0.15 - 0.045 = 0.255 (see the check below)
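
A short Python check of the arithmetic above:

```python
# Problem 1 arithmetic via the addition rule.
p_a = p_b = 0.15                          # thruster failure probabilities
either_or_both = p_a + p_b - p_a * p_b    # inclusive or
exactly_one = p_a + p_b - 2 * p_a * p_b   # exclusive or (not both)
print(round(either_or_both, 4))   # 0.2775
print(round(exactly_one, 4))      # 0.255
```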

9
Problem 2
  • Let A = man of age > 50 has prostate cancer, B = PSA score > 5.0. Say in the demographic region of interest P(A) = 0.3, that is, the male population over 50 has a 30% chance of having prostate cancer.
  • Now I take a blood test called a PSA test, and that test supposedly helps make a decision about the presence of cancer in the prostate. A score of 5.5 is considered in the medium-to-high zone as an indicator (actually doctors look at the rate of increase of PSA over time).
  • The PSA test has the following probabilities associated with the test result: P(B|A) = 0.9, P(B|A') = 0.2, i.e. even if you do not have cancer the test registers PSA > 5.0 about 20% of the time (false positive). Thus the probability of getting a PSA score > 5.0 is given by P(B) = P(B|A)·P(A) + P(B|A')·P(A') = (0.9)(0.3) + (0.2)(0.7) = 0.41.
  • Using Bayes Theorem: P(A|B) = P(B|A)·P(A)/P(B) = (0.9)(0.3)/0.41 = 0.66.
  • Your probability of having prostate cancer has gone from 30% with no knowledge of the test to 66% given your knowledge that your PSA > 5.0.
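
The same update written as a small Python function (the numbers are the ones quoted above):

```python
def posterior(prior, p_pos_given_cancer, p_pos_given_no_cancer):
    """P(cancer | positive test) via Bayes Theorem."""
    # Rule of Total Probability for the marginal P(B):
    marginal = (p_pos_given_cancer * prior
                + p_pos_given_no_cancer * (1 - prior))
    return p_pos_given_cancer * prior / marginal

print(round(posterior(0.3, 0.9, 0.2), 2))   # 0.66
```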

10
Bayesian in a nutshell
  • Bayesian models have two parts: the likelihood function and the prior distribution.
  • We construct the likelihood function from the sampling distribution of the data, which describes the probability of observing the data before the experiment is performed, e.g. the binomial distribution P(s|n, R). This sampling distribution is called the observational model. After we perform the experiment and observe the data, we can consider the sampling distribution as a function of the unknown parameter(s), e.g. L(R|n, s) ∝ R^s (1-R)^(n-s). This function is called the likelihood function. The parameters of the observational model (here R) are themselves modeled by what is called a structural model, e.g. the beta prior below.
  • The prior distribution describes the uncertainty about the parameters of the likelihood function: f(R|Nm, p) ∝ R^(Nm·p) (1-R)^(Nm·(1-p)). The parameters Nm and p are called hyperparameters (the parameter in the original problem is R, the reliability itself).
  • HIERARCHICAL MODELS: Sometimes the parameters, e.g. Nm and p, are not known and have their own distributions, known as hyperpriors, which have their own parameters, e.g. Nm ~ Gamma(a, k). This is more complex but can lead to better answers.
  • We update the prior distribution to the posterior distribution after observing data. We use Bayes' Theorem to perform the update, which shows that the posterior distribution is computed (up to a proportionality constant) by multiplying the likelihood function by the prior distribution.
  • (Posterior of parameters) ∝ (Likelihood function) × (prior of parameters)

Now we need to study each part of this equation
starting with the likelihood
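
Before studying each part, here is a minimal grid sketch of that proportionality (my own illustration; the data n = 4, s = 3 and the weak prior p = 0.9, Nm = 2 are assumptions chosen to match one of the what-if charts shown later):

```python
import numpy as np

n, s = 4, 3                  # trials and successes (assumed data)
p_mode, Nm = 0.9, 2.0        # prior mode and prior "weight" (assumed)

R = np.linspace(0.001, 0.999, 999)                       # grid over reliability
prior = R**(Nm * p_mode) * (1 - R)**(Nm * (1 - p_mode))  # beta-shaped prior
likelihood = R**s * (1 - R)**(n - s)                     # binomial likelihood
posterior = prior * likelihood                           # up to a constant
posterior /= posterior.sum() * (R[1] - R[0])             # normalize numerically

print("posterior mode:", R[np.argmax(posterior)])        # ~0.800
```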
11
Binomial Distribution for the Data
  • Probability of an event occurring = p
  • Probability of the event not occurring = 1 - p
  • Probability that the event occurs x = 2 times in n = 5 tests:
  • 1st instantiation: p·p·(1-p)·(1-p)·(1-p) = p^2(1-p)^3 (multiplication rule)
  • 2nd instantiation: p·(1-p)·p·(1-p)·(1-p) = p^2(1-p)^3
  • …
  • 10th instantiation: (1-p)·(1-p)·(1-p)·p·p = p^2(1-p)^3
  • In general the number of ways of having 2 successes in 5 trials is given by 5!/(3!·2!) = 10, the number of combinations of selecting two items out of five.
  • nCr = n!/((n-r)!·r!)

12
Binomial distribution
  • Since any of the 10 instantiations gives the same outcome (i.e. 2 successes in 5 trials, each with probability p^2(1-p)^(5-2)), we can determine the total probability using the addition rule for mutually exclusive events.
  • P(1st instantiation) + P(2nd instantiation) + … + P(10th instantiation) = 5C2 · p^2(1-p)^(5-2)
  • P(x|n, p) = nCx · p^x (1-p)^(n-x)
  • The random variable is x, the number of successful events.
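
A minimal Python version of this pmf (the value p = 0.5 is an arbitrary illustration):

```python
from math import comb

def binom_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# 2 successes in 5 trials: 5C2 = 10 equally likely orderings.
print(binom_pmf(2, 5, 0.5))   # 10 * 0.5**5 = 0.3125
```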

13
Binomial distribution: The random variable is x, the number of successes
14
Likelihood Function
  • If we have pass/fail data, the binomial distribution is called the sampling distribution; however, after the experiments or trials are performed and the outcomes are known, we can look upon this sampling distribution as a continuous function of the variable p. There can be many values of p that lead to the same set of outcomes (e.g. x events in n trials). The core of the binomial distribution -- the part which includes p -- is called a likelihood function.
  • L(p|x, n) = likelihood that the probability of a successful event is p, given the data x and n.
  • L(p|x, n) ∝ p^x (1-p)^(n-x)
  • The random variable is now p, with x and n known.

The Likelihood function is put to great use in
Bayesian statistics
15
Likelihood Function, L(p|s, n): The random variable is p, the probability of a success
Sometimes we label p as R, the reliability
16
Likelihood Functions: What they look like!
1 Test and 0 Successes
1 Test and 1 Success
Each likelihood function is normalized so that the area under it = 1. This is not necessary but allows for a better interpretation.
17
Likelihood Functions for n = 2 tests
2 successes
0 successes
1 success
18
Bayesian Approach
  • Use Bayes Theorem with probability density functions, e.g.
  • P(A|B) → f_posterior(parameter|data); P(B|A) → Likelihood(data|parameter); P(A) → f_prior(parameter, before data are taken)
  • f_posterior ∝ Likelihood × f_prior
  • Steps:
  • 1. Determine a prior distribution for the parameters of interest based on all relevant information available before (prior to) taking data from tests.
  • 2. Perform the tests and insert the results into the appropriate Likelihood function.
  • 3. Multiply the prior distribution by the Likelihood function to find the posterior distribution.
  • Remember, we are determining the distribution function of one or more population parameters of interest.
  • In pass/fail tests the parameter of interest is R, the reliability itself.
  • In time dependent (Weibull) reliability analysis it is a and b, the scale and shape parameters, for which we need to generate posterior distributions.

19
Summary on Probability
  • Learned 3 rules of probability: multiplication, addition, and Bayes.
  • Derived the binomial distribution for probability
    of x events in n (independent) trials.
  • Defined a Likelihood function for pass/fail data
    by changing the random variable in the binomial
    distribution to the probability p with known data
    (s,n).
  • Illustrated Bayes Theorem with probability
    density functions.

20
Reliability
  • General Bayesian Approach
  • Reliability is NOT a constant parameter. It has
    uncertainty that must be taken into account. The
    language of variability is statistics.
  • Types of reliability tests
  • Pass/Fail tests
  • Time dependent tests (exponential, Weibull)
  • Counting experiments (Poisson statistics)

21
Bayes Theorem for Distributions, when using pass/fail data
  • f_posterior(R|n, s, info) ∝ L(s, n|R) × f_prior(R|info)
  • Example: For pass/fail data
  • f_prior(R) ∝ R^(a-1)(1-R)^(b-1) (beta)
  • L(s, n|R) ∝ R^s(1-R)^(n-s) (binomial)
  • f_posterior(R|s, n, a, b) ∝ R^(s+a-1)(1-R)^(n-s+b-1) (beta again -- the conjugate beta-binomial model)

Much more will be said of this formulation later
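
A sketch of this conjugate update with scipy (the prior shape values a = b = 2 and the data n = 10, s = 8 are hypothetical):

```python
from scipy import stats

a, b = 2.0, 2.0        # hypothetical Beta(a, b) prior
n, s = 10, 8           # hypothetical pass/fail data

# Conjugacy: Beta(a, b) prior + binomial data -> Beta(a + s, b + n - s).
post = stats.beta(a + s, b + n - s)
print("posterior mean:", post.mean())            # (a + s)/(a + b + n)
print("90% credible interval:", post.interval(0.90))
```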
22
Prior, Likelihood, Posterior: What they look like!
You want the information that comes from knowing the green curve!
23
Bayesian Calculator
In the following charts I want to take you through a series of "what if" cases using priors of different strengths, and show how the data overpowers the prior and leads to the classical result when the data is sufficiently plentiful. You will see that, in general, the Bayesian analysis keeps you from being overly optimistic or overly pessimistic when the amount of data is small.
24
Bayesian analysis gives lower probabilities for high success rates and higher values for low success rates. Bayesian analysis keeps one from being too pessimistic or too optimistic.
25
Formulas Used in the Beta-Binomial Model for pass/fail data
  • Prior Distribution: f_prior(R|p, Nm) ∝ R^(Nm·p) (1-R)^(Nm·(1-p))
  • Likelihood: L(s, n|R) ∝ R^s (1-R)^(n-s)
  • Posterior: f_posterior(R|s, n, p, Nm) ∝ R^(Nm·p+s) (1-R)^(Nm·(1-p)+n-s)

The prior uses what I call the Los Alamos parameterization of the beta distribution. See the Appendix for more detail.
26
p = 0.9, Nm = 10
Take n = 4 tests with s = 3 resulting successes.
Note how the strong prior distribution (Nm = 10, which is large) governs the posterior distribution when there are a small number of data points. The posterior has its maximum value at R = 0.857, where a classical calculation would give <R> = 3/4 = 0.75.
27
p = 0.9, Nm = 2
Take n = 4 tests with s = 3 resulting successes.
Note how the weak prior distribution (Nm = 2) still pulls the posterior up, but its net effect is much smaller, i.e. the posterior has its maximum value at R = 0.800, where a classical calculation would give <R> = 3/4 = 0.75. So the posterior and likelihood are more closely aligned.
28
p = 0.9, Nm = 10
Increase the number of tests: take n = 20 tests with s = 15 resulting successes.
Note how the strong prior distribution still pulls the posterior up, but due to the larger amount of data (n = 20) the posterior distribution shows a maximum value at R = 0.800, so the prior is beginning to be overpowered by the data.
29
p = 0.9, Nm = 2
Take n = 20 tests with s = 15 resulting successes.
Note how the weak prior distribution still has some effect, but due to the larger amount of data (n = 20) the posterior distribution is almost coincident with the likelihood (i.e. the data) and shows a maximum value at R = 0.764, which is close to the data modal value of 0.75. The data is overpowering the prior.
30
Nm = 0: uniform prior
Take n = 20 tests with s = 15 resulting successes.
Note how the uniform prior distribution has no effect on the posterior, which is now exactly coincident with the likelihood function and is essentially the result you obtain using classical analyses. The posterior has its maximum at R = 0.750. The data is all we have. Note that p plays no role because all values of R are equally likely when Nm = 0. This is NEVER a useful prior to use, because it brings no information to the analysis, and it is this additional information that drove us to use Bayesian analyses in the first place.
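
All five what-if cases can be reproduced from the posterior mode implied by the Los Alamos parameterization; a short check (my own sketch, not part of the original charts):

```python
# Posterior is Beta(Nm*p + s + 1, Nm*(1-p) + n - s + 1) in standard form,
# so its mode is (Nm*p + s) / (Nm + n).
def posterior_mode(p, Nm, n, s):
    return (Nm * p + s) / (Nm + n)

cases = [(0.9, 10, 4, 3), (0.9, 2, 4, 3), (0.9, 10, 20, 15),
         (0.9, 2, 20, 15), (0.9, 0, 20, 15)]
for p, Nm, n, s in cases:
    print(f"p={p}, Nm={Nm}, n={n}, s={s}: mode={posterior_mode(p, Nm, n, s):.3f}")
# 0.857, 0.800, 0.800, 0.764, 0.750 -- matching the preceding charts
```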
31
Look at Predictions, i.e. successes in the next 10 tests
p = 0.9, Nm = 10, n = 20, s = 15, <R> = 15/20 = 0.75
The Bayesian calculation shows that, due to the strong prior (Nm = 10), the predicted probability of success is higher for large numbers of successes than when the average reliability is used in the binomial calculation.
p = 0.9, Nm = 10, n = 4, s = 3, <R> = 3/4 = 0.75
32
Given these Bayesian results, how does one choose a prior distribution?
  • This is the most often asked question, both by new practitioners and by customers.
  • The answer is: it depends.
  • It depends on how you wish to represent previous knowledge.
  • It depends on your estimation technique for determining the hyperparameters (p, Nm), e.g. what degree of confidence, Nm, you have in the estimate of p.
  • What are some possible methods for guessing a p value and setting an Nm value?
  • Are there other shapes of priors that one could use? YES.
  • What if we have uncertainty in Nm?
  • Set up a hyperprior distribution for Nm, say a gamma distribution.
  • The parameters of f(Nm), i.e. (h, d), are called hyperhyperparameters.
  • How do you choose the parameters d and h for the gamma distribution? Pick the mode of Nm = (d-1)/h and the stdev of Nm = d^(1/2)/h.
  • Example: for a weak prior we might choose mode = 3 for Nm and stdev = 2, which gives h = 1, d = 4 (see the short solver sketched below).
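
A small solver for (h, d) from a chosen mode and standard deviation, per the rules above (my own sketch):

```python
import math

def gamma_hyperparams(mode, stdev):
    """Solve mode = (d - 1)/h and stdev = sqrt(d)/h for (h, d)."""
    # Substituting d = mode*h + 1 into (stdev*h)**2 = d gives a
    # quadratic in h: stdev^2 * h^2 - mode*h - 1 = 0.
    h = (mode + math.sqrt(mode**2 + 4 * stdev**2)) / (2 * stdev**2)
    d = mode * h + 1
    return h, d

print(gamma_hyperparams(3, 2))   # (1.0, 4.0) -- the weak prior above
```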

33
Now it gets complicated!
  • Up until now we have been able to solve for f_posterior(R) analytically, but once we have a reasonable prior for Nm we can no longer do this. The joint prior for R and Nm, given p, h, and d, is the product of the beta prior (whose normalizing constant now depends on Nm) and the gamma hyperprior: f_prior(R, Nm|p, h, d) = [R^(Nm·p)(1-R)^(Nm·(1-p)) / B(Nm·p+1, Nm·(1-p)+1)] × [h^d · Nm^(d-1) · exp(-h·Nm) / Γ(d)].
  • There is no analytical solution for f_posterior(R), since one now must integrate over Nm.
  • This is one of the reasons that Bayesian methods have not been widely accepted in a community that is looking for simple recipes for calculating reliability.

We will apply MCMC to find f_posterior(R); this is covered in Part 2 of these lectures.
34
Summary
  • We have seen how Bayesian analysis of pass/fail data can be carried out rather easily for specific forms of prior distributions --- called conjugate priors.
  • We have seen how data overcomes the prior distribution, and that the amount of data required depends on the strength of the prior.
  • We have ended with the formulation of a more complex Bayesian analysis problem that will require some form of numerical technique for its solution.

Numerical Techniques will be the subject of Part
2 of these lectures
35
Time Dependent Reliability Analysis
  • Consider the case of a constant failure rate time-to-first-failure distribution, i.e. f(t) = λ·exp(-λt).
  • In classical analysis we look up values of λ for all the parts, then perform a weighted sum of the λ values over all the components in the system (series system) to arrive at a total constant failure rate, λ_T, for the system. We can then use λ_T to find the reliability at a given time t using R(t) = exp(-λ_T·t).
  • What if there is uncertainty in the λ values that go into finding λ_T? How do we handle that variability?

Applying a prior distribution to the parameter λ is the Bayesian process
36
Time Dependent Reliability Analysis
  • Another technique applies when we have time-to-first-failure data. Say we have n units under test and the test is scheduled to last for time t_R. When conducting these tests, say r units fail and we do not replace the failed units. The failure times are designated by t_i, i = 1, 2, …, r, and (n-r) units do not fail by time t_R.
  • The likelihood function for this set of n tests is given by L = λexp(-λt_1) · λexp(-λt_2) ··· λexp(-λt_r) × exp(-λt_R) ··· exp(-λt_R) [(n-r) such factors] = λ^r exp(-λ(t_1 + t_2 + … + t_r + (n-r)·t_R)) = λ^r exp(-λ·TTT), where TTT = total time on test.
  • In classical analysis we differentiate the above equation w.r.t. the parameter λ, set the derivative = 0, and solve for the λ value that maximizes L (or, more easily, maximize ln(L)). The value so obtained, λ_MLE = r/TTT, is called the Maximum Likelihood Estimate of the population parameter λ. Note: For this distribution ONLY, the estimator for the failure rate does not depend on the number of units on test except through TTT.
  • Again, this technique assumes there is a fixed but unknown value of λ, and it is estimated using the MLE method.
  • But in real life there is uncertainty in λ, so we need to discuss some distribution of possible λ values, i.e. find a prior and posterior distribution for λ and let the data tell us about the variability. The variability in λ shows up as variability in R, since R = exp(-λt). So for any given time, say t = t_1, we will have a distribution of R values, and it will be the same distribution, but with a lower mean value, for later times. If λ itself changed with time we would have yet another complication.

Applying a prior distribution to the parameter λ is the Bayesian process
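
A minimal sketch of the TTT bookkeeping and the MLE (all test data here are hypothetical):

```python
# Type-I censored exponential test: r failures, (n - r) survivors at t_R.
failure_times = [120.0, 340.0, 500.0]   # hypothetical failure times (hours)
n, t_R = 10, 1000.0                     # units on test, scheduled duration

r = len(failure_times)
TTT = sum(failure_times) + (n - r) * t_R   # total time on test
lam_mle = r / TTT                          # maximizes the likelihood above
print(f"lambda_MLE = {lam_mle:.3e} per hour")   # ~3.8e-04
```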
37
Exponential distribution
  • Start with an assumed form for the prior distribution of λ.
  • One possible choice that allows λ to vary quite a bit is the gamma distribution, Gamma(a, b).
  • With reasonable choices of a and b this distribution allows λ to range over a wide range of values.
  • The likelihood (for n observed failure times, as on the next chart) is given by L = λ^n exp(-λ·TTT).

38
Exponential time-to-failure Distribution: Model the variation of λ with a Gamma distribution
  • Multiplying Prior × Likelihood gives the posterior.
  • So the prior is a Gamma(a, b) distribution and the Likelihood is the product of exponential densities for n tests run long enough to get n failures, so that we know all n failure times. The posterior turns out to also be a gamma distribution, Gamma(n+a, b+TTT), where TTT = (t_1 + t_2 + … + t_n) = total time on test.
  • The problem that may occur here is in the evaluation of the Gamma posterior distribution for large arguments. One may need to use MATLAB instead of Excel.
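
A sketch of this conjugate gamma update with scipy (the prior parameters and failure times are hypothetical; note that scipy parameterizes the gamma with scale = 1/rate):

```python
from scipy import stats

a, b = 2.0, 5000.0                       # hypothetical Gamma(a, b) prior on lambda
times = [800.0, 1200.0, 400.0, 950.0]    # hypothetical failure times (hours)
n, TTT = len(times), sum(times)

# Conjugacy: posterior is Gamma(n + a, b + TTT) in shape/rate form.
post = stats.gamma(n + a, scale=1.0 / (b + TTT))
print("posterior mean lambda:", post.mean())          # (n + a)/(b + TTT)
print("90% credible interval:", post.interval(0.90))
```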

39
R(t)
  • To find the reliability as a function of time one must integrate over λ from 0 to ∞, i.e. <R(t)> = ∫ exp(-λt) f_posterior(λ) dλ.
  • Again, this can be integrated analytically only for special values of the parameters, and evaluation for large arguments of the Gamma distribution may require top-of-the-line software.

This will be evaluated later.
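
One special case worth noting: the posterior mean of R(t), i.e. E[exp(-λt)] under a Gamma(a', b') posterior, does have a closed form, (b'/(b' + t))^a', since this integral is just the gamma distribution's Laplace transform. A numeric check with hypothetical values:

```python
import numpy as np
from scipy import stats

a_post, b_post = 6.0, 8350.0     # hypothetical posterior from the last sketch
t = 500.0                        # hours

closed_form = (b_post / (b_post + t))**a_post
lam = np.linspace(1e-7, 0.01, 200_000)                   # integration grid
pdf = stats.gamma.pdf(lam, a_post, scale=1.0 / b_post)
numeric = np.sum(np.exp(-lam * t) * pdf) * (lam[1] - lam[0])
print(closed_form, numeric)      # both ~0.70
```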
40
Weibull distribution (See Hamada et al., Chapter 4, Section 4)
  • Once we have dealt with the exponential distribution, the next logical step is to look at the Weibull distribution, which has two parameters (a, b) instead of the single parameter (λ) of the exponential distribution. (Later we will also address a counting problem, which is very typical of logistics analysis.) With two parameters we will need a two-variable, or joint, prior distribution f_prior(a, b), which in some cases can be modeled by the product f_a,prior(a)·f_b,prior(b) if the parameters can be shown to be independent.
  • Even if independence cannot be proven, one almost always uses the product form to make the priors useable for modeling purposes.

41
Summary of time dependent Bayesian Reliability Modeling
  • It has been shown how to set up a typical time-to-first-failure model.
  • The principles are the same as for any Bayesian reliability model, and there will be a distribution of the probability of failure vs. time at each time.
  • These distributions in the parameters lead to distributions in the reliabilities, and in principle one can graph, say, 90% credible intervals for each time of interest; these intervals should be smaller than classical intervals that use approximate normal or even nonparametric bounds.

42
Discussion of Hierarchical Models
  • This follows closely the paper by Allyson Wilson [3].
  • Hierarchical models are one of the central tools of Bayesian analysis.
  • Denote the sampling distribution by f(y|θ), e.g. f(s|R) = binomial distribution, and the prior distribution by g(θ|α), where α represents the parameters of the prior distribution (often called hyperparameters), e.g. g(R|Nm, p) = beta distribution.
  • It may be the case that we know g(θ|α) completely, including a specific value for α. However, suppose that we do not know α, and that we choose to quantify our uncertainty about α using a distribution h(α) (often called the hyperprior), e.g. h(Nm), since we know p but do not know Nm.

43
General Form for Bayesian Problem
  • 1. The observational model for the data: (Y_i|θ) ~ f(y_i|θ), i = 1, …, k
  • 2. The structural model for the parameters of the likelihood: (θ|α) ~ g(θ|α)
  • 3. The hyperparameter model for the parameters of the structural model: α ~ h(α)
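
A minimal Monte Carlo sketch of this three-level structure (my own illustration), reusing the beta structural model with known p and the weak gamma hyperprior (h = 1, d = 4) chosen earlier:

```python
import numpy as np

rng = np.random.default_rng(1)

def prior_predictive(n_draws=10_000, n_trials=20, p_mode=0.9):
    Nm = rng.gamma(shape=4.0, scale=1.0, size=n_draws)    # alpha ~ h(alpha)
    R = rng.beta(Nm * p_mode + 1, Nm * (1 - p_mode) + 1)  # theta ~ g(theta|alpha)
    s = rng.binomial(n_trials, R)                         # y ~ f(y|theta)
    return s

print("mean predicted successes:", prior_predictive().mean())
```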

44
Observational Model
  • In the pass/fail example used earlier in this presentation the observational model was the binomial distribution f(s|R) ∝ R^s (1-R)^(n-s), where the random variable is the number of successes, s.
  • In this particular model the parameter of interest, R, also happens to be the result we are seeking. In other problems, e.g. a time-to-failure problem using a Weibull distribution as the observational model, we have a and b as parameters and we construct R(t|a, b) = exp(-(t/a)^b), which is the result we are seeking.

45
Structural Model
  • The structural model relates the variability of the parameter θ to a set of hyperparameters α.
  • In our pass/fail example we have a beta distribution for our structural model, or prior, e.g. f(R|p, Nm) ∝ R^(Nm·p) (1-R)^(Nm·(1-p)).
  • This parameterization of the beta distribution is not the standard form, which is typically written R^(a-1)(1-R)^(b-1), but I have chosen to use the Los Alamos parameterization for its convenience of interpretation (a = Nm·p + 1, b = Nm·(1-p) + 1), because p = the mode of the distribution and Nm is a weighting factor indicating the confidence we have in the assumed p value.

46
Hyperparameter Model
  • This is the distribution (or distributions) used to model the breadth of allowable values of the hyperparameters themselves.
  • In our pass/fail example the hyperparameters were p and Nm, but I used a hyperprior only for Nm, just for illustration purposes, e.g. the gamma distribution Nm ~ Gamma(h, d).
  • The solution requires numerical techniques that will be discussed in Part 2 of these lectures.

47
References
  • [1] Bayesian Reliability, by Hamada, Wilson, Reese and Martz, Wiley (2007)
  • [2] Bayesian Reliability Analysis, by Martz & Waller, Wiley (1995)
  • [3] Wilson, Hierarchical MCMC for Bayesian System Reliability, Los Alamos Report eqr094, June 2006
  • [4] Statistical Methods for Reliability Data, by Meeker & Escobar, Wiley (1998), Chapter 3

48
Appendices
49
Definitions etc.
50
Gamma and Beta Functions
  • The complete beta function is defined by B(a, b) = ∫_0^1 t^(a-1)(1-t)^(b-1) dt = Γ(a)Γ(b)/Γ(a+b).
  • The gamma function is defined by Γ(z) = ∫_0^∞ t^(z-1) e^(-t) dt.
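
A quick numeric check of the identity with scipy:

```python
from scipy.special import beta, gamma

a, b = 2.5, 4.0   # arbitrary positive arguments
print(beta(a, b))                          # complete beta function
print(gamma(a) * gamma(b) / gamma(a + b))  # identical value
```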

51
Classical Reliability
52
Bayes Theorem Applied to Reliability as measured
by Pass/Fail tests.
  • Assume we are to perform a battery of tests; the result of each test is either a pass (x = 0) or a fail (x = 1).
  • Traditionally we would run, say, n tests and record how many passed, s, and then calculate the proportion that passed, s/n, and label this the reliability, i.e. <R> = s/n. This is called a point estimate of the reliability.
  • Then we would use this value of R to calculate the probability of, say, s future successes in n future tests, using the binomial probability distribution p(s|n, <R>) = nCs · <R>^s (1-<R>)^(n-s).
  • This formulation assumes R is some fixed but unknown constant that is estimated from the most recent pass/fail data, e.g. by <R>.
  • <R> can be bounded (Clopper-Pearson bounds); see the next set of charts.

53
Classical Reliability analysis
  • The classical approach to Pass/Fail reliability analysis:
  • To find a confidence interval for R using any estimator, e.g. <R>, one needs to know the statistical distribution of the estimator. In this case it is the binomial distribution.

54
Classical Reliability
  • For example, we know that if the number of tests is fairly large, the binomial distribution (the sampling distribution for <R>) can be approximated by a normal distribution whose mean = <R> and whose standard deviation, called the standard error of the mean, is SEM = (<R>(1-<R>)/n)^(1/2); therefore the 1-sided confidence interval for R can be written as the probability statement P(R > <R> - Z_(1-α)·SEM) ≈ 1-α.
  • As an example, suppose there were n = 6 tests (almost too small a number to use a normal approximation) and s = 5 successes, so <R> = 5/6; take α = 0.05 for a 95% lower reliability bound, Z_(1-α) = 1.645, SEM = 0.152, so one has R > 0.833 - 1.645×0.152 = 0.583 at 95% confidence.
  • This is a very wide interval, as there were very few tests.
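
The same arithmetic as a short sketch:

```python
from math import sqrt

n, s, z = 6, 5, 1.645            # tests, successes, z for 95% one-sided
r_hat = s / n                    # point estimate <R> = 0.833
sem = sqrt(r_hat * (1 - r_hat) / n)                  # 0.152
print("approx. 95% lower bound:", r_hat - z * sem)   # ~0.583
```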

55
Classical Reliability
  • This is a fairly wide confidence interval, which is to be expected with so few tests. The interval can only be made narrower by performing more tests (increasing n and hopefully s) or reducing the confidence level from, say, 95% to, say, 80%. Running the above calculation at α = 0.2 (Z_(1-α) = 0.842) gives R > 0.833 - 0.842×0.152 = 0.705.
  • This is the standard (frequentist) reliability approach. Usually folks leave out the confidence interval because it looks so bad when the number of tests is low.
  • Actually, n = 6 tests does not qualify as a very large sample for a binomial distribution. In fact, if we perform an exact calculation using the cumulative binomial distribution (using the RMS tool SSE.xls), one finds R > 0.42 for a confidence level of 95%, and R > 0.58 at a confidence level of 80%.
  • These nonparametric exact values give conservatively wide (but more accurate) intervals when the number of tests is small.
  • The expression for the nonparametric confidence interval can be found using the RMS tools that are used to compute sample size (e.g. SSE.xlsm, whose snapshot is shown below). The tools are available for download from eRoom at http://ds.rms.ray.com/ds/dsweb/View/Collection-102393; look in the excel files for SSE.xlsm.

56
Raytheon Tools
"if all you have is a hammer, everything looks
like a nail. Abraham Maslow, 1966
57
Sample Size Estimator, SSE.xlsm
In SSE the gold bar can be shifted under any of the 4 boxes (Trials, Max Failures, Prob of Test Failure, Confidence Level) by double-clicking the mouse button in the cell below the number you are trying to find. The gold bar indicates what will be calculated. You will need to allow macro operation in Excel for this calculator to work. Another choice would be the "Exact Confidence Interval for Classical Reliability calculation from binary data" Excel spreadsheet, which is available from the author by email request. The actual equations can be found in the reference by Meeker & Escobar [4], or an alternative form can be copied from the formulas below. The two-sided bounds on the reliability confidence interval for a confidence level 1-α are given by R_Lower(α, n, s) = BETAINV(α/2, s, n-s+1) and R_Upper(α, n, s) = BETAINV(1-α/2, s+1, n-s), where n = tests, s = successes in n tests, and CL = confidence level = 1-α. The function BETAINV is built into Excel. For a one-sided calculation, which is applicable for the s = n (zero-failure) case, one can use α instead of α/2 in the R_Lower equation.
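
The same bounds in Python, with scipy's beta.ppf standing in for Excel's BETAINV (a sketch; the zero-success and zero-failure edge cases are handled explicitly):

```python
from scipy.stats import beta

def clopper_pearson(n, s, alpha=0.05):
    """Exact two-sided (Clopper-Pearson) bounds on reliability."""
    lower = beta.ppf(alpha / 2, s, n - s + 1) if s > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, s + 1, n - s) if s < n else 1.0
    return lower, upper

print(clopper_pearson(6, 5))   # a wide interval, as expected for few tests
```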
58
Exact 2-sided Confidence Interval
The yellow inserts show the binomial calculation needed to evaluate the confidence bounds, but these binomial sums can be rewritten in terms of either inverse Beta distributions or inverse F distributions. Stated in probability form, one has P(R_Lower ≤ R ≤ R_Upper) ≥ 1-α.
59
Bayesian Search Algorithm
An example of how Bayesian techniques aided in
the search for a lost submarine
  • View Backup Charts on Search for Submarine

60
Backup Charts: Bayes Scorpion Example
  • From Cressie and Wikle (2011), Statistics for Spatio-Temporal Data
