Software Reliability Modelling - PowerPoint PPT Presentation
1
Software Reliability Modelling
  • Engin Deveci

2
Agenda
  • What is Software Reliability?
  • Software Failure Mechanisms
  • Hardware vs Software
  • Measuring Software Reliability
  • Software Reliability Models
  • Statistical Testing
  • Conclusion

3
What is Software Reliability
  • the probability of failure-free software
    operation for a specified period of time in a
    specified environment
  • Probability of the product working correctly
    over a given period of time.
  • Informally denotes a product's trustworthiness or
    dependability.

4
What is Software Reliability
  • Software Reliability is an important attribute of
    software quality, together with
  • functionality,
  • usability,
  • performance,
  • serviceability,
  • capability,
  • installability,
  • maintainability,
  • documentation.

5
What is Software Reliability
  • Software Reliability Modeling
  • Prediction Analysis
  • Reliability Measurement
  • Defect Classification
  • Trend Analysis
  • Field Data Analysis
  • Software Metrics
  • Software Testing and Reliability
  • Fault-Tolerance
  • Fault Trees
  • Software Reliability Simulation
  • Software Reliability Tools
  • From
  • Handbook of Software Reliability Engineering,
  • Edited by Michael R. Lyu, 1996

6
What is Software Reliability
  • Software reliability is hard to achieve because
    the complexity of software tends to be high.
  • While the complexity of software is inversely
    related to software reliability, it is directly
    related to other important factors in software
    quality, especially functionality and capability.

7
Software Failure Mechanisms
  • Failure cause: software defects are mainly design
    defects.
  • Wear-out: software has no energy-related wear-out
    phase; errors can occur without warning.
  • Repairable system concept: periodic restarts can
    help fix software problems.
  • Time dependency and life cycle: software
    reliability is not a function of operational
    time.
  • Environmental factors: do not affect software
    reliability, except that they might affect
    program inputs.
  • Reliability prediction: software reliability
    cannot be predicted from any physical basis,
    since it depends completely on human factors in
    design.

8
Software Failure Mechanisms
  • Redundancy: cannot improve software reliability
    if identical software components are used.
  • Interfaces: software interfaces are purely
    conceptual rather than visual.
  • Failure rate motivators: usually not predictable
    from analyses of separate statements.
  • Built with standard components: well-understood
    and extensively tested standard parts help
    improve maintainability and reliability, but this
    trend has not been observed in the software
    industry. Code reuse has been around for some
    time, but only to a very limited extent. Strictly
    speaking, there are no standard parts for
    software, except some standardized logic
    structures.

9
Software Failure Mechanisms
  (figure from the original slides; not captured in this transcript)

10
Software Failure Mechanisms
  (figure from the original slides; not captured in this transcript)
11
Measuring Software Reliability
  • Don't define what you won't collect.
  • Don't collect what you won't analyse.
  • Don't analyse what you won't use.

12
Measuring Software Reliability
  • Measuring software reliability remains a
    difficult problem because we do not have a good
    understanding of the nature of software.
  • Even the most obvious product metrics, such as
    software size, have no uniform definition.

13
Measuring Software Reliability
  • Current practices of software reliability
    measurement can be divided into four categories:
  • Product metrics
  • Project management metrics
  • Process metrics
  • Fault and failure metrics

14
Measuring Software Reliability
  • Different categories of software products have
    different reliability requirements:
  • the level of reliability required for a software
    product should be specified in the SRS document.
  • A good reliability measure should be observer
    independent,
  • so that different people can agree on the
    reliability.

15
Measuring Software Reliability
  • LOC, KLOC, SLOC, KSLOC
  • McCabe's Complexity Metric
  • Test coverage metrics
  • ISO-9000 Quality Management Standards
  • MTBF (mean time between failures)
  • e.g. an MTBF of 100 hours means that once a
    failure occurs, the next failure is expected
    after 100 hours of clock time (not running time).
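
As a minimal sketch, MTBF can be estimated from a recorded failure log as the mean of the inter-failure times. The log below is a hypothetical example, not data from the slides:

```python
# Minimal sketch: estimating MTBF from a failure log. The timestamps
# are hypothetical clock times (in hours) at which failures occurred.
failure_times = [0, 95, 210, 305, 420]

# Inter-failure times are the gaps between consecutive failures.
gaps = [later - earlier
        for earlier, later in zip(failure_times, failure_times[1:])]

mtbf = sum(gaps) / len(gaps)  # mean time between failures
print(mtbf)  # 105.0 hours
```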

16
Measuring Software Reliability
  • Failure Classes
  • Transient
  • Transient failures occur only for certain inputs.
  • Permanent
  • Permanent failures occur for all input values.
  • Recoverable
  • When recoverable failures occur the system
    recovers with or without operator intervention.
  • Unrecoverable
  • the system may have to be restarted.
  • Cosmetic
  • May cause minor irritations. Do not lead to
    incorrect results.
  • E.g. the mouse button has to be clicked twice
    instead of once to invoke a GUI function.

17
Measuring Software Reliability
  • Errors do not cause failures at the same
    frequency and severity.
  • Measuring latent errors alone is not enough.
  • The failure rate is observer-dependent.
  • No simple relationship has been observed between
    system reliability and the number of latent
    software defects.
  • Removing errors from parts of the software that
    are rarely used makes little difference to the
    perceived reliability.
  • Removing 60% of the defects from the least used
    parts would lead to only about a 3% improvement
    in product reliability.
  • The reliability improvement from correcting a
    single error depends on whether the error belongs
    to the core or the non-core part of the program.
  • The perceived reliability depends to a large
    extent upon how the product is used, i.e., in
    technical terms, on its operational profile.

18
Software Reliability Models
  • Software reliability models have emerged as
    people try to understand the characteristics of
    how and why software fails, and try to quantify
    software reliability
  • Over 200 models have been developed since the
    early 1970s, but how to quantify software
    reliability still remains largely unsolved
  • There is no single model that can be used in all
    situations. No model is complete or even
    representative.

19
Software Reliability Models
  • Most software models contain the following
    parts:
  • assumptions,
  • factors,
  • a mathematical function that
  • relates reliability to the factors and
  • is usually of higher-order exponential or
    logarithmic form.

20
Software Reliability Models
  • Software modeling techniques can be divided into
    two subcategories:
  • prediction modeling and
  • estimation modeling.
  • Both kinds of modeling techniques are based on
    observing and accumulating failure data and
    analyzing it with statistical inference.

21
Software Reliability Models
  • Data reference
  • Prediction models use historical data.
  • Estimation models use data from the current
    software development effort.
  • When used in the development cycle
  • Prediction models are usually made prior to the
    development or test phases, and can be used as
    early as the concept phase.
  • Estimation models are usually made later in the
    life cycle (after some data have been collected),
    and are not typically used in the concept or
    development phases.
  • Time frame
  • Prediction models predict reliability at some
    future time.
  • Estimation models estimate reliability at either
    the present or some future time.
22
Software Reliability Models
  • There are two main types of uncertainty which
    render any reliability measurement inaccurate:
  • Type 1 uncertainty
  • our lack of knowledge about how the system will
    be used, i.e.
  • its operational profile
  • Type 2 uncertainty
  • reflects our lack of knowledge about the effect
    of fault removal.
  • When we fix a fault, we are not sure whether the
    correction is complete and successful and whether
    no other faults have been introduced.
  • Even if the faults are fixed properly, we do not
    know how much the interfailure time will improve.

23
Software Reliability Models
  • Step Function Model
  • The simplest reliability growth model
  • a step function model
  • The basic assumption
  • reliability increases by a constant amount each
    time an error is detected and repaired.
  • Assumes
  • all errors contribute equally to reliability
    growth
  • highly unrealistic
  • we already know that different errors contribute
    differently to reliability growth.

24
Software Reliability Models
  • Jelinski and Moranda Model
  • Recognizes that reliability does not increase by
    a constant amount each time an error is repaired.
  • Reliability improvement due to fixing of an error
    is assumed to be proportional to the number of
    errors present in the system at that time.
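
The proportionality assumption above can be sketched in code. In the Jelinski-Moranda model the failure intensity before the i-th failure is proportional to the faults still present; the fault count N and per-fault rate phi below are hypothetical illustration values, not figures from the slides:

```python
# Sketch of the Jelinski-Moranda failure intensity (hypothetical
# parameter values, chosen only for illustration).
# After (i - 1) faults have been removed, the failure intensity is
#   lambda_i = phi * (N - i + 1)
# where N is the initial number of faults in the system and phi is
# the contribution of a single fault to the overall failure rate.

N = 10      # assumed initial number of faults
phi = 0.02  # assumed per-fault failure rate (failures per hour)

def jm_intensity(i, n=N, per_fault=phi):
    """Failure intensity before the i-th failure (i = 1..n)."""
    return per_fault * (n - i + 1)

# Expected inter-failure times (1 / intensity) grow as faults are
# removed, from about 5 hours initially to about 50 hours at the end.
expected_gaps = [1 / jm_intensity(i) for i in range(1, N + 1)]
```

Note how each repair removes one fault's contribution phi from the intensity, so the reliability gain per fix shrinks as fewer faults remain.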

25
Software Reliability Models
  • Littlewood and Verrall's Model
  • Assumes different faults have different sizes,
    thereby contributing unequally to failures.
  • Allows for negative reliability growth.
  • Large faults tend to be detected and fixed
    earlier.
  • As the number of errors is driven down with the
    progress of testing, so is the average error
    size, causing a law of diminishing returns in
    debugging.

26
Software Reliability Models
  • Variations exist:
  • LNHPP (Littlewood non-homogeneous Poisson
    process) model
  • Goel-Okumoto (G-O) imperfect debugging model
  • GONHPP
  • Musa-Okumoto (M-O) logarithmic Poisson execution
    time model

27
Software Reliability Models
  • Applicability of models
  • There is no universally applicable reliability
    growth model.
  • Reliability growth is not independent of
    application.
  • Fit observed data to several growth models.
  • Take the one that best fits the data.

28
Software Reliability Models
  (figure from the original slides; not captured in this transcript)

29
Software Reliability Models
  (figure from the original slides; not captured in this transcript)

30
Software Reliability Models
  (figure from the original slides; not captured in this transcript)
31
Software Reliability Models
  • Observed failure intensity can be computed in a
    straightforward manner from the tables of failure
    time or grouped data (e.g. Musa et al. 1987).

32
Software Reliability Models
  • Example (136 failures total)
  • Failure times (CPU seconds): 3, 33, 146, 227,
    342, 351, 353, 444, 556, 571, 709, 759, 836, ...,
    88682.
  • Data are grouped into sets of 5 and the observed
    intensity, cumulative failure distribution and
    mean failure times are computed, tabulated and
    plotted.
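
The grouping scheme above can be sketched as follows. The first 13 failure times are from the slide; the per-group intensity (failures per unit of elapsed CPU time) is one straightforward reading of the computation described, not the slides' exact tabulation:

```python
# Observed failure intensity from grouped failure-time data, following
# the grouping-by-5 scheme described above. Times are in CPU seconds;
# only the first 13 of the 136 failure times are shown here.
failure_times = [3, 33, 146, 227, 342, 351, 353, 444, 556, 571,
                 709, 759, 836]

group_size = 5
intensities = []  # observed failures per CPU second, one per group
prev_end = 0
for start in range(0, len(failure_times) - group_size + 1, group_size):
    group = failure_times[start:start + group_size]
    interval = group[-1] - prev_end      # CPU time spanned by this group
    intensities.append(group_size / interval)
    prev_end = group[-1]

# First two complete groups: 5 failures in 342 s, then 5 in 229 s.
print([round(x, 5) for x in intensities])  # [0.01462, 0.02183]
```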

33
Software Reliability Models
  (figure from the original slides; not captured in this transcript)
34
Software Reliability Models
  • Two common models are the "basic execution time
    model" and the "logarithmic Poisson execution
    time model" (e.g. Musa et al. 1987).

35
Software Reliability Models
  • Basic Execution Time Model
  • Failure intensity λ(t) as a function of
    debugging time t:
  • λ(t) = λ0 exp(−λ0 t / ν0)
  • where λ0 is the initial failure intensity and ν0
    is the total expected number of failures
    (faults).

36
Software Reliability Models
  • The corresponding mean value function is
  • µ(t) = ν0 (1 − exp(−λ0 t / ν0))
  • where µ(t) is the mean number of failures
    experienced by time t.
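
A small sketch of the basic execution time model, using the standard form λ(t) = λ0·exp(−λ0 t/ν0) and µ(t) = ν0·(1 − exp(−λ0 t/ν0)); the parameter values are hypothetical, since the slides do not give any:

```python
import math

# Basic execution time model with hypothetical parameter values.
lam0 = 10.0   # initial failure intensity (failures per CPU hour)
nu0 = 100.0   # total expected number of failures

def intensity(t):
    """lambda(t) = lambda0 * exp(-lambda0 * t / nu0)"""
    return lam0 * math.exp(-lam0 * t / nu0)

def mean_failures(t):
    """mu(t) = nu0 * (1 - exp(-lambda0 * t / nu0))"""
    return nu0 * (1.0 - math.exp(-lam0 * t / nu0))

# Consistency check: the model also satisfies
# lambda(t) = lambda0 * (1 - mu(t) / nu0), i.e. intensity falls
# linearly with the mean number of failures experienced.
t = 5.0
assert abs(intensity(t) - lam0 * (1 - mean_failures(t) / nu0)) < 1e-9
```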

37
Software Reliability Models
  • In this case the Logarithmic-Poisson Model fits
    somewhat better than the Basic Execution Time
    Model.
  • In some other projects the BE model fits better
    than the LP model.

38
Software Reliability Models
  • The additional expected number of failures, Δµ,
    that must be experienced to reach a failure
    intensity objective is
  • Δµ = (ν0 / λ0)(λP − λF)
  • where λP is the present failure intensity and λF
    is the failure intensity objective. The
    additional execution time, Δt, required to reach
    the failure intensity objective is
  • Δt = (ν0 / λ0) ln(λP / λF)
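
The two objective formulas can be evaluated directly; all numeric values below are hypothetical, chosen only to illustrate the computation:

```python
import math

# Additional failures and execution time needed to reach a failure
# intensity objective under the basic execution time model
# (hypothetical parameter values).
lam0 = 10.0   # initial failure intensity
nu0 = 100.0   # total expected number of failures
lam_p = 3.9   # present failure intensity
lam_f = 0.5   # failure intensity objective

delta_mu = (nu0 / lam0) * (lam_p - lam_f)         # additional failures
delta_t = (nu0 / lam0) * math.log(lam_p / lam_f)  # additional time

print(round(delta_mu, 1), round(delta_t, 2))  # 34.0 20.54
```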

39
Software Reliability Models
  • After fitting a model describing the failure
    process, we can estimate its parameters and
    quantities such as the total number of faults in
    the code, the future failure intensity, and the
    additional time required to achieve a failure
    intensity objective.

40
Statistical Testing
  • The objective is to determine reliability rather
    than discover errors.
  • Uses data different from defect testing.

41
Statistical Testing
  • Different users have different operational
    profiles,
  • i.e. they use the system in different ways.
  • Formally, the operational profile is the
    probability distribution of inputs.
  • Divide the input data into a number of input
    classes
  • e.g. create, edit, print, file operations, etc.
  • Assign a probability value to each input class
  • a probability for an input value from that class
    to be selected.

42
Statistical Testing
  • Determine the operational profile of the
    software
  • This can be determined by analyzing the usage
    pattern.
  • Manually select or automatically generate a set
    of test data
  • corresponding to the operational profile.
  • Apply test cases to the program
  • record execution time between each failure
  • it may not be appropriate to use raw execution
    time
  • After a statistically significant number of
    failures have been observed
  • reliability can be computed.
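
The profile-driven selection step above can be sketched as follows; the input classes and their probabilities are hypothetical examples of the kind listed earlier (create, edit, print, file operations):

```python
import random

# Sketch of operational-profile-driven test selection. The input
# classes and their probabilities are hypothetical.
profile = {"create": 0.4, "edit": 0.3, "print": 0.2, "file_ops": 0.1}

random.seed(0)  # fixed seed so the sampling is reproducible

def next_test_class():
    """Pick an input class with probability given by the profile."""
    classes = list(profile)
    weights = [profile[c] for c in classes]
    return random.choices(classes, weights=weights, k=1)[0]

# A suite of 1000 generated cases mirrors the profile; in a real
# campaign each case would be executed against the program and the
# execution time between observed failures recorded.
suite = [next_test_class() for _ in range(1000)]
print(suite.count("create") / len(suite))  # close to 0.4
```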

43
Statistical Testing
  • Relies on using large test data set.
  • Assumes that only a small percentage of test
    inputs
  • are likely to cause system failure.
  • It is straightforward to generate tests
    corresponding to the most common inputs,
  • but a statistically significant percentage of
    unlikely inputs should also be included.
  • Creating these may be difficult
  • especially if test generators are used.

44
Conclusions
  • Software reliability is a key part of software
    quality.
  • Software reliability improvement is hard.
  • There are no generic models.
  • Measurement is very important for finding the
    correct model.
  • Statistical testing should be used, but again it
    is not easy.
  • Software Reliability Modelling is not as simple
    as described here.

45
  • Thank you.
  • Q&A