Fenton and Ohlsson, Quantitative analysis of faults and failures in a complex software system, IEEE - PowerPoint PPT Presentation

1
Fenton and Ohlsson, Quantitative analysis of
faults and failures in a complex software system,
IEEE Trans. Software Engineering, 26(8):797-814,
2000.
A11
  • Geir K. Hanssen

2
Background
  • Still a dearth of (good) empirical studies of
    quality and reliability of realistic software
    systems
  • Some existing studies contradict popular SE
    beliefs, for example that larger modules have
    lower fault density than smaller ones (Basili and
    Perricone)
  • Need a better empirical basis!
  • Paper presents a study of faults and failures
    from two releases of a commercial software system
  • (A fault is a defect in the code/system; a
    failure is an incident caused by a fault)
  • The paper tests a range of hypotheses about:
  • The Pareto principle of distribution of faults
    and failures
  • The use of early fault data to predict later
    fault and failure data
  • Metrics for fault prediction
  • Benchmarking fault data
  • A small study with consequently reduced
    generalizability, but with valuable insights and
    guidelines for future studies
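The Pareto hypothesis (1a above) is the kind of claim that can be checked mechanically once per-module fault counts are available. A minimal sketch, with illustrative fault counts rather than the study's data:

```python
# Pareto check: what share of all faults sits in the most
# fault-laden 20% of modules? All counts below are made up.
fault_counts = {"mod_a": 42, "mod_b": 31, "mod_c": 9, "mod_d": 5,
                "mod_e": 3, "mod_f": 2, "mod_g": 1, "mod_h": 1,
                "mod_i": 0, "mod_j": 0}

counts = sorted(fault_counts.values(), reverse=True)
total = sum(counts)
top = counts[: max(1, len(counts) // 5)]  # top 20% of modules by fault count
share = sum(top) / total
print(f"Top 20% of modules hold {share:.0%} of all faults")
```

With these illustrative counts, two of the ten modules hold 73 of 94 faults (about 78%), the kind of concentration hypothesis 1a predicts.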

3
Context and data collection
  • Two major consecutive releases of a large legacy
    project at Ericsson Telecom AB
  • Releases n and n+1, with respectively 140 and
    246 modules (random samples)
  • The modules in the samples ranged in size from
    approximately 1,000 to 6,000 LOC
  • The dependent variable was the number of faults
    traced to a unique module
  • Data collected from four phases
  • function test (FT)
  • system test (ST)
  • first 26 weeks at a number of site tests (SI)
  • first year (approximate) of operation after site
    tests (OP)
  • A failure is an observed deviation of the
    operational system behavior from specified or
    expected behavior

4
Data collection
  • Independent variables
  • Lines of code (LOC) as the main size measure
  • McCabe's cyclomatic complexity (CC)
  • Communication metric (SigFF): the count of new
    and modified signals
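Fault density, the measure behind the benchmarking hypotheses later in the deck, combines the dependent variable (fault count) with the LOC measure above. A minimal sketch with made-up module data:

```python
# Fault density = faults per thousand lines of code (KLOC).
# Module names, sizes, and fault counts are illustrative only.
modules = [
    {"name": "mod_a", "loc": 5800, "faults": 42},
    {"name": "mod_b", "loc": 1200, "faults": 31},
    {"name": "mod_c", "loc": 3400, "faults": 9},
]

for m in modules:
    m["density"] = m["faults"] / (m["loc"] / 1000)  # faults per KLOC
    print(f"{m['name']}: {m['density']:.1f} faults/KLOC")
```

Note that the smallest module here ends up with the highest density, echoing the Basili and Perricone finding mentioned on slide 2.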

5
Hypotheses being investigated
  • Hypotheses relating to the Pareto principle of
    distribution of faults and failures
  • 1. Number of Modules.
  • a. A small number of modules contain most of the
    faults discovered during prerelease testing
  • b. If a small number of modules contain most of
    the faults discovered during prerelease testing,
    then this is simply because those modules
    constitute most of the code size.
  • 2. Number of Modules.
  • a. A small number of modules contain the faults
    that cause most failures
  • b. If a small number of modules contain most of
    the operational faults, then this is simply
    because those modules constitute most of the code
    size.
  • Hypotheses relating to the use of early fault
    data to predict later fault and failure data (at
    the module level)
  • 3. A higher incidence of faults in function
    testing implies a higher incidence of faults in
    system testing.
  • 4. A higher incidence of faults in prerelease
    testing implies higher incidence of failures in
    operation.

  • Hypotheses about metrics for fault prediction
  • 5. Simple size metrics, such as Lines of Code
    (LOC), are good predictors of fault and
    failure-prone modules.
  • 6. Complexity metrics are better predictors than
    simple size metrics of fault and failure-prone
    modules.
  • Hypotheses relating to benchmarking figures for
    quality in terms of defect densities
  • 7. Fault densities at corresponding phases of
    testing and operation remain roughly constant
    between subsequent major releases of a software
    system.
  • 8. Software systems produced in similar
    environments have broadly similar fault densities
    at similar testing and operational phases.
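Hypotheses 3 and 4 relate per-module fault incidence across phases. One common way to test such a relationship (not necessarily the paper's exact method) is a rank correlation between per-module counts in the two phases. A stdlib-only sketch with illustrative data:

```python
# Spearman rank correlation between prerelease fault counts and
# operational failure counts, per module. Data below is made up.
def ranks(xs):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

pre = [12, 30, 4, 0, 7]   # faults found in prerelease testing, per module
post = [1, 0, 5, 2, 0]    # failures observed in operation, per module
print(f"Spearman rho = {spearman(pre, post):+.2f}")
```

A rho near +1 would support hypothesis 4; a weak or negative rho, as the discussion slide reports, would undermine it.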
6
Results
7
Discussion
  • Collecting fault data from different phases
    enables statistical process control (as in the
    software factory approach)
  • May also help in evaluating testing strategies
  • As expected, the Pareto principles (1a and 2a)
    were supported; surprisingly, however, 1b and 2b
    (code size as the explanation) were not
  • Complexity metrics are no better at predicting
    faults than simple size measures
  • It is not the case that the set of modules which
    are especially fault-prone prerelease are going
    to be roughly the same set of modules that are
    especially fault-prone postrelease (opposes the
    common view!)
  • Study limitation: missing data on testing effort
  • The aim of complexity metrics is to predict
    postrelease faults; however, this study shows
    that there is no relationship between the modules
    that are fault-prone prerelease and the modules
    that are fault-prone postrelease