On Using Soft Computing Techniques in Software Reliability Engineering
1
On Using Soft Computing Techniques in Software
Reliability Engineering
  • H. Madsen, P. Thyregod (DK),
  • B. Burtschy (FR),
  • F. Popentiu, G. Albeanu (RO)

2
Content
  • Introduction
  • Background
  • Soft Computing Techniques in Software Reliability
    Modeling: Fuzzy Modeling, Neural Networks,
    Evolutionary Computing
  • A Soft Computing Framework for Software
    Reliability Engineering
  • Practical Experience
  • Conclusions
  • References

3
Introduction
"... exploit the tolerance for imprecision,
uncertainty, and partial truth to achieve
tractability, robustness, low solution cost, and
better rapport with reality." Zadeh (1994)
Soft computing is tolerant of imprecision,
uncertainty, partial truth, and approximation.
Fuzzy Logic and Probabilistic Reasoning are based
on knowledge-driven reasoning; Neural Computing
and Evolutionary Computation are data-driven
search and optimization approaches.
The principal constituents of the
soft computing field.
4
Background (1) Soft Computing Illustrative
Applications
SC Technologies: Neural Nets (NN), Fuzzy Logic
(FL), Probabilistic Reasoning (PR), Genetic
Algorithms (GA), Hybrid Systems.
Applications: Classification (monitoring / anomaly
detection, diagnostics, prognostics,
configuration / initialization); Prediction
(quality assessment, equipment life estimation);
Scheduling (time / resource assignments); Control
(machine / process control, process
initialization, supervisory control); DSS /
auto-decision making (cost / risk analysis,
revenue optimization).
Related Technologies: Statistics (Stat);
Artificial Intelligence (case-based reasoning,
rule-based expert systems); Machine Learning;
Bayesian Belief Networks.
5
Background (2)
The SC constituents are complementary rather
than competitive.
6
Background (3)
RBF - Radial Basis Function
SOM - Kohonen Self-Organizing Map
ART - Adaptive Resonance Theory
7
Background (4)
8
Software Reliability Engineering
Theory: operational profiles, random-process
software reliability models, statistical
estimation, and sequential sampling theory.
  • According to Musa J. (1999), software reliability
    engineering is the only standard, proven best
    practice that empowers testers and developers to
    simultaneously:
  • Ensure that product reliability meets user needs
  • Speed the product to market faster
  • Reduce product cost
  • Improve customer satisfaction and reduce the risk
    of angry users
  • Increase their productivity

Activities: define necessary reliability,
develop operational profiles, prepare for test,
execute tests, and apply failure data to guide
decisions.
9
FUZZY LOGIC IN SOFTWARE RELIABILITY ENGINEERING
Software fault diagnosis covers fault detection
and isolation, and fault analysis. The detection
of faults can be realized by testing and
debugging in a fuzzy approach.
Motivation: present SRGMs that incorporate the
debugging process assume that, when a failure
occurs, a debugging effort takes place which may
or may not remove the fault with some unknown but
estimable probability. However, there is a large
variety of bugs in software, and the debugging
activity should be treated as a fuzzy process
rather than a crisp one.
10
This is why a fault is debugged according to a
set of membership functions labeling the
debugging performance as: total imperfect
debugging, more or less imperfect debugging,
less than perfect debugging, ..., total perfect
debugging. In this context, several software
reliability measures can be used: the total fuzzy
number of faults, the total fuzzy number of
faults that remain in the software, and the
expected proportion of faults removed from the
software.
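As an illustration, a minimal Python sketch of how such linguistic labels could be encoded follows; the triangular membership functions and the normalized [0, 1] debugging-performance scale are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Hypothetical fuzzy partition of a [0, 1] "debugging performance" scale
# into the linguistic labels listed above (break points are illustrative).
labels = {
    "total imperfect debugging":   lambda x: triangular(x, 0.00, 0.00, 0.33),
    "more or less imperfect":      lambda x: triangular(x, 0.00, 0.33, 0.66),
    "less than perfect debugging": lambda x: triangular(x, 0.33, 0.66, 1.00),
    "total perfect debugging":     lambda x: triangular(x, 0.66, 1.00, 1.00),
}

performance = 0.7  # an observed debugging performance (illustrative value)
for name, mu in labels.items():
    print(f"{name}: {mu(performance):.2f}")  # membership of each label
```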
11
FUZZY PRINCIPLES FOR TIME-SERIES FORECASTING
A look-up table approach, based on input-output
pairs (x1, x2, ..., xm; y), in five steps.
The first step: Define the fuzzy partitions for
the input and output variables.
The second step: Generate one fuzzy rule for each
input-output pair, and obtain an initial fuzzy
rule-base.
The third step: Calculate the membership degree
(D) of each fuzzy rule in the rule-base created
in the previous step.
The fourth step: Remove inconsistent and
redundant rules and create the final fuzzy
rule-base. A reliability factor (RF) can be
computed for every set of k rules with the same
antecedent part as the ratio k1/k, where k1 is
the number of redundant rules. In this manner, an
effective degree (ED) can be computed for each
rule as ED = D × RF. The final rule-base will
contain the fuzzy rules with the largest
effective degrees.
The fifth step: Select the inference scheme and
perform the fuzzy inference procedure. Select a
defuzzification scheme to provide a crisp value
as output.
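The rule-generation steps above can be sketched in a few lines of Python. The sketch below assumes a single input variable, fuzzy partitions given as dictionaries of membership functions (as in the earlier sketch), the degree D taken as the product of antecedent and consequent memberships, and RF read as the share of identical rules within an antecedent group; these are plausible readings of the slide, not the authors' exact definitions.

```python
from collections import Counter, defaultdict

def fuzzify(value, partitions):
    """Return the label of the fuzzy set with the highest membership."""
    return max(partitions, key=lambda name: partitions[name](value))

def build_rule_base(pairs, in_parts, out_parts):
    """Steps two to four: one rule per (x, y) pair, then keep, for each
    antecedent, the rule with the largest effective degree ED = D * RF."""
    raw = []
    for x, y in pairs:
        ante, cons = fuzzify(x, in_parts), fuzzify(y, out_parts)
        degree = in_parts[ante](x) * out_parts[cons](y)  # degree D
        raw.append((ante, cons, degree))

    by_antecedent = defaultdict(list)
    for ante, cons, d in raw:
        by_antecedent[ante].append((cons, d))

    final = {}
    for ante, group in by_antecedent.items():
        counts = Counter(cons for cons, _ in group)       # duplicate rules
        cons, d = max(group, key=lambda r: r[1] * counts[r[0]] / len(group))
        final[ante] = (cons, d * counts[cons] / len(group))  # (consequent, ED)
    return final
```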
12
A FUZZY NEAREST NEIGHBOUR METHOD (FNNM)
A five-step procedure: given y = (y_1, y_2, ..., y_n),
forecast y_{n+1}.
The first step: Compute the proximity of y_n and
the past values, μ(y_i) = (1 + d(y_i, y_n))^(-1),
i = 1, 2, ..., n-1, where μ(y_i) shows the fuzzy
proximity of y_i and y_n, and d is the Euclidean
distance between y_i and y_n.
The second step: Scale the membership (proximity)
values to the [0, 1] interval,
μ'(y_i) = (μ(y_i) - a) / (b - a), where
a = min μ(y_i) and b = max μ(y_i),
i = 1, 2, ..., n-1.
The third step: Select the nearest neighbours,
those with μ'(y_i) ≥ β for a chosen threshold β.
The k selected neighbours are denoted
x_{t1}, x_{t2}, ..., x_{tk}.
The fourth step: Forecast y_{n+1} by averaging
the k nearest neighbours.
The fifth step: Optimize the method (select an
optimal threshold β) for minimal prediction
error, considering the actual value y_{n+1}.
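A direct Python sketch of the five FNNM steps for a scalar series follows (the Euclidean distance then reduces to an absolute difference); the threshold symbol, garbled in the transcript, is written here as beta, and step four averages the selected neighbours as stated above.

```python
import numpy as np

def fnnm_forecast(y, beta):
    """Fuzzy nearest neighbour forecast of y_{n+1} from y = (y_1, ..., y_n)."""
    y = np.asarray(y, dtype=float)
    yn, past = y[-1], y[:-1]

    # Step 1: fuzzy proximity of each past value to y_n.
    mu = 1.0 / (1.0 + np.abs(past - yn))

    # Step 2: rescale the memberships to the [0, 1] interval.
    a, b = mu.min(), mu.max()
    mu_scaled = (mu - a) / (b - a) if b > a else np.ones_like(mu)

    # Step 3: select the nearest neighbours, mu'(y_i) >= beta.
    idx = np.where(mu_scaled >= beta)[0]
    if idx.size == 0:        # no neighbour passes the threshold
        return float(yn)

    # Step 4: forecast y_{n+1} by averaging the selected neighbours.
    return float(past[idx].mean())

def tune_beta(y, y_next, candidates=np.linspace(0.1, 0.9, 9)):
    """Step 5: choose the threshold that minimises the prediction error
    against the actual value y_next."""
    return min(candidates, key=lambda b: abs(fnnm_forecast(y, b) - y_next))
```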
13
A SOFT COMPUTING FRAMEWORK FOR SOFTWARE
RELIABILITY ENGINEERING
MONITOR collects information about the evolution
of the whole software system, including the
environment.
The STAT module (statistical approach) filters
the data supplied by the former module and
produces a model that can be used for prediction.
The structure of this module includes both
parametric and non-parametric statistical
resources and artificial neural networks,
utilizing information on the state of the
operational environment (such as proportional
hazards). It also supports fuzzy approaches to
time-series processing.
14
The PRED module (predictions) makes predictions
about the future failure behavior of the
application and of the operational environment.
Moreover, the predictions are used to calibrate
the statistical models.
The SEL module (selection) uses the information
from the prediction module to select the most
appropriate algorithm for adaptive fault
management. A comparative analysis of the
predictions allows us to choose between the
pessimistic approach, which favors the recovery
delay, and the optimistic one, which reduces the
cost of fault management under failure-free
execution.
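A structural sketch of how the MONITOR, STAT, PRED and SEL modules described above could be wired together is given below; the class names, method signatures and placeholder logic are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Monitor:
    """MONITOR: collects failure data and environment state."""
    failures: List[float] = field(default_factory=list)

    def record(self, time_between_failures: float) -> None:
        self.failures.append(time_between_failures)

class Stat:
    """STAT: filters monitored data before model fitting."""
    def fit(self, failures: List[float]) -> List[float]:
        return [t for t in failures if t > 0]  # placeholder filtering step

class Pred:
    """PRED: predicts the next inter-failure time from the filtered data."""
    def forecast(self, filtered: List[float]) -> float:
        return sum(filtered) / len(filtered) if filtered else 0.0

class Sel:
    """SEL: chooses a fault-management strategy from the prediction."""
    def choose(self, predicted_tbf: float, deadline: float) -> str:
        # Pessimistic when a failure is expected soon, optimistic otherwise.
        return "pessimistic" if predicted_tbf < deadline else "optimistic"

# Illustrative run of the pipeline.
monitor = Monitor()
for t in (12.0, 7.5, 9.1):
    monitor.record(t)
strategy = Sel().choose(Pred().forecast(Stat().fit(monitor.failures)),
                        deadline=10.0)
print(strategy)  # -> "pessimistic"
```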
15
The RKD module (Reliability Knowledge Discovery)
searches for information to test hypotheses about
the system and to discover patterns in the data.
The OPTIM module performs optimal reliability
allocation for large software projects: either
minimize the total cost of achieving a target
reliability, or maximize the system reliability
subject to a budget constraint.
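As a small illustration of the second OPTIM formulation (maximize system reliability subject to a budget), the sketch below assumes a series system of independent components and purely illustrative cost/reliability figures; the exhaustive search is only for demonstration on a small case.

```python
from itertools import product
from math import prod

# Hypothetical components: each can be implemented at one of several
# (cost, reliability) levels; the system is assumed to be in series.
options = [
    [(1.0, 0.90), (2.0, 0.95), (4.0, 0.99)],   # component A
    [(1.5, 0.92), (3.0, 0.97), (5.0, 0.995)],  # component B
    [(0.5, 0.85), (1.0, 0.93), (2.5, 0.98)],   # component C
]
budget = 7.0

best = None
for choice in product(*options):               # enumerate all allocations
    cost = sum(c for c, _ in choice)
    if cost > budget:                          # budget constraint
        continue
    reliability = prod(r for _, r in choice)   # series-system reliability
    if best is None or reliability > best[0]:
        best = (reliability, cost, choice)

reliability, cost, _ = best
print(f"best reliability {reliability:.4f} at cost {cost}")
```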
16
The FGCemQ Experience
The FGCemQ software allows a large number of
variables to be used, limited only by the
variable types and the performance of the
computer system. Initially, the fuzzy models are
built and the options for the calculation of the
optimum are set up, using an initial data set;
the optimization algorithms are first applied to
this initial set. As new data arrive from the
process, the fuzzy model improves.
17
How to build the fuzzy model?
18
The Optimization Scheme
19
FGCemQ Interface
20
Conclusions
  • A neural network describes very well what has
    happened, but it fails to forecast what will
    happen over time. Even though, in theory, any
    continuous function can be approximated to any
    desired accuracy by a multi-layer neural network,
    in practice obtaining such an architecture is a
    difficult task.
  • An optimization module for reliability allocation
    is useful. However, it is better to use software
    engineering techniques to minimize the number of
    bugs in the software. That means the design and
    implementation of the software have to be
    monitored in order to satisfy the specified
    quality requirements, and appropriate strategies
    will be activated.

21
THANK YOU !