Title: Ensuring the Dependability of Software Systems
1. Ensuring the Dependability of Software Systems
- Dr. Lionel Briand, P. Eng.
- Canada Research Chair (Tier I)
- Software Quality Engineering Lab.
- Carleton University, Ottawa
2. Carleton SE Programs
- Accredited B. Eng. in Software Engineering
- Full course on verification and validation
- Full course on software quality management
- Graduate studies
- SQUALL lab: http://www.sce.carleton.ca/Squall/
- Supported by a CRC chair
3. Objectives
- Overview
- Main practical issues
- Focus on testing
- Current solutions and research
- Future research agenda
4. Outline
- Background
- Issues
- Test strategies
- Testability
- Test Automation
- Conclusions and future work
5. Outline
- Background
- Issues
- Test strategies
- Testability
- Test Automation
- Conclusions and future work
6. Dependability
- Dependability: correctness, reliability, safety, robustness
- Correct but not safe or robust: the specification is inadequate
- Reliable but not correct: failures happen rarely
- Safe but not correct: annoying failures may happen
- Robust but not safe: catastrophic failures are possible
7. Improving Dependability
Fault handling techniques fall into three groups:
- Fault Avoidance: design methodology, configuration management, verification
- Fault Detection: inspections, testing (component testing, integration testing, system testing), debugging (correctness debugging, performance debugging)
- Fault Tolerance: atomic transactions, modular redundancy
8. Testing Process Overview
[Diagram: tests are derived from a SW representation (model) and run against the SW code; the results are compared by an oracle against the expected results.]
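As a sketch, the run-and-compare loop in this diagram might look like the following toy harness; the program under test, the suite, and all names here are illustrative, not part of the deck:

```python
def run_tests(program, suite):
    """Drive the program under test with each input; a simple oracle
    compares the actual result against the expected result."""
    verdicts = {}
    for name, (test_input, expected) in suite.items():
        actual = program(test_input)
        verdicts[name] = "pass" if actual == expected else "fail"
    return verdicts

# Hypothetical program under test and a tiny test suite.
square = lambda x: x * x
suite = {"t1": (3, 9), "t2": (4, 16), "t3": (5, 24)}  # t3's expected value is wrong on purpose

print(run_tests(square, suite))
```

The oracle here is a plain equality check; in practice building an oracle (deciding what the expected result is) is often the hard part.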
9. Many Causes of Failures
- The specification may be wrong or have a missing requirement
- The specification may contain a requirement that is impossible to implement given the prescribed software and hardware
- The system design may contain a fault
- The program code may be wrong
10. Testing Stages (Pfleeger, 1998)
[Diagram: component code becomes tested components, then integrated modules, a functioning system, verified and validated software, an accepted system, and finally the system in use; the stages are checked against design descriptions, system functional specifications, other software specifications, customer requirements, and the user environment.]
11. Practice
- No systematic test strategies
- Very basic tools (e.g., capture and replay of test executions)
- No clear test processes with explicit objectives
- Poor testability
- Yet a substantial part of the development effort (between 30% and 50%) is spent on testing
- SE must become an engineering practice
12. Ariane 5 ESA Launcher
13. Ariane 5 Root Cause
- Source: ARIANE 5 Flight 501 Failure, Report by the Inquiry Board
- A program segment for converting a floating point number to a signed 16-bit integer was executed with an input data value outside the range representable by a signed 16-bit integer. This run-time error (out of range, overflow), which arose in both the active and the backup computers at about the same time, was detected and both computers shut themselves down. This resulted in the total loss of attitude control. The Ariane 5 turned uncontrollably and aerodynamic forces broke the vehicle apart. This breakup was detected by an on-board monitor which ignited the explosive charges to destroy the vehicle in the air. Ironically, the result of this format conversion was no longer needed after lift-off.
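To make the failure mode concrete: the flight software was written in Ada, so this Python sketch only mimics the arithmetic, contrasting an unchecked conversion that silently wraps with a protected one that raises so the fault can be handled:

```python
def to_int16_wrapping(x: float) -> int:
    """Unchecked conversion: silently wraps into the signed 16-bit range."""
    return ((int(x) + 32768) % 65536) - 32768

def to_int16_checked(x: float) -> int:
    """Protected conversion: raises on overflow so the caller can react."""
    n = int(x)
    if not -32768 <= n <= 32767:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return n

# A horizontal-bias-like value that fits on Ariane 4 trajectories but not Ariane 5's:
print(to_int16_wrapping(40000.0))   # wraps to a nonsense negative value
```

On Ariane 5 the conversion did raise an error, but no handler for it existed; either way, the lesson is that out-of-range inputs must be anticipated somewhere.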
14. Ariane 5 Lessons Learned
- Rigorous reuse procedures, including usage-based testing (based on operational profiles)
- Adequate exception handling strategies (backup, degraded procedures?)
- Clear, complete, documented specifications (e.g., preconditions, post-conditions)
- Note: this was not a complex computing problem, but a deficiency of the software engineering practices in place
15. Outline
- Background
- Issues
- Test strategies
- Testability
- Test Automation
- Conclusions and future work
16. Software Characteristics
- No matter how rigorous we are, software is going to be faulty
- No exhaustive testing is possible: based on incomplete testing, we must gain confidence that the system has the desired behavior
- Small differences in operating conditions may result in dramatically different behavior: software has no continuity property
- Dependability needs vary
17. Testing Requirements
- Effective at uncovering faults
- Help locate faults for debugging
- Repeatable, so that a precise understanding of the fault can be gained and corrections can be checked
- Automated, so as to lower the cost and timescale
- Systematic, so as to be predictable
18. Our Focus
- Test strategies: how to systematically test software?
- Testability: what can be done to ease testing?
- Test automation: what makes test automation possible?
19. Outline
- Background
- Issues
- Test strategies: their empirical assessment
- Testability
- Test Automation
- Conclusions and future work
20. Test Coverage
- Software representation (model) + associated criteria: test cases must cover all the elements identified in the model, yielding the test data
- Representation of the specification → black-box testing
- Representation of the implementation → white-box testing
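As a minimal white-box illustration, take branch coverage as the criterion over a hypothetical two-branch function (function and thresholds invented for this sketch):

```python
def classify_temperature(reading: float) -> str:
    """Toy implementation with two branch outcomes."""
    if reading > 100.0:
        return "overheat"
    return "normal"

# Branch coverage demands test data exercising both outcomes of the decision;
# a single test such as (20.0, "normal") would leave the criterion unsatisfied.
branch_tests = [(150.0, "overheat"), (20.0, "normal")]

for reading, expected in branch_tests:
    assert classify_temperature(reading) == expected
```

A black-box suite for the same function would instead be derived from its specified behavior, without looking at the `if` statement at all.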
21. Empirical Testing Principle
- Impossible to determine consistent and complete test criteria from theory
- Exhaustive testing cannot be performed in practice
- Therefore we need test strategies that have been empirically investigated
- A significant test case is a test case with high error detection potential: it increases our confidence in the program's correctness
- The goal is to run a sufficient number of significant test cases; that number should be as small as possible
22. Empirical Methods
- Controlled experiments (e.g., in university settings)
  - High control over the application of techniques
  - Small systems and tasks
- Case studies (e.g., on industrial projects)
  - Realism
  - Practical issues, little control
- Simulations
  - Large number of test sets can be generated
  - More refined analysis (statistical variation)
  - Difficult to automate; validity?
23. Test Evaluation Based on Mutant Programs
- Take a program and test data generated for that program
- Create a number of similar programs (mutants), each differing from the original in one small way, i.e., each possessing a fault
  - E.g., replace the addition operator by the multiplication operator
- The test data are then run through the mutants
- If the test data detect differences in a mutant's behavior, the mutant is said to be dead; otherwise it is live
- A mutant remains live either because it is equivalent to the original program (functionally identical though syntactically different: an equivalent mutant) or because the test set is inadequate to kill it
- Evaluation is in terms of the mutation score
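A toy illustration of the idea, using the slide's own example of replacing addition by multiplication (the program and the test inputs are invented):

```python
def add(a: int, b: int) -> int:
    return a + b          # original program

def add_mutant(a: int, b: int) -> int:
    return a * b          # mutant: '+' replaced by '*'

def kills(test_input) -> bool:
    """A test input kills the mutant if the two programs disagree on it."""
    a, b = test_input
    return add(a, b) != add_mutant(a, b)

# (2, 2) leaves the mutant live (2 + 2 == 2 * 2 == 4); (1, 2) kills it.
print(kills((2, 2)), kills((1, 2)))
```

A test set containing only `(2, 2)` would therefore score poorly: it exercises the code but cannot distinguish it from this faulty variant.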
24. Simulation Process (1)
25. Simulation Process (2)
26. Cruise Control System
27. Transition Tree: Cover All Round-Trip Paths
28. Transition Tree Simulation Results
29. Comparing Criteria
30. Outline
- Background
- Issues
- Test strategies
- Testability
- Test Automation
- Conclusions and future work
31. Testability
- Controllability: the ability to put an object in a chosen state (e.g., by a test driver) and to exercise its operations with input data
- Observability: the ability to observe the outputs produced in response to a supplied test input sequence (where outputs may denote not only the values returned by an operation, but also any other effect on the object's environment: calls to distant features, commands sent to actuators, deadlocks...)
- These dimensions determine the cost, error-proneness, and effectiveness of testing
32. Basic Techniques
- Get/set methods in class interfaces
- Assertions checked at run time
  - State / class invariants
  - Pre-conditions
  - Post-conditions
- Equality methods: provide the ability to report whether two objects are equal (not as simple as it seems)
- Message sequence checking methods: detect run-time violations of the class's state specifications
- Testability depends in part on:
  - Coding standards
  - Design practice
  - Availability of code instrumentation and analysis tools
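Several of these techniques can be sketched together in one toy class; the `Account` API and its contracts are purely illustrative:

```python
class Account:
    """Toy class combining a run-time state invariant, pre/post-conditions,
    a get method for observability, and an equality method for oracles."""

    def __init__(self, balance: float = 0.0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self) -> None:
        # State invariant, checked at run time.
        assert self._balance >= 0.0, "invariant violated: negative balance"

    def withdraw(self, amount: float) -> None:
        # Pre-condition: controllable inputs are validated before acting.
        assert 0 < amount <= self._balance, "precondition violated"
        old = self._balance
        self._balance -= amount
        # Post-condition and invariant re-checked after the state change.
        assert self._balance == old - amount, "postcondition violated"
        self._check_invariant()

    def get_balance(self) -> float:
        # Get method: exposes internal state for test oracles.
        return self._balance

    def __eq__(self, other: object) -> bool:
        # Equality on observable state, not object identity.
        return isinstance(other, Account) and self._balance == other._balance
```

With these hooks, a test driver can put an `Account` in a chosen state, exercise it, and observe both its outputs and any contract violations.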
33. Early Fault Detection and Diagnosis
(Baudry et al., 2001)
34. Ocl2j: An AOP-based Approach
- Stage 1 (contract code generation): the ocl2j tool turns the OCL contracts in the UML model into an Ocl2j aspect
- Stage 2 (program instrumentation): the AspectJ compiler weaves the aspect into the program bytecode, producing instrumented bytecode
- Developed at Carleton University, SQUALL
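Python has no AspectJ, but a decorator gives a rough analog of the weaving idea: the contract check stays out of the method body and is injected around it. The `contract` helper and its API are assumptions for this sketch, not the ocl2j interface:

```python
import math
from functools import wraps

def contract(pre=None, post=None):
    """Wrap a function with run-time pre/post-condition checks,
    analogous to contract code woven in as an aspect."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"{fn.__name__}: precondition violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"{fn.__name__}: postcondition violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda n: n >= 0,
          post=lambda r, n: r * r <= n < (r + 1) ** 2)
def isqrt(n: int) -> int:
    # The body stays contract-free; checks are injected around it.
    return math.isqrt(n)
```

The payoff for debugging is the same as on the next slide: a violated assertion fires at the faulty call, close to the fault, instead of much later at a visible failure.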
35. Contract Assertions and Debugging
36. Outline
- Background
- Issues
- Test strategies
- Testability
- Test Automation
- Conclusions and future work
37. Objectives
- Test plans should be derived from specification / design documents
- This helps avoid errors in the test planning process and helps uncover problems in the specification / design
- With additional code analysis and suitable coding standards, test drivers can eventually be derived automatically
- There is a direct link between the quality of specifications / design and the testability of the system
- Test automation may be an additional motivation for model-driven development (e.g., UML-based)
38. Performance Stress Testing
- Goal: to automate, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses within real-time systems
[Diagram: a genetic algorithm searches over the arrival times of aperiodic events; each candidate test case, combined with the periodic tasks, is fed to the system along a timeline.]
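A heavily simplified sketch of the search idea, with plain random search standing in for the genetic algorithm and an invented one-job preemption model as the fitness (none of this is the authors' actual tool):

```python
import random

def finish_time(event_offsets, work=4.0, preempt=3.0):
    """Toy model: a job released at t=0 needs `work` units of CPU; each
    aperiodic event arriving before the job finishes preempts it for
    `preempt` units. Solved as a fixpoint, as in response-time analysis."""
    f = work
    while True:
        new_f = work + preempt * sum(1 for o in event_offsets if o < f)
        if new_f == f:
            return f
        f = new_f

DEADLINE = 10.0

random.seed(1)
# The search rewards event phasings that push the job past its deadline;
# random search over three event arrival times stands in for the GA.
candidates = [tuple(random.uniform(0.0, 20.0) for _ in range(3))
              for _ in range(2000)]
worst_case = max(candidates, key=finish_time)
print(worst_case, finish_time(worst_case))
```

Clustering all events near the job's release maximizes interference, which is exactly the kind of stressing scenario the search is meant to discover automatically.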
39. Optimal Integration Orders
- Briand and Labiche use genetic algorithms to identify optimal integration orders (minimizing stubbing effort) in OO systems
- Most classes in OO systems are involved in dependency cycles, sometimes many of them
- The integration order has a huge impact on the integration cost: the cost of stubbing classes
- How to decide on an optimal integration order? This is a combinatorial optimization (under constraints) problem
- Solutions for the TSP cannot be reused verbatim
40. Example: Jakarta ANT
41. Results
- We obtain, most of the time, (near) optimal orders, i.e., orders that minimize stubbing effort
- The GA can handle, with reasonable results, the most complex cases we have been able to find (e.g., 45 classes, 294 dependencies, > 400,000 dependency cycles)
- The GA approach is flexible in the sense that it is easy to tailor the objective/fitness function, add new constraints on the order, etc.
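The stubbing-cost fitness at the heart of this search can be sketched on a toy dependency graph; the classes and dependencies are invented, and exhaustive search replaces the GA at this tiny scale:

```python
from itertools import permutations

# Hypothetical class dependencies: each class maps to the classes it uses.
# A -> B -> C -> A forms a dependency cycle, so some stubbing is unavoidable.
deps = {"A": {"B"}, "B": {"C"}, "C": {"A"}, "D": {"A"}}

def stubs_needed(order):
    """Cost of an integration order: one stub per dependency on a class
    that has not yet been integrated when its client is tested."""
    integrated, stubs = set(), 0
    for cls in order:
        stubs += sum(1 for d in deps[cls] if d not in integrated)
        integrated.add(cls)
    return stubs

# Exhaustive search works for 4 classes; systems with hundreds of
# dependency cycles are exactly why a GA is needed instead.
best_order = min(permutations(deps), key=stubs_needed)
print(best_order, stubs_needed(best_order))
```

Because of the A-B-C cycle, no order reaches zero stubs; the search finds orders that break the cycle at its cheapest point.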
42. Further Automation
- Meta-heuristic algorithms: genetic algorithms, simulated annealing
- Generate test data based on constraints
  - Structural testing
  - Fault-based testing
  - Testing exception conditions
- Analyze specifications (e.g., contracts)
  - Specification flaws (satisfy the precondition and violate the postcondition)
43. Conclusions
- There are many opportunities to apply optimization and search techniques to help test automation
- Devising cost-effective testing techniques requires experimental research
- Achieving high testability requires:
  - Good analysis and instrumentation tools
  - Good specification and design practices
44. Thank You
- Questions?
- (in French or English)