1
Ensuring the Dependability of Software Systems
  • Dr. Lionel Briand, P. Eng.
  • Canada Research Chair (Tier I)
  • Software Quality Engineering Lab.
  • Carleton University, Ottawa

2
Carleton SE Programs
  • Accredited B. Eng. in Software Engineering
  • Full course on verification and validation
  • Full course on software quality management
  • Graduate studies
  • SQUALL lab: http://www.sce.carleton.ca/Squall/
  • Supported by a CRC chair

3
Objectives
  • Overview
  • Main practical issues
  • Focus on testing
  • Current solutions and research
  • Future research agenda

4
Outline
  • Background
  • Issues
  • Test strategies
  • Testability
  • Test Automation
  • Conclusions and future work

5
Outline
  • Background
  • Issues
  • Test strategies
  • Testability
  • Test Automation
  • Conclusions and future work

6
Dependability
  • Dependability: correctness, reliability, safety, robustness
  • Correct but not safe or robust: the specification is inadequate
  • Reliable but not correct: failures happen rarely
  • Safe but not correct: annoying failures may happen
  • Robust but not safe: catastrophic failures are possible

7
Improving Dependability
Fault handling techniques fall into three groups:
  • Fault avoidance: design methodology, inspections, verification, configuration management
  • Fault detection: testing (component, integration, and system testing) and debugging (correctness and performance debugging)
  • Fault tolerance: atomic transactions, modular redundancy
8
Testing Process Overview
[Diagram: tests are derived from a software representation and run against the code; an oracle supplies the expected results, which are compared with the actual results. A minimal sketch follows.]
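To make the expected-results/compare step concrete, here is a minimal sketch using JUnit 5 (the Converter class and its toCelsius method are hypothetical): the hard-coded expected value plays the role of the oracle, and the assertion is the comparison.

  import static org.junit.jupiter.api.Assertions.assertEquals;
  import org.junit.jupiter.api.Test;

  // Hypothetical unit under test.
  class Converter {
      static double toCelsius(double fahrenheit) {
          return (fahrenheit - 32.0) * 5.0 / 9.0;
      }
  }

  class ConverterTest {
      @Test
      void freezingPoint() {
          // The expected value is the oracle; assertEquals is the "compare" step.
          assertEquals(0.0, Converter.toCelsius(32.0), 1e-9);
      }
  }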
9
Many Causes of Failures
  • The specification may be wrong or have a missing
    requirement
  • The specification may contain a requirement that
    is impossible to implement given the prescribed
    software and hardware
  • The system design may contain a fault
  • The program code may be wrong

10
[Diagram: the testing process, from customer requirements, system functional specifications, other software specifications, design descriptions, and the user environment, through component code and tested components, to integrated modules, a functioning system, verified and validated software, an accepted system, and finally the system in use (Pfleeger, 1998).]
11
Practice
  • No systematic test strategies
  • Very basic tools (e.g., capture and replay test
    executions)
  • No clear test processes, with explicit objectives
  • Poor testability
  • But a substantial part of the development effort (between 30% and 50%) is spent on testing
  • SE must become an engineering practice

12
Ariane 5 ESA Launcher
13
Ariane 5 Root Cause
  • Source: ARIANE 5 Flight 501 Failure, Report by the Inquiry Board
  • A program segment for converting a floating
    point number to a signed 16 bit integer was
    executed with an input data value outside the
    range representable by a signed 16 bit integer.
    This run time error (out of range, overflow),
    which arose in both the active and the backup
    computers at about the same time, was detected
    and both computers shut themselves down. This
    resulted in the total loss of attitude control.
    The Ariane 5 turned uncontrollably and
    aerodynamic forces broke the vehicle apart. This
    breakup was detected by an on-board monitor which
    ignited the explosive charges to destroy the
    vehicle in the air. Ironically, the result of
    this format conversion was no longer needed after
    lift off.

14
Ariane 5 Lessons Learned
  • Rigorous reuse procedures, including usage-based
    testing (based on operational profiles)
  • Adequate exception handling strategies (backup,
    degraded procedures?)
  • Clear, complete, documented specifications (e.g.,
    preconditions, post-conditions)
  • Note: this was not a complex computing problem, but a deficiency of the software engineering practices in place

15
Outline
  • Background
  • Issues
  • Test strategies
  • Testability
  • Test Automation
  • Conclusions and future work

16
Software Characteristics
  • No matter how rigorous we are, software is going
    to be faulty
  • No exhaustive testing is possible: based on incomplete testing, we must gain confidence that the system has the desired behavior
  • Small differences in operating conditions can result in dramatically different behavior: there is no continuity property
  • Dependability needs vary

17
Testing Requirements
  • Effective at uncovering faults
  • Help locate faults for debugging
  • Repeatable so that a precise understanding of the
    fault can be gained and corrections can be
    checked
  • Automated so as to lower the cost and timescale
  • Systematic so as to be predictable

18
Our Focus
  • Test strategies: how do we systematically test software?
  • Testability: what can be done to ease testing?
  • Test automation: what makes test automation possible?

19
Outline
  • Background
  • Issues
  • Test strategies: their empirical assessment
  • Testability
  • Test Automation
  • Conclusions and future work

20
Test Coverage
[Diagram: a software representation (model) plus associated criteria yield test data; test cases must cover all the elements the criteria identify in the model.]
  • Representation of the specification → black-box testing
  • Representation of the implementation → white-box testing (see the sketch below)
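As a minimal white-box illustration (the Discount class is hypothetical), the branch coverage criterion applied to the implementation below identifies two elements to cover, the two outcomes of its single decision, so two test inputs suffice.

  // Hypothetical unit under test: one decision, hence two branches.
  class Discount {
      static double apply(double price, boolean member) {
          if (member) {
              return price * 0.5;  // true branch: members pay half (toy example)
          }
          return price;            // false branch
      }
  }

  public class DiscountBranchCoverage {
      public static void main(String[] args) {
          // Covers the true branch (run with java -ea to enable asserts).
          assert Discount.apply(100.0, true) == 50.0;
          // Covers the false branch; together: 100% branch coverage.
          assert Discount.apply(100.0, false) == 100.0;
      }
  }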

21
Empirical Testing Principle
  • Impossible to determine consistent and complete
    test criteria from theory
  • Exhaustive testing cannot be performed in
    practice
  • Therefore we need test strategies that have been
    empirically investigated
  • A significant test case is a test case with high error-detection potential; it increases our confidence in the program's correctness
  • The goal is to run a sufficient number of significant test cases, and that number should be as small as possible

22
Empirical Methods
  • Controlled Experiments (e.g., in university
    settings)
  • High control on application of techniques
  • Small systems and tasks
  • Case studies (e.g., on industrial projects)
  • Realism
  • Practical issues, little control
  • Simulations
  • Large number of test sets can be generated
  • More refined analysis (statistical variation)
  • Difficult to automate, validity?

23
Test Evaluation based on Mutant Programs
  • Take a program and test data generated for that
    program
  • Create a number of similar programs (mutants),
    each differing from the original in one small
    way, i.e., each possessing a fault
  • E.g., replace addition operator by multiplication
    operator
  • The test data are then run through the mutants
  • If the test data expose a difference between a mutant and the original program, the mutant is said to be dead; otherwise it is live
  • A mutant remains live either because it is equivalent to the original program (functionally identical though syntactically different: an equivalent mutant) or because the test set is inadequate to kill it
  • Evaluation is in terms of the mutation score: the proportion of non-equivalent mutants killed (see the sketch below)
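A minimal sketch of the idea, using the slide's operator-replacement example (the class and method names are illustrative):

  // Original program fragment and one mutant: the addition operator
  // replaced by multiplication, i.e., one seeded fault.
  class Original {
      static int add(int a, int b) { return a + b; }
  }

  class Mutant {
      static int add(int a, int b) { return a * b; }
  }

  public class MutationDemo {
      public static void main(String[] args) {
          // add(2, 2): both return 4, so this test leaves the mutant live.
          System.out.println(Original.add(2, 2) == Mutant.add(2, 2)); // true

          // add(2, 3): 5 vs. 6, the outputs differ, so this test kills it.
          System.out.println(Original.add(2, 3) == Mutant.add(2, 3)); // false
      }
  }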

24
Simulation Process 1
25
Simulation Process 2
26
Cruise Control System
27
Transition Tree: Cover All Round-Trip Paths
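The deck's cruise control model is not reproduced in this transcript; the following toy two-state machine (my own, for illustration) shows what covering one round-trip path, a path in the transition tree that returns to an already-visited state, looks like as a test.

  enum Mode { OFF, CRUISING }

  class CruiseControl {
      private Mode mode = Mode.OFF;
      Mode mode() { return mode; }
      void engage()    { if (mode == Mode.OFF) mode = Mode.CRUISING; }
      void disengage() { if (mode == Mode.CRUISING) mode = Mode.OFF; }
  }

  public class RoundTripPathTest {
      public static void main(String[] args) {
          CruiseControl cc = new CruiseControl();
          // One round-trip path: OFF --engage--> CRUISING --disengage--> OFF
          // (back to a visited state). Run with java -ea to enable asserts.
          cc.engage();
          assert cc.mode() == Mode.CRUISING : "engage failed";
          cc.disengage();
          assert cc.mode() == Mode.OFF : "disengage failed";
      }
  }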
28
Transition Tree Simulation Results

29
Comparing Criteria
 

30
Outline
  • Background
  • Issues
  • Test strategies
  • Testability
  • Test Automation
  • Conclusions and future work

31
Testability
  • Controllability: the ability to put an object in a chosen state (e.g., by a test driver) and to exercise its operations with input data
  • Observability: the ability to observe the outputs produced in response to a supplied test input sequence (where outputs may denote not only the values returned by an operation, but also any other effect on the object's environment: calls to distant features, commands sent to actuators, deadlocks, etc.)
  • These dimensions will determine the cost,
    error-proneness, and effectiveness of testing

32
Basic Techniques
  • Get/set methods in class interfaces
  • Assertions checked at run time (see the sketch below)
  • State / class invariants
  • Pre-conditions
  • Post-conditions
  • Equality methods: the ability to report whether two objects are equal (not as simple as it seems)
  • Message-sequence-checking methods: detect run-time violations of the class's state specifications
  • Testability depends in part on:
  • Coding standards
  • Design practice
  • Availability of code instrumentation and analysis tools
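A minimal sketch of run-time contract checking on a hypothetical bounded stack, with pre-conditions as explicit checks and the class invariant as an assertion:

  // Hypothetical class illustrating pre-conditions and a class invariant
  // checked at run time (enable assertions with java -ea).
  class BoundedStack {
      private final int[] items = new int[16];
      private int size = 0;

      void push(int x) {
          if (size == items.length)            // pre-condition
              throw new IllegalStateException("push on a full stack");
          items[size++] = x;
          checkInvariant();
      }

      int pop() {
          if (size == 0)                       // pre-condition
              throw new IllegalStateException("pop on an empty stack");
          int x = items[--size];
          checkInvariant();
          return x;
      }

      private void checkInvariant() {          // class invariant
          assert 0 <= size && size <= items.length : "invariant violated";
      }
  }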

33
Early Fault Detection and Diagnosis
Baudry et al., 2001
34
Ocl2j: An AOP-based Approach
  • Stage 1, contract code generation: the ocl2j tool translates the OCL contracts of a UML model into an Ocl2j aspect
  • Stage 2, program instrumentation: the AspectJ compiler weaves the aspect into the program bytecode, producing instrumented bytecode (see the sketch below)
  • Developed at Carleton University, SQUALL
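A hand-written sketch of what such weaving amounts to, not actual ocl2j output: an OCL-style invariant, say "context Account inv: balance >= 0", enforced by an AspectJ aspect (annotation style). The Account class and its methods are assumed for illustration.

  import org.aspectj.lang.annotation.Aspect;
  import org.aspectj.lang.annotation.AfterReturning;

  // Minimal class assumed for illustration.
  class Account {
      private double balance;
      public double getBalance() { return balance; }
      public void deposit(double amount)  { balance += amount; }
      public void withdraw(double amount) { balance -= amount; }
  }

  // Re-checks the invariant after every public Account method returns;
  // woven into the bytecode by the AspectJ compiler.
  @Aspect
  class AccountInvariant {
      @AfterReturning(pointcut = "execution(public * Account.*(..)) && this(acc)",
                      argNames = "acc")
      public void checkInvariant(Account acc) {
          if (acc.getBalance() < 0) {
              throw new IllegalStateException("OCL invariant violated: balance >= 0");
          }
      }
  }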
35
Contract Assertions and Debugging
36
Outline
  • Background
  • Issues
  • Test strategies
  • Testability
  • Test Automation
  • Conclusions and future work

37
Objectives
  • Test plans should be derived from specification and design documents
  • This helps avoid errors in the test planning process and helps uncover problems in the specification and design
  • With additional code analysis and suitable coding standards, test drivers can eventually be derived automatically
  • There is a direct link between the quality of specifications and design and the testability of the system
  • Test automation may be an additional motivation for model-driven development (e.g., UML-based)

38
Performance stress testing
  • Performance stress testing: automate, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses in real-time systems (see the sketch below)

[Diagram: a genetic algorithm searches over the arrival times of periodic and aperiodic events; each candidate event sequence is a test case fed to the system.]
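A heavily simplified sketch of the search idea, not the published algorithm: a chromosome is a vector of aperiodic-event arrival times, and the fitness function, here a stand-in, rewards arrival patterns that crowd events together. A real implementation would instead simulate the task architecture and measure lateness against deadlines.

  import java.util.Arrays;
  import java.util.Random;

  public class StressTestGA {
      static final Random RND = new Random(42);
      static final int EVENTS = 5, HORIZON = 1000, POP = 30, GENERATIONS = 200;

      // Stand-in fitness: bursts of near-simultaneous events score higher.
      static int fitness(int[] arrivals) {
          int score = 0;
          for (int i = 0; i < arrivals.length; i++)
              for (int j = i + 1; j < arrivals.length; j++)
                  score += Math.max(0, 50 - Math.abs(arrivals[i] - arrivals[j]));
          return score;
      }

      static int[] randomChromosome() {
          int[] c = new int[EVENTS];
          for (int i = 0; i < EVENTS; i++) c[i] = RND.nextInt(HORIZON);
          return c;
      }

      // Binary tournament selection: the fitter of two random individuals.
      static int[] tournament(int[][] pop) {
          int[] a = pop[RND.nextInt(pop.length)], b = pop[RND.nextInt(pop.length)];
          return fitness(a) >= fitness(b) ? a : b;
      }

      public static void main(String[] args) {
          int[][] pop = new int[POP][];
          for (int i = 0; i < POP; i++) pop[i] = randomChromosome();

          for (int g = 0; g < GENERATIONS; g++) {
              int[][] next = new int[POP][];
              for (int i = 0; i < POP; i++) {
                  int[] a = tournament(pop), b = tournament(pop);
                  int cut = RND.nextInt(EVENTS);            // one-point crossover
                  int[] child = new int[EVENTS];
                  for (int k = 0; k < EVENTS; k++) child[k] = (k < cut ? a[k] : b[k]);
                  if (RND.nextInt(10) == 0)                 // occasional mutation
                      child[RND.nextInt(EVENTS)] = RND.nextInt(HORIZON);
                  next[i] = child;
              }
              pop = next;
          }

          int[] best = pop[0];
          for (int[] c : pop) if (fitness(c) > fitness(best)) best = c;
          System.out.println("Stressful arrival times: " + Arrays.toString(best));
      }
  }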
39
Optimal Integration Orders
  • Briand and Labiche use genetic algorithms to identify optimal integration orders in OO systems, i.e., orders that minimize the stubbing effort
  • Most classes in OO systems have dependency cycles, sometimes many of them
  • The integration order has a huge impact on the integration cost: the cost of stubbing classes
  • How do we decide on an optimal integration order? This is a combinatorial optimization problem (under constraints)
  • Solutions for the traveling salesman problem (TSP) cannot be reused verbatim (a simplified sketch of the objective follows)
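A sketch of the objective being minimized, with a deliberately simplified fitness (the published work weighs the complexity of each stub, not just their number): when classes are integrated in a given order, every dependency on a class that is not yet integrated forces a stub.

  import java.util.*;

  public class StubCost {
      // Counts stubs forced by an integration order: each dependency from a
      // class to a not-yet-integrated class requires stubbing the target.
      static int stubsNeeded(List<String> order, Map<String, Set<String>> deps) {
          Set<String> integrated = new HashSet<>();
          int stubs = 0;
          for (String cls : order) {
              for (String target : deps.getOrDefault(cls, Set.of()))
                  if (!integrated.contains(target)) stubs++;
              integrated.add(cls);
          }
          return stubs;
      }

      public static void main(String[] args) {
          // Toy dependency cycle A -> B -> C -> A: no order avoids all stubs,
          // but some orders are cheaper than others.
          Map<String, Set<String>> deps = Map.of(
              "A", Set.of("B"), "B", Set.of("C"), "C", Set.of("A"));
          System.out.println(stubsNeeded(List.of("A", "B", "C"), deps)); // 2
          System.out.println(stubsNeeded(List.of("C", "B", "A"), deps)); // 1
      }
  }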

40
Example: Jakarta Ant
41
Results
  • We obtain, most of the time, (near) optimal orders, i.e., orders that minimize the stubbing effort
  • The GA can handle, with reasonable results, the most complex cases we have been able to find (e.g., 45 classes, 294 dependencies, > 400,000 dependency cycles)
  • The GA approach is flexible in the sense that it is easy to tailor the objective/fitness function, add new constraints on the order, etc.

42
Further Automation
  • Meta-heuristic algorithms: genetic algorithms, simulated annealing
  • Generate test data based on constraints
  • Structural testing
  • Fault-based testing
  • Testing exception conditions
  • Analyze specifications (e.g., contracts)
  • Specification flaws (satisfy the precondition and violate the postcondition)

43
Conclusions
  • There are many opportunities to apply
    optimization and search techniques to help test
    automation
  • Devising cost-effective testing techniques
    requires experimental research
  • Achieving high testability requires:
  • Good analysis and instrumentation tools
  • Good specification and design practices

44
Thank you
  • Questions?
  • (in French or English)