System Testing - PowerPoint PPT Presentation

Description: Rob Oshana, Southern Methodist University - Software Testing
Transcript and Presenter's Notes

Title: System Testing


1
Rob Oshana Southern Methodist University
Software Testing
2
Why do we Test ?
  • Assess Reliability
  • Detect Code Faults

3
Industry facts
  • 30-40% of errors detected after deployment are
    run-time errors (U.C. Berkeley,
    IBM's TJ Watson Lab)
  • The amount of software in a typical device
    doubles every 18 months (Reme
    Bourguignon, VP of Philips Holland)
  • Defect densities have been stable over the last 20
    years: 0.5 - 2.0 sw failures / 1000 lines
    (Cigital Corporation)
  • Software testing accounts for 50% of pre-release
    costs, and 70% of post-release costs
    (Cigital Corporation)

4
Critical SW Applications
  • Critical software applications which have failed
  • Mariner 1 (NASA, 1962): missing hyphen in
    Fortran code; rocket bound for Venus
    destroyed
  • Therac-25 (Atomic Energy of Canada Ltd,
    1985-87): data conversion error; radiation
    therapy machine for cancer
  • Long Distance Service (AT&T, 1990): a single line
    of bad code; service outages up to nine hours long
  • Patriot Missiles (U.S. military, 1991): endurance
    errors in tracking system; 28 US soldiers
    killed in barracks
  • Tax Calculation Program (Intuit, 1995): incorrect
    results; SW vendor paid tax penalties for users

5
Good and successful testing
  • What is a good test case?
  • A good test case has a high probability of
    finding an as-yet undiscovered error
  • What is a successful test case?
  • A successful test is one that uncovers an as-yet
    undiscovered error

6
Who tests the software better?
The independent tester must learn about the system, but will
attempt to break it, and is driven by quality
The developer understands the system, but will test
gently, and is driven by delivery
7
Testability: can you develop a program for
testability?
  • Operability - The better it works, the more
    efficiently it can be tested
  • Observability - the results are easy to see,
    distinct output is generated for each input,
    incorrect output is easily identified
  • Controllability - processing can be controlled,
    tests can be automated and reproduced
  • Decomposability - software modules can be tested
    independently
  • Simplicity - no complex architecture and logic
  • Stability - few changes are requested during
    testing
  • Understandability - program is easy to understand

8
(No Transcript)
9
Did You Know...
  • Testing/Debugging can worsen reliability?
  • We often chase the wrong bugs?
  • Testing cannot show the absence of faults, only
    their existence?
  • The cost to develop software is directly
    proportional to the cost of testing?
  • Y2K testing cost $600 billion

10
(No Transcript)
11
Did you also know...
  • The most commonly applied software testing
    techniques (black box and white box) were
    developed back in the 1960s
  • Most Oracles are human (error prone)!!
  • 70% of safety-critical code can be exception
    handling - this is the last code written!

12
Testing Problems
  • Time
  • Faults hide from tests
  • Test Management costs
  • Training Personnel
  • What techniques to use
  • Books and education

13
Errors are more common, more pervasive, and more
troublesome in software than with other
technologies
  • David Parnas

14
What is testing?
  • How does testing software compare with testing
    students?

15
What is testing?
  • Software testing is the process of comparing the
    invisible to the ambiguous so as to avoid the
    unthinkable. (James Bach, Borland Corp.)

16
What is testing?
  • "Software testing is the process of predicting the
    behavior of a product and comparing that
    prediction to the actual results." (R. Vanderwall)

17
Purpose of testing
  • Build confidence in the product
  • Judge the quality of the product
  • Find bugs

18
Finding bugs can be difficult
[Diagram: a mine field dotted with mines (x); each path through
the field is a use case, and only some paths hit a mine]
19
Why is testing important?
  • Therac-25: cost 6 lives
  • Ariane 5 rocket: cost $500M
  • Denver Airport: cost $360M
  • Mars missions (orbital explorer, polar lander):
    cost $300M

20
Why is testing so hard?
21
Reasons for customer reported bugs
  • User executed untested code
  • Order in which statements were executed in actual
    use different from that during testing
  • User applied a combination of untested input
    values
  • User's operating environment was never tested

22
Interfaces to your software
  • Human interfaces
  • Software interfaces (APIs)
  • File system interfaces
  • Communication interfaces
  • Physical devices (device drivers)
  • controllers

23
Selecting test scenarios
  • Execution path criteria (control)
  • Statement coverage
  • Branching coverage
  • Data flow
  • Initialize each data structure
  • Use each data structure
  • Operational profile
  • Statistical sampling.

24
What is a bug?
  • Error - mistake made in translation or
    interpretation (many taxonomies exist to
    describe errors)
  • Fault - manifestation of the error in the
    implementation (very nebulous)
  • Failure - observable deviation in behavior of the
    system

25
Example
  • Requirement: print the speed, defined as
    distance divided by time
  • Code: s = d/t; print s

26
Example
  • Error: I forgot to account for
    t = 0
  • Fault: omission of code to catch t = 0
  • Failure: an exception is thrown

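The error/fault/failure chain above can be sketched in code. A minimal Python sketch of the speed example; the function names are mine, not from the slides:

```python
def print_speed(d, t):
    """Faulty version: the code to catch t = 0 was omitted (the fault)."""
    s = d / t       # failure: raises ZeroDivisionError when t == 0
    print(s)

def print_speed_fixed(d, t):
    """Version where the error (forgetting t = 0) has been accounted for."""
    if t == 0:
        raise ValueError("time must be non-zero")
    print(d / t)
```

The fault stays latent until a test (or a user) supplies t = 0, at which point it manifests as an observable failure.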
27
Severity taxonomy
  • Mild - trivial
  • Annoying - minor
  • Serious - major
  • Catastrophic - Critical
  • Infectious - run for the hills

What is your taxonomy ?
IEEE 1044-1993
28
Life cycle
Testing and repair process can be just as error
prone as the development process (more so??)
Errors can be introduced at each of these stages
[Diagram: Requirements -> Design -> Code -> Testing, with errors
introduced at every stage; repair runs Classify -> Isolate ->
Resolve, and each repair step can introduce errors too]
29
OK, so let's just design our systems with
testability in mind...
30
Testability
  • How easily a computer program can be tested
    (Bach)
  • We can relate this to design for testability
    techniques applied in hardware systems

31
JTAG
[Diagram: a standard integrated circuit with JTAG boundary scan -
boundary scan cells surround the core IC logic and I/O pads, a
boundary scan path runs from Test Data In (TDI) to Test Data Out
(TDO), and a test access port controller is driven by Test Mode
Select (TMS) and Test Clock (TCK)]
32
Operability
  • The better it works, the more efficiently it can
    be tested
  • System has few bugs (bugs add analysis and
    reporting overhead)
  • No bugs block execution of tests
  • Product evolves in functional stages
    (simultaneous development and testing)

33
Observability
  • What you see is what you get
  • Distinct output is generated for each input
  • System states and variables are visible and
    queriable during execution
  • Past system states are retrievable (transaction logs)
  • All factors affecting output are visible

34
Observability
  • Incorrect output is easily identified
  • Internal errors are automatically detected
    through self-testing mechanisms
  • Internal errors are automatically reported
  • Source code is accessible

35
Visibility Spectrum
[Diagram: a visibility spectrum running from end customer
visibility through factory visibility and GPP visibility down to
DSP visibility]
36
Controllability
  • The better we can control the software, the more
    the testing can be automated and optimized
  • All possible outputs can be generated through
    some combination of input
  • All code is executable through some combination
    of input

37
Controllability
  • SW and HW states and variables can be controlled
    directly by the test engineer
  • Input and output formats are consistent and
    structured

38
Decomposability
  • By controlling the scope of testing, we can more
    quickly isolate problems and perform smarter
    testing
  • The software system is built from independent
    modules
  • Software modules can be tested independently

39
Simplicity
  • The less there is to test, the more quickly we
    can test it
  • Functional simplicity (feature set is minimum
    necessary to meet requirements)
  • Structural simplicity (architecture is
    modularized)
  • Code simplicity (coding standards)

40
Stability
  • The fewer the changes, the fewer the disruptions
    to testing
  • Changes to the software are infrequent,
    controlled, and do not invalidate existing tests
  • Software recovers well from failures

41
Understandability
  • The more information we have, the smarter we
    will test
  • Design is well understood
  • Dependencies between external, internal, and
    shared components are well understood
  • Technical documentation is accessible, well
    organized, specific and detailed, and accurate

42
Bugs lurk in corners and congregate at
boundaries
  • Boris Beizer

43
Types of errors
  • What is a Testing error?
  • Claiming behavior is erroneous when it is in fact
    correct
  • fixing this type of error actually breaks the
    product

44
Errors in classification
  • What is a Classification error ?
  • Classifying the error into the wrong category
  • Why is this bad ?
  • This puts you on the wrong path for a solution

45
Example Bug Report
  • Screen locks up for 10 seconds after submit
    button is pressed
  • Classification 1: Usability error
  • Solution may be to catch user events and present
    an hour-glass icon
  • Classification 2: Performance error
  • Solution may be a modification to a sort
    algorithm (or vice versa)

46
Isolation error
  • Incorrectly isolating the erroneous modules
  • Example: consider a client-server architecture.
    An improperly formed client request results in an
    improperly formed server response
  • The isolation determined (incorrectly) that the
    server was at fault and was changed
  • Resulted in regression failure for other clients

47
Resolve errors
  • Modifications to remediate the failure are
    themselves erroneous
  • Example: Fixing one fault may introduce another

48
What is the ideal test case?
  • Run one test whose output is "Modify line n of
    module i."
  • Run one test whose output is "Input Vector v
    produces the wrong output"
  • Run one test whose output is "The program has a
    bug" (Useless, we know this)

49
More realistic test case
  • One input vector and expected output vector
  • A collection of these makes up a Test Suite
  • Typical (naïve) Test Case
  • Type or select a few inputs and observe output
  • Inputs not selected systematically
  • Outputs not predicted in advance

50
Test case definition
  • A test case consists of
  • an input vector
  • a set of environmental conditions
  • an expected output.
  • A test suite is a set of test cases chosen to
    meet some criteria (e.g. Regression)
  • A test set is any set of test cases

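The definition above can be sketched as a small data structure. A minimal illustration; the field names and the `add` function under test are mine, not from the slides:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    inputs: tuple                                     # the input vector
    environment: dict = field(default_factory=dict)   # environmental conditions
    expected: object = None                           # the expected output

def run_suite(suite, program):
    """Run every case; return the cases that uncover an error."""
    return [(tc, program(*tc.inputs))
            for tc in suite
            if program(*tc.inputs) != tc.expected]

# A tiny suite chosen to meet some criterion (e.g. regression) for add()
def add(a, b):
    return a + b

suite = [TestCase((2, 3), expected=5), TestCase((-1, 1), expected=0)]
assert run_suite(suite, add) == []   # this suite uncovers no errors
```

Note that every case carries its expected output; a case without one is the "naive" test case criticized on the previous slide.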
51
Testing Software Intensive Systems
52
V&V
  • Verification
  • are we building the product right?
  • Validation
  • are we building the right product?
  • is the customer satisfied?
  • How do we do it?
  • Inspect and Test

53
What do we inspect and test?
  • All work products!
  • Scenarios
  • Requirements
  • Designs
  • Code
  • Documentation

54
Defect Testing
55
A Testing Test
  • Problem
  • A program reads three integer values from the
    keyboard separated by spaces. The three values
    are interpreted as representing the lengths of
    the sides of a triangle. The program prints a
    message that states whether the triangle is
    scalene, isosceles or equilateral.
  • Write a set of test cases to adequately test this
    program

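One possible answer to the exercise, as a sketch: the classification logic and the starter test set below are mine, not from the slides, and a thorough test set would also cover non-integer and malformed input.

```python
def triangle(a, b, c):
    """Classify side lengths as scalene, isosceles, or equilateral."""
    if a <= 0 or b <= 0 or c <= 0:
        return "not a triangle"
    # triangle inequality: each side must be shorter than the other two combined
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Starter test cases: one per valid class, plus degenerate inputs
cases = [((3, 4, 5), "scalene"), ((2, 2, 3), "isosceles"),
         ((4, 4, 4), "equilateral"), ((1, 2, 3), "not a triangle"),
         ((0, 1, 1), "not a triangle")]
for args, expected in cases:
    assert triangle(*args) == expected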
56
Static and Dynamic V&V
[Diagram: static V&V (inspections) applies to the requirements
specification, high-level design, detailed design, and program;
dynamic V&V (testing) applies to the program and prototype]
57
Techniques
  • Static Techniques
  • Inspection
  • Analysis
  • Formal verification
  • Dynamic Techniques
  • Testing

58
SE-CMM PA 07: Verify & Validate System
  • Verification perform comprehensive evaluations
    to ensure that all work products meet
    requirements
  • Address all work products, from user needs and
    expectations through production and maintenance
  • Validation - meeting customer needs - continues
    throughout product lifecycle

59
V&V Base Practices
  • Establish plans for V&V
  • objectives, resources, facilities, special
    equipment
  • come up with master test plan
  • Define the work products to be tested
    (requirements, design, code) and the methods
    (reviews, inspections, tests) that will be used
    to verify
  • Define verification methods
  • test case input, expected results, criteria
  • connect requirements to tests

60
V&V Base Practices...
  • Define how to validate the system
  • includes customer as user/operator
  • test conditions
  • test environment
  • simulation conditions
  • Perform V&V and capture results
  • inspection results, test results, exception
    reports
  • Assess success
  • compare results against expected results
  • success or failure?

61
Testing is...
  • The process of executing a program with the
    intent of finding defects
  • This definition implies that testing is a
    destructive process - often going against the
    grain of what software developers do, i.e.,
    construct and build software
  • A successful test run is NOT one in which no
    errors are found

62
Test Cases
  • A successful test case finds an error
  • An unsuccessful test case is one that causes the
    program to produce the correct result
  • Analogy: feeling ill, going to the doctor, paying
    $300 for a lab test only to be told that you're
    OK!

63
Testing demonstrates the presence, not the absence,
of faults
64
Iterative Testing Process
User Testing
Component Testing
Integration Testing
65
It is impossible to completely test a program
66
Testing and Time
  • Exhaustive testing is impossible for any program
    with low to moderate complexity
  • Testing must focus on a subset of possible test
    cases
  • Test cases should be systematically derived, not
    random

67
Testing Strategies
  • Top Down testing
  • use with top-down programming; stubs required;
    difficult to generate output
  • Bottom Up testing
  • requires driver programs; often combined with
    top-down testing
  • Stress testing
  • test system overload; often want system to
    fail-soft rather than shut down
  • often finds unexpected combinations of events

68
Test-Support Tools
  • Scaffolding
  • code created to help test the software
  • Stubs
  • a dummied-up low-level routine that can be
    called by a higher-level routine

69
Stubs Can Vary in Complexity
  • Return, no action taken
  • Test the data fed to it
  • Print/echo input data
  • Get return values from interactive input
  • Return standard answer
  • Burn up clock cycles
  • Function as slow, fat version of ultimate routine

70
Driver Programs
  • Fake (testing) routine that calls other real
    routines
  • Drivers can
  • call with fixed set of inputs
  • prompt for input and use it
  • take arguments from command line
  • read arguments from file
  • main() can be a driver - then remove it with
    preprocessor statements. Code is unaffected

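The driver variants above can be sketched as follows. This is a minimal Python analog: the routine under test (`tokenize`) is hypothetical, and the `if __name__ == "__main__"` guard plays the role the slide gives to a preprocessor-removable main() - the module's code is unaffected when imported normally.

```python
import sys

def tokenize(line):
    """Real routine under test: split a line into whitespace-separated tokens."""
    return line.split()

def driver():
    """Fake (testing) routine that calls the real routine.

    Runs a fixed set of inputs, or takes inputs from the command line,
    mirroring two of the driver variants listed above."""
    fixed_inputs = ["a b c", "  leading spaces", ""]
    inputs = sys.argv[1:] or fixed_inputs
    for line in inputs:
        print(repr(line), "->", tokenize(line))

if __name__ == "__main__":   # driver runs only when invoked directly;
    driver()                 # importing the module leaves it inert
```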
71
System Tests Should be Incremental
[Diagram: modules A and B, exercised by test 1 and test 2]
72
System Tests Should be Incremental
[Diagram: modules A and B, exercised by tests 1-3]
73
System Tests Should be Incremental
[Diagram: modules A and B, exercised by tests 1-4]
Not Big-Bang
74
Approaches to Testing
  • White Box testing
  • based on the implementation - the structure of
    the code; also called structural testing
  • Black Box testing
  • based on a view of the program as a function of
    input and output; also called functional testing
  • Interface Testing
  • derived from program specification and knowledge
    of interfaces

75
White Box (Structural) Testing
  • Testing based on the structure of the code

if x then j = 2 else k = 5 ...
start with actual program code
76
White Box (Structural) Testing
  • Testing based on the structure of the code

Test data
Tests
if x then j = 2 else k = 5 ...
Test output
77
White Box Technique: Basis Path Testing
  • Objective:
  • test every independent execution path through the
    program
  • If every independent path has been executed then
    every statement will be executed
  • All conditional statements are tested for both
    true and false conditions
  • The starting point for path testing is the flow
    graph

78
Flow Graphs
if then else
79
Flow Graphs
if then else
loop while
80
Flow Graphs
if then else
loop while
case of
81
How many paths thru this program?
  • 1) j = 2
  • 2) k = 5
  • 3) read (a)
  • 4) if a = 2
  • 5) then j = a
  • 6) else j = a * k
  • 7) a = a + 1
  • 8) j = j + 1
  • 9) print (j)

82
How many paths thru this program?
1) j = 2  2) k = 5  3) read (a)  4) if a = 2
5) then j = a  6) else j = a * k  7) a = a + 1
8) j = j + 1  9) print (j)
[Flow graph: statements 7, 8, 9 form a single node]
83
How Many Independent Paths?
  • An independent path introduces at least one new
    statement or condition to the collection of
    already existing independent paths
  • Cyclomatic Complexity (McCabe)
  • For programs without GOTOs,
  • Cyclomatic Complexity = Number of decision nodes
    + 1

also called predicate nodes
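As a worked check of the formula, a minimal Python sketch (the function name is mine):

```python
def cyclomatic_complexity(decision_nodes):
    """McCabe, for programs without GOTOs: decision nodes + 1."""
    return decision_nodes + 1

# The 9-statement example program has one decision node (the if at
# statement 4), so it needs two basis paths - hence two tests.
assert cyclomatic_complexity(1) == 2
```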
84
The Number of Paths
  • Cyclomatic Complexity gives an upper bound on the
    number of tests that must be executed in order to
    cover all statements
  • To test each path requires
  • test data to trigger the path
  • expected results to compare against

85
1) j = 2  2) k = 5  3) read (a)  4) if a = 2
5) then j = a  6) else j = a * k  7) a = a + 1
8) j = j + 1  9) print (j)
[Flow graph: statements 7, 8, 9 form a single node]
86
Test 1: input = 2, expected output = 3
1) j = 2  2) k = 5  3) read (a)  4) if a = 2
5) then j = a  6) else j = a * k  7) a = a + 1
8) j = j + 1  9) print (j)
[Flow graph: statements 7, 8, 9 form a single node]
87
Test 1: input = 2, expected output = 3
Test 2: input = 10, expected output = 51
1) j = 2  2) k = 5  3) read (a)  4) if a = 2
5) then j = a  6) else j = a * k  7) a = a + 1
8) j = j + 1  9) print (j)
[Flow graph: statements 7, 8, 9 form a single node]
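Transcribed into runnable Python - a sketch assuming the stripped operators read `a = 2` on line 4 and `j = a * k` on line 6, the readings that reproduce both expected outputs:

```python
def program(a):
    """The 9-statement example program from the slides."""
    j = 2
    k = 5
    if a == 2:
        j = a          # then-branch
    else:
        j = a * k      # else-branch
    a = a + 1
    j = j + 1
    return j           # stands in for print(j)

# The two basis-path tests from the slides
assert program(2) == 3     # Test 1: input 2, expected output 3
assert program(10) == 51   # Test 2: input 10, expected output 51
```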
88
What Does Statement Coverage Tell You?
  • All statements have been executed at least once

89
What Does Statement Coverage Tell You?
  • All statements have been executed at least once

Coverage testing may create the illusion that the
software has been comprehensively tested
90
The Downside of Statement Coverage
  • Path testing results in the execution of every
    statement
  • BUT, not all possible combinations of paths thru
    the program
  • There are an infinite number of possible path
    combinations in programs with loops

91
The Downside of Statement Coverage
  • The number of paths is usually proportional to
    program size making it useful only at the unit
    test level

92
Black Box Testing
93
Forget the code details!
94
Forget the code details!
Treat the program as a Black Box
In
Out
95
Black Box Testing
  • Aim is to test all functional requirements
  • Complementary, not a replacement for White Box
    Testing
  • Black box testing typically occurs later in the
    test cycle than white box testing

96
Defect Testing Strategy
[Diagram: input test data, chosen to locate the inputs causing
erroneous output, flows through the system and produces the
outputs that indicate defects]
97
Black Box Techniques
  • Equivalence partitioning
  • Boundary value testing

98
Equivalence Partitioning
  • Data falls into categories
  • Positive and Negative Numbers
  • Strings with / without blanks
  • Programs often behave in a comparable way for all
    values in a category -- also called an
    equivalence class

99
[Diagram: partitions of invalid and valid inputs feeding the
System]
100
Choose test cases from partitions
[Diagram: test cases chosen from each invalid and valid input
partition, feeding the System]
101
Specification determines Equivalence Classes
  • Program accepts 4 to 8 inputs
  • Each is a 5-digit value greater than 10,000

102
Specification determines Equivalence Classes
  • Program accepts 4 to 8 inputs
  • Each is a 5-digit value greater than 10,000

less than 4
4 thru 8
more than 8
103
Specification determines Equivalence Classes
  • Program accepts 4 to 8 inputs
  • Each is a 5-digit value greater than 10,000

104
Specification determines Equivalence Classes
  • Program accepts 4 to 8 inputs
  • Each is a 5-digit value greater than 10,000

105
Specification determines Equivalence Classes
  • Program accepts 4 to 8 inputs
  • Each is a 5-digit value greater than 10,000

106
Boundary Value Analysis
  • Complements equivalence partitioning
  • Select test cases at the boundaries of a class
  • Range Boundary a..b
  • test just below a and just above b
  • Input specifies 4 values
  • test 3 and 5
  • Output that is limited should be tested above and
    below limits

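The two techniques can be combined on the running specification (4 to 8 inputs, each a 5-digit value greater than 10,000). A minimal sketch; the helper names are mine:

```python
def boundary_counts(lo=4, hi=8):
    """Input counts to test at the edges of the valid range lo..hi:
    just below, at each edge, and just above."""
    return [lo - 1, lo, hi, hi + 1]

def value_classes():
    """Representative values from each equivalence class of a single input."""
    return {
        "invalid_low": 10000,    # boundary: not greater than 10,000
        "valid_min":   10001,    # smallest valid 5-digit value
        "valid":       54321,    # interior of the valid class
        "invalid_high": 100000,  # boundary: six digits
    }

assert boundary_counts() == [3, 4, 8, 9]
```

Each count in `boundary_counts()` paired with one value per class gives a compact suite that covers both the partitions and their boundaries.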
107
Other Testing Strategies
  • Array testing
  • Data flow testing
  • GUI testing
  • Real-time testing
  • Documentation testing

108
Arrays
  • Test software with arrays of one value
  • Use different arrays of different sizes in
    different tests
  • Derive tests so that the first, last and middle
    elements are tested

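The guidelines above can be sketched as a small helper for deriving the element positions to exercise (the function name is mine):

```python
def positions_to_test(arr):
    """Positions covering the first, middle, and last elements."""
    if not arr:
        return []
    n = len(arr)
    return sorted({0, n // 2, n - 1})   # a set removes duplicates for tiny arrays

# Arrays of one value and of different sizes, per the guidelines
assert positions_to_test([7]) == [0]
assert positions_to_test([1, 2, 3, 4, 5]) == [0, 2, 4]
assert positions_to_test([]) == []
```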
109
Data Flow testing
  • Based on the idea that data usage is at least as
    error-prone as control flow
  • Boris Beizer claims that at least half of all
    modern programs consist of data declarations and
    initializations

110
Data Can Exist in One of Three States
  • Defined
  • initialized, not used
  • a = 2
  • Used
  • x = a + b + c
  • z = sin(a)
  • Killed
  • free (a)
  • end of for loop or block where it was defined

111
Entering & Exiting
  • Terms describing the context of a routine before
    doing something to a variable
  • Entered
  • control flow enters the routine before the
    variable is acted upon
  • Exited
  • control flow leaves the routine immediately after
    the variable is acted upon

112
Data Usage Patterns
  • Normal
  • define variable, use one or more times, perhaps
    kill it
  • Abnormal Patterns
  • Defined-Defined
  • Defined-Exited
  • if local, why?
  • Defined-Killed
  • wasteful if not strange

113
More Abnormal Patterns
  • Entered-Killed
  • Entered-Used
  • should be defined before use
  • Killed-Killed
  • double kills are fatal for pointers
  • Killed-Used
  • what are you really using?
  • Used-Defined
  • what's its value?

114
Define-Use Testing
  • if (condition-1)
  •   x = a
  • else
  •   x = b
  • if (condition-2)
  •   y = x + 1
  • else
  •   y = x - 1

Path Testing: Test 1: condition-1 TRUE, condition-2 TRUE;
Test 2: condition-1 FALSE, condition-2 FALSE.
This WILL EXERCISE EVERY LINE OF CODE, but will NOT test the
DEF-USE combinations x=a / y=x-1 and x=b / y=x+1
115
GUIs
  • Are complex to test because of their event-driven
    character
  • Windows
  • moved, resized, and scrolled?
  • regenerated when overwritten and then recalled?
  • menu bars change when window is active?
  • multiple window functionality available?

116
GUI.. Menus
  • Menu bars in context?
  • Submenus - listed and working?
  • Are names self explanatory?
  • Is help context sensitive?
  • Cursor changes with operations?

117
Testing Documentation
  • Great software with lousy documentation can kill
    a product
  • Documentation testing should be part of every
    test plan
  • Two phases
  • review for clarity
  • review for correctness

118
Documentation Issues
  • Focus on functional usage?
  • Are descriptions of interaction sequences
    accurate?
  • Are examples used?
  • Is it easy to find how to do something?
  • Is there a troubleshooting section?
  • Is it easy to look up error codes?
  • TOC and index?

119
Real-Time Testing
  • Needs white box and black box PLUS
  • consideration of states, events, interrupts and
    processes
  • Events will often have different effects
    depending on state
  • Looking at event sequences can uncover problems

120
Real-Time Issues
  • Task Testing
  • test each task independently
  • Event testing
  • test each separately then in context of state
    diagrams
  • scenario sequences and random sequences
  • Intertask testing
  • Ada rendezvous
  • message queuing, buffer overflow

121
Other Testing Terms
  • Statistical testing
  • running the program against expected usage
    scenarios
  • Regression testing
  • retesting the program after modification
  • Defect testing
  • trying to find defects (aka bugs)
  • Debugging
  • the process of discovering and removing defects

122
Summary: V&V
  • Verification
  • Are we building the system right?
  • Validation
  • Are we building the right system?
  • Testing is part of V&V
  • V&V is more than testing...
  • V&V is plans, testing, reviews, methods,
    standards, and measurement

123
Testing Principles
  • A necessary part of a test case is a definition
    of the expected output or result
  • the eye often sees what it wants to see
  • Programmers should avoid testing their own code
  • Organizations should not test their own programs
  • Thoroughly inspect the results of each test

124
Testing Principles
  • Test invalid as well as valid conditions
  • The probability of errors in a section of code is
    proportional to the number of errors already
    found there

125
Testing Principles
  • Tests should be traceable to customer
    requirements
  • Tests should be planned before testing begins
  • The Pareto principle applies - 80% of all errors
    are in 20% of the code
  • Begin small, scale up
  • Exhaustive testing is not possible
  • The best testing is done by a 3rd party

126
Guidelines
  • Testing capabilities is more important than
    testing components
  • users have a job to do; tests should focus on
    things that interfere with getting the job done,
    not minor irritations
  • Testing old capabilities is more important than
    testing new features
  • Testing typical situations is more important than
    testing boundary conditions

127
System Testing
  • Ian Sommerville

128
System Testing
  • Testing the system as a whole to validate that it
    meets its specification and the objectives of its
    users

129
Development testing
  • Hardware and software components should be
    tested
  • as they are developed
  • as sub-systems are created.
  • These testing activities include
  • Unit testing.
  • Module testing
  • Sub-system testing

130
Development testing
  • These tests do not cover
  • Interactions between components or sub-systems
    where the interaction causes the system to behave
    in an unexpected way
  • The emergent properties of the system

131
System testing
  • Testing the system as a whole instead of
    individual system components
  • Integration testing
  • As the system is integrated, it is tested by the
    system developer for specification compliance
  • Stress testing
  • The behavior of the system is tested under
    conditions of load

132
System testing
  • Acceptance testing
  • The system is tested by the customer to check if
    it conforms to the terms of the development
    contract
  • System testing reveals errors which were
    undiscovered during testing at the component level

133
System Test Flow
[Diagram: V-model of test planning - the requirements
specification drives the acceptance test plan, the system
specification drives the system integration test plan, and the
system design drives the sub-system integration test plan;
detailed design feeds unit code and test, followed by sub-system
integration test, system integration test, acceptance test, and
service]
134
Integration testing
  • Concerned with testing the system as it is
    integrated from its components
  • Integration testing is normally the most
    expensive activity in the system integration
    process

135
Integration testing
  • Should focus on
  • Interface testing where the interactions between
    sub-systems and components are tested
  • Property testing where system properties such as
    reliability, performance and usability are tested

136
Integration Test Planning
  • Integration testing is complex and time-consuming
    and planning of the process is essential
  • The larger the system, the earlier this planning
    must start and the more extensive it must be
  • Integration test planning may be the
    responsibility of a separate IV&V (independent
    verification and validation) team
  • or a group which is separate from the development
    team

137
Test planning activities
  • Identify possible system tests using the
    requirements document
  • Prepare test cases and test scenarios to run
    these system tests
  • Plan the development, if required, of tools such
    as simulators to support system testing
  • Prepare, if necessary, operational profiles for
    the system
  • Schedule the testing activities and estimate
    testing costs

138
Interface Testing
  • Within a system there may be literally hundreds
    of different interfaces of different types.
    Testing these is a major problem.
  • Interface tests should not be concerned with the
    internal operation of the sub-system although
    they can highlight problems which were not
    discovered when the sub-system was tested as an
    independent entity.

139
Two levels of interface testing
  • Interface testing during development when the
    developers test what they understand to be the
    sub-system interface
  • Interface testing during integration where the
    interface, as understood by the users of the
    subsystem, is tested.

140
Two levels of interface testing
  • What developers understand as the system
    interface and what users understand by this are
    not always the same thing.

141
Interface Testing
[Diagram: test cases exercising the interfaces between sub-systems
A, B, and C]
142
Interface Problems
  • Interface problems often arise because of poor
    communications within the development team or
    because of poor change management procedures
  • Typically, an interface definition is agreed
    but, for good reasons, this has to be changed
    during development

143
Interface Problems
  • To allow other parts of the system to cope with
    this change, they must be informed of it
  • It is very common for changes to be made and for
    potential users of the interface to be unaware of
    these changes
  • problems then emerge during interface
    testing

144
What is an interface?
  • An agreed mechanism for communication between
    different parts of the system
  • System interface classes
  • Hardware interfaces
  • Involving communicating hardware units
  • Hardware/software interfaces
  • Involving the interaction between hardware and
    software

145
What is an interface?
  • Software interfaces
  • Involving communicating software components or
    sub-systems
  • Human/computer interfaces
  • Involving the interaction of people and the
    system
  • Human interfaces
  • Involving the interactions between people in the
    process

146
Hardware interfaces
  • Physical-level interfaces
  • Concerned with the physical connection of
    different parts of the system e.g. plug/socket
    compatibility, physical space utilization, wiring
    correctness, etc.
  • Electrical-level interfaces
  • Concerned with the electrical/electronic
    compatibility of hardware units i.e. can a signal
    produced by one unit be processed by another unit

147
Hardware interfaces
  • Protocol-level interfaces
  • Concerned with the format of the signals
    communicated between hardware units

148
Software interfaces
  • Parameter interfaces
  • Software units communicate by setting pre-defined
    parameters
  • Shared memory interfaces
  • Software units communicate through a shared area
    of memory
  • Software/hardware interfaces are usually of this
    type

149
Software interfaces
  • Procedural interfaces
  • Software units communicate by calling pre-defined
    procedures
  • Message passing interfaces
  • Software units communicate by passing messages to
    each other

150
Parameter Interfaces
[Diagram: Subsystem 1 passes parameters to Subsystem 2]
151
Shared Memory Interfaces
[Diagram: sub-systems SS1, SS2, and SS3 communicating through a
shared memory area]
152
Procedural Interfaces
[Diagram: Subsystem 1 calls procedures exposed by Subsystem 2]
153
Message Passing Interfaces
[Diagram: Subsystem 1 and Subsystem 2 exchange messages]
154
Interface errors
  • Interface misuse
  • A calling component calls another component and
    makes an error in its use of its interface, e.g.,
    parameters in the wrong order
  • Interface misunderstanding
  • A calling component embeds assumptions about the
    behavior of the called component which are
    incorrect

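A minimal sketch of interface misuse; the `transfer(amount_cents, account_id)` interface below is hypothetical, invented for illustration:

```python
def transfer(amount_cents, account_id):
    """Called component: expects (amount, account) in this order."""
    if not isinstance(account_id, str):
        raise TypeError("account_id must be a string")
    return {"account": account_id, "amount": amount_cents}

# Interface misuse: the caller passes the parameters in the wrong order
try:
    transfer("ACC-42", 1500)
except TypeError as e:
    print("misuse detected:", e)

# Correct use of the same interface
assert transfer(1500, "ACC-42") == {"account": "ACC-42", "amount": 1500}
```

Here the misuse happens to be caught by a type check; with two parameters of the same type it would pass silently, which is why interface tests deliberately probe argument ordering.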
155
Interface errors
  • Timing errors
  • The calling and called component operate at
    different speeds and out-of-date information is
    accessed

156
Stress testing
  • Exercises the system beyond its maximum design
    load
  • The argument for stress testing is that system
    failures are most likely to show themselves at
    the extremes of the system's behavior
  • Tests failure behavior
  • When a system is overloaded, it should degrade
    gracefully rather than fail catastrophically

157
Stress testing
  • Particularly relevant to distributed systems
  • As the load on the system increases, so too does
    the network traffic. At some stage, the network
    is likely to become swamped and no useful work
    can be done

158
Acceptance testing
  • The process of demonstrating to the customer that
    the system is acceptable
  • Based on real data drawn from customer sources.
    The system must process this data as required by
    the customer if it is to be acceptable

159
Acceptance testing
  • Generally carried out by customer and system
    developer together
  • May be carried out before or after a system has
    been installed

160
Performance testing
  • Concerned with checking that the system meets its
    performance requirements
  • Number of transactions processed per second
  • Response time to user interaction
  • Time to complete specified operations
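The "time to complete specified operations" metric can be captured with a minimal timing hook around the operation under test; this sketch uses the standard clock() call (which measures CPU time), and the sample operation is a stand-in:

```c
#include <time.h>

static volatile long long sink;     /* defeats dead-code elimination */

/* Stand-in for the operation whose performance is being measured. */
static void sample_operation(void) {
    long long s = 0;
    for (long long i = 0; i < 100000; i++) s += i;
    sink = s;
}

/* Returns the CPU seconds consumed by one call to op -- a crude
   version of the logging software the next slide describes. */
double time_operation(void (*op)(void)) {
    clock_t start = clock();
    op();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```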

161
Performance testing
  • Generally requires some logging software to be
    associated with the system to measure its
    performance
  • May be carried out in conjunction with stress
    testing using simulators developed for stress
    testing

162
Reliability testing
  • The system is presented with a large number of
    typical inputs and its response to these inputs
    is observed
  • The reliability of the system is based on the
    number of incorrect outputs which are generated
    in response to correct inputs
  • The profile of the inputs (the operational
    profile) must match the real input probabilities
    if the reliability estimate is to be valid
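The reliability figure described above is just the fraction of correct outputs over a run of typical inputs; a trivial sketch of the arithmetic (the function name is mine):

```c
/* Reliability estimate: 1 - (incorrect outputs / total runs), for a
   run of inputs drawn from the operational profile.  Returns 0.0 for
   an empty run rather than dividing by zero. */
double estimate_reliability(int failures, int total_runs) {
    if (total_runs <= 0) return 0.0;
    return 1.0 - (double)failures / (double)total_runs;
}
```

As the slide notes, this estimate is only as good as the match between the test input profile and the real operational profile.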

163
Security testing
  • Security testing is concerned with checking that
    the system and its data are protected from
    accidental or malicious damage
  • Unlike other types of testing, security cannot
    really be demonstrated by planned system tests
    alone. The system must be secure against
    unanticipated as well as anticipated attacks

164
Security testing
  • Security testing may be carried out by inviting
    people to try to penetrate the system through
    security loopholes

165
Some Costly and Famous Software Failures
166
Mariner 1 Venus probe loses its way 1962
167
Mariner 1
  • A probe launched from Cape Canaveral was set to
    go to Venus
  • After takeoff, the unmanned rocket carrying the
    probe went off course, and NASA had to blow up
    the rocket to avoid endangering lives on earth
  • NASA later attributed the error to a faulty line
    of Fortran code

168
Mariner 1
  • ... a hyphen had been dropped from the guidance
    program loaded aboard the computer, allowing the
    flawed signals to command the rocket to veer left
    and nose down
  • The vehicle cost more than $80 million, prompting
    Arthur C. Clarke to refer to the mission as "the
    most expensive hyphen in history."

169
Therac 25 Radiation Machine
170
Radiation machine kills four 1985 to 1987
  • Faulty software in a Therac-25 radiation-treatment
    machine made by Atomic Energy of Canada Limited
    (AECL) resulted in several cancer patients
    receiving lethal overdoses of radiation
  • Four patients died

171
Radiation machine kills four 1985 to 1987
  • A lesson to be learned from the Therac-25 story
    is that focusing on particular software bugs is
    not the way to make a safe system,
  • "The basic mistakes here involved poor software
    engineering practices and building a machine that
    relies on the software for safe operation."

172
AT&T long distance service fails
173
AT&T long distance service fails 1990
  • Switching errors in AT&T's call-handling
    computers caused the company's long-distance
    network to go down for nine hours, the worst of
    several telephone outages in the history of the
    system
  • The meltdown affected thousands of services and
    was eventually traced to a single faulty line of
    code

174
Patriot missile
175
Patriot missile misses 1991
  • The U.S. Patriot missile battery was designed
    to intercept Iraqi Scuds during the Gulf War
  • System also failed to track several incoming Scud
    missiles, including one that killed 28 U.S.
    soldiers in a barracks in Dhahran, Saudi Arabia

176
Patriot missile misses 1991
  • The problem stemmed from a software error that
    put the tracking system off by 0.34 seconds
  • System was originally supposed to be operated for
    only 14 hours at a time
  • In the Dhahran attack, the missile battery had
    been on for 100 hours
  • Errors in the system's clock accumulated to the
    point that the tracking system no longer
    functioned

177
Pentium chip
178
Pentium chip fails math test 1994
  • Pentium chip gave incorrect answers to certain
    complex equations
  • The bug occurred rarely and affected only a tiny
    percentage of Intel's customers
  • Intel offered to replace the affected chips,
    which cost the company $450 million
  • Intel then started publishing a list of known
    "errata," or bugs, for all of its chips

179
New Denver airport
180
New Denver airport misses its opening 1995
  • The Denver International Airport was intended to
    be a state-of-the-art airport, with a complex,
    computerized baggage-handling system and 5,300
    miles of fiber-optic cabling
  • Bugs in the baggage system caused suitcases to be
    chewed up and drove automated baggage carts into
    walls

181
New Denver airport misses its opening 1995
  • The airport eventually opened 16 months late,
    $3.2 billion over budget, and with a mainly
    manual baggage system

182
The millennium bug 2000
  • No need to discuss this !!

183
Ariane 5 Rocket
184
Ariane 5
  • The failure of the Ariane 501 was caused by the
    complete loss of guidance and attitude
    information 37 seconds after start of the main
    engine ignition sequence (30 seconds after lift-
    off)
  • This loss of information was due to specification
    and design errors in the software of the inertial
    reference system

185
Ariane 5
  • The extensive reviews and tests carried out
    during the Ariane 5 Development Programme did not
    include adequate analysis and testing of the
    inertial reference system or of the complete
    flight control system, which could have detected
    the potential failure

186
More on Testing
From Beatty ESC 2002
187
Agenda
  • Introduction
  • Types of software errors
  • Finding errors methods and tools
  • Embedded systems and RT issues
  • Risk management and process

188
Introduction
  • Testing is expensive
  • Testing progress can be hard to predict
  • Embedded systems have different needs
  • Desire for best practices

189
Method
  • Know what you are looking for
  • Learn how to effectively locate problems
  • Plan to succeed manage risk
  • Customize and optimize the process

190
Entomology
  • What are we looking for ?
  • How are bugs introduced?
  • What are their consequences?

191
Entomology - Bug Frequency
  • Rare
  • Less common
  • More common
  • Common

192
Entomology - Bug severity
  • Non-functional - doesn't affect object code
  • Low - correct problem when convenient
  • High - correct as soon as possible
  • Critical - change MUST be made
  • Safety related or legal issue
  • Domain specific!

193
Entomology - Sources
  • Non-implementation error sources
  • Specifications
  • Design
  • Hardware
  • Compiler errors
  • Frequency: common - 45% to 65%
  • Severity: non-functional to critical

194
Entomology - Sources
  • Poor specifications and designs are often
  • Missing
  • Ambiguous
  • Wrong
  • Needlessly complex
  • Contradictory

Testing can fix these problems!
195
Entomology - Sources
  • Implementation error sources
  • Algorithmic/processing bugs
  • Data bugs
  • Real-time bugs
  • System bugs
  • Other bugs

Bugs may fit in more than one category!
196
Entomology - Algorithm Bugs
  • Parameter passing
  • Common only in complex invocations
  • Severity varies
  • Return codes
  • Common only in complex functions or libraries
  • Reentrance problem
  • Less common
  • Critical

197
Entomology - Algorithm Bugs
  • Incorrect control flow
  • Common
  • Severity varies
  • Logic/math/processing error
  • Common
  • High
  • Off by 1
  • Common
  • Varies, but typically high

198
Example of logic error
If (( this AND that ) OR ( that AND other ) AND
NOT ( this AND other ) AND NOT ( other OR NOT another ))

Boolean operations and mathematical calculations
can be easily misunderstood in complicated
algorithms!
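One way such an expression goes wrong in C is operator precedence: && binds tighter than ||, so the expression above does not group left to right. This sketch (variable names from the slide, with an underscore added since "this" is awkward in C-family code) contrasts the two readings with full parentheses; the specific truth assignment is an illustrative assumption:

```c
/* How C precedence actually groups the slide's expression
   (&& binds tighter than ||). */
int grouped_by_c_rules(int this_, int that, int other, int another) {
    return (this_ && that) ||
           ((that && other) && !(this_ && other) &&
            !(other || !another));
}

/* How a reader scanning left to right might expect it to group. */
int grouped_left_to_right(int this_, int that, int other, int another) {
    return ((this_ && that) || (that && other)) &&
           !(this_ && other) && !(other || !another);
}
```

For this_=1, that=1, other=0, another=0 the two readings disagree, which is exactly the kind of latent logic bug a review or truth-table test should catch.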
199
Example of off by 1
for ( x = 0; x <= 10; x++ )

This will execute 11 times, not 10!

for ( x = array_min; x < array_max; x++ )

If the intention is to set x to array_max on the
last pass through the loop, then this is in error!

Be careful when switching between a 1-based
language (Pascal, Fortran) and a zero-based one (C)
200
Entomology - Algorithm bugs
  • Math underflow/overflow
  • Common with integer or fixed point math
  • High severity
  • Be careful when switching between floating point
    and fixed point processors
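On a 16-bit target, 30000 + 30000 silently wraps to a negative value. A common defensive idiom is to widen before adding and saturate the result; a sketch (the helper name is mine):

```c
#include <stdint.h>

/* Saturating 16-bit add: widens to 32 bits so the true sum is
   representable, then clamps to the int16_t range instead of
   wrapping around. */
int16_t sat_add16(int16_t a, int16_t b) {
    int32_t sum = (int32_t)a + (int32_t)b;
    if (sum > INT16_MAX) return INT16_MAX;
    if (sum < INT16_MIN) return INT16_MIN;
    return (int16_t)sum;
}
```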

201
Entomology - Data bugs
  • Improper variable initialization
  • Less common
  • Varies typically low
  • Variable scope error
  • Less common
  • Low to high

202
Example - Uninitialized data
int some_function( int some_param )
{
    int j;

    if (some_param > 0) {
        for (j = 0; j < 3; j++) {
            /* iterate through some process */
        }
    }
    else if (some_param < -10) {
        some_param += j;      /* j is uninitialized */
        return some_param;
    }
    return 0;
}
203
Entomology - Data bugs
  • Data synchronization error
  • Less common
  • Varies typically high

204
Example - synchronized data

struct state {                /* an interrupt will trigger     */
    GEAR_TYPE gear;           /* sending snapshot in a message */
    U16 speed;
    U16 speed_limit;
    U8  last_error_code;
} snapshot;

snapshot.speed = new_speed;        /* somewhere in code */
snapshot.gear  = new_gear;

/* somewhere else */
snapshot.speed_limit = speed_limit_tbl[gear];

Interrupt splitting these two would be bad
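The usual defense is to update every field of the shared snapshot inside a critical section so an interrupt can never observe a half-updated struct. A sketch, where the interrupt-masking macros are empty stand-ins for the target's real intrinsics (e.g. __disable_irq()/__enable_irq() on some platforms):

```c
/* Stubs: on a real target these would mask/unmask interrupts. */
#define DISABLE_INTERRUPTS()
#define ENABLE_INTERRUPTS()

typedef struct { int gear; int speed; int speed_limit; } state_t;

static state_t snapshot;

/* All fields change together inside the critical section, so gear and
   speed_limit can never be seen out of step by an interrupt handler. */
void update_snapshot(int gear, int speed, int speed_limit) {
    DISABLE_INTERRUPTS();
    snapshot.gear        = gear;
    snapshot.speed       = speed;
    snapshot.speed_limit = speed_limit;
    ENABLE_INTERRUPTS();
}
```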
205
Entomology - Data bugs
  • Improper data usage
  • Common
  • Varies
  • Incorrect flag usage
  • Common when hard-coded constants used
  • varies

206
Example - mixed math error

unsigned int a = 5;
int b = -10;

/* somewhere in code */
if ( a + b > 0 )

a + b is not evaluated as -5 here! The signed int b is
converted to an unsigned int
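This surprise, and the usual fix of an explicit cast back to signed, can be shown with a small self-test (function names are mine):

```c
/* With unsigned a and signed b, a + b is computed in unsigned
   arithmetic: b wraps to a huge unsigned value, so the comparison
   is true even though the "real" sum is negative. */
int mixed_wrong(unsigned int a, int b) {
    return (a + b > 0);
}

/* Casting restores the intended signed comparison. */
int mixed_right(unsigned int a, int b) {
    return ((int)a + b > 0);
}
```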
207
Entomology - Data bugs
  • Data/range overflow/underflow
  • Common in asm and 16 bit micro
  • Low to critical
  • Signed/unsigned data error
  • Common in asm and fixed point math
  • High to critical
  • Incorrect conversion/type cast/scaling
  • Common in complex programs
  • Low to critical

208
Entomology - Data bugs
  • Pointer error
  • Common
  • High to critical
  • Indexing problem
  • Common
  • High to critical
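Indexing bugs earn their "high to critical" rating because an out-of-range write silently corrupts adjacent memory. A common mitigation is a bounds-checked accessor; a sketch with illustrative names:

```c
#include <stddef.h>

#define TABLE_SIZE 8
static int table[TABLE_SIZE];

/* Defensive indexed write: rejects out-of-range indices (including the
   classic off-by-one index == TABLE_SIZE) instead of corrupting
   whatever lives past the end of the array. */
int table_write(size_t index, int value) {
    if (index >= TABLE_SIZE) return -1;
    table[index] = value;
    return 0;
}
```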

209
Entomology - Real-time bugs
  • Task synchronization
  • Waiting, sequencing, scheduling, race conditions,
    priority inversion
  • Less common
  • Varies
  • Interrupt handling
  • Unexpected interrupts
  • Improper return from interrupt
  • Rare
  • critical

210
Entomology - Real-time bugs
  • Interrupt suppression
  • Critical sections
  • Corruption of shared data
  • Interrupt latency
  • Less common
  • critical

211
Entomology - System bugs
  • Stack overflow/underflow
  • Pushing, pulling and nesting
  • More common in asm and complex designs
  • Critical
  • Resource sharing problem
  • Less common
  • High to critical
  • Mars pathfinder

212
Entomology - System bugs
  • Resource mapping
  • Variable maps, register banks, development maps
  • Less common
  • Critical
  • Instrumentation problem
  • Less common
  • low

213
Entomology - System bugs
  • Version control error
  • Common in complex or mismanaged projects
  • High to critical

214
Entomology - Other bugs
  • Syntax/typing
  • if (ptr = NULL), cut & paste errors
  • More common
  • Varies
  • Interface
  • Common
  • High to critical
  • Missing functionality
  • Common
  • high
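The "if (ptr = NULL)" typo listed above assigns instead of comparing, so the null check is always false and the pointer is destroyed. One defensive style puts the constant first, so the same typo fails to compile; a sketch:

```c
#include <stddef.h>

/* Typo-prone form (do not write): if (ptr = NULL) assigns, is always
   false, and silently skips the check.  With the constant first, the
   typo "NULL = ptr" is a compile error instead of a latent bug. */
int check_right(int *ptr) {
    if (NULL == ptr)
        return 1;   /* caller passed a null pointer */
    return 0;
}
```

Many teams instead rely on compiler warnings (e.g. assignment-in-condition warnings) rather than this ordering; either way the point is to make the typo loud.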

215
Entomology - Other bugs
  • Peripheral register initialization
  • Less common
  • Critical
  • Watchdog servicing
  • Less common
  • Critical
  • Memory allocation/de-allocation
  • Common when using malloc(), free()
  • Low to critical
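A common guard against the malloc()/free() bugs above is a free-and-null helper: since free(NULL) is defined to do nothing, nulling the pointer after freeing turns an accidental double-free into a harmless no-op. A sketch (the helper name is mine):

```c
#include <stdlib.h>

/* Frees *pp and nulls it, so a second call through the same pointer
   variable is safe: free(NULL) is a defined no-op. */
void safe_free(void **pp) {
    if (pp != NULL) {
        free(*pp);
        *pp = NULL;
    }
}
```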

216
Entomology - Review
  • What are you looking for ?
  • How are bugs being introduced ?
  • What are their consequences ?
  • Form your own target list!

217
Finding the hidden errors
  • All methods use these basic techniques
  • Review checking
  • Tests demonstrating
  • Analysis proving

These are all referred to as testing!
218
Testing
  • Organized process of identifying variances
    between actual and specified results
  • Goal zero significant defects

219
Testing axioms
  • All software has bugs
  • Programs cannot be exhaustively tested
  • Cannot prove the absence of all errors
  • Complex systems often behave counter-intuitively
  • Software systems are often brittle

220
Finding spec/design problems
  • Reviews / Inspections / Walkthroughs
  • CASE tools
  • Simulation
  • Prototypes

Still need consistently effective methods!
221
Testing Spec/Design Reviews
  • Can be formal or informal
  • Completeness
  • Consistency
  • Feasibility
  • Testability

222
Testing Evaluating methods
  • Relative costs
  • None
  • Low
  • Moderate
  • High
  • General effectiveness
  • Low
  • Moderate
  • High
  • Very high

223
Testing Code reviews
  • Individual review
  • Effectiveness: high
  • Cost: time - low, material - none
  • Group inspections
  • Effectiveness: very high
  • Cost: time - moderate, material - none

224
Testing Code reviews
  • Strengths
  • Early detection of errors
  • Logic problems
  • Math errors
  • Non-testable requirement or paths
  • Weaknesses
  • Individual preparation and experience
  • Focus on details, not big picture
  • Timing and system issues

225
Step by step execution
  • Exercise every line of code or every branch
    condition
  • Look for errors
  • Use simulator, ICE, logic analyzer
  • Effectiveness: moderate - dependent on tester
  • Cost: time is high, material is low or moderate

226
Functional (Black Box)
  • Exercise inputs and examine outputs
  • Test procedures describe expected behavior
  • Subsystems tested and integrated
  • Effectiveness is moderate
  • Cost: time is moderate, material varies

Tip: where functional testing finds problems, look
deeper in that area!
227
Functional (Black Box)
  • Strengths
  • Requirements problems
  • Interfaces
  • Performance issues
  • Most critical/most used features
  • Weaknesses
  • Poor coverage
  • Timing and other problems masked