
TQS - Teste e Qualidade de Software (Software Testing and Quality)
Test Case Design: Black Box Testing
João Pascoal Faria (jpf@fe.up.pt)
  • Introduction
  • Black box testing techniques

Development of test cases
  • Complete testing is impossible
  • → Testing cannot guarantee the absence of faults
  • → How to select a subset of test cases, from all possible test cases, with a high chance of detecting most faults?
  • → Test case design strategies and techniques
  • Because if we have a good test suite, then we can have more confidence in a product that passes that test suite

Quality attributes of good test cases
  • Capability to find defects
  • Particularly defects with higher risk
  • Risk = frequency of failure (manifestation to users) × impact of failure
  • Cost (of post-release failure) ∝ risk
  • Capability to exercise multiple aspects of the system under test
  • Reduces the number of test cases required and the overall cost
  • Low cost
  • Development: specify, design, code
  • Execution
  • Result analysis: pass/fail analysis, defect reporting
  • Easy to maintain
  • Reduces whole life-cycle cost
  • Maintenance cost ∝ size of test artefacts

Adequacy criteria
  • Criteria to decide if a given test suite is adequate, i.e., to give us enough confidence that most of the defects are revealed
  • "Enough" and "most" depend on the product (or product component) criticality
  • In practice, reduced to coverage criteria
  • Requirements / specification coverage
  • At least one test case for each requirement
  • Cover all cases described in an informal specification
  • Cover all statements in a formal specification
  • Model coverage
  • State-transition coverage
  • Cover all states, all transitions, etc.
  • Use-case and scenario coverage
  • At least one test case for each use-case or scenario
  • Code coverage
  • Cover 100% of methods, cover 90% of instructions, etc.

Adequacy criteria
  • The appropriate coverage-based testing strategy depends on the testing profile:
  • For normal operational testing: specification-based coverage, regardless of code coverage
  • For exceptional testing: code coverage is an important metric of testing capability
  • Xia Cai and Michael R. Lyu, "The Effect of Code Coverage on Fault Detection under Different Testing Profiles", ICSE 2005 Workshop on Advances in Model-Based Software Testing

How do we know what percentage of defects have been revealed?
  • Goals that require this information:
  • Find and fix 99% of the defects
  • Reduce the number of defects to less than 10 defects / KLOC
  • Difficulty: the kind of coverage we can measure (statements covered, requirements covered) is not the kind of coverage we would like to measure (defects found and missed) …
  • Techniques:
  • Past experience
  • Mutation testing (defect injection)
  • Static analysis of code samples
  • …

What is confidence?

[Chart: estimated quality level (1 / defect density) vs. precision of the estimate (knowledge). Before testing, based on past experience: low confidence, high risk. After testing, the estimate sharpens (product above or below average; testing more thoroughly sharpens it further); after fixing bugs: medium confidence, medium risk. Reaching high confidence / low risk may be too expensive or too late. Note: the same number of bugs detected can correspond to very different quality levels.]

The optimal amount of testing

[Chart: product quality vs. test suite quality, with regions "many bugs found", "some bugs found", "very few bugs found", and "misperceived product quality": with a weak test suite very few bugs are found, so you may think you are in the high-quality region when you are actually not.]
  • (source: "Software Testing", Ron Patton)

Stop criteria
  • Criteria to decide when to stop testing (and fixing bugs)
  • I.e., stop test-fix iterations and release the product

Stop criteria

[Chart: number of bugs found (benefit) vs. test effort (cost). The curve saturates: 90% of the bugs can be found with 10% of the test effort; stop testing at the saturation point.]

Stop criteria

[Chart: value of bugs found and fixed (benefit) vs. test and repair effort (cost). Stop when the cost exceeds the benefit, when the minimum quality is achieved, or at the end of the business opportunity.]
Stop criteria
  • What bugs should be fixed?
  • If the bugs found are 50% of the total number of bugs, it makes little difference fixing 95% or 100% of the bugs found
  • Some bugs are very difficult to fix
  • Difficult to locate (example: random bugs)
  • Difficult to correct (example: bugs in third-party components)
  • Reduce overall risk
  • Fix all high severity bugs
  • Fix (the easiest) 95% of the medium severity bugs
  • Fix (the easiest) 70% of the low severity bugs

Stop criteria

[Chart: number of bugs over time: bugs found, bugs fixed, and bugs found but not fixed.]
Improve versus assess
  • Different test techniques for different purposes
  • Find defects and improve the software under test
  • Defect testing
  • Choose test cases that have higher probability of
    finding defects, particularly defects with higher
    risk, and provide better hints on how to improve
    the system
  • Boundary cases
  • Frequent errors
  • Code coverage
  • Stress testing
  • Profiling
  • Focus here
  • Assess software quality
  • Statistical testing
  • Estimate quality metrics (reliability,
    availability, efficiency, usability, …)
  • Test cases that represent typical use cases
    (based on usage model/profile)

Test iterations: test-to-pass and test-to-fail
  • First test iterations: test-to-pass
  • check if the software fundamentally works
  • with valid inputs
  • without stressing the system
  • Subsequent test iterations: test-to-fail
  • try to "break" the system
  • with valid inputs but at the operational limits
  • with invalid inputs

(source: Ron Patton)
Benefits of test automation
  • Automatic (vs. manual) test execution
  • Requires that test cases are written in some executable language
  • No impact on defect-finding capability
  • Increases test development costs (coding)
  • May increase maintenance effort/cost (∝ size of test artifacts)
  • Dramatically reduces execution and result analysis (pass/fail analysis) costs
  • Testing can be repeated more frequently
  • Pays off when testing is repeated (test iterations, regression testing)
  • Automatic (vs. manual) test case generation
  • Test goals are provided to a tool
  • Requires a formal specification / model of the system (if test outputs are to be generated automatically, and not only test inputs)
  • Reduces design costs
  • Many more test cases can be generated (than manually)
  • Inferior capability to find defects (per test case), but the overall capability may be similar or superior
Test harness
  • Auxiliary code developed to support testing
  • Test drivers
  • call the target code
  • simulate calling units or a user
  • where test procedures and test cases are coded (for automatic test case execution) or a user interface is created (for manual test case execution)
  • Test stubs
  • simulate called units
  • simulate modules/units/systems called by the target code
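The driver/stub split above can be sketched in a few lines of Python. All names here are hypothetical, chosen only to illustrate the roles: the unit under test depends on a called unit (a tax-rate service), the stub simulates that called unit, and the driver calls the target code with coded test cases.

```python
# Hypothetical unit under test: computes a price with tax,
# depending on a called unit (a tax-rate service).
def price_with_tax(amount, tax_service):
    return round(amount * (1 + tax_service.rate()), 2)

# Test stub: simulates the called unit with a fixed, predictable rate.
class FixedRateStub:
    def rate(self):
        return 0.23

# Test driver: calls the target code with coded test cases
# and checks the results (automatic test case execution).
def run_driver():
    stub = FixedRateStub()
    results = []
    for amount, expected in [(100.0, 123.0), (0.0, 0.0), (1.0, 1.23)]:
        results.append(price_with_tax(amount, stub) == expected)
    return all(results)
```

Because the stub returns a fixed rate, the driver's expected outputs are deterministic even though the real called unit might not be.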

Exhaustive, random, point and parameterized testing
  • Exhaustive testing
  • In general, impossible or infeasible
  • But some parts/features of the system might be tested this way
  • Random testing
  • Useful to generate many test cases, particularly if the test oracle is automated
  • But low probability of exercising boundary cases
  • Repeatability is important
  • Point testing
  • With very specific, well designed test data
  • "Intelligent" testing
  • Focus here
  • Parameterized testing
  • Reuse the same test procedure for different test data (passed as parameters)
  • Tests as (better) specifications
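Parameterized testing can be sketched as one test procedure driven by a table of test data; the table doubles as a specification by examples. The `clamp` function and the data rows below are hypothetical, chosen only to show the pattern.

```python
# Hypothetical unit under test: clamp a value into the range [lo, hi].
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Test data as a table: each row is (x, lo, hi, expected).
# The same test procedure is reused for every row.
CASES = [
    (5, 0, 10, 5),    # inside the range
    (-3, 0, 10, 0),   # below the lower bound
    (42, 0, 10, 10),  # above the upper bound
    (0, 0, 10, 0),    # exactly on the lower boundary
    (10, 0, 10, 10),  # exactly on the upper boundary
]

def test_clamp():
    for x, lo, hi, expected in CASES:
        assert clamp(x, lo, hi) == expected, (x, lo, hi)
```

Test frameworks support this directly (e.g. pytest's parametrize decorator), but the idea is framework-independent: adding a test case means adding a data row, not writing new code.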

Test case design strategies and techniques

Black-box testing (not code-based; sometimes called functional testing)
  • Knowledge sources (tester's view): requirements document, user manual, specifications, models, domain knowledge, defect analysis data, intuition, experience, heuristics
  • Techniques / methods: equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, state-transition testing, random testing, scenario-based testing

White-box testing (also called code-based or structural testing)
  • Knowledge sources (tester's view): program code, control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design
  • Techniques / methods: control flow testing/coverage (statement coverage, branch (or decision) coverage, condition coverage, branch and condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage), data flow testing/coverage, class testing/coverage, mutation testing

(adapted from I. Burnstein, p. 65)
  • Introduction
  • Black box testing techniques

Equivalence class partitioning
  • Divide all possible inputs into classes (partitions) such that:
  • There is a finite number of input equivalence classes
  • You may reasonably assume that:
  • the program behaves analogously for inputs in the same class
  • one test with a representative value from a class is sufficient
  • if the representative detects a defect then other class members would detect the same defect
  • (Can also be applied to outputs)

Equivalence class partitioning
  • Strategy
  • Identify (valid and invalid) input equivalence classes
  • Based on (test) conditions on inputs/outputs in the specification
  • Based on heuristics and experience
  • "input x in 1..10" → classes x < 1, 1 ≤ x ≤ 10, x > 10
  • "enumeration A, B, C" → classes A, B, C, other values
  • "input integer n" → classes n not an integer, n < min, min ≤ n < 0, 0 ≤ n ≤ max, n > max
  • ……
  • Define one (or a couple of) test cases for each class
  • Test cases that cover valid classes (1 test case for 1 or more valid classes)
  • Test cases that cover at most one invalid class (1 test case for 1 invalid class)
  • Usually useful to test for 0 / null / empty and other special cases

Equivalence class partitioning: Example 1
  • Test a function for calculation of the absolute value of an integer x
  • Equivalence classes:

Criteria       Valid eq. classes        Invalid eq. classes
nr of inputs   1 (1)                    0 (2), > 1 (3)
input type     integer (4)              non-integer (5)
particular     x < 0 (6), x ≥ 0 (7)     -

  • Test cases:

x = -10 (covers classes 1, 4, 6)
x = 100 (covers classes 1, 4, 7)
x = (no input) (covers class 2)
x = "10 20" (covers class 3)
x = "XYZ" (covers class 5)
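These classes can be exercised directly in code. Since the function in the example reads free-form input, the sketch below wraps the computation in a hypothetical parsing function, so that the invalid classes (no input, too many inputs, non-integer) can be covered as well; the wrapper and its error convention are assumptions for illustration.

```python
# Hypothetical wrapper that models the example's input handling:
# it receives the raw input line, so invalid classes can be exercised.
def abs_of_input(line):
    tokens = line.split()
    if len(tokens) != 1:          # classes (2) and (3): 0 or > 1 inputs
        return "error"
    try:
        x = int(tokens[0])        # class (5): non-integer input
    except ValueError:
        return "error"
    return abs(x)                 # classes (6) and (7): x < 0, x >= 0

# One representative per equivalence class, as in the slide.
assert abs_of_input("-10") == 10      # covers classes 1, 4, 6
assert abs_of_input("100") == 100     # covers classes 1, 4, 7
assert abs_of_input("") == "error"        # class 2: no input
assert abs_of_input("10 20") == "error"   # class 3: more than one input
assert abs_of_input("XYZ") == "error"     # class 5: non-integer
```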
Equivalence class partitioning: Example 2
  • Test a program that computes the sum of the first N integers, as long as this sum is less than maxint; otherwise an error should be reported. If N is negative, then it takes the absolute value of N.
  • Formally: given integer inputs N and maxint, compute result = Σ k (k = 1..abs(N)); output this result if it does not exceed maxint, error otherwise

Equivalence class partitioning: Example 2
  • Equivalence classes:

Condition       Valid eq. classes             Invalid eq. classes
nr of inputs    2                             < 2, > 2
type of input   int int                       int no-int, no-int int
abs(N)          N ≥ 0, N < 0                  -
maxint          Σ k ≤ maxint, Σ k > maxint    -

  • Test cases:

            maxint   N       result
Valid       100      10      55
            100      -10     55
            10       10      error
Invalid     10       -       error
            10       20 30   error
            XYZ      10      error
            100      9.1E4   error
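A sketch of the example's program and its valid/invalid test cases in Python. The function name and the convention of returning the string "error" are assumptions; following the slide's test cases, a sum equal to maxint is accepted.

```python
# Sketch of the example: sum 1..abs(N), or "error" if the sum
# exceeds maxint or the inputs are not two integers.
def sum_to_n(maxint, n):
    if not (isinstance(maxint, int) and isinstance(n, int)):
        return "error"
    total = sum(range(1, abs(n) + 1))
    return total if total <= maxint else "error"

# Test cases from the equivalence classes above.
assert sum_to_n(100, 10) == 55         # valid: N >= 0, sum <= maxint
assert sum_to_n(100, -10) == 55        # valid: N < 0 (absolute value taken)
assert sum_to_n(10, 10) == "error"     # valid inputs, sum > maxint
assert sum_to_n("XYZ", 10) == "error"  # invalid: non-integer maxint
assert sum_to_n(100, 9.1e4) == "error" # invalid: non-integer N
```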

Boundary value analysis
  • Based on experience / heuristics
  • Testing boundary conditions of equivalence classes is more effective, i.e. values directly on, above, and beneath the edges of classes
  • If a system behaves correctly at boundary values, then it probably will work correctly at "middle" values
  • Choose input boundary values as tests in input classes, instead of, or in addition to, arbitrary values
  • Choose also inputs that invoke output boundary values (values on the boundary of output classes)
  • Example strategy, as an extension of equivalence class partitioning:
  • choose one (or more) arbitrary value(s) in each eq. class
  • choose values exactly on the lower and upper boundaries of each eq. class
  • choose values immediately below and above each boundary (if applicable)

Boundary value analysis
  • "Bugs lurk in corners and congregate at boundaries"
  • Boris Beizer, "Software Testing Techniques"

Boundary value analysis: Example 1
  • Test a function for calculation of the absolute value of an integer
  • Valid equivalence classes:

Condition    Valid eq. classes    Invalid eq. classes
particular   x < 0, x ≥ 0         -

  • Test cases:
  • class x < 0, arbitrary value: x = -10
  • class x ≥ 0, arbitrary value: x = 100
  • class x ≥ 0, on the boundary: x = 0
  • class x < 0, on the boundary: x = -1
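The four test cases above can be run directly against Python's built-in `abs`, which stands in for the function under test:

```python
# Boundary value analysis for abs(x): arbitrary values in each class
# plus values on the boundary between x < 0 and x >= 0.
cases = [
    (-10, 10),   # class x < 0, arbitrary value
    (100, 100),  # class x >= 0, arbitrary value
    (0, 0),      # class x >= 0, on the boundary
    (-1, 1),     # class x < 0, on the boundary
]
for x, expected in cases:
    assert abs(x) == expected
```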

Boundary value analysis A self-assessment test 1
  • A program reads three integer values. The three
    values are interpreted as representing the
    lengths of the sides of a triangle. The program
    prints a message that states whether the triangle
    is scalene (all lengths are different), isosceles
    (two lengths are equal), or equilateral (all
    lengths are equal).

Write a set of test cases to test this program.
Inputs: l1, l2, l3, integers, li > 0, li < lj + lk
Output: error, scalene, isosceles or equilateral
Boundary value analysis: A self-assessment test 1
Test cases for valid inputs and for invalid inputs:
  • valid scalene triangle
  • valid equilateral triangle
  • valid isosceles triangle
  • 3 permutations of the previous
  • side = 0
  • negative side
  • one side is the sum of the others
  • 3 permutations of the previous
  • one side larger than the sum of the others
  • 3 permutations of the previous
  • all sides = 0
  • non-integer input
  • wrong number of values
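The checklist above can be turned into executable tests against a sketch of the triangle program. The implementation below is hypothetical (the slide does not give one), and the "wrong number of values" case is handled by the function arity itself:

```python
# Sketch of the triangle program (hypothetical implementation) and
# boundary-value test cases from the checklist above.
def triangle(l1, l2, l3):
    sides = (l1, l2, l3)
    if any(not isinstance(s, int) for s in sides):
        return "error"
    if any(s <= 0 for s in sides):
        return "error"
    # Triangle inequality: each side must be less than the sum of the others.
    a, b, c = sorted(sides)
    if c >= a + b:
        return "error"
    return {1: "equilateral", 2: "isosceles", 3: "scalene"}[len(set(sides))]

assert triangle(3, 4, 5) == "scalene"        # valid scalene
assert triangle(3, 3, 3) == "equilateral"    # valid equilateral
assert triangle(3, 3, 4) == "isosceles"      # valid isosceles
assert triangle(4, 3, 3) == "isosceles"      # permutation of the previous
assert triangle(0, 4, 5) == "error"          # side = 0
assert triangle(-3, 4, 5) == "error"         # negative side
assert triangle(1, 2, 3) == "error"          # one side is the sum of the others
assert triangle(1, 2, 4) == "error"          # one side larger than the sum
assert triangle(0, 0, 0) == "error"          # all sides = 0
assert triangle(1.5, 2, 3) == "error"        # non-integer input
```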

Boundary value analysis: Example 2
  • Given integer inputs maxint and N, compute result = Σ k (k = 1..abs(N)); output this result if it does not exceed maxint, error otherwise
  • Valid equivalence classes and boundary values:

Condition   Valid eq. classes             Boundary values
abs(N)      N ≥ 0, N < 0                  N = (-2), -1, 0, 1
maxint      Σ k ≤ maxint, Σ k > maxint    Σ k = maxint-1, maxint, maxint+1

Boundary value analysis: Example 2
  • Test cases:

maxint   N    result       maxint   N    result
55       10   55           100      0    0
54       10   error        100      -1   1
56       10   55           100      1    1
0        0    0            …        …    …

  • How to combine the boundary conditions of different inputs? Take all possible boundary combinations? This may blow up ……
Boundary value analysis: Example 3 - search routine specification

procedure Search (Key: ELEM; T: ELEM_ARRAY; Found: in out BOOLEAN; L: in out ELEM_INDEX);
Pre-condition:
-- the array has at least one element
T'FIRST <= T'LAST
Post-condition:
-- the element is found and is referenced by L
(Found and T(L) = Key)
or
-- the element is not in the array
(not Found and not (exists i, T'FIRST <= i <= T'LAST, T(i) = Key))

(source: Ian Sommerville)
Boundary value analysis: Example 3 - input partitions
  • P1 - Inputs which conform to the pre-conditions
  • array with 1 value (boundary)
  • array with more than one value (different size
    from test case to test case)
  • P2 - Inputs where a pre-condition does not hold
  • array with zero length
  • P3 - Inputs where the key element is a member of
    the array
  • first, last and middle positions in different
    test cases
  • P4 - Inputs where the key element is not a member
    of the array

Boundary value analysis: Example 3 - test cases (valid cases only)
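A sketch of the valid test cases for the search routine in Python, covering the partitions P1, P3 and P4 above (one-element array as a boundary; key in the first, middle and last positions; key absent). The Python signature, with the pair (found, index) returned instead of the in/out parameters of the Ada-style spec, is an adaptation for illustration:

```python
# Sketch of the Search routine: return (found, index) for the first
# occurrence of key in list t, or (False, None) if absent.
def search(key, t):
    for i, elem in enumerate(t):
        if elem == key:
            return True, i
    return False, None

# P1/P3: one-element array (boundary), key present
assert search(7, [7]) == (True, 0)
# P1/P3: key in first, middle and last positions, array with > 1 element
assert search(1, [1, 2, 3, 4, 5]) == (True, 0)
assert search(3, [1, 2, 3, 4, 5]) == (True, 2)
assert search(5, [1, 2, 3, 4, 5]) == (True, 4)
# P1/P4: key not a member of the array
assert search(9, [1, 2, 3]) == (False, None)
assert search(9, [4]) == (False, None)
```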
Cause-effect graphing
  • Black-box technique to analyze combinations of input conditions
  • Identify causes and effects in the specification
  • causes: input / initial state conditions
  • effects: output / final state conditions
  • Make a Boolean graph linking causes and effects
  • Annotate impossible combinations of causes and effects
  • Develop a decision table from the graph, with a particular combination of inputs and outputs in each column
  • Transform each column into a test case

Cause-effect graphing: Example 2

Boolean graph:
  • causes: Σ k ≤ maxint, Σ k > maxint, N ≥ 0, N < 0
  • effects: Σ k, error

Decision table ("truth table"):

causes (inputs)     Σ k ≤ maxint   1 1 0 0
                    Σ k > maxint   0 0 1 1
                    N ≥ 0          1 0 1 0
                    N < 0          0 1 0 1
effects (outputs)   Σ k            1 1 0 0
                    error          0 0 1 1
Cause-effect graphing
  • Systematic method for generating test cases representing combinations of conditions
  • Unlike eq. class partitioning, we define a test case for each possible combination of conditions
  • Drawback: combinatorial explosion of the number of possible combinations
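The decision table of Example 2 can be executed directly: each column becomes one test case. The implementation of the running example is a sketch (same assumptions as before: the "error" string convention, a sum equal to maxint accepted):

```python
# Hypothetical implementation of the running example.
def sum_to_n(maxint, n):
    total = sum(range(1, abs(n) + 1))
    return total if total <= maxint else "error"

# One entry per column of the decision table: (maxint, N, expected effect).
decision_table = [
    (100, 10, 55),       # sum <= maxint, N >= 0  -> sum
    (100, -10, 55),      # sum <= maxint, N < 0   -> sum
    (10, 10, "error"),   # sum > maxint,  N >= 0  -> error
    (10, -10, "error"),  # sum > maxint,  N < 0   -> error
]
for maxint, n, expected in decision_table:
    assert sum_to_n(maxint, n) == expected
```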

Independent effect testing
  • With multiple input parameters
  • Show that each input parameter affects the outcome (the value of at least one output parameter) independently of the other input parameters
  • Need at most two test cases for each parameter, with different outcomes, different values of that parameter, and equal values of all the other parameters
  • Avoids combinatorial explosion of the number of test cases

Domain partitioning
  • Partition the domain of each input or output parameter into equivalence classes
  • Boolean type: true, false
  • Enumeration type: value1, value2, … (if few values)
  • Integer type: < 0, 0, > 0
  • Sequence type (array, string, list): empty, not-empty, with/without repetitions, …
  • Set type: empty, not-empty
  • Etc.
  • Special case of equivalence class partitioning
  • General case: analyze combinations of parameters

Error guessing
  • Just guess where the errors are ……
  • Intuition and experience of tester
  • Ad hoc, not really a technique
  • But can be quite effective
  • Strategy
  • Make a list of possible errors or error-prone
    situations (often related to boundary conditions)
  • Write test cases based on this list

Risk based testing
  • More sophisticated error guessing
  • Try to identify critical parts of the program (high risk code sections)
  • parts with unclear specifications
  • developed by a junior programmer while his wife was pregnant ……
  • complex code (white box?): measure code complexity - tools available (McCabe, etc.)
  • High-risk code will be more thoroughly tested (or be rewritten immediately ……)

Testing for race conditions
  • Also called bad timing and concurrency problems
  • Problems that occur in multitasking systems (with multiple threads or processes)
  • A kind of boundary analysis related to the dynamic views of a system (state-transition view and process-communication view)
  • Examples of situations that may expose race conditions:
  • problems with shared resources
  • saving and loading the same document at the same time with different programs
  • sharing the same printer, communications port or other peripheral
  • using different programs (or instances of a program) to simultaneously access a common data store
  • problems with interruptions
  • pressing keys or sending mouse clicks while the software is loading or changing states
  • other problems
  • shutting down or starting two or more instances of the software at the same time
  • Knowledge used: dynamic models (state-transition models, process models)
  • (source: Ron Patton)

Random testing
  • Input values are (pseudo) randomly generated
  • (-) Need some automatic way to check the outputs (for functional / correctness testing)
  • By comparing the actual output with the output produced by a "trusted" implementation
  • By checking the result with some kind of procedure or expression
  • Sometimes it's much easier to check a result than to actually compute it
  • Example: sorting - O(n log n) to perform, O(n) to check (final array is sorted and has the same elements as the initial array)
  • (+) Many test cases may be generated
  • (+) Repeatable: pseudo-random generators produce the same sequence of values when started with the same initial value
  • (+) essential to check if a bug was corrected
  • (-) May not cover special cases that are discovered by "manual" techniques
  • Combine with "manual" generation of test cases for boundary values
  • (+) Particularly adequate for performance testing (it's not necessary to check the correctness of outputs)
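The sorting example above can be sketched concretely: a seeded pseudo-random generator gives repeatable inputs, and an automated oracle checks each result in O(n). The function names and the fixed seed are illustrative choices; Python's built-in `sorted` stands in for the implementation under test.

```python
import random
from collections import Counter

# Automated oracle: checking a sort result is O(n) - the output must be
# ordered and be a permutation of the input (same element multiset).
def sort_oracle(original, result):
    ordered = all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    same_elements = Counter(original) == Counter(result)
    return ordered and same_elements

def random_test_sort(runs=100, seed=42):
    # Fixed seed -> repeatable: the same sequence of test inputs is
    # generated on every run (essential to re-check a bug fix).
    rng = random.Random(seed)
    for _ in range(runs):
        data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        result = sorted(data)  # the implementation under test
        assert sort_oracle(data, result), data
    return True
```

Since the random generator rarely hits boundary cases (here, e.g., the empty list would only appear by chance), this would be combined with manually designed boundary tests, as the slide recommends.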

Requirements based testing
  • You have a list (or tree) of requirements (or features or properties)
  • Define at least one test case for each requirement
  • Build and maintain a (tests to requirements) traceability matrix
  • Particularly adequate for system and acceptance testing, but applicable in other situations
  • Hints:
  • Write test cases as specifications by examples
  • Write distinctive test cases (examples)
  • It is not enough to cover Requirement1 or Requirement2 alone: test separately and in combination
  • Write positive (examples in favor) and negative (examples against) test cases

Model based testing
  • Model: a visual model
  • Semi-formal specifications
  • Behavioral UML models
  • Use case models
  • Interaction models (sequence diagrams, collaboration diagrams)
  • State machine (state-transition) models
  • Activity models
  • Same coverage techniques as white box testing
  • Can be used to generate test cases in a more or less automated way

State-transition testing
  • Construct a state-transition model (state machine view) of the item to be tested (from the perspective of a user / client)
  • E.g., with a state diagram in UML
  • Define test cases to exercise all states and all transitions between states
  • Usually, not all possible paths (sequences of states and transitions), because of combinatorial explosion
  • Each test case describes a sequence of inputs and outputs (including input and output states), and may cover several states and transitions
  • Also test to fail, with unexpected inputs for a particular state
  • Particularly useful for testing user interfaces
  • state: a particular form/screen/page, or a particular mode (inspect, insert, modify, delete; draw line mode, brush mode, etc.)
  • transition: navigation between forms/screens/pages, or transition between modes
  • Also useful to test object oriented software
  • Particularly useful when a state-transition model already exists as part of the specification
Use case and scenario testing
  • Particularly adequate for system and integration testing
  • Use cases capture functional requirements
  • Each use case is described by one or more normal flows of events and zero or more exceptional flows of events (also called scenarios)
  • Define at least one test case for each scenario
  • Build and maintain a (tests to use cases) traceability matrix
  • Example: library (MFES)

Formal specification based testing
  • Formal specification = formal model
  • Non-executable formal specifications
  • Constraint language
  • Operations: pre/post conditions (restrictions/effects)
  • Can be expressed in OCL (Object Constraint Language)
  • Post conditions can be used to check outcomes: test oracle
  • Executable formal specifications
  • Action language
  • Executable test oracle for conformance testing
  • Example: Colocação de professores (teacher placement)

Black box testing: which one?
  • Black box testing techniques
  • Equivalence class partitioning
  • Boundary value analysis
  • Cause-effect graphing
  • Error guessing
  • …………
  • Which one to use?
  • None of them is complete
  • All are based on some kind of heuristics
  • They are complementary

Black box testing: which one?
  • Always use a combination of techniques
  • When a formal specification is available, try to use it
  • Identify valid and invalid input equivalence classes
  • Identify output equivalence classes
  • Apply boundary value analysis on valid equivalence classes
  • Guess about possible errors
  • Cause-effect graphing for linking inputs and outputs
References and further reading
  • Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
  • Software Testing, Ron Patton, SAMS, 2001
  • The Art of Software Testing, Glenford J. Myers, John Wiley & Sons, 1979 (Chapter 4 - Test Case Design) - a classic
  • Software Testing Techniques, Boris Beizer, Van Nostrand Reinhold, 2nd Ed., 1990 - the "bible"
  • Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons - practical, black box only
  • Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
  • Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society - http://www.swebok.org/