Static Techniques (on code) - PowerPoint PPT Presentation
Slides: 106
Provided by: beriogi

Static Techniques (on code)
  • (Static analysis of code)
  • performed on the code without executing it

Categories of Static Techniques
  • Manual
  • (Semi) Mechanised

Static techniques vs. code testing
  • Code testing tries to characterize the set of
    executions covered by one test case; (minimal)
    coverage (of input-similar executions by using
    classes of input data, of paths, etc.) is the
    most important issue
  • Static techniques characterize at once the whole
    set of executions; that is the reason to qualify
    them as static techniques
  • Static techniques are usually used within a
    verification activity (i.e. they may come before
    testing)
  • Static techniques and testing have complementary
    advantages and disadvantages; additionally, some
    static techniques can be used during testing to
    support test case design

Informal analysis techniques: Code walkthroughs
  • Recommended prescriptions
  • Small number of people (three to five)
  • Participants receive written documentation from
    the designer a few days before the meeting
  • Predefined duration of the meeting (a few hours)
  • Focus on the discovery of defects, not on fixing
  • Participants: designer, moderator, and a
    secretary
  • Foster cooperation; no evaluation of people
  • Experience shows that most defects are discovered
    by the designer during the presentation, while
    trying to explain the design to other people.

Informal analysis techniques: Code inspection
  • A code reading technique aiming at defect
    discovery
  • Based on checklists (also called defect-guessing
    lists), e.g.:
  • use of uninitialized variables
  • jumps into loops
  • nonterminating loops
  • array indexes out of bounds

Defect Guessing
  • From intuition and experience, enumerate a list
    of possible defects or defect-prone situations
  • Defect guessing can also be used to write test
    cases to expose those defects

Defect Guessing: Example
  • For arrays or strings, is every index within the
    bounds of the size?
  • Is a variable referenced before being
    initialized?
  • For references through a pointer/reference, has
    the corresponding memory area been allocated
    (dangling reference problem)?
  • Does a variable (possibly referenced through a
    pointer) have a type different from the one used
    by the code?
  • Are there variables with similar names (a
    dangerous practice)?

Defect Guessing: Example
  • Do computations involve different and
    inconsistent types (e.g., strings and integers)?
  • Are there inconsistencies in mixed computations
    (e.g., integers and reals)?
  • Are there computations that involve compatible
    types but of different precision?
  • In an assignment (x := exp), does the left-hand
    value have a less precise representation than the
    right-hand value?
  • Is an overflow or underflow condition possible
    (for example in intermediate computations)?
  • Division by zero?

Defect Guessing: Example
  • In expressions containing several arithmetic
    operators, are the assumptions on evaluation
    order and operator precedence correct?
  • Does the value of a variable leave its reasonable
    range? (e.g., a weight should be positive, …)
  • Are there wrong uses of integer arithmetic, in
    particular of divisions?
  • Are we adequately taking into account the finite
    precision of reals?

Defect Guessing: Example
  • Are comparison operators used correctly?
  • Are boolean expressions correct (appropriate use
    of and, or, not)?
  • In expressions containing several boolean
    operators, are the assumptions on evaluation
    order and operator precedence correct?
  • Are the operands of a boolean expression boolean?
  • Does the way the boolean expression is evaluated
    affect the meaning of the expression itself (a
    dangerous practice)?
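Checklist items like these translate directly into test cases. A minimal sketch in Python; the function `mean_of` and its guessed defects are hypothetical, invented only to illustrate how a checklist entry becomes an executable test:

```python
def mean_of(values):
    """Unit under review. The checklist suggests two guesses:
    division by zero on an empty list, and integer truncation
    in a mixed integer/real computation."""
    if not values:
        return 0.0        # guard added after the checklist review
    return sum(values) / len(values)

# Test cases derived from the defect-guessing checklist:
assert mean_of([]) == 0.0        # division by zero?
assert mean_of([1, 2]) == 1.5    # integer truncation? (expect 1.5, not 1)
```

Each assertion probes one guessed defect; a failing assertion exposes the corresponding fault.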

Correctness proofs
A program and its specification (Hoare notation):

  {true}
  begin read (a); read (b); x := a + b; write (x) end
  {output = input1 + input2}

Proof by backwards substitution.

Proof rules
Notation: if Claim1 and Claim2 have been proven,
one can deduce Claim3:

  Claim1, Claim2
  --------------
      Claim3

Proof rules for a language:

  {F1} S1 {F2}, {F2} S2 {F3}
  --------------------------
       {F1} S1; S2 {F3}

  {Pre and cond} S1 {Post}, {Pre and not cond} S2 {Post}
  ------------------------------------------------------
     {Pre} if cond then S1 else S2 end if {Post}

         {I and cond} S {I}
  ----------------------------------------------
  {I} while cond loop S end loop {I and not cond}

I is called the loop invariant
Correctness proof
  • Partial correctness proof
  • validity of {Pre} program {Post} guarantees that
    if Pre holds before the execution of the program,
    and if the program ever terminates, then Post
    will be achieved
  • Total correctness proof
  • Pre guarantees program termination and the truth
    of Post
  • These problems are undecidable!!!

{input1 > 0 and input2 > 0}
begin
  read (input1); read (input2);
  x := input1; y := input2; div := 0;
  while x >= y loop
    div := div + 1;
    x := x - y;
  end loop;
  write (div); write (x);
end
{input1 = div * input2 + x}

Discovery of loop invariant
  • Difficult and creative, because invariants cannot
    be constructed automatically
  • In the previous example:
  • input1 = div * y + x and y = input2
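The invariant can also be checked at run time by instrumenting the loop. A sketch in Python of the division-by-repeated-subtraction program above, with the invariant asserted on every iteration:

```python
def divide(input1, input2):
    """Integer division by repeated subtraction, instrumented with
    the loop invariant input1 == div * y + x (and y == input2)."""
    assert input1 > 0 and input2 > 0              # precondition
    x, y, div = input1, input2, 0
    while x >= y:
        div += 1
        x -= y
        assert input1 == div * y + x and y == input2   # loop invariant
    assert input1 == div * input2 + x             # postcondition
    return div, x

print(divide(17, 5))   # (3, 2): 17 = 3*5 + 2
```

If the loop body were wrong (say, `x -= y` forgotten), the invariant assertion would fail on the first iteration, pinpointing the defect.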

Combining correctness proofs
  • We can prove the correctness of operations (e.g.
    operations on a class)
  • Then use the result of each single proof to make
    proofs for complex modules containing these
    operations, or complex combinations of these
    operations

module TABLE
  exports
    type Table_Type (max_size : NATURAL);
      -- no more than max_size entries may be stored
      -- in a table; user modules must guarantee this
    procedure Insert (Table : in out Table_Type;
                      Element : in Element_Type);
    procedure Delete (Table : in out Table_Type;
                      Element : in Element_Type);
    function Size (Table : in Table_Type) return NATURAL;
      -- provides the current size of a table
end TABLE

Having proved these:

  {true} Delete (Table, Element) {Element ∉ Table}
  {Size (Table) < max_size} Insert (Table, Element)
    {Element ∈ Table}

We can then prove properties of programs using
tables. For example, that after executing the
sequence Insert(T, x); Delete(T, x); x is not
present in T
An assessment of correctness proofs
  • Still not used in practice
  • However:
  • possibly used for very critical parts of code or
    in high-risk software!
  • assertions (any intermediate property) may be the
    basis for a systematic way of inserting runtime
    checks (instead of checking values of variables)
  • proofs may become more practical as more powerful
    support tools are developed
  • knowledge of correctness theory helps programmers
    be rigorous
  • well-written postconditions can be used to
    design test cases
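The idea of turning assertions into systematic runtime checks can be sketched with a small contract decorator in Python; the decorator and its parameter names are illustrative, not from the slides:

```python
def contract(pre, post):
    """Attach a Hoare-style precondition and postcondition to a
    function and check both at run time."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda a, b: b > 0,
          post=lambda r, a, b: a == r[0] * b + r[1] and 0 <= r[1] < b)
def int_divide(a, b):
    """Quotient and remainder; the postcondition restates the
    specification a = q*b + r, 0 <= r < b."""
    return a // b, a % b

print(int_divide(17, 5))   # (3, 2)
```

The postcondition here is exactly the kind of property the slide suggests reusing to design test cases.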

Symbolic execution
  • Can be viewed as a middle way between testing and
    pure verification (but it is anyway a
    verification technique)
  • Executes the program on symbolic values (symbolic
    input)
  • One symbolic execution corresponds to many usual
    program executions

Example (1)
Consider executing the following program with
x = X, y = Y, a = A (symbolic binding):

  x := y + 2;
  if x > a then a := a + 2;
  else y := x + 3;
  end if;
  x := x + a + y;

Build the control graph!
  • When control reaches decisions, in general,
    symbolic values do not allow selecting a branch
  • One can choose a branch, and record the choice in
    a path condition
  • Result:
  • <a = A, y = Y + 5, x = 2Y + A + 7>  (symbolic binding)
  • <1, 3, 4>                           (execution path)
  • Y + 2 <= A                          (path condition)
Symbolic execution rules (1)
symbolic state = <symbolic_binding,
execution_path, path_condition>
  • read (x)
  • removes any existing binding for x and adds the
    binding x = X, where X is a newly introduced
    symbol
  • write (expression)
  • output(n) = computed symbolic value of expression
    (n is a counter initialized to 1 and automatically
    incremented after each write statement)

Symbolic execution rules (2)
  • x := expression
  • construct the symbolic value of expression, SV;
    replace the existing binding of x with x = SV
  • After execution of a statement of a path that
    corresponds to an edge of the control graph, append
    the edge to the execution path

Symbolic execution rules (3)
  • if cond then S1 else S2 endif
  • while cond loop … end loop
  • the condition is symbolically evaluated:
  • eval(cond)
  • If it is possible to state eval(cond) = true or
    eval(cond) = false, then execution proceeds by
    following the appropriate branch
  • otherwise, make a choice of true or false, and
    conjoin eval(cond) (resp., not eval(cond)) to
    the path condition

Symbolic execution and testing
  • The path condition describes all the data that are
    required for the program execution to follow the
    execution path
  • Usage of symbolic execution for testing:
  • identify one execution path (i.e. a sequence of
    arrows in a control graph)
  • symbolically execute the execution path (if
    feasible)
  • choose data that satisfy the path condition
  • These data allow executing that path and
    therefore are a test case for the path.
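For the earlier example, choosing data that satisfy the path condition Y + 2 <= A forces the else branch, and the symbolic result predicts the concrete output. A sketch (the concrete program mirrors the pseudocode above):

```python
def program(y, a):
    """Concrete version of the symbolic-execution example."""
    x = y + 2
    if x > a:
        a = a + 2
    else:
        y = x + 3
    x = x + a + y
    return x, y, a

# Path <1, 3, 4> (else branch); path condition: Y + 2 <= A.
Y, A = 1, 5                    # 1 + 2 <= 5, so the condition holds
x, y, a = program(Y, A)
# Symbolic binding predicted: x = 2Y + A + 7, y = Y + 5, a = A
assert x == 2 * Y + A + 7      # 2*1 + 5 + 7 = 14
assert y == Y + 5 and a == A
```

Any other (Y, A) pair satisfying Y + 2 <= A is an equally valid test case for this path; data violating it exercise the other path.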

Example (1)

  found := false;
  counter := 1;
  while (not found) and counter <= number_of_items loop
    if table (counter) = desired_element then
      found := true;
    end if;
    counter := counter + 1;
  end loop;
  if found then
    write ("the desired element exists in the table");
  else
    write ("the desired element does not exist in the table");
  end if;

Example (2)
The path <1, 2, 3, 5, 6, 2, 4> is not possible!
Why so many approaches to testing and analysis?
  • Testing versus static techniques
  • Formal versus informal techniques
  • White-box versus black-box techniques
  • Fully mechanised vs. semi-mechanised techniques
    (for undecidable properties)
  • view all these as complementary

OO Unit Code Defect Testing
Test Case Design Objectives
  • Coverage is always the key point (what things
    are covered by the test cases).
  • Minimality of this coverage is the other key
    point (do not write two distinct test cases for
    discovering the same defects).

Some techniques for making test cases
  • General defect testing
  • The tester looks for plausible defects (i.e.,
    aspects of the implementation of the system that
    may result in defects). To determine whether
    these defects exist, test cases are designed to
    exercise the code.
  • Specialized defect testing: Class Testing and
    Class Hierarchy
  • Inheritance does not obviate the need for
    thorough testing of all derived classes. In fact,
    it can actually complicate the testing process.
  • Scenario-Based Test Design (defect but also
    acceptance testing)
  • Scenario-based testing concentrates on what the
    user does, not what the product does. This means
    capturing the tasks (via use-cases) that the user
    has to perform, then applying them and their
    variants as test cases.

Two levels of test
  • Test of one class
  • Test single operations (white- and black-box)
  • Test sequences of operations (random testing)
  • Test sequences of states (behavior testing)
  • Testing sequences of operations and states is
    needed because in object-oriented code, expressing
    the expected behavior (what the program does) in
    terms of input and output is not possible;
    therefore, the codomain R is described by
    sequences of operations or states
  • Test of several classes with interacting objects

Pre: button P pressed; Post: the request R is …
(which weight is possible; buttons to start /
decrease / stop)
The operations are independent, apart from some
constraints that can be expressed with pre- and
postconditions
open; setupAccount; deposit
Behavior Testing
The tests to be designed should achieve all-states
coverage [KIR94]. That is, the operation
sequences should cause the Account class to make
transitions through all allowable states
Black box!
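A minimal sketch of state-coverage testing for an Account-like class; the state names and operations below are assumptions for illustration, since the slides do not define the Account state machine:

```python
class Account:
    """Toy account with assumed states:
    closed -> empty -> setup -> working."""
    def __init__(self):
        self.state = "closed"
    def open(self):
        assert self.state == "closed"
        self.state = "empty"
    def setup(self):
        assert self.state == "empty"
        self.state = "setup"
    def deposit(self, amount):
        assert self.state in ("setup", "working")
        self.state = "working"

# Behavior test: the operation sequence open; setup; deposit
# drives the object through all allowable states (state coverage).
acc = Account()
visited = [acc.state]
for op in (acc.open, acc.setup, lambda: acc.deposit(10)):
    op()
    visited.append(acc.state)
assert visited == ["closed", "empty", "setup", "working"]
```

Note the test checks the state sequence, not an input/output pair, which is exactly the point made above about the codomain of object-oriented code.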
OO Software Inter-Class Testing
  • Inter-class testing to exercise interactions
    between classes
  • For each class, use the list of class operations
    to generate a series of random test sequences of
    operations. The operations will send messages to
    other classes.
  • For each message that is generated, determine the
    destination class and the corresponding operation
  • For each operation in the destination class (that
    has been invoked by incoming messages), determine
    the messages that it transmits.
  • For each of the messages, determine the next
    level of operations that are invoked and
    incorporate these into the test sequence

Black box if based on sequence diagrams!
White box if based on class code!
OO Software Additional tests
  • New issues
  • inheritance
  • genericity
  • polymorphism
  • dynamic binding
  • Open problems still exist

White box!
How to test classes of a hierarchy?
  • Flattening the whole hierarchy and considering
    every class as a totally independent unit
  • does not exploit incremental class definition
  • Finding an ad-hoc way to take advantage of the
    hierarchy

A simple technique: test case design for class hierarchies
  • A test case that does not have to be repeated for
    any heir
  • A test case that must be performed for heir class
    X and all of its further heirs
  • A test case that must be redone by applying the
    same input data, but verifying that the output is
    not (or is) changed
  • A test case that must be modified by adding other
    input parameters and verifying that the output
    changes accordingly

Black-box testing of concurrent and real-time software
  • Non-determinism (of the events driving the control
    flow) inherent in concurrency affects the
    repeatability of failures
  • For real-time software (i.e. with time
    constraints), a test case consists not only of the
    usual input data, but also of the times when such
    data are supplied (events)

Software Testing in the large
Software Testing Strategy
  • In the small: component testing (unit test,
    integration test), i.e. defect testing
  • In the large: system test, validation
    (acceptance) test
(Elaborated from Pressman)
Software Architectures and Integration Testing
  • A software architecture provides at least the
    structure of a complex software system; therefore,
    it is natural to perform testing on integrated
    code by following the architecture

What should be tested!
  • We have tested units according to the expected
    behavior and (some forms of) unexpected behavior
  • This is largely related to functional
    requirements and to discovering related defects in
    code; unit testing as such is therefore related
    to the correctness of code and is performed on
    code with possible inputs (EB) or potential
    inputs (P), without attention to how inputs are
    provided and where the software is installed
  • What about non-functional requirements? They are
    usually related to quality attributes other than
    correctness
  • We need to talk about a software system (not just
    the software, but the software installed and
    running) in order to talk about the software
    running in its environment! The software system is
    then part of the whole system and is therefore a
    subsystem of it (as many others are).

Software System and Software
  • software code
  • operating systems, middleware, compilers,
    interpreters, number of installations of the same
    module, environment parameters
  • software system
  • software system (sub-system)
Assumptions during the requirement engineering!
Perfect technology assumption!
Separate concerns in testing
  • Testing for correctness is not enough, e.g.:
  • Overload (stress) testing (reliability)
  • Robustness testing (test unexpected situations)
  • Performance testing (test response time)
  • These tests are typically related to some
    software quality attributes and non-functional
    requirements, and are usually performed on the
    software system (or subsystems), not on single
    units
Testing Activities in the Software Process
System Requirements
System Test
The word system refers to the whole system
and to the software system. The software system
implicitly encompasses hardware and allocation of
software on hardware.
Levels of Testing

  Type of Testing              Performed By
  Low-level testing
    Unit (module) testing      Programmer
    Integration testing        Development team /
                               Independent Test Group
  High-level testing
    System testing             Independent Test Group
    Acceptance testing         Customer/End Users

Integration Testing
Unit Testing
  • Done on individual units (or units of modules)
  • Test units with white-box and black-box techniques
  • Requires stubs and drivers
  • Unit testing of several units can be done in
    parallel

What are Stubs, Drivers ?
  • Stub
  • dummy module which simulates the functionality of
    a module called by a given module under test
  • Driver
  • a module which transmits test cases in the form
    of input arguments to the given module under test
    and either prints or interprets the results
    produced by it

e.g., in a module call hierarchy where B calls C:
to test B in isolation, use a stub for C
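A sketch in Python of a stub and a driver for such a hierarchy; the modules B and C, their signatures, and the canned values are illustrative, not from the slides:

```python
# Module under test: B calls its subordinate module C.
def B(x, C):
    """B doubles whatever its subordinate module C returns."""
    return 2 * C(x)

# Stub for C: a dummy module simulating C's functionality,
# with canned answers for the inputs used by the test cases.
def stub_C(x):
    canned = {1: 10, 2: 20}
    return canned[x]

# Driver: transmits test cases to B and interprets the results.
def driver():
    results = []
    for arg, expected in [(1, 20), (2, 40)]:
        actual = B(arg, C=stub_C)
        results.append("ok" if actual == expected else "not ok")
    return results

print(driver())   # ['ok', 'ok']
```

Passing C as a parameter is one simple way to swap the real module for the stub; in statically linked languages the swap happens at link time instead.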
How is integration testing performed?

  read (x, y, a); call P(x, y);
  if x = … then write ("test ok")
  else write ("test not ok")

  P(x, y):
    x := y + 2;
    if x > a then a := a + 2;
    else y := x + 3; a := F(x, y);
    end if;
    x := x + a + y

  With a stub:
    x := y + 2;
    if x > a then a := a + 2;
    else y := x + 3; a := STUB_F(x, y);
    end if;
    x := x + a + y

  Stub for F:
    if x < y then y := x + 5;
    if x > y then y := x + 7

How do we test the following unit?
Symbolic execution tells us that, before the call
of F(x, y): x = Y + 2, a = A and y = Y + 5 (with
the path condition of the else branch):

  x := y + 2;
  if x > a then a := a + 2;
  else y := x + 3; a := F(x, y);
  end if;
  x := x + a + y

Definition of the test cases of P (for example
black-box). E.g. A = 5, Y = 1, X = 7: the output of
F(3, 6) must then be known.

How is integration testing performed?

  P(x, y):
    x := y + 2;
    if x > a then a := a + 2;
    else y := x + 3; a := STUB_F(x, y);
    end if;
    x := x + a + y

  Stub for F: list the possible inputs for F and
  manually indicate the corresponding outputs.

Identify the test cases for P.

How is integration testing performed?

  read (x, y, a); call P(x, y);
  if x = … then write ("test ok")
  else write ("test not ok")

  P(x, y):
    x := y + 2;
    if x > a then a := a + 2;
    else y := x + 3; a := F(x, y);
    end if;
    x := x + a + y

  Stub for F (interactive):
    write (x); read (y)
Integration Testing
  • Test integrated modules (i.e. integrated code)
  • Usually focuses on interfaces (i.e. calls and
    parameters passing) between modules (defects are
    in the way modules are called)
  • Largely architecture-dependent

Integration Test Approaches
  • Non-incremental ("Big-Bang" integration)
  • tests each module independently (black- and
    white-box)
  • combines all the modules to form the integrated
    code in one step, and tests it (usually black-box)
  • Incremental
  • instead of testing each module in isolation, the
    next module to be tested is combined with the set
    of modules that have already been tested
  • with two possible approaches: top-down, bottom-up

  • Non-incremental
  • requires more stubs and drivers
  • module interfacing defects are detected late
  • finding defects is difficult
  • Incremental
  • requires fewer stubs and drivers
  • module interfacing defects are detected early
  • finding defects is easier
  • not all modules need to be implemented before
    starting the test
  • results in more thorough testing of modules

Top-down Integration
  • Begin with the top module in the module call
    hierarchy (represented as a structure chart)
  • Stub modules are produced
  • But stubs are often complicated
  • The next module to be tested is any module with
    at least one previously tested superordinate
    (calling) module (depth-first or breadth-first
    order)
  • After a module has been tested, one of its stubs
    is replaced by an actual module (the next one to
    be tested) and its required stubs

Example of a Module Hierarchy
Top-down Integration Testing
Stub B
Stub C
Stub D
Top-down Integration Testing
Stub C
Stub D
Test cases written for A are reused. Test cases
(white- and black-box) for B should be combined with
the test cases written for A
Stub F
Stub E
Bottom-Up Integration
  • Begin with the terminal (leaf) modules (those
    that do not call other modules) of the module
    call hierarchy
  • A driver module is produced for every module
  • The next module to be tested is any module whose
    subordinate modules (the modules it calls) have
    all been tested
  • After a module has been tested, its driver is
    replaced by an actual module (the next one to be
    tested) with its own driver

Example of a Module Hierarchy
Bottom-Up Integration Testing
Driver E
Driver F
Bottom-Up Integration Testing
Driver B
Test cases (white- and black-box) for B
  • Top-down Integration
  • Advantage
  • a skeletal version of the program exists early
  • Disadvantage
  • required stubs could be expensive
  • Bottom-up Integration
  • Disadvantage
  • the program as a whole does not exist until the
    last module is added

No clear winner
  • Effective alternative: use a hybrid of bottom-up
    and top-down
  • prioritize the integration of modules based on
    risk
  • highest-risk modules are integration tested
    earlier than modules with low risk

Regression Testing
  • Re-run previous test cases to ensure that
    software already tested has not regressed to
    earlier defects after making changes (or
    integrations) to the software
  • Regression testing can also be performed during
    the entire life of a software product
  • Reusability of test cases is the key point!
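Reusable test cases make regression testing a cheap, repeatable step. A sketch; the suite contents and the `add` function are illustrative:

```python
# Reusable regression suite: (input, expected) pairs recorded once
# and re-run after every change to the unit under maintenance.
REGRESSION_SUITE = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def add(a, b):
    """Unit under maintenance (illustrative)."""
    return a + b

def run_regression(f, suite):
    """Return the list of regressed cases (empty = no regression)."""
    return [(args, expected, f(*args))
            for args, expected in suite
            if f(*args) != expected]

print(run_regression(add, REGRESSION_SUITE))   # []
```

After a change, a non-empty result immediately shows which previously passing cases now fail, i.e. which earlier behavior regressed.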

Types of System and Acceptance Testing
(Sub)System Testing
  • The process of attempting to demonstrate that the
    system (or subsystem) does not meet its original
    requirements and objectives as stated in the
    requirements (specification) document; i.e. it is
    a form of defect testing
  • Usually, it is not only code testing but a test
    of the software system
  • Test cases derived from:
  • system objectives and user scenarios, possibly
    defined during system engineering or early
    requirement phases
  • the requirement document and software requirements
    specification (analysis model)
  • expected quality stated during the design
  • additional aspects related to the deployment of code

Usual Types of Software System Testing
  • Volume testing
  • to determine whether the system can handle the
    required volumes of data, requests, etc.
  • Load/Stress testing
  • to identify peak load conditions at which the
    system will fail to handle required processing
    loads within required time spans
  • Usability (human factors) testing
  • to identify discrepancies between the user
    interfaces of the system (software) and the human
    engineering requirements of its potential users
  • Security testing
  • to show that the system's security requirements
    can be subverted

Usual Types of Software System Testing
  • Performance testing (also as code testing)
  • to determine whether the system meets its
    performance requirements (eg. response times,
    throughput rates, etc.)
  • Reliability/availability testing
  • to determine whether the system meets its
    reliability and availability requirements (here
    availability is related to failures; however,
    availability may only be related to out-of-service
    situations, not necessarily related to failures)
  • Recovery testing
  • to determine whether the system meets its
    requirements for recovery after a failure

Usual Types of Software System Testing
  • Installability testing
  • to identify ways in which the installation
    procedures lead to incorrect results
  • Configuration testing
  • to determine whether the system operates properly
    when the software or hardware is configured in a
    required manner
  • Compatibility testing
  • to determine whether the compatibility
    (interoperability) objectives of the system have
    been met
  • Resource usage testing
  • to determine whether the system uses resources
    (memory, disk space, etc.) at levels which exceed
    those expected
  • Others

Alpha and Beta Testing
  • Acceptance testing performed on the developed
    software before it is released to the whole user
    community
  • Alpha testing
  • conducted at the developer's site by end users
    (who will use the software once delivered)
  • tests conducted in a controlled environment
  • Beta testing
  • conducted at one or more customer sites by the
    end users
  • it is a "live" use of the delivered software in
    an environment over which the developers have no
    control

Stop conditions for Defect Testing
When to Stop Defect Testing?
  • Defect testing is potentially a never-ending
    activity
  • However, an exit condition should be defined, e.g.:
  • Stop when the scheduled time for testing expires
  • Stop when all the test cases execute without
    detecting failures
  • but neither criterion is good

Better Code Defect Testing: Stop conditions
  • Stop on use of specific test-case design
    methodologies
  • Example: Test cases derived from
  • 1) satisfying multiple-condition coverage and
  • 2) boundary-value analysis and
  • 3) …
  • until all resultant test cases are eventually
    unsuccessful (i.e. they do not lead to failures)

Better Code Defect Testing: Stop condition 1
  • Let ND be the number of defects
  • Insert into the unit a set of NDI seeded defects
  • Have the test run (by someone who does not know
    the seeded defects) through a certain number of
    test cases
  • The effectiveness of such a test is then
  • NumberOfSeededDefectsDiscovered /
    NumberOfSeededDefects
  • (NDIS/NDI)
  • Under the hypothesis that the defects are similar,
    one can say that
  • (NDS/ND) = (NDIS/NDI), and hence
  • ND = NDI · NDS / NDIS, where NDS is the
    NumberOfDefectsDiscovered

Better Code Defect Testing: Stop condition 2
  • Improvement: two independent test groups find
    X and Y defects, of which Q are common
  • ND, the total number of defects, is then
    ND = X · Y / Q (since one assumes X/ND = Q/Y)
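Both stop-condition estimates are one-line computations; a sketch with illustrative numbers:

```python
def seeded_estimate(NDI, NDIS, NDS):
    """Stop condition 1: ND = NDI * NDS / NDIS, under the hypothesis
    that seeded and real defects are found with the same probability
    (NDS/ND = NDIS/NDI)."""
    return NDI * NDS / NDIS

def two_group_estimate(X, Y, Q):
    """Stop condition 2: ND = X * Y / Q for two independent test
    groups with Q defects in common (assuming X/ND = Q/Y)."""
    return X * Y / Q

# 20 defects seeded, 10 of them found, 8 real defects found:
print(seeded_estimate(NDI=20, NDIS=10, NDS=8))   # 16.0 real defects estimated
# Groups find 25 and 20 defects, 10 in common:
print(two_group_estimate(X=25, Y=20, Q=10))      # 50.0 total defects estimated
```

Comparing the estimate with the number of defects already found suggests how many remain, and hence whether it is reasonable to stop.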

System Testing Stop condition
  • Stop in terms of the failure rate to be found, and
    therefore in terms of the time to be spent in
    testing
  • This stop condition is closely related to the
    reliability of the software system

Test Automation
Steps in Test Cases definition and Execution
Test tools
  • Debugging: the task of locating and correcting
    defects in software code
  • It can start when a failure has been detected
  • It is usually performed during testing
  • Sometimes needs the definition of an intermediate
    concept, error, i.e. a situation leading to the
    failure and due to the defect
  • Requires closing the gap between a fault and a
    failure:
  • watch points
  • intermediate assertions

defect (fault) → error → failure
  • defect (fault): the cause
  • error: what is recognized as a non-correct
    (internal) situation
  • failure: what is seen by an external observer
Autodebugging, system management and fault tolerance
  • Detect errors and alert on them; may or may not
    stop the execution (which would otherwise lead to
    a failure)
  • Detect errors and undertake a fault management
    strategy (recovery, alternatives, etc.) that
    allows tolerating the fault!

Performance, Reliability Testing: Quality
Assessment for subjective quality attributes
  • Types of performance analysis (can also be
    applied to code)
  • Worst-case analysis
  • focus is on proving that the (sub)system response
    time is bounded by some function of the external
    requests and parameters
  • Average behavior
  • Analytical vs. experimental approaches, and both
    may concern the (software) system
  • Queueing models, statistics and probability theory
    (Markov chains)
  • Simulation
  • Others

Correctness review
  • Correctness is an absolute feature of software
    with a binary result (the software is correct /
    the software is not correct)
  • Typically, correctness is expressed in terms of
    functional requirements or component
    specifications derived from functional
    requirements
  • Less important for real systems, where the
    hypotheses (made during requirement engineering or
    as perfect technology assumptions) on which these
    systems are built are only true with some
    probability, or sometimes false
  • Correctness can be reformulated as:
  • Reliability (probability of working without
    failures over a time period)
  • Robustness (management of unexpected situations
    (i.e. failures elsewhere))
  • Safety (probability that something bad does not
    happen)

Reliability (1)
  • There are approaches to measuring reliability on a
    probabilistic basis, as in other engineering
    fields, i.e. the probability that the (software)
    system will work without failure over a period of
    time and under some conditions (shortly, the
    probability of not failing within a time frame)
  • Unfortunately, there are some difficulties with
    this approach
  • independence of failures does not hold for
    software
if x > 0 then write(y) else (write(x); write(z))
  • x is wrongly assigned 7 instead of 6, and z is
    wrongly assigned: the defect in z causes no
    failure (the then branch is taken)
  • x is wrongly assigned 0 instead of 1, and z is
    wrongly assigned: both defects lead to failures
    together (the else branch is taken), so the
    failures are not independent
Reliability (2)
  • Reliability is firstly concerned with measuring
    the probability of the occurrence of failures
  • Meaningful parameters include:
  • average total number of failures observed at time
    t: AF(t)
  • failure intensity: FI(t) = AF'(t)
  • mean time to failure at time t: MTTF(t) = 1/FI(t)
  • mean time between failures:
    MTBF(t) = MTTF(t) + MTTR(t)
    (MTTR corresponds to the time needed, after a
    failure, to repair)
  • Time is the execution time but also the calendar
    time (because part of the software system can
    be shared with other software systems)

Basic reliability model
  • Assumes that the decrement per observed failure
    (i.e., the derivative with respect to the number
    of observed failures) of the failure intensity
    function is constant
  • i.e., FI is a function of AF:
  • FI(AF) = FI0 (1 - AF/AF∞)
  • where FI0 is the initial failure intensity and
    AF∞ is the total number of failures
  • The model is based on the optimistic hypothesis
    that a decrease in failures is due to the fixing
    of the defects that are the sources of those
    failures

(Figure: AF(t) law of the basic model)
Use of the basic model
Estimate AF∞ and FI0. Compute the time t for which
AF(t) = AF(T) + 1, where T is the time reached so
far with testing and AF(T) failures have been
observed (hence AF∞ - AF(T) indicates the number of
failures still observable). Continue the test so
that the system is executed for at least t - T, so
as to observe one further failure, if there are
still defects.
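This procedure can be carried out in closed form. Under the basic execution-time model, integrating FI(AF) = FI0 (1 - AF/AF∞) gives AF(t) = AF∞ (1 - exp(-FI0 t / AF∞)); the slides only state FI(AF), so this closed form (and the parameter values) are assumptions of the sketch:

```python
import math

def af(t, FI0, AF_inf):
    """Mean failures observed by execution time t (basic model),
    assuming AF(t) = AF_inf * (1 - exp(-FI0 * t / AF_inf))."""
    return AF_inf * (1.0 - math.exp(-FI0 * t / AF_inf))

def time_for_failures(n, FI0, AF_inf):
    """Inverse of af: the execution time at which n failures
    are expected (requires n < AF_inf)."""
    return -(AF_inf / FI0) * math.log(1.0 - n / AF_inf)

FI0, AF_inf = 10.0, 100.0                   # illustrative estimates
T = time_for_failures(50, FI0, AF_inf)      # time at which AF(T) = 50
t = time_for_failures(51, FI0, AF_inf)      # time at which AF(t) = AF(T) + 1
print(t - T)   # extra test time needed to observe one more failure
```

As AF(T) approaches AF∞, t - T grows rapidly, which is the quantitative reason testing gets ever more expensive per remaining failure.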
Assessment of subjective (less factual) quality attributes
  • Quality assessment on code of subjective quality
    attributes
  • Consider quality attributes like maintainability,
    reusability, understandability
  • There is a need for metrics

Internal and external attributes of quality
Software quality attributes (also called external
quality attributes)
Internal quality attributes
McCabe's source code metric
  • Cyclomatic complexity C on the control graph is
    C = e - n + 2p
  • where e is the number of edges, n the number of
    nodes, and p the number of connected components
  • McCabe contends that well-structured modules
    (i.e. high quality) have C in the range 3..7, and
    C = 10 is a reasonable upper limit for the
    cyclomatic complexity of a single module
  • confirmed by empirical evidence
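A sketch computing C = e - n + 2p for a control graph given as an edge list; the example graph (an if-then-else) is illustrative:

```python
def cyclomatic(edges, p=1):
    """McCabe's cyclomatic complexity C = e - n + 2p, taking n as
    the number of distinct nodes appearing in the edge list."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2 * p

# Control graph of "if cond then S1 else S2; S3" (one decision):
# 1 = decision, 2 = S1, 3 = S2, 4 = join/S3.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic(edges))   # 2: e=4, n=4, p=1 -> 4 - 4 + 2
```

C = 2 matches the intuition that one binary decision yields two linearly independent paths.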

Halstead's software science
  • Tries to measure some software qualities by
    measuring some quantities on code, such as:
  • n1, number of distinct operators in the program
  • n2, number of distinct operands in the program
  • N1, number of occurrences of operators in the
    program
  • N2, number of occurrences of operands in the
    program

N = n1 log2 n1 + n2 log2 n2 (estimated length of
the program)
V = (N1 + N2) log2 (n1 + n2) (volume of the
program) --- error in Pressman: N instead of N1 + N2
Halstead's software science
  • Besides software qualities, quantities on code
    can be used to estimate interesting features of
    it:
  • Mental effort (effort required to understand and
    further develop a program):
  • E = n1 · N2 · (N1 + N2) · log2(n1 + n2) / (2 · n2)
  • Estimated number of defects:
  • B = E^(2/3) / 3000

if ( A > 1 ) and ( B = 0 ) then X := X / A;
if ( A = 2 ) or ( X > 1 ) then X := X + 1;

n1 = 6 (including logical operators), N1 = 8,
n2 = 3, N2 = 7

E = n1 · N2 · (N1 + N2) · log2(n1 + n2) / (2 · n2)
  = 6 · 7 · (8 + 7) · log2(6 + 3) / (2 · 3) ≈ 333
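The same computation as a sketch, using the counts from the slide example:

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead estimates: length N, volume V, effort E, defects B."""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)
    V = (N1 + N2) * math.log2(n1 + n2)
    E = (n1 * N2 * (N1 + N2) * math.log2(n1 + n2)) / (2 * n2)
    B = E ** (2 / 3) / 3000
    return N, V, E, B

N, V, E, B = halstead(n1=6, n2=3, N1=8, N2=7)
print(round(E))   # 333, as in the slide example
```

Note that E uses (N1 + N2), consistent with the corrected volume formula above.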
  • Testing in general (also in relation to
    verification and validation and the more general
    quality assurance, divided into quality
    prediction and quality assessment)
  • Testing of conventional components (black- and
    white-box)
  • Static (verification) techniques and conventional
    unit code testing (inspection, walkthrough,
    symbolic execution, correctness proof)
  • Testing of object-oriented components
  • Testing in the large (integration and system
    testing)
  • Testing of subjective quality attributes (other
    than correctness, reliability, robustness, safety
    and performance)