Software Testing Cmpe516 Fault Tolerant Computing

1
Software TestingCmpe516 Fault Tolerant Computing
  • Serkan Utku Öztürk
  • 2003700443 MS
  • utku@utkuozturk.com

2
Agenda
  • Examples
  • Test Assessment
  • Axiomatic Approach
  • Types of Testing

3
  • The purpose of testing is to uncover errors, not
    to certify correctness.

4
Error Elimination or Reduction?
  • In most practical situations, total error
    elimination is a myth.
  • Error reduction based on the economics of
    software development is a practical approach.

5
Error Examples
  • TeX (Knuth): 850 errors over a 10-year period.
  • Windows 95: a large error database maintained by
    Microsoft (proprietary).
  • Several other error studies published.
  • Error studies have also been published in other
    diverse fields such as in music, speech, sports,
    and civil engineering.

6
Languages and Errors
  • The programming language used has no known
    correlation with the complexity of the errors one
    can make.
  • It also has no known correlation to the number of
    errors in a program.

7
Human Capability and Errors
  • Errors are made by all kinds of people regardless
    of their individual talents and background.
  • Well known programmers make errors that are also
    made by freshmen in programming courses.
  • Example: SimCity

8
Error Consequences
  • An error might lead to a failure.
  • The failure might cause a minor inconvenience or
    a catastrophe.
  • The complexity of an error has no known
    correlation with the severity of a failure. A
    misplaced break statement is the simple error
    that caused the AT&T long-distance network
    outage of 1990.
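The misplaced break behind the 1990 AT&T outage was a one-token mistake in C switch logic. A hypothetical Python analog (the function names and data are illustrative, not from the incident) shows how moving a single line relative to a break changes behavior:

```python
def sum_until_negative(values):
    """Sum values, stopping before the first negative number."""
    total = 0
    for v in values:
        if v < 0:
            break          # stop before including the negative value
        total += v
    return total

def sum_until_negative_buggy(values):
    """Same intent, but the accumulation was moved above the guard,
    so the first negative value is silently included."""
    total = 0
    for v in values:
        total += v
        if v < 0:
            break
    return total

print(sum_until_negative([1, 2, -3, 4]))        # 3
print(sum_until_negative_buggy([1, 2, -3, 4]))  # 0
```

A trivial edit, a materially different result: exactly the kind of simple error whose complexity says nothing about the severity of the failure.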

9
Errors Unavoidable!
  • Errors are bound to creep into software.
  • This belief enhances the importance of testing.
  • Errors that creep in during various phases of
    development can be removed using a well defined
    and controlled process of software testing.

10
System/Software Testing
  • Error detection and removal
  • determine level of reliability
  • well-planned procedure - Test Cases
  • done by an independent quality assurance
    group (except for unit testing)

11
What is Test Assessment?
  • Given a test set T (a collection of test
    inputs), we ask:
  • How good is T?
  • Measurement of the goodness of T is test
    assessment.
  • Test assessment is carried out based on one or
    more test adequacy criteria.

12
Test Assessment-continued
  • Test assessment provides the following
    information:
  • A metric, also known as the adequacy score or
    coverage, usually between 0 and 1.
  • A list of all the weaknesses in T, which when
    removed, will raise the score to 1.
  • The weaknesses depend on the criteria used for
    assessment.
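The adequacy score and the weakness list can be sketched as simple set arithmetic. `adequacy_score` and `weaknesses` are hypothetical names, with integer statement ids standing in for coverage elements:

```python
def adequacy_score(covered, domain):
    """Coverage metric: fraction of the coverage domain exercised
    by the test set T (a value between 0 and 1)."""
    domain = set(domain)
    return len(set(covered) & domain) / len(domain)

def weaknesses(covered, domain):
    """The uncovered elements: removing these weaknesses raises
    the score to 1."""
    return set(domain) - set(covered)

statements = {1, 2, 3, 4, 5}   # coverage domain: statement ids
hit = {1, 2, 4}                # elements covered by T
print(adequacy_score(hit, statements))      # 0.6
print(sorted(weaknesses(hit, statements)))  # [3, 5]
```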

13
Test Assessment-continued
  • Once coverage has been computed, and the
    weaknesses identified, one can improve T.
  • Improvement of T is done by examining one or more
    weaknesses and constructing new test requirements
    designed to overcome the weaknesses.
  • The new test requirements lead to new test
    specifications and to further testing of the
    program.

14
Test Assessment-continued
  • This is continued until all weaknesses are
    overcome, i.e. the adequacy criterion is
    satisfied (coverage = 1).
  • In some instances it may not be possible to
    satisfy the adequacy criterion, for one or more
    of the following reasons:
  • Lack of sufficient manpower
  • Weaknesses that cannot be removed because they
    are infeasible.

15
Test Assessment-continued
  • The cost of removing the weaknesses is not
    justified.
  • While improving T by removing its weaknesses, one
    usually tests the program more thoroughly than it
    has been tested so far.
  • This additional testing is likely to result in
    the discovery of some or all of the remaining
    errors.

16
Test Assessment-Summary
(flowchart, reconstructed as steps)
  0. Develop T.
  1. Select an adequacy criterion C.
  2. Measure the adequacy of T w.r.t. C.
  3. Is T adequate? If no, go to step 4; if yes, go to step 5.
  4. Improve T, then return to step 2.
  5. Is more testing warranted? If yes, return to step 1; if no, stop (step 6).
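The assessment loop can be sketched in code. `assess_and_improve`, `adequacy`, and `improve` are hypothetical interfaces standing in for a real criterion and a real test-improvement step; the budget parameter models the manpower and cost limits mentioned above:

```python
def assess_and_improve(test_set, adequacy, improve, budget=10):
    """Iterate the loop: measure adequacy, improve T, re-measure,
    until the criterion is satisfied or the budget runs out."""
    for _ in range(budget):
        if adequacy(test_set) == 1.0:   # criterion satisfied
            return test_set
        test_set = improve(test_set)    # remove one weakness
    return test_set                     # budget exhausted

# Toy instantiation: the coverage domain is five statement ids.
domain = {1, 2, 3, 4, 5}
adequacy = lambda T: len(T & domain) / len(domain)
improve = lambda T: T | {min(domain - T)}   # add one uncovered element
final = assess_and_improve({1, 3}, adequacy, improve)
print(adequacy(final))  # 1.0
```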
17
Principle Underlying Test Assessment
  • A uniform principle underlies test assessment
    throughout the testing process.
  • This principle is known as the coverage
    principle.
  • It has come about as a result of extensive
    empirical studies.

18
Coverage Domains
  • To formulate and understand the coverage
    principle, we need to understand
  • coverage domains
  • coverage elements
  • A coverage domain is a finite domain that we want
    to cover. Coverage elements are the individual
    elements of this domain.
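As a concrete example of a coverage domain, the set of a function's statements can be measured with Python's standard `sys.settrace` hook. This is an illustrative sketch, not a production coverage tool; executed lines are recorded as offsets from the `def` line:

```python
import sys

def traced_lines(func, *args):
    """Record which lines of func execute: a sketch of measuring
    coverage elements in the statement-coverage domain."""
    hit = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            hit.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hit

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

print(traced_lines(classify, 5))   # {1, 3}: the if and the last return
print(traced_lines(classify, -1))  # {1, 2}: the if and the negative branch
```

Here the coverage domain is the set of lines of `classify`, and each traced run reports which coverage elements its input exercises.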

19
The Coverage Principle
  • Measuring test adequacy and improving a test set
    against a sequence of well defined, increasingly
    strong, coverage domains leads to improved
    reliability of the system under test.

20
When Should Testing Stop?
  • Terminate when an adequacy criterion is met
  • a measure of how well the testing process has
    been performed
  • relates a test set to the program, the
    specification, or both
  • could relate a test set to the program's
    operational profile

21
Axiom (definition)
  • an established principle; a self-evident truth

22
Axiom 1
  • Axiom 1 (Applicability) For every program, there
    exists a finite adequate test set.
  • If the adequate test set is infinite (or even
    very large) it would be impossible to satisfy the
    criterion.

23
Axiom 2
  • Axiom 2 (Non-exhaustive Applicability) There is
    a program P and a test set T such that P is
    adequately tested by T, and T is not an
    exhaustive test set.
  • Exhaustive testing (testing every point in the
    domain) can be assumed to be adequate. An
    important part of testing is identifying a subset
    of the test domain which represents the whole
    domain

24
Axiom 3
  • Axiom 3 (Monotonicity) If T is adequate for P,
    and T is a subset of T′, then T′ is adequate
    for P.
  • The addition of more test cases to an already
    adequate set does not make it inadequate.
    Therefore, the subset of the domain does not have
    to be the smallest possible subset. In some
    instances a larger subset might be easier to
    identify.

25
Axiom 4
  • Axiom 4 (Inadequate Empty Set) The empty set is
    not adequate for any program.
  • A program cannot be assumed to be adequate if it
    has not been tested at all. Dynamic testing
    assumes that the program has at least one input.

26
Axiom 5
  • Axiom 5 (Anti-extensionality) There are programs
    P and Q such that P is equivalent to Q, T is
    adequate for P, but T is not adequate for Q.
  • These axioms support program-based testing, not
    functionality- or specification-based testing.
    Therefore, two programs which perform the same
    function but are implemented differently would
    require different test sets.
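A minimal sketch of anti-extensionality, using two hypothetical equivalent programs and a statement-coverage criterion:

```python
def abs_v1(x):
    return x if x >= 0 else -x   # both cases in one statement

def abs_v2(x):
    if x >= 0:
        return x
    return -x                    # a separate statement for x < 0

# The two programs are equivalent: they agree on every input.
assert all(abs_v1(x) == abs_v2(x) for x in range(-5, 6))

# Under a statement-coverage criterion, T = [3] executes every
# statement of abs_v1 but never reaches `return -x` in abs_v2,
# so T is adequate for abs_v1 yet inadequate for abs_v2.
T = [3]
```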

27
Axiom 6
  • Axiom 6 (General Multiple Change) There are
    programs P and Q which are the same shape, and a
    test set T such that T is adequate for P, but T
    is not adequate for Q.
  • Programs with the same shape (structure) may
    require different data sets to traverse their
    structure

28
Axiom 7
  • Axiom 7 (Anti-decomposition) There exists a
    program P and a component Q such that T is
    adequate for P, T′ is the set of vectors of
    values that variables can assume on entrance to
    Q for some t of T, and T′ is not adequate for Q.
  • The program P taken as a whole may not be able
    to stimulate all possible actions of component Q.
    Q must be tested in its own right.
  • Example component Q could be a sorting
    routine. What if program P always provides data
    already sorted?
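The sorting example can be sketched as follows; `p` and `q` are hypothetical, and `q` records whether its element-moving code ever ran:

```python
def q(xs):
    """Component Q: insertion sort that notes whether it ever
    had to move an element."""
    moved = False
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            moved = True
            j -= 1
    return out, moved

def p(n):
    """Program P: always hands Q already-sorted data."""
    return q(list(range(n)))

# Tests driven through P never trigger Q's element-moving code:
_, moved = p(5)
print(moved)   # False: the swap branch of Q was never executed
_, moved = q([2, 1, 3])
print(moved)   # True: testing Q in its own right reaches it
```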

29
Axiom 8
  • Axiom 8 (Anti-composition) There exist programs
    P and Q such that T is adequate for P and P(T)
    is adequate for Q, but T is not adequate for
    P;Q (the composition of P and Q).
  • Testing must take into account the added
    complexity and interaction which occurs when two
    programs are composed. Hence the traditional
    need for integration testing.

30
Summary of Axiomatizing
31
Test Strategy
  • UNIT TESTING (Module testing)
  • debuggers, tracers
  • programmers
  • INTEGRATION TESTING
  • communication between modules
  • start with one module, then add incrementally
  • SYSTEM TESTING
  • manual procedures, restart and recovery, user
    interface
  • real data is used
  • users involved
  • ACCEPTANCE TESTING
  • user-prepared test data
  • Verification Testing, Validation testing, Audit
    Testing

32
Types of Testing
  • White Box Testing
  • knowing the internal workings, and exercising
    its different parts
  • test various paths through the software; if
    exhaustive testing is impossible, test the
    high-risk paths
  • Black Box Testing
  • knowing the functions to be performed, and
    testing whether they are performed properly
  • correct outputs from inputs; databases accessed
    and updated properly
  • test cases designed from user requirements
  • appropriate at Integration, Systems and
    Acceptance testing levels

33
Types of Testing(cont.)
  • Compatibility Testing
  • Testing to ensure compatibility of an
    application or Web site with different browsers,
    OSs, and hardware platforms
  • Conformance Testing
  • Verifying implementation conformance to industry
    standards

34
Types of Testing(cont.)
  • Functional Testing
  • Validating that an application or Web site
    conforms to its specifications and correctly
    performs all its required functions.
  • Load Testing
  • Load testing is a generic term covering
    Performance Testing and Stress Testing.

35
Types of Testing(cont.)
  • Performance Testing
  • Performance testing can be applied to understand
    the scalability of your application or Web site,
    or to benchmark the performance of third-party
    products such as servers and middleware ahead of
    a potential purchase.

36
Types of Testing(cont.)
  • Regression Testing
  • Similar in scope to a functional test, a
    regression test allows a consistent, repeatable
    validation of each new release of a product or
    Web site.
  • Smoke Testing
  • A quick-and-dirty test that ensures the major
    functions of a piece of software work without
    bothering with finer details.

37
Types of Testing(cont.)
  • Stress Testing
  • Testing conducted to evaluate a system or
    component at or beyond the limits of its
    specified requirements to determine the load
    under which it fails and how.
  • Unit Testing
  • Functional and reliability testing in an
    Engineering environment.

38
Conclusion
  • Test Early / Test Often
  • Developers perform unit testing
  • Testers perform integration and system testing
  • Everyone owns a stake in the product development
    lifecycle
  • Testing requires a scientific approach. It is
    crucial for building robust and reliable
    software.

39
  • Any Questions?