1. TQS - Teste e Qualidade de Software (Software Testing and Quality)
Software Testing Concepts
João Pascoal Faria, jpf@fe.up.pt
www.fe.up.pt/jpf
2. Software testing

- Software testing consists of the dynamic (1) verification of the behavior of a program on a finite (2) set of test cases, suitably selected (3) from the usually infinite executions domain, against the specified expected (4) behavior (source: SWEBOK)
  - (1) testing always implies executing the program on some inputs
  - (2) even for simple programs, so many test cases are theoretically possible that exhaustive testing is infeasible; there is a trade-off between limited resources and schedules and inherently unlimited test requirements
  - (3) see the test case design techniques later on how to select the test cases
  - (4) it must be possible to decide whether the observed outcomes of the program are acceptable or not; the pass/fail decision is commonly referred to as the oracle problem
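The oracle problem amounts to deciding pass/fail by comparing observed outcomes against expected ones. A minimal sketch in Python (the `add` function and the string verdicts are illustrative assumptions, not from the slides):

```python
def add(a, b):
    """Illustrative program under test."""
    return a + b

def oracle(actual, expected):
    """A trivial test oracle: decide pass/fail by comparing outcomes."""
    return "pass" if actual == expected else "fail"

# Dynamic verification: execute the program on one input, then decide.
print(oracle(add(3, 5), 8))   # pass
print(oracle(add(3, 5), 9))   # fail
```

In realistic systems the oracle is rarely this simple: expected outcomes may come from a specification, a reference implementation, or human judgment.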
3. Test cases

- Test case: inputs to test the system and the expected outputs from these inputs if the system operates correctly
  - Inputs may include an initial state of the system
  - Outputs may include a final state of the system
- When test cases are executed, the system is provided with the specified inputs and the actual outputs are compared with the expected outputs
- Example for a calculator:
  - 3 + 5 (input) should give 8 (output)
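A test case can bundle an initial state with the inputs, and a final state with the expected outputs. A sketch in Python, where the `Calculator` class with a running memory is an illustrative stand-in for the system under test:

```python
class Calculator:
    """Illustrative system under test with internal state."""

    def __init__(self, memory=0):
        self.memory = memory      # initial state is part of the test inputs

    def add(self, value):
        self.memory += value
        return self.memory        # final state is part of the test outputs

# Test case: initial state 3, input 5, expected output 8, expected final state 8
calc = Calculator(memory=3)
actual = calc.add(5)
assert actual == 8                # compare actual output with expected output
assert calc.memory == 8           # compare final state with expected state
print("test passed")
```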
4. Purpose of software testing

- The purpose of software testing is to find defects, not to prove the absence of defects!
- "The goal of a software tester is to find bugs, find them as early as possible, and make sure that they get fixed" (source: Ron Patton)
- The best test cases are the ones with the highest probability of revealing defects/bugs
- A secondary goal is to assess software quality
5. Test types

[Diagram: three dimensions along which test types are classified; the focus here is on functional behaviour]
- Level or phase: unit, integration, system
- Test case design strategy/technique: white box (or structural), black box (or functional)
- Quality attributes: functional behaviour, reliability, usability, performance, robustness, security, accessibility
6. Test levels or phases (1)

- Unit testing
  - Testing of individual program units or components
  - Usually the responsibility of the component developer (except sometimes for critical systems)
  - Tests are based on experience, specifications and code
  - A principal goal is to detect functional and structural defects in the unit
7. Test levels or phases (2)

- Integration testing
  - Testing of groups of components integrated to create a sub-system
  - Usually the responsibility of an independent testing team (except sometimes in small projects)
  - Tests are based on a system specification (technical specifications, designs)
  - A principal goal is to detect defects that occur on the interfaces of units and in their common behavior
8. Test levels or phases (3)

- System testing
  - Testing the system as a whole
  - Usually the responsibility of an independent testing team
  - Tests are usually based on a requirements document (functional requirements/specifications and quality requirements)
  - A principal goal is to evaluate attributes such as usability, reliability and performance (assuming unit and integration testing have been performed)

(source: I. Sommerville)
9. Test levels or phases (4)

- Acceptance testing
  - Testing the system as a whole
  - Usually the responsibility of the customer
  - Tests are based on a requirements specification or a user manual
  - A principal goal is to check if the product meets customer requirements and expectations

(source: I. Sommerville)
10. Test levels or phases (5)

- Regression testing
  - Repetition of tests at any level after a software change

(source: I. Sommerville)
11. Test-Driven Development (TDD)

- Development approach appropriate for unit testing
- The rhythm of Test-Driven Development (TDD) can be summed up as follows:
  1. Quickly add a test.
  2. Run all tests and see the new one fail.
  3. Make a little change.
  4. Run all tests and see them all succeed.
  5. Refactor to remove duplication.

(Kent Beck, Test-Driven Development: By Example, Addison-Wesley, 2003)
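The rhythm above can be sketched in Python with plain assertions; the `multiply` function is an illustrative example, not from the slides. The test is written first (step 1) and would fail while `multiply` is missing (step 2); the simplest implementation that passes is then added (step 3) and all tests are rerun (step 4):

```python
# Step 1: quickly add a test (written before the implementation exists).
def test_multiply():
    assert multiply(2, 3) == 6
    assert multiply(0, 5) == 0

# Step 3: make a little change -- the simplest code that could possibly pass.
def multiply(a, b):
    return a * b

# Step 4: run all tests and see them succeed.
test_multiply()
print("all tests pass")
```

In practice each step is driven by a test framework such as JUnit or pytest rather than a hand-called test function.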
12. Test levels and the extended V-model of software development

[Diagram: extended V-model; each development phase on the left of the V is paired with the test level executed on the right, and the corresponding tests are specified, designed and coded alongside that phase]
- Specify Requirements (requirements review) paired with: Execute system tests and Execute acceptance tests; specify/design/code system and acceptance tests (system/acceptance test plan, test cases, review/audit)
- Design (design review) paired with: Execute integration tests; specify/design/code integration tests (integration test plan, test cases, review/audit)
- Code (code reviews) paired with: Execute unit tests; specify/design/code unit tests (unit test plan, test cases, review/audit)

(source: I. Burnstein, pg. 15)
13. Good practices

- Test as early as possible
  - Write the test cases before the software to be tested
    - applies to any level: unit, integration or system
    - helps getting insight into the requirements
- Code the test cases
  - because of the frequent need for regression testing (repetition of testing each time the software is modified)
- The more critical the system, the more independent the tester should be
  - colleague, other department, other company
- Be conscious about cost
- Derive expected test outputs from the specification (formal/informal, explicit/implicit), not from the code
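Coding the test cases makes regression testing cheap: the same suite can be rerun unchanged after every modification. A minimal sketch using Python's standard `unittest` module (the `add` function is an illustrative system under test; the expected values come from the specification of addition, not from reading the code):

```python
import unittest

def add(a, b):
    """Illustrative system under test."""
    return a + b

class AddRegressionTests(unittest.TestCase):
    # Expected outputs are derived from the specification,
    # not from the implementation of add().
    def test_positive_operands(self):
        self.assertEqual(add(3, 5), 8)

    def test_negative_operands(self):
        self.assertEqual(add(-2, -3), -5)

# Rerun the whole suite after every change to the software.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed:", result.wasSuccessful())
```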
14. References and further reading

- Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
- Software Testing, Ron Patton, SAMS, 2001
- Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons, 1999
- Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
- Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society, http://www.swebok.org/