System Test Planning and the usefulness of a Safety Checklist

1
System Test Planning and the usefulness of a
Safety Checklist
  • ECEN5543

2
Plan for class period
  • Additional notations to analyze requirements,
    prepare for design and prepare for system test
    plan
  • Event tables
  • State transition tables
  • Decision tables (plus an old but useful
    shorthand), aka condition tables
  • Overview of system test categories
  • What to use to determine tests in each category
  • Includes analysis of Safety Checklist with
    respect to stand-alone software
  • (Do Safety Checklist on use case as an
    experiment.)

3
More Requirements Analysis Tools Useful for
System Testing
  • Examples from Software Engineering Concepts,
    Richard Fairley, McGraw-Hill, 1985. Out of
    print.

4
System Sequence Diagram -- SSD
External actors' interaction with the system
Must define scope to know what is external
Actors can be other programs, other products, people
High-level view
Sequence diagrams are part of UML
Same rules used to create SYSTEM sequence diagrams
5
SSD
6
State Transition Table for split routine
Present State | Input | Actions            | Outputs   | Next State
S0            | DS    | Open F6; Open F7   |           | S1
S1            | D11   | Write F6           | D11 to F6 | S1
S1            | D12   | Close F6; Write F7 | D12 to F7 | S2
S1            | DE    | Close F6; Close F7 |           | S0
S2            | D12   | Write F7           | D12 to F7 | S2
S2            | DE    | Close F7           |           | S0
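
The table above can be coded directly as a table-driven state machine, which also yields one test case per row. A minimal Python sketch, assuming hypothetical open/close/write helpers for files F6 and F7; the state and input names come from the table.

    # Table-driven sketch of the split routine. The helper functions are
    # hypothetical stand-ins for the real file operations.
    def open_file(name):          print("open", name)
    def close_file(name):         print("close", name)
    def write_file(name, record): print("write", record, "to", name)

    # (present state, input) -> (actions to run on the record, next state)
    SPLIT_TABLE = {
        ("S0", "DS"):  ([lambda r: open_file("F6"), lambda r: open_file("F7")],      "S1"),
        ("S1", "D11"): ([lambda r: write_file("F6", r)],                             "S1"),
        ("S1", "D12"): ([lambda r: close_file("F6"), lambda r: write_file("F7", r)], "S2"),
        ("S1", "DE"):  ([lambda r: close_file("F6"), lambda r: close_file("F7")],    "S0"),
        ("S2", "D12"): ([lambda r: write_file("F7", r)],                             "S2"),
        ("S2", "DE"):  ([lambda r: close_file("F7")],                                "S0"),
    }

    def split(records):
        """records: sequence of (kind, record) pairs, e.g. ("D11", data)."""
        state = "S0"
        for kind, record in records:
            actions, state = SPLIT_TABLE[(state, kind)]
            for act in actions:
                act(record)
        return state

A system test then feeds record sequences such as DS, D11, D12, DE and checks the resulting file operations against the expected rows of the table.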
7
Draw the state transition diagram for this state
transition table
  • Notation reminders
  • Circle = state
  • Arc = transition
  • x/y = input/output

8
Present State | Input or Event | Action Output | Next State
ST1.  Idle | card inserted | request for PIN | Waiting for PIN
ST2.  Waiting for PIN | PIN entered | display asterisks | Validating PIN
ST3.  Waiting for PIN | cancel | display msg | Ejecting
ST4.  Validating PIN | indicates valid | display choices | Waiting for customer transaction choice
ST5.  Validating PIN | indicates stolen | display stolen | Confiscating
ST6.  Validating PIN | indicates invalid | display invalid | Waiting for PIN
ST7.  Waiting for customer transaction choice | Cancel | display cancel | Ejecting
ST8.  Waiting for customer transaction choice | Balance Query selected | | Processing query
(continued on next slide)
9
Present State | Input or Event | Action Output | Next State
ST9.  Waiting for customer transaction choice | Withdrawal selected | | Processing w/d
ST10. Confiscating | Card confiscated | | Terminating
ST11. Processing query | Rejected for this user | display rejected | Ejecting
ST12. Processing query | Query OK | display printing | Printing
ST13. Processing withdrawal | ok amount | display ok msg | Dispensing
ST14. Processing withdrawal | not ok amount | display refusal | Ejecting
ST15. Printing | transaction complete | print receipt | Ejecting
ST16. Dispensing | sufficient cash in ATM | cash | Printing
ST17. Dispensing | insufficient cash in ATM | disp insufficient cash | Ejecting
ST18. Ejecting | card ejection started | display msg to take card | Terminating
ST19. Terminating | card ejection complete | display ending msg | Idle
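
Rows ST1-ST19 can likewise be captured as a lookup table, and each row then becomes one functional test case (event in a given state, expected output, expected next state). A condensed Python sketch using the names from the slides; only the first few rows are shown.

    # (present state, input or event) -> (action/output, next state);
    # rows ST1-ST6 shown, the remaining rows extend the same dictionary.
    ATM_TABLE = {
        ("Idle",            "card inserted"):     ("request for PIN",   "Waiting for PIN"),
        ("Waiting for PIN", "PIN entered"):       ("display asterisks", "Validating PIN"),
        ("Waiting for PIN", "cancel"):            ("display msg",       "Ejecting"),
        ("Validating PIN",  "indicates valid"):   ("display choices",   "Waiting for customer transaction choice"),
        ("Validating PIN",  "indicates stolen"):  ("display stolen",    "Confiscating"),
        ("Validating PIN",  "indicates invalid"): ("display invalid",   "Waiting for PIN"),
    }

    def step(state, event):
        """One transition; a KeyError flags an event the table does not specify."""
        output, next_state = ATM_TABLE[(state, event)]
        return output, next_state

    # Example: the test for row ST2.
    assert step("Waiting for PIN", "PIN entered") == ("display asterisks", "Validating PIN")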
10
State Transition Diagram - incomplete
(Figure: partial state transition diagram drawn from the table above, showing
the states Idle, Waiting for PIN, Validating PIN, Waiting for customer
transaction choice, Ejecting, Confiscating, and Terminating, with transition
labels such as card inserted, PIN inserted, valid, invalid, stolen, cancel,
card confiscated, and card ejection complete.)
11
Data flow diagram for split
Notation: circle = action; arc = data direction;
arc label = data; label = data sink / data source;
data store
12
Data Flow Diagram for ATM
(Figure: data flow diagram for the ATM. Recoverable labels: external actor
ATM customer; process Validate User; data flows card, request for pin, pin,
validated user, transaction request, dispatch request, display transactions.)
13
2-dimensional event table
Mode       EVENT
           E13    E37     E45       E45
Start-up   A16    ---     A14;A32   A14;A32
Steady     X      A6,A2   ---       ---
Shut-down  ---    ---     ---       ---
Alarm      ---    ---     ---       ---
Action;action = sequential actions. Action,action = concurrent actions.
X = impossible. --- = no action required.
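
A two-dimensional event table maps directly onto a mode-by-event lookup, which makes the X (impossible) and --- (no action) cells testable. A small Python sketch using the symbols shown above; the entries are those on the slide, not from a real system.

    # mode -> event -> entry, as read off the event table above.
    EVENT_TABLE = {
        "Start-up": {"E13": "A16", "E37": "---",   "E45": "A14;A32"},
        "Steady":   {"E13": "X",   "E37": "A6,A2", "E45": "---"},
    }

    def actions_for(mode, event):
        entry = EVENT_TABLE[mode][event]
        if entry == "X":
            raise AssertionError(f"event {event} must be impossible in mode {mode}")
        return [] if entry == "---" else [entry]   # ";" sequential, "," concurrent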
14
Decision table
                                Rule 1  Rule 2  Rule 3  Rule 4
Conditions
  Credit limit is satisfactory    Y       N       N       N
  Pay experience is favorable     --      Y       N       N
  Special clearance obtained      --      --      Y       N
Actions
  Perform the approve-order       X       X       X
  Reject order                                            X
X = do that action. Y = yes. N = no. -- = don't care.
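
The order-approval table translates almost directly into data plus a tiny interpreter, and each rule column becomes one functional test. A Python sketch of the table above, with None standing for a don't-care entry; the function name is illustrative.

    # One tuple of condition entries per rule column; None = don't care ("--").
    RULES = [
        # (credit ok, pay ok, clearance)   action
        ((True,  None,  None),  "approve-order"),   # rule 1
        ((False, True,  None),  "approve-order"),   # rule 2
        ((False, False, True),  "approve-order"),   # rule 3
        ((False, False, False), "reject-order"),    # rule 4
    ]

    def decide(credit_ok, pay_ok, clearance_ok):
        facts = (credit_ok, pay_ok, clearance_ok)
        for conditions, action in RULES:
            if all(c is None or c == f for c, f in zip(conditions, facts)):
                return action
        raise ValueError("incomplete decision table: no rule matched")

    assert decide(False, True, False) == "approve-order"   # exercises rule 2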
15
Decision tables
  • Ambiguous: two identical Rule columns with
    different actions
  • Redundant: identical Rule columns and identical
    actions
  • Incomplete: failure to specify an action for a
    Rule column
  • Karnaugh map is more succinct
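
All three defects can be detected mechanically by expanding each rule column over its don't-care entries and comparing the expansions. A hedged Python sketch, assuming a rule is represented as a string of 'Y'/'N'/'-' condition entries paired with a set of action names:

    from itertools import product

    def expand(conditions):
        """All fully specified Y/N vectors a rule column covers."""
        choices = [("Y", "N") if c == "-" else (c,) for c in conditions]
        return set(product(*choices))

    def check(table):
        """table: list of (conditions, actions) rule columns, e.g. ("YN-", {"A1"})."""
        seen, problems = {}, []
        for conditions, actions in table:
            for vector in expand(conditions):
                if vector in seen:
                    kind = "redundant" if seen[vector] == actions else "ambiguous"
                    problems.append((kind, vector))
                seen[vector] = actions
        width = len(table[0][0])
        for vector in set(product("YN", repeat=width)) - set(seen):
            problems.append(("incomplete", vector))
        return problems

For example, check([("Y--", {"approve"}), ("NY-", {"approve"})]) reports the two uncovered N,N combinations as incomplete.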

16
Incomplete and multiply-specified decision table
as a Karnaugh map
(rows indexed by C1, columns by the combinations of C2 and C3)
   A1,A2   A1,A2   A2   A2
   A1      A1      A3   (none specified)
Example from Software Engineering Concepts,
Richard Fairley, McGraw-Hill, 1985. Out of
print.
17
Incomplete and multiply-specified decision table
(same Karnaugh map as on the previous slide)
Example from Software Engineering Concepts,
Richard Fairley, McGraw-Hill, 1985. Out of
print.
18
Equivalent to this decision table
(incomplete and multiply-specified)
      Rule 1  Rule 2  Rule 3
C1      Y       d       N
C2      d       Y       N
C3      d       d       Y
A1      X
A2              X
A3                      X
Example from Software Engineering Concepts,
Richard Fairley, McGraw-Hill, 1985. Out of
print.
19
General Test Categories
  • Functional
  • Performance
  • Stress
  • --------------------- not system test------------
  • Glass-box, sometimes called white-box

20
Functional
  • Success Scenario paths of Use Cases
  • All alternate paths of Use Cases; if
    intentionally not implemented in a particular
    release, how is their absence handled?

21
Performance
  • Performance
  • How does the system perform under normal
    conditions?
  • Is it adequate?
  • WHAT counts as performance depends on the application
  • Can be extended to include those quality ranges
    that can be tested

22
Stress!!
  • How does system behave under unreasonable
    conditions?
  • Evaluates robustness

23
Performance vs.
Stress!!
  • Specified performance criteria are tested as
    performance tests (duh!)
  • Unspecified performance criteria are tested as
    stress conditions
  • Stress tests also include conditions outside the
    specified performance criteria

24
How do we decide what to test for performance and
stress conditions?
  • Targeting Safety-Related Errors During Software
    Requirements Analysis, Robyn R. Lutz, JPL,
    Caltech, Proceedings of the ACM SIGSOFT Symposium
    on the Foundations of Software Engineering, 1993
  • The requirements discussed in the above paper
    provide excellent guidelines

25
Safety Checklist for safety-critical embedded
systems
  • Two most common causes of safety-related software
    errors
  • Inadequate interface requirements
  • Robustness issues -- discrepancies between
  • The documented requirements
  • Requirements actually needed for correct
    functioning
  • Usage of the checklist reduces safety-related
    software errors

26
Earlier Study of Causes of S-R SW Errors (ref.
11 in the paper)
  • Errors identified as potentially hazardous to a
    system tend to be produced by different error
    mechanisms than non-safety-related software
    errors
  • S-R sw errors found during integration testing
  • Misunderstandings of sw's interfaces to rest of
    sys
  • Discrepancies between documented reqs and the
    requirements necessary for correct functioning,
    i.e., robustness
  • (In other words, the documented requirements are
    inadequate and therefore wrong.)

27
What's an error? A discrepancy between
  • Computed, observed, or measured value or
    condition, versus
  • True, specified, or theoretically correct value
    or condition

Lutz: Safety-related if the system's safety
analyst determines during the standard
error-correction process that the error
represents potentially significant or
catastrophic failure effects.
28
Effort to target known causes during requirements
phase
  • Interfaces Challenge
  • Specify correctly the software/system interfaces
    in complex embedded systems with software
    distributed among various hardware components,
    some of which are not yet determined
  • Robustness Challenge
  • Many s-r software errors involve inadequate
    software responses to extreme conditions or
    extreme values (stress conditions)
  • For extreme, read boundary and beyond

29
Formal vs. Informal
  • Lutz's Safety Checklist is OK for a development
    process without formal specification languages
    (whew!)
  • And without (complete) finite-state-machine
    modeling
  • DOES focus on software/system interfaces, failure
    modes, timing, and boundary conditions and values

30
2 key components of any sw requirements
methodology
  • To aid in determining the requirements
  • To represent the software requirements in
    specifications
  • This Safety Checklist focuses on the first; the
    results still must be incorporated in the
    specification
  • The determination technique does not preclude
    any specification technique

31
NOTE Benefit of Formal Inspections
  • Example given is a formal inspection on 2,000,000
    lines of code (seismic processing)
  • Can add the Safety Checklist to any other
    checklists being used in requirements reviews,
    which are the first stage of system testing
    (prior to architecture design)

32
The Formal Approach
  • Build a formal, finite-state model of the
    requirements specifications
  • Analyze the model to ensure its properties match
    the desired behavior
  • State criteria (as formal predicates, i.e.,
    logical relationships) that must hold in the model

33
Lutz Less Formal Approach
  • Translate the criteria into informal,
    natural-language format
  • Formatting the concerns as a checklist avoids the
    need to build the complete finite-state model

34
Less Is More: Informal process extended to
embedded systems
  • Safety Checklist allows for
  • Multiple processors
  • Concurrently executing processes
  • Redundant resources to manage
  • Externally commanded state changes
  • State changes not visible
  • Typical of many complex, embedded systems with
    timing constraints and safety-critical functions
  • Even in stand-alone system, can have operating
    system environment, apparent concurrency,
    externally commanded state changes, invisible
    state changes

35
Checklist regarding Interfaces
36
Interfaces: Is the software's response specified
  • For every input, what is the response to
    out-of-range values?
  • For not receiving an expected input?
  • If appropriate to time out: a. length, b. when
    to start counting, c. latency
  • If input arrives when it shouldn't
  • On given input, will software always follow the
    same path? (deterministic)
  • If not, is that a problem?

37
Interfaces, continued
  • Is each input bounded in time?
  • Specify earliest time to accept input
  • Latest time to consider data to be valid
  • 6. Is minimum and maximum arrival rate specified
    for each input? For each communication path?
  • If interrupts are masked or disabled, can events
    be lost?
  • Include cooperating processes in apparent
    concurrency

38
Interfaces, continued 2
  1. Can output be produced faster than it can be used
    (absorbed) by the interfacing module? Is
    overload behavior specified?
  2. Is all data output to the buses from the
    sensors used by the software? If not, is some
    required function omitted from the spec?
  3. Can input received before startup, while offline,
    or after shutdown influence the next software
    startup behavior? Are any values retained in hw
    or sw? The earliest? The most recent?
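
Several of these interface questions convert directly into system test cases. A pytest-style Python sketch, assuming a hypothetical handle_input(value, timeout_s) entry point whose specified range is 0..100 and whose specified timeout is 2 seconds; the module, names, and limits are illustrative, not part of the checklist.

    import pytest  # assumed test framework

    # Hypothetical system-under-test API; adjust to the real interface.
    from system_under_test import handle_input, InputRejected

    def test_out_of_range_value_is_rejected():
        # Checklist: is the response specified for out-of-range values?
        with pytest.raises(InputRejected):
            handle_input(101)          # just beyond the assumed 0..100 range

    def test_missing_input_times_out_cleanly():
        # Checklist: response for not receiving an expected input,
        # with a specified timeout length.
        result = handle_input(None, timeout_s=2.0)
        assert result.status == "timed_out"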

39
Checklist regarding Robustness
40
Robustness
  • 11. If performance degradation is the chosen
    response, is the degradation predictable?
  • 12. Are there sufficient delays incorporated into
    the error-recovery responses? E.g., don't return
    to the normal state prior to managing the error
  • 13. Are feedback loops specified where
    appropriate to compare the actual effects of
    outputs on the system with the predicted effects?

41
Robustness, continued
  1. Are all modes and modules reachable (used in some
    path through the code)? If not, are they
    superfluous, or is there some other logic error?
  2. If a hazards analysis has been done, does every
    path from a hazardous state (a failure-mode) lead
    to a low-risk state? (See the sketch after this
    list.)
  3. Are the inputs identified which, if not received,
    can lead to a hazardous state or can prevent
    recovery (single-point failures)?
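
As referenced in item 2, the "every path from a hazardous state leads to a low-risk state" property can be checked on a model of the failure-mode transitions. A Python sketch over a hypothetical transition graph; the state names are invented for illustration.

    # Hypothetical failure-mode model: state -> possible next states.
    TRANSITIONS = {
        "Overpressure": ["Venting"],
        "Venting":      ["Vented"],
    }
    LOW_RISK = {"Vented", "Shutdown"}

    def all_paths_reach_low_risk(state, visiting=frozenset()):
        """True if every path out of `state` eventually reaches a low-risk state."""
        if state in LOW_RISK:
            return True
        if state in visiting:                  # a cycle that avoids low risk
            return False
        successors = TRANSITIONS.get(state, [])
        if not successors:                     # dead end in a non-low-risk state
            return False
        return all(all_paths_reach_low_risk(s, visiting | {state})
                   for s in successors)

    assert all_paths_reach_low_risk("Overpressure")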

42
Usefulness?
  • The Safety Checklist has been demonstrated to "ask
    the right questions"
  • Not sufficient to preclude introducing errors
  • Necessary although not sufficient

43
May I have the envelope, please
  1. Not every hazardous state led to a low-risk
     state.
  2. Error-recovery responses incorporated
     insufficient delays.
  3. Input arrived when it shouldn't and no response
     was specified; the response defaulted to
     unintended behavior.
  4. Response was not specified for out-of-range
     values that were possible for some inputs.
  5. Output was produced too fast for the interfacing
     module.

44
Allows tailoring
  • Focuses on historically troublesome aspects of
    safety-critical, embedded software
  • Avoids over-specification of well-understood or
    low-risk requirements
  • Tailor to level of technical or historical risk

45
First step toward safety constraints
  • Many items that it identifies are system hazards
  • Can be used to identify safety constraints
  • Not yet ready for formal prediction
  • How to use it for informal prediction of
    error-prone factors

46
After Requirements Are Improved
  • How do we ensure that requirements are
    implemented and maintained?
  • After code is written (new code or bug fixes);
    note that it is difficult to unit test these issues
  • After new requirements are added
  • After old requirements are modified
  • Role of reviews
  • Code the invariants where appropriate (see the
    sketch after this list)
  • System tests to test use cases and the safety
    checklist
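
For the "code the invariants" item above, a lightweight option is to express the safety-related requirement as a guard plus an assertion so that violations surface during system test. A Python sketch with a hypothetical dispense operation; the specific invariant is illustrative.

    class CashDispenser:
        """Hypothetical component used to illustrate coding an invariant."""

        def __init__(self, cash_on_hand):
            self.cash_on_hand = cash_on_hand

        def dispense(self, amount):
            # Invariant (assumed requirement): never dispense a non-positive
            # amount or more cash than the ATM currently holds.
            if amount <= 0 or amount > self.cash_on_hand:
                raise ValueError("dispense request violates the cash invariant")
            self.cash_on_hand -= amount
            assert self.cash_on_hand >= 0, "cash invariant broken"
            return amount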

47
Create a system test plan (IEEE Std. 829)
  1. Test the Success Scenario and conditions that
    lead to alternative paths of use cases
  2. If possible, test to verify the relevant safety
    checklist items; safety may not be the main
    concern, but correct interfaces and robustness
    are.
  3. If any resources are shared among processes,
    review and test for correctness of mutual
    exclusion, as sketched below. (SW Eng of
    Multi-program Sys)
  4. If there are cooperating processes, verify that
    suspension happens correctly, a suspended process
    is restored when appropriate, and the restoration
    is correct.
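
For item 3, a system-level test can hammer the shared resource from several threads and check that no updates are lost. A Python sketch with a hypothetical lock-protected counter standing in for the shared resource:

    import threading

    class SharedCounter:
        """Hypothetical shared resource protected by a lock (mutual exclusion)."""
        def __init__(self):
            self.value = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:          # critical section
                self.value += 1

    def test_concurrent_increments_lose_no_updates():
        counter = SharedCounter()
        def worker():
            for _ in range(1000):
                counter.increment()
        threads = [threading.Thread(target=worker) for _ in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        assert counter.value == 8 * 1000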

48
IEEE 829 Standard Test Plan Outline - 1
  • 1.0 Introduction
  • 2.0 Test Items
  • 3.0 Tested Features
  • 4.0 Features Not Tested (per cycle)
  • Repeat 5.0 for each system level test.
  • 5.0 Testing Strategy and Approach
  • 5.1 Syntax
  • 5.2 Description of functionality
  • 5.3 Arguments for Test
  • 5.4 Expected Output
  • 5.5 Specific Exclusions
  • 5.6 Dependencies
  • 5.7 Test Case Success/Failure Criteria

49
IEEE 829 Standard Test Plan Outline - 2
  • 6.0 Pass/Fail Criteria for the Complete Test
    Cycle
  • 7.0 Entrance Criteria/Exit Criteria
  • 8.0 Test-Suspension Criteria and Resumption Reqs
  • 9.0 Test Deliverables/Status Communications
    Vehicles
  • 10.0 Testing Tasks
  • 11.0 Hardware and Software Requirements (for
    testing)
  • 12.0 Problem Determination and Correction
    Responsibilities

50
IEEE 829 Standard Test Plan outline - 3
  • 13.0 Staffing and Training Needs/Assignments
  • 14.0 Test Schedules
  • 15.0 Risks and Contingencies
  • 16.0 Approvals

51
Glass-box briefly (Need implementation details)
  • Test module/process/object-cluster interfaces
    (process level can be system test)
  • Test object/object-cluster contracts
  • Create test data to force certain code paths (see
    the example below)
  • Note: if the team is doing incremental development,
    you can begin glass-box testing early
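
For the "create test data to force certain code paths" item, the inputs are chosen from the code's structure rather than from the specification alone. A Python sketch with a hypothetical two-branch function:

    def classify_withdrawal(amount, balance):
        """Hypothetical unit under test with two branches."""
        if amount > balance:
            return "refuse"
        return "dispense"

    # Glass-box tests: one input per branch, plus the boundary between them.
    def test_refusal_branch():
        assert classify_withdrawal(amount=101, balance=100) == "refuse"

    def test_dispense_branch_at_boundary():
        assert classify_withdrawal(amount=100, balance=100) == "dispense"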

52
More to consider
  • If the system is too large to test thoroughly,
    what tests should you emphasize?
  • Stay tuned