1
Evaluation of Information Systems
Fundamental Measures
  • INFO 630
  • Glenn Booker

2
Quality
  • A common objective in measuring the creation of
    something is to ensure that it has quality
  • The definition of quality is often somewhat
    subjective, or based on limited personal
    experience

3
Quality
  • More formal definitions include:
  • Crosby: conformance to requirements
  • Juran: fitness for use
  • Both of these address a similar theme, i.e. it
    can do what it's supposed to do

4
Quality
  • These definitions imply that the customer defines
    quality, not the developer
  • Another set of quality definitions is:
  • Small q: related to the lack of defects and
    reliability of a product
  • Big Q: small q, plus customer satisfaction
    and process quality

5
Software Quality
  • Software quality typically includes many of these
    aspects
  • Conformance to requirements
  • Lack of defects
  • Customer satisfaction
  • Later we'll look at more specific aspects of
    customer satisfaction, using the CUPRIMDA measures

6
What is the perfect vehicle for...
  • Going 200 mph at Le Mans?
  • Driving through deep mud and snow?
  • Carrying a Boy Scout troop to a ball game?
  • Carrying sheets of plywood to a construction
    site?
  • Feeling the wind in your hair?
  • Towing an enormous trailer?

7
The Right Metrics
  • Asking "what should I measure?" is like asking
    "what vehicle should I buy?" - the answer varies
    wildly, depending on your needs
  • Hence a major portion of this course is
    describing typical metrics for a wide range of
    needs

8
Product Metrics
  • Size (Lines of Code, Function Points)
  • Size (memory or storage needs - particularly for
    embedded software)
  • Complexity (within or among modules)
  • Number of modules, classes, tables, queries,
    screens, inputs, outputs, interfaces, etc.

9
Process Metrics
  • Is the process being followed?
  • When in doubt, define a plan of action - then
    measure progress against that plan
  • Are milestones being met?
  • Are tasks starting on time? Ending?
  • Are people participating in the processes?

10
Process Metrics
  • Are the processes effective and productive?
  • Do the processes meet quality standards?

11
Resource Metrics
  • Do effort, cost, and schedule meet the plans for
    same?
  • Are needed people available? Is the project fully
    staffed?
  • Are the staff qualified to perform their duties?
  • Is training adequate to meet project needs?

12
Tool Metrics
  • How much (time, labor effort, cost) did our tools
    cost? Does that include training?
  • How much are the tools being used?
  • Are the tools meeting our needs?
  • Are they providing unexpected benefits?
  • How has use of the tools affected our
    productivity? Rework? Defect rate?

13
Testing Metrics
  • Effort expended on testing
  • Number of test cases developed, run, and passed
  • Test coverage (% of possible paths)
  • Test until desired quality achieved (defect
    rate)
  • Number of defects remaining (by severity)
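
As a concrete illustration of how these reduce to simple
ratios, here is a minimal Python sketch; all counts are
hypothetical:

  developed, run, passed = 200, 180, 171   # hypothetical test case counts
  paths_covered, paths_total = 640, 800    # hypothetical path counts

  print(f"pass rate: {100 * passed / run:.1f}%")                     # 95.0%
  print(f"path coverage: {100 * paths_covered / paths_total:.1f}%")  # 80.0%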

14
Initial Core Measurements
  • Size - how large is the software?
  • Effort and cost - how much staff time or dollars
    to build software?
  • Schedule - over what calendar time period?
  • Problems and defects - how good is the process,
    based on the quality of the resulting product?

15
Why Measure Software Size?
  • Project planning: need inputs for prediction
    models
  • Effort and cost are a function of size, etc.
  • Project management - monitor progress during
    development
  • Plot planned vs. actual lines of code over time
  • Process improvement - normalizing factor
  • Programmer productivity, defect density

16
Program Size
  • Lines of code or function points are used as
    measures of program size
  • Function points are also used in the project
    management course (INFO 638)
  • Lines of code:
  • "Source statements" usually implies logical lines
  • Count the statement delimiters for a language,
    such as semicolons

17
Program Size
  • Source LOC (SLOC) refers to physical or logical
    lines of code
  • There are many different ways to decide what
    counts as a LOC and what doesn't
  • See the LOC handout, "Lines of Code Definition
    Options"

18
Problems With LOC
  • Differences in counting standards across
    organizations
  • Physical lines
  • Logical lines
  • Variations within each standard

19
Problems With LOC
  • No consistent method for normalizing across
    languages - penalizes high-level languages
  • More than half the effort of development is
    devoted to non-coding work

20
Problems With LOC
  • Moving from a low-level language to a high-level
    language reduces the amount of code
  • Non-coding work acts as a fixed cost, driving up
    the cost per line of code

21
Examples Illustrating LOC
  • Project with 10,000 SLOC (assembly language)
  • Front-end work: 5 months; coding: 10 months
  • Total project time: 15 months at $5,000 per
    staff-month
  • Cost: 15 months x $5,000 = $75,000
  • $7.50 per LOC

22
Examples Illustrating LOC
  • Project with 2,000 SLOC (Ada83)
  • Front-end work: 5 months; coding: 2 months
  • Total project time: 7 months at $5,000 per
    staff-month
  • Cost: 7 months x $5,000 = $35,000
  • $17.50 per LOC
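
A minimal Python sketch reproducing the arithmetic of the
two examples; the function name is just illustrative:

  def cost_per_loc(front_end_months, coding_months, rate_per_month, sloc):
      """Total labor cost divided by delivered lines of code."""
      total_cost = (front_end_months + coding_months) * rate_per_month
      return total_cost / sloc

  print(cost_per_loc(5, 10, 5000, 10000))  # assembly project: 7.5 ($/LOC)
  print(cost_per_loc(5, 2, 5000, 2000))    # Ada83 project: 17.5 ($/LOC)

The high-level language project writes far fewer lines for
the same front-end work, so its cost per LOC looks worse
even though its total cost is far lower.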

23
Halstead Metrics for Size
  • Tokens are either operators (+, -, etc.) or
    operands (variables)
  • Counts
  • Total number of operator tokens used, N1
  • Total number of operand tokens used, N2
  • Number of unique operator tokens, n1
  • Number of unique operand tokens, n2

24
Halstead Metrics for Size
  • Derive metrics such as Vocabulary, Length,
    Volume, Difficulty, etc.
  • Measure of program size:
  • N = N1 + N2, estimated as N-hat = n1 log2(n1) +
    n2 log2(n2)
  • Related to Zipf's law in library science
  • The most frequently used words in a document
    drop off logarithmically
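
A minimal Python sketch of these size measures, assuming
the token counts have already been extracted from the
source code (the example counts are hypothetical):

  import math

  def halstead_size(N1, N2, n1, n2):
      """Basic Halstead measures from operator/operand token counts."""
      vocabulary = n1 + n2                                  # n = n1 + n2
      length = N1 + N2                                      # N = N1 + N2
      est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # N-hat
      volume = length * math.log2(vocabulary)               # V = N log2(n)
      return vocabulary, length, est_length, volume

  print(halstead_size(N1=250, N2=180, n1=20, n2=35))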

25
Function Points
  • Function Points measure software size by
    quantifying functionality provided to user
  • Objectives of function point counting
  • Measure functionality that user requests and
    receives
  • Measure software development and maintenance
    independently of implementation technology
  • Provide a consistent measure among various
    projects and organizations

26
Why Function Points?
  • SLOC gives no indication of functionality
  • Some languages (such as COBOL) are more verbose
    than others (such as C or Pascal) or 4GL database
    languages (e.g. MS Access, Powerbuilder, 4th
    Dimension)
  • Function points are independent of language and
    number of SLOC

27
Why Function Points?
  • SLOC can actually be measured only when coding
    is complete
  • Using function points, we can evaluate software
    early in the life cycle

28
Function Points Count
  • External Inputs: inputs from the keyboard,
    communication lines, tapes, touchscreen, etc.
  • External Outputs: reports and messages sent to
    the user or another application; reports may go
    to screen, printer, or other applications

29
Function Points Count
  • External Queries: queries from users or
    applications which read a database but do not
    add, change, or delete records
  • Logical Internal Files: store information for an
    application that generates, uses, and maintains
    the data
  • External Interface Files: contain data or control
    information passed from, passed to, or shared by
    another application
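
A minimal Python sketch of an unadjusted function point
count over these five component types, assuming the
standard IFPUG average-complexity weights (a full count
rates each component as low, average, or high and then
applies a value adjustment factor); the component counts
are hypothetical:

  WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}  # average weights

  def unadjusted_fp(counts):
      """counts maps component type -> number of components of that type."""
      return sum(WEIGHTS[kind] * n for kind, n in counts.items())

  ufp = unadjusted_fp({"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2})
  print(ufp)  # 12*4 + 8*5 + 5*4 + 4*10 + 2*7 = 162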

30
Measuring Productivity Using Function Points
  • Productivity measured as SLOC/effort is not a
    good measure when comparing across languages
  • Productivity using function points is:
    Productivity = Total function points /
    Effort in staff-hours (or months)
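
Continuing the hypothetical count above:

  effort_staff_months = 18          # hypothetical effort
  print(162 / effort_staff_months)  # 9.0 FP per staff-month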

31
Cost-Effort Relationship
  • Some number of people are working on a project;
    their hourly rate, times the number of hours
    worked, is the labor cost for that project:
    Labor Cost = Sum of (rate x hours)
  • Most models for effort, cost, and schedule are
    based on the size of the product

32
Size-Duration Relationship
  • Projects tend to have a typical relationship
    between their effort and their overall duration
  • This produces a typical range of durations for a
    given size project
  • The effort and duration imply how many people
    will be needed on average for the project

33
Software Effort
  • Measure effort for
  • Total software project (the big picture)
  • Life-cycle phase (useful for planning)
  • Each configuration item or subsystem (helps
    improve planning)
  • Fixing a software problem or enhancing software
    (maintenance)

34
Software Development Models
  • Used to estimate the amount of effort, cost, and
    calendar time required to develop a software
    product
  • Basic COCOMO (Constructive Cost Model)
  • Intermediate COCOMO
  • COCOMO II
  • Non-proprietary model first introduced by Barry
    W. Boehm (later at USC) in 1981
  • Most popular cost estimation model in 1997

35
Software Cost Models
  • COCOMO II is within 20% of actual project data
    just under 50% of the time, based on 83
    calibration points (projects)
  • SLIM
  • COCOTS
  • Also developed by USC
  • Used for estimating commercial-off-the-shelf
    software (COTS)-based systems
  • Still experimental

36
COCOMO Effort Equations
(Figure: the software size-effort relationship as
it has evolved through various versions of COCOMO)
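
The equations themselves did not survive the transcript;
for reference, the Basic COCOMO form of the relationship
is Effort = a * KSLOC^b staff-months. A minimal Python
sketch with Boehm's 1981 coefficients for the three
project modes:

  MODES = {                      # (a, b) coefficients from Boehm (1981)
      "organic":      (2.4, 1.05),
      "semidetached": (3.0, 1.12),
      "embedded":     (3.6, 1.20),
  }

  def basic_cocomo_effort(ksloc, mode="organic"):
      a, b = MODES[mode]
      return a * ksloc ** b      # effort in staff-months

  print(round(basic_cocomo_effort(32), 1))  # roughly 91 staff-months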
37
COTS
  • COTS: commercial off-the-shelf software
  • Assessment
  • Activity of determining the feasibility of using
    COTS components for a larger system
  • Tailoring
  • Activity performed to prepare a COTS program for
    use
  • Glue code development
  • New code external to the COTS product that must
    be written to plug the component into the larger
    system

38
COTS
  • Volatility
  • Refers to the frequency with which new versions
    or updates of the COTS software are released by
    the vendors over the course of system development

39
COTS Problems
  • No control over a COTS product's functionality
    or performance
  • Most COTS products are not designed to work with
    each other
  • No control over COTS product evolution
  • COTS vendor behavior varies widely

40
COCOTS
  • COCOTS provides a solid framework for estimating
    software COTS integration cost
  • Needs further data, calibration, iteration
  • Current spreadsheet model provided by Boehm could
    be used experimentally
  • COCOTS can be extended to cover other COTS
    related costs
  • The model hasn't been updated recently

41
Software Structure
  • Some legacy terminology for software elements has
    survived
  • Software may have high level CSCIs (computer
    software configuration items)
  • Each CSCI may be broken into CSCs (computer
    software components)
  • Each CSC may be broken into CSUs (computer
    software units), which have one or more modules
    of actual source code
  • This is the origin of the term "unit testing"

42
Schedule Data
  • Schedule defines, for each task
  • Start date (planned and actual)
  • End date (planned and actual)
  • Duration (planned and actual)
  • Resources needed (e.g. number of people)
  • Dependencies on other tasks
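
A minimal sketch of a per-task schedule record capturing
the fields above; the class and field names are
illustrative, not from any particular tool:

  from dataclasses import dataclass, field
  from datetime import date
  from typing import List, Optional

  @dataclass
  class Task:
      name: str
      planned_start: date
      planned_end: date
      actual_start: Optional[date] = None
      actual_end: Optional[date] = None
      people: int = 1                       # resources needed
      depends_on: List[str] = field(default_factory=list)

      def slip_days(self) -> Optional[int]:
          """Actual end minus planned end, once the task has finished."""
          if self.actual_end is None:
              return None
          return (self.actual_end - self.planned_end).days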

43
Schedule Data
  • Provide project milestones and major events
  • Reviews
  • PDR (preliminary design review)
  • CDR (critical design review)
  • Audits and inspections
  • Release of deliverables
  • Compare milestones' planned and actual dates

44
Schedule Data
  • Uses calendar dates
  • Track by configuration item or subsystem
  • Number of CSUs completing unit test
  • Number of SLOC completing unit test
  • Number of CSUs integrated into the system
  • Number of SLOC integrated into the system

45
Tracking Software Problems
  • Software problem data is a management
    information source for software process
    improvement
  • It can be used to analyze the effectiveness of
    prevention, detection, and removal processes

46
Tracking Software Problems
  • A critical component in establishing software
    quality
  • Number of problems in the product
  • Whether the product is ready for release to the
    next development step or to the customer
  • How the current version compares in quality to
    previous or competing versions

47
Problem Terminology
  • Error: a human mistake resulting in incorrect
    software
  • Defect: an anomaly in the product
  • Failure: when a software product does not perform
    as required
  • Fault: an accidental condition which causes the
    system to perform incorrectly

48
Problem Terminology
  • Bug: same as a fault
  • What does all this mean?
  • A human (?) programmer makes an Error, resulting
    in a Defect in the code
  • A user of the code discovers the Defect by
    causing a Failure in normal use, or triggers a
    Fault accidentally

49
Problem Identification
  • Nature of the problem
  • Symptoms
  • Locations of the problem
  • Time
  • When problem was inserted (created)
  • When problem was identified (found) and resolved
    (fixed)
  • Mechanism (how does the problem work)

50
Problem Identification
  • Source (why was it created)
  • Severity (what is its impact on the system)
  • Priority (how soon to fix it)
  • Necessary fixes to correct
  • Cost - to isolate problem, fix, and test
  • End result (was it fixed?)

51
Source of Problem
  • Requirements incorrect or misinterpreted
  • Functional specification incorrect or
    misinterpreted
  • Design error that spans several modules
  • Design error or implementation error in a single
    module

52
Source of Problem
  • Misunderstanding of external environment
  • Error in use of programming language or compiler
  • Clerical error
  • Error due to a mis-correction of a previous
    defect (a bad fix)

53
Defect Density -- A Measure of Quality
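
The slide's formula did not survive the transcript; the
conventional definition is defects found divided by
product size, typically per thousand lines of code (KLOC)
or per function point. A minimal Python sketch with
hypothetical numbers:

  def defect_density(defects_found, ksloc):
      """Defects per thousand source lines of code."""
      return defects_found / ksloc

  print(defect_density(46, 20.0))  # 2.3 defects per KLOC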
54
Defects
  • Defects vs. faults
  • Defects and faults are often treated
    synonymously, especially during operational
    testing
  • Defects during:
  • Development activities (requirements to coding)
  • Unit testing
  • Integration testing
  • Operational testing
  • Post-implementation deployment

55
Defects
  • Defect counts used in quality measures
  • Number of defects during operational testing
  • Number of defects after deployment
  • Latent (residual) defects
  • Defects that remain undiscovered after delivery
  • May be estimated by a defect prediction model

56
Phase-Based Defect Removal Pattern
  • Count defects discovered in each phase of the
    life cycle

(Figure: defects found in each phase. HLDR - high
level design review; LLDR - low level design review;
CI - code inspection; UT - unit testing; CT -
component testing; ST - system testing)
57
Defect Removal Effectiveness
(Figure: defect removal effectiveness by phase.
HLDR - high level design review; LLDR - low level
design review; CI - code inspection; UT - unit
testing; CT - component testing; ST - system testing)
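
The figure did not survive the transcript; a common
formulation (e.g. in Kan's work) takes a phase's removal
effectiveness to be the defects it removes divided by the
defects present at that phase, often approximated as
removed / (removed + found later). A minimal Python
sketch with hypothetical counts:

  def removal_effectiveness(removed_here, found_later):
      """Percent of defects present at a phase that the phase removed."""
      return 100.0 * removed_here / (removed_here + found_later)

  print(removal_effectiveness(80, 20))  # 80.0 (%)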
58
Maintainability
  • Maintainability of software is the ease with
    which software can be understood, corrected,
    adapted, and/or enhanced
  • Clear documentation of the product, and of why
    it works the way it does, is needed to help
    ensure maintainability
  • Documenting decisions that affect the product
    design is also important

59
Maintainability
  • Classification of maintenance activity
  • Corrective maintenance - defect finding and
    fixing
  • Adaptive maintenance - modifying software to
    interface properly with a changing environment
  • Perfective maintenance - adding new
    functionality to a working, successful piece of
    software