1
A Sizing Framework for DoD Software Cost Analysis
Raymond Madachy, NPS; Barry Boehm, Brad Clark, and Don Reifer, USC; Wilson Rosa, AFCAA
rjmadach@nps.edu, boehm@usc.edu, brad@software-metrics.com,
dreifer@earthlink.net, wilson.rosa@pentagon.af.mil
  • 24th International Forum on COCOMO and
    Systems/Software Cost Modeling, November 2, 2009

2
Agenda
  • Project Overview (Dr. Wilson Rosa)
  • Data Analysis
  • Software Sizing
  • Conclusions

3
Project Background
  • Goal is to improve the quality and consistency of
    estimating methods across cost agencies and
    program offices through guidance,
    standardization, and knowledge sharing.
  • The project is led by the Air Force Cost Analysis
    Agency (AFCAA), working with the service cost
    agencies and assisted by the University of Southern
    California and the Naval Postgraduate School.
  • We will publish the AFCAA Software Cost
    Estimation Metrics Manual to help analysts and
    decision makers develop accurate software cost
    estimates quickly and easily for avionics, space,
    ground, and shipboard platforms.

4
Stakeholder Communities
  • Research is collaborative across heterogeneous
    stakeholder communities, who have helped us refine
    our data definition framework and domain taxonomy
    and have provided project data.
  • Government agencies
  • Tool vendors (e.g., SLIM-Estimate, TruePlanning by
    PRICE Systems)
  • Industry
  • Academia

5
Research Objectives
  • Establish a robust and cost effective software
    metrics collection process and knowledge base
    that supports the data needs of the United States
    Department of Defense (DoD)
  • Enhance the utility of the collected data to
    program oversight and management
  • Support academic and commercial research into
    improved cost estimation of future DoD
    software-intensive systems

6
Software Cost Model Calibration
  • Most program offices and support contractors rely
    heavily on software cost models
  • These models may not have been calibrated with the
    most recent DoD data
  • Calibration with recent data (2002-Present) will
    help increase program office estimating accuracy

7
AFCAA Software Cost Estimation Metrics Manual
Table of Contents
Chapter 1   Software Estimation Principles
Chapter 2   Product Sizing
Chapter 3   Product Growth
Chapter 4   Effective SLOC
Chapter 5   Historical Productivity
Chapter 6   Model Calibration
Chapter 7   Calibrated SLIM-ESTIMATE
Chapter 8   Cost Risk and Uncertainty Metrics
Chapter 9   Data Normalization
Chapter 10  Software Resource Data Report
Chapter 11  Software Maintenance
Chapter 12  Lessons Learned
8
Manual Special Features
  • Augments the NCCA/AFCAA Software Cost Handbook
  • Default Equivalent Size Inputs (DM, CM, IM, SU,
    AA, UNFM)
  • Productivity Benchmarks by Operating Environment,
    Application Domain, and Software Size
  • Empirical Code, Effort, and Schedule Growth
    Measures derived from SRDRs
  • Empirically Based Cost Risk and Uncertainty
    Analysis Metrics
  • Calibrated SLIM-Estimate using most recent SRDR
    data
  • Mapping between COCOMO, SEER, and True S cost drivers
  • Empirical Dataset for COCOMO, True S, and SEER
    Calibration
  • Software Maintenance Parameters

9
Manual Special Features (Cont.)
  • Guidelines for reconciling inconsistent data
  • Standard Definitions (Application Domain, SLOC,
    etc.)
  • Address issues related to incremental development
    (overlaps, early-increment breakage, integration
    complexity growth, deleted software, relations to
    maintenance) and version management (a form of
    product line development and evolution).
  • Impact of next-generation paradigms: Model
    Driven Architecture, Net-Centricity, Systems of
    Systems, etc.

10
Agenda
  • Project Overview (Dr. Wilson Rosa)
  • Data Analysis
  • Software Sizing
  • Conclusions

11
DoD Empirical Data
  • Data quality and standardization issues
  • No reporting of equivalent size inputs: CM, DM,
    IM, SU, AA, UNFM, Type
  • No common SLOC reporting: logical, physical,
    etc.
  • No standard definitions: Application Domain,
    Build, Increment, Spiral, etc.
  • No common effort reporting: analysis, design,
    code, test, CM, QA, etc.
  • No common code counting tool
  • Product size only reported in lines of code
  • No reporting of quality measures: defect
    density, defect containment, etc.
  • Limited empirical research within DoD on other
    contributors to productivity besides effort and
    size
  • Operating Environment, Application Domain, and
    Product Complexity
  • Personnel Capability
  • Required Reliability
  • Quality: defect density, defect containment
  • Integrating code from previous deliveries:
    builds, spirals, increments, etc.
  • Converting to Equivalent SLOC
  • Categories like Modified, Reused, Adopted,
    Managed, and Used add no value unless they
    translate into single or unique narrow ranges of
    DM, CM, and IM parameter values. We have seen no
    empirical evidence that they do.

12
SRDR Data Source
13
Data Collection and Analysis
  • Approach
  • Be sensitive to the application domain
  • Embrace the full life cycle and the Incremental
    Commitment Model
  • Be able to collect data by phase, project, and/or
    build or increment
  • Items to collect
  • SLOC reporting: logical, physical, NCSS, etc.
  • Requirements Volatility and Reuse
  • Modified or Adopted, using DM, CM, IM, SU, UNFM as
    appropriate
  • Definitions for Application Types, Development
    Phase, Lifecycle Model, etc.
  • Effort reporting: phase and activity
  • Quality measures: defects, MTBF, etc.

14
Data Normalization Strategy
  • Interview program offices and developers to
    obtain additional information not captured in
    SRDRs
  • Modification Type: auto-generated, re-hosted,
    translated, modified
  • Source: in-house, third party, Prior Build,
    Prior Spiral, etc.
  • Degree of Modification: DM, CM, IM, SU, UNFM,
    as appropriate
  • Requirements Volatility: percentage of ESLOC
    reworked or deleted due to requirements volatility
  • Method: Model Driven Architecture,
    Object-Oriented, Traditional
  • Cost Model Parameters: True S, SEER, COCOMO

15
Agenda
  • Project Overview (Dr. Wilson Rosa)
  • Data Analysis
  • Software Sizing
  • Conclusions

16
Size Issues and Definitions
  • An accurate size estimate is the most important
    input to parametric cost models.
  • Desire consistent size definitions and
    measurements across different models and
    programming languages
  • The sizing chapter addresses these
  • Common size measures defined and interpreted for
    all the models
  • Guidelines for estimating software size
  • Guidelines to convert size inputs between models
    so projects can be represented in a consistent
    manner
  • Using Source Lines of Code (SLOC) as the common
    measure
  • Logical source statements, consisting of data
    declarations and executables (a counting sketch
    follows this list)
  • Rules for considering statement type, how
    produced, origin, build, etc.
  • Providing automated code counting tools adhering
    to the definition
  • Providing conversion guidelines for physical
    statements
  • Addressing other size units such as requirements,
    use cases, etc.
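
To make the logical-versus-physical distinction concrete, here is a minimal sketch that counts both measures for a small code fragment using Python's standard ast module. It only illustrates the concept; it is not the manual's code counting tool, and treating every AST statement node as one logical statement is an assumption standing in for the real counting rules.

```python
import ast

def count_logical_statements(source: str) -> int:
    """Count logical source statements (declarations and executables),
    independent of how the text is wrapped across physical lines."""
    tree = ast.parse(source)
    # Each AST statement node counts once, however many physical lines it
    # spans (an assumption standing in for the manual's counting rules).
    return sum(isinstance(node, ast.stmt) for node in ast.walk(tree))

def count_physical_lines(source: str) -> int:
    """Count non-blank, non-comment physical lines."""
    lines = [ln.strip() for ln in source.splitlines()]
    return sum(1 for ln in lines if ln and not ln.startswith("#"))

sample = """
# one logical assignment spread over several physical lines
totals = [
    1,
    2,
    3,
]
for t in totals:          # loop header and body are separate statements
    print(t)
"""

print("logical:", count_logical_statements(sample))   # 3
print("physical:", count_physical_lines(sample))      # 7
```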

17
Sizing Framework Elements
  • Core software size type definitions
  • Standardized data collection definitions
  • Measurements will be invariant across cost models
    and data collection venues
  • Project data normalized to these definitions
  • Translation tables for non-compliant data sources
  • SLOC definition and inclusion rules
  • Equivalent SLOC parameters
  • Cost model Rosetta Stone size translations
  • Other size unit conversions (e.g. function
    points, use cases, requirements)

18
Core Software Size Types
19
Equivalent SLOC: A User Perspective
  • Equivalent: a way of accounting for the relative
    work done to generate software, relative to the
    code-counted size of the delivered software
  • Source lines of code: the number of logical
    statements prepared by the developer and used to
    generate the executing code
  • For usual third-generation languages (C, Java),
    count logical 3GL statements
  • For model-driven, very high level language, or
    macro-based development, count statements that
    generate customary 3GL code
  • For maintenance above the 3GL level, count the
    generator statements
  • For maintenance at the 3GL level, count the
    generated 3GL statements
  • Two primary effects: volatility and reuse
  • Volatility: percentage of ESLOC reworked or deleted
    due to requirements volatility
  • Reuse: either with modification (modified) or
    without modification (adopted)
  • Stutzke, Richard D., Estimating Software-Intensive
    Systems, Upper Saddle River, NJ: Addison-Wesley,
    2005

20
Adapted Software Parameters
  • For adapted software, apply the following parameters
  • DM: percentage of design modified
  • CM: percentage of code modified
  • IM: percentage of integration required, compared to
    integrating new code
  • Normal Reuse Adjustment Factor: RAF = 0.4·DM +
    0.3·CM + 0.3·IM
  • Reused software has DM = CM = 0.
  • Modified software has CM > 0. Since data
    indicates that the RAF factor tends to
    underestimate modification effort due to added
    software understanding effects, two other factors
    are used (illustrated in the sketch after this
    list)
  • Software Understandability (SU): how
    understandable is the software to be modified?
  • Unfamiliarity (UNFM): how unfamiliar with the
    software to be modified is the person modifying
    it?
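
To show how these parameters combine into equivalent size, here is a minimal sketch in Python. The RAF line comes directly from this slide; the treatment of AA, SU, and UNFM follows the COCOMO II adaptation adjustment modifier from the Boehm et al. reference listed on slide 27, and the optional requirements-volatility term uses the COCOMO II REVL form. The parameter values in the usage line are made up for illustration.

```python
def reuse_adjustment_factor(dm: float, cm: float, im: float) -> float:
    """RAF = 0.4*DM + 0.3*CM + 0.3*IM, with DM, CM, IM as percentages (0-100)."""
    return 0.4 * dm + 0.3 * cm + 0.3 * im

def adaptation_adjustment_modifier(dm, cm, im, aa=0.0, su=0.0, unfm=0.0):
    """COCOMO II adaptation adjustment modifier (AAM).

    aa   -- assessment and assimilation increment (percent, 0-8)
    su   -- software understanding penalty (10-50)
    unfm -- programmer unfamiliarity (0.0-1.0)
    """
    if dm == 0 and cm == 0:
        su = unfm = 0.0  # SU/UNFM apply only to modified code, not unmodified reuse
    aaf = reuse_adjustment_factor(dm, cm, im)
    if aaf <= 50:
        return (aa + aaf * (1 + 0.02 * su * unfm)) / 100.0
    return (aa + aaf + su * unfm) / 100.0

def equivalent_sloc(adapted_sloc, dm, cm, im, aa=0.0, su=0.0, unfm=0.0, revl=0.0):
    """Equivalent SLOC for one adapted component, optionally grown by REVL
    (percent of ESLOC reworked or deleted due to requirements volatility)."""
    esloc = adapted_sloc * adaptation_adjustment_modifier(dm, cm, im, aa, su, unfm)
    return esloc * (1 + revl / 100.0)

# Hypothetical modified component: 20,000 adapted SLOC, moderately modified.
print(equivalent_sloc(20_000, dm=20, cm=30, im=40, aa=4, su=30, unfm=0.4))  # ~7992
```

For new code the same expression reduces to the counted size: DM = CM = IM = 100 gives AAF = 100 and, with AA, SU, and UNFM at zero, an adjustment modifier of 1.0.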

21
SLOC Inclusion Rules
22
Equivalent SLOC Rules

Equivalent SLOC Rules for Development
  Source                                 Rule
  New                                    Include
  Reused                                 Include
  Modified                               Include
  Generated: generator statements        Include
  Generated: 3GL generated statements    Exclude
  Converted                              Include
  COTS                                   Exclude
  Volatility                             Include

Equivalent SLOC Rules for Maintenance
  How Produced in Development or Source  Rule
  New                                    Include
  Reused                                 Include
  Modified                               Include
  Generated: generator statements        Include if the 3GL generated statements were not
                                         modified in development; exclude if they were
  Generated: 3GL generated statements    Include if modified in development; exclude if not
  Converted                              Include
  COTS                                   Exclude
  Volatility                             Include
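
A minimal sketch of applying the development rules to per-category counted SLOC follows. The category names and the include/exclude flags mirror the development table above; the note that included adapted categories are then weighted by the DM/CM/IM parameters from the previous slide is an assumption about how the pieces fit together, not the manual's prescribed procedure, and the component breakdown is made up.

```python
# Development inclusion rules from the table above: True = include, False = exclude.
DEV_INCLUDE = {
    "new": True,
    "reused": True,
    "modified": True,
    "generator_statements": True,
    "3gl_generated_statements": False,
    "converted": True,
    "cots": False,
    "volatility": True,
}

def counted_development_sloc(sloc_by_category: dict) -> int:
    """Sum the categories the development rules say to include.

    Adapted categories (reused, modified, converted) would still be converted
    to equivalent SLOC with DM/CM/IM before use in effort models.
    """
    return sum(count for cat, count in sloc_by_category.items()
               if DEV_INCLUDE.get(cat, False))

# Hypothetical component breakdown (values made up for illustration).
component = {
    "new": 12_000,
    "reused": 8_000,
    "3gl_generated_statements": 5_000,   # excluded: count generator statements instead
    "generator_statements": 900,
    "cots": 50_000,                      # excluded
}
print(counted_development_sloc(component))   # 20900
```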

23
Cost Model Size Inputs
24
Sizing Chapter Current Outline
25
Agenda
  • Project Overview (Dr. Wilson Rosa)
  • Data Analysis
  • Software Sizing
  • Conclusions

26
Concluding Remarks
  • The goal is to publish a manual that helps analysts
    develop software estimates quickly using empirical
    metrics from recent programs
  • Additional information is crucial for improving
    data quality across DoD
  • We want your input on Productivity Domains and
    Data Definitions
  • Looking for collaborators
  • Looking for peer-reviewers
  • Need more data

27
References
  • United States Department of Defense (DoD),
    Instruction 5000.2, Operation of the Defense
    Acquisition System, December 2008.
  • W. Rosa, B. Clark, R. Madachy, D. Reifer, and B.
    Boehm, "Software Cost Metrics Manual,"
    Proceedings of the 42nd Department of Defense
    Cost Analysis Symposium, February 2009.
  • B. Boehm, "Future Challenges for Systems and
    Software Cost Estimation," Proceedings of the
    13th Annual Practical Software and Systems
    Measurement Users Group Conference, June 2009.
  • B. Boehm, C. Abts, W. Brown, S. Chulani, B.
    Clark, E. Horowitz, R. Madachy, D. Reifer, and B.
    Steece, Software Cost Estimation with COCOMO II,
    Upper Saddle River, NJ: Prentice-Hall, 2000.
  • R. Stutzke, Estimating Software-Intensive
    Systems, Upper Saddle River, NJ: Addison-Wesley,
    2005.
  • R. Madachy and B. Boehm, "Comparative Analysis of
    COCOMO II, SEER-SEM and True-S Software Cost
    Models," USC-CSSE-2008-816, University of
    Southern California Center for Systems and
    Software Engineering, 2008.