1
Grading Effectiveness of Software and Processes -
what makes good software? Or, what makes software
good?
2
Reading
  • On the web site
  • CMM.pdf  - Capability Maturity Model (download
    the PDF file)
  • READ EVERY SLIDE IN THIS LECTURE

3
SW Failures show themselves
  • Unintended effects
  • Over budget
  • Exceeded schedule
  • Under spec.
  • Unsafe
  • Undebuggable
  • Unenhanceable
  • Unmaintainable
  • Performance varies and is unmeasurable
  • Un-decipherable in theory of operation
  • Works under too narrow a range of conditions
  • User difficulties
  • Unable to dependably reconstruct an executable

4
Failure Types
  • Errors in specification
  • Errors in design
  • Errors in implementation
  • Errors in measurement
  • Errors in customer expectation
  • Errors in administration documentation and
    configuration management

5
The government
  • In 1984, concerned over the high cost and low
    quality of SW, the U.S. Department of Defense
    teamed with Carnegie Mellon University to form
    the Software Engineering Institute
  • Developed criteria with which to judge the
    traits exhibited by SW companies.
  • Called the Capability Maturity Model

6
SEI CMM Levels in practical terms
  • I - Ad hoc (Chaos) - hero-based, unrepeatable
  • II- Software Project Management - schedules and
    critical paths
  • III- Sanctioned Software Engineering -
    architecture and lifecycle stages
  • IV- Measurement
  • V- Closed-loop (measurement improves process)

7
Some have added
  • Level 0 - stupidity and self destruction

8
Facts about the Levels
  • Can't start at Level 3
  • Can't skip levels
  • C encourages Level 1 behavior
  • Ada encourages Level 3 behavior / Ada is dead
  • Most companies at Level 1
  • CASE tools do not help

9
Level V - 1995
  • 6 companies
  • 1 division of NASA
  • 1 division of Lockheed
  • 1 division of Motorola
  • 3 in India

10
Level V - 1999
  • 7 companies
  • 1 division of NASA
  • 1 division of Lockheed
  • 1 division of Motorola
  • 3 in India
  • 1 in Japan

11
Level V - 2003
  • 53 companies

12
SW Effectiveness - how do SW developers judge
themselves?
  • 1. Grades - software SUBJECTIVE opinions - bad,
    but pervasive.
  • 2. Metrics - software measurements, whether based
    on subjective opinions or not.
  • Metrics participate heavily in audits.
  • An organization in Chaos will judge software
    subjectively by Grade.

13
McCall's Grades
  • Well-used in the industry for years. Too long, in
    fact.
  • Auditability - can it be checked?
  • Accuracy - precision of results
  • Commonality - use of industry standards
  • Completeness - all requirements met?
  • Conciseness - no more than needed
  • Consistency - company standards used
  • Data Commonality - strong data typing

14
  • Error Tolerance - ability to recover (completely)
    from an extreme condition, not a graceful
    shutdown
  • Execution Efficiency - is it a dog?
  • Expandability - at what point does the
    architecture stop supporting changes
  • House of Cards Structural Limit
  • Justification Limit
  • Generality - the utility of its components
  • Hardware Independence

15
  • Instrumentation - monitoring, diagnostics, and
    annunciation
  • Modularity - coupling and cohesion
  • Operability - ease of use
  • Security 1 - protection of data
  • Security 2 - protection of programs
  • Self-documentation - in use and design
  • Simplicity - clarity, freedom from unneeded
    cleverness

16
  • SW System Independence - independence from an
    isolated approach; speaks to the likelihood that
    the industry can and will contribute to the
    system's quality
  • Traceability - requirements origin of modules
  • Training - effort needed to become proficient in
    its theory (not in its use, which is
    Operability)

17
Around 1980
  • The 'subjectiveness' of evaluation by grade began
    to be criticized.
  • The drive to measure effectiveness and quality
    was at first not much better.

18
McCall's Attributes
  • What's the difference from grades? You tell me.
  • Correctness - does it work?
  • Reliability - with precision? All the time?
  • Efficiency - as well as it could?
  • Integrity - is it secure against unintended use?
  • Usability - can it be run with no complications?

19
  • Maintainability - can it be fixed?
  • Flexibility / Enhance-ability - can it be
    changed?
  • Testability - can its inner workings be audited?
  • Portability - is it usable on another platform?
  • Reusability - usable after its first
    deployment?
  • Interoperability - does it interface to another
    system?

20
Grades vs. Attributes
  • Attributes are collections of grades which
    combine to form more practical criteria.
  • Didn't work, not much better; illustrated the
    general confusion.
  • Not just SW people should care - everyone should -
    software was starting to enter our lives.

21
People were the problem
  • Again, territoriality. Everyone had a different
    point of view.
  • Project Manager - the accountant
  • Systems Engineer - problem domain expert
  • Software Project Manager - development
    methodology
  • Software Architect - doesn't dirty his/her hands
    with code
  • Lead Software Designer - the coders' point of
    contact
  • Programmers - lowest on the food chain

22
Project Planning Grades
  • Scope defined
  • Adequate schedule and budget
  • Adequate resources available and allocated
  • Good basis for estimates (what's been guessed?)
  • Critical Path is real
  • Realistic schedule and budget

23
System Engineering Grades
  • Concerns over the software's contribution to a
    problem solution
  • Well-partitioned HW, SW, and environment
  • All interfaces defined
  • Precision and performance bounds reasonable and
    adequate
  • Design constraints in place - everything targets
    the solution

24
  • Best solution?
  • Technically Feasible?
  • Mechanisms for validation in place?

25
SEI Grades
  • Engineering principles applied to design
  • Requirements compliance
  • Reviews, walk-thrus
  • Documentation
  • Modularity
  • Coding standards
  • Simplicity and clarity
  • Control over changes
  • Extensive testing throughout (not after)

26
SW Design Grades
  • Architecture matches requirements
  • Modularity / cohesion / coupling
  • All module interfaces designed
  • Data dictionary - structure, content, type,
    range, defaults, domain, flow (a sketch of one
    entry follows this slide)
  • Maintainable
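
A data dictionary entry can be made concrete as a small record, one
per named data item. The following is a minimal sketch in C, not part
of the original slides, with purely illustrative field names and
example values; it shows the structure, type, range, default, domain,
and flow information such an entry might carry.

    /* Hypothetical data-dictionary entry: one record per data item,
       capturing structure, type, range, default, domain, and flow.
       Field names are illustrative only. */
    typedef enum { FLOW_IN, FLOW_OUT, FLOW_INOUT } FlowDir;

    typedef struct {
        const char *name;        /* data item name                      */
        const char *type;        /* declared type, e.g. "int16_t"       */
        double      min_value;   /* lower bound of the legal range      */
        double      max_value;   /* upper bound of the legal range      */
        double      default_val; /* value assumed when none is supplied */
        const char *domain;      /* problem-domain meaning and units    */
        FlowDir     flow;        /* direction of flow between modules   */
    } DataDictEntry;

    /* Example entry (invented values) */
    static const DataDictEntry coolant_temp = {
        "coolant_temp_c", "int16_t", -40.0, 150.0, 20.0,
        "engine coolant temperature, degrees Celsius", FLOW_IN
    };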

27
Implementation Grades
  • Accomplish desired function
  • Consistent with the architecture
  • Clarity, not cleverness or complexity
  • Error handling
  • I/O to the module extensively designed
  • Debuggable, testable, maintainable
  • Design properly translated into code
  • Coding standards used
  • Documented
  • Global TYPES, not variables (see the sketch after
    this slide)
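
The "Global TYPES, not variables" guideline is easy to show in code:
publish the type in a header so every module can share it, but keep the
actual data object file-local behind an accessor. A minimal C sketch
with hypothetical names, not taken from the original slides:

    /* sensor.h - the TYPE is global (shared through the header) */
    typedef struct {
        double value;
        long   timestamp;
    } SensorReading;

    SensorReading sensor_latest(void);   /* accessor, not a global object */

    /* sensor.c - the VARIABLE stays file-local, never global */
    static SensorReading latest;         /* hidden from other modules */

    SensorReading sensor_latest(void)
    {
        return latest;                   /* callers get a copy, not direct access */
    }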

28
Maintenance Grades
  • Have side effects of change been considered?
  • Have RFC / STR been properly documented?
  • Relies on the existence of a maintenance procedure

29
Too much confusion
  • Needed a common Assessment
  • To prioritize
  • Not subjective, but measurable.
  • Apply differently to different DEPLOYMENTS

30
Testing
  • two-thirds of all errors occur before coding
  • theoretically, non-execution based tests can
    eliminate 100% of errors
  • pressure to complete is an anti-testing message
  • organizations in Chaos experience:
  • failure to define testing objectives
  • testing at the wrong lifecycle phase
  • ineffective testing techniques
  • The biggest problem with testing is boredom

31
Complexity Metrics
  • Measure the complexity of:
  • Computation - efficiency of the design
  • Psychology - what affects programmers'
    performance in composing, comprehending, and
    modifying the software
  • Problem - how difficult is the problem to be
    solved?
  • Design - how complex is the solution?
  • Process - development tools, experience, and team
    dynamics

32
SW Engineering Metrics
  • measure
  • Mental capacity needed (volume and difficulty)
  • Language proficiency
  • Experience needed
  • Cost and time for each lifecycle phase
  • Productivity (LOC/person/day)

33
Safety Metrics
  • measure
  • Deaths per 1000
  • Extent and characteristic of people contact
  • Mechanical vs. Electrical vs. SW Control
    (technology scale)
  • Worst case failure
  • Probability of worst case failure

34
Lifecycle metrics
  • measure
  • Improperly interpreted requirements
  • User-specified errors
  • Requirements improperly transcribed
  • Analysis omissions (hard to measure)
  • Req-to-Design, and Design-to-Code errors
  • Coding errors (especially boundary values)
  • Number of system recompilations (a '70s thing)
  • Data typing errors

35
Testing Metrics
  • measure
  • Testing plan and procedure error count
  • Testing in wrong lifecycle phase
  • Testing setup error count
  • Error characterization error count
  • Error correction error count
  • Number of mistakes injected by corrections

36
Cost Metrics
  • measure
  • Labor hours
  • Money
  • Delay in delivery (calendar)
  • Turnover and burnout

37
Reliability Metrics
  • MTBF - mean time between failures
  • MTTR - mean time to repair
  • Total Time = MTBF + MTTR
  • Availability = MTBF / Total Time, as a percentage -
    the chance that it's operating (see the sketch
    after this slide)
  • Hard failures requiring restart
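
To make the availability formula concrete, here is a minimal C sketch;
the MTBF and MTTR figures are invented purely for illustration.

    #include <stdio.h>

    /* Availability = MTBF / (MTBF + MTTR), expressed as a percentage. */
    int main(void)
    {
        double mtbf = 400.0;              /* mean time between failures, hours */
        double mttr = 2.0;                /* mean time to repair, hours        */
        double total_time = mtbf + mttr;  /* Total Time = MTBF + MTTR          */
        double availability = mtbf / total_time * 100.0;

        printf("Availability: %.2f%%\n", availability);   /* prints 99.50% */
        return 0;
    }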

38
Relevance Metrics
  • In the company's product line
  • In the company's business plan
  • Evolution or revision of current product

39
General form of a metric - everything you need to
know about a measurement (a sketch of such a record
follows this slide)
  • Factor - reliability
  • Characteristic - availability
  • Measurement - hrs. available / total hrs.
  • Impact - production halted
  • Tolerance - < 1% unavailability
  • Probability - 80%
  • Risk - High
  • Candidate for Test - Yes
  • Probable Error Lifecycle Stage -
    Spec / Design
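
One way to use this general form is to hold each metric as a record.
Below is a hypothetical C sketch; the field names are mine, and the
example values simply restate the reliability slide above.

    /* Hypothetical record type mirroring the general form of a metric. */
    typedef struct {
        const char *factor;           /* e.g. "reliability"                 */
        const char *characteristic;   /* e.g. "availability"                */
        const char *measurement;      /* e.g. "hrs. available / total hrs." */
        const char *impact;           /* consequence of missing tolerance   */
        const char *tolerance;        /* e.g. "< 1% unavailability"         */
        int         probability_pct;  /* likelihood of the failure, in %    */
        const char *risk;             /* "High", "Medium", "Low"            */
        int         test_candidate;   /* 1 = yes, 0 = no                    */
        const char *error_stage;      /* probable error lifecycle stage     */
    } Metric;

    /* The reliability example from the slide, restated as data */
    static const Metric reliability_metric = {
        "reliability", "availability", "hrs. available / total hrs.",
        "production halted", "< 1% unavailability", 80, "High", 1,
        "Spec / Design"
    };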

40
let's do one for safety
  • Factor - safety
  • Characteristic - Unscheduled landings
  • Measurement - lost planes
  • Impact - reduced revenues
  • Tolerance - 0
  • Probability (if you don't test) - 50%
  • Risk - medium
  • Candidate for Test - Yes
  • Probable Error Lifecycle Stage -
    All

41
Educational Software
  • Factor - learning and education
  • Characteristic - retention
  • Measurement - test scores
  • Impact - no change or reduced test scores
  • Tolerance - > 10% over 6 mos.
  • Probability (if you don't test)
  • Risk
  • Candidate for Test
  • Probable Error Lifecycle Stage

42
Educational Software
  • Factor - cause and effect
  • Characteristic - choice making
  • Measurement - improved response time
  • Impact - lack of progress
  • Tolerance (willing to accept) - > 10% over
    6 mos.
  • Probability (if you don't test)
  • Risk
  • Candidate for Test
  • Probable Error Lifecycle Stage

43
Medical (Smartpill) Software
  • Factor - Reliability
  • Characteristic - Accuracy
  • Measurement - comparison to a gold standard
  • Impact - unusable results
  • Tolerance - < .01
  • Probability (if you don't test)
  • Risk
  • Candidate for Test
  • Probable Error Lifecycle Stage

44
Financial Software
  • Factor - correctness
  • Characteristic - compliant tax returns
  • Measurement - number of IRS audits
  • Impact - fines and imprisonment
  • Tolerance - 1 out of 10,000
  • Probability (if you don't test)
  • Risk
  • Candidate for Test
  • Probable Error Lifecycle Stage

45
Financial Software
  • Factor - Accuracy
  • Characteristic - Reasonableness of Tax
    Returns
  • Measurement - Number of Tax Audits
  • Impact - Fines or Imprisonment
  • Tolerance - < 1,000 per year
  • Probability (if you don't test)
  • Risk
  • Candidate for Test
  • Probable Error Lifecycle Stage

46
Video Game Software
  • Factor - immersion
  • Characteristic - loss of productivity
  • Measurement - assignments missed
  • Impact
  • Tolerance - 0
  • Probability (if you don't test)
  • Risk
  • Candidate for Test
  • Probable Error Lifecycle Stage

47
so what is important about your project?
  • cost? safety? ease of use? reliability? ubiquity?
    market share? high tech? low tech? backward
    compatibility? industry standard compliance?