SE Principles: Silver Bullets
CS 620/720 Software Engineering, January 15, 2004

Transcript and Presenter's Notes


SE Principles: Silver Bullets
  • CS 620/720
  • Software Engineering
  • January 15, 2004

Poor Engineering leads to ad-hoc structure!
The result of continuous building without any thought toward design.
Result:
  • Stairs leading to the ceiling
  • Windows in the middle of a room
  • Doors opening onto walls
  • Non-intuitive floor plan!
Poor Engineering Has Disastrous Consequences!
Aerodynamic phenomena in suspension bridges were
not adequately understood in the profession nor
had they been addressed in this design. New
research was necessary to understand and predict
these forces. The remains, located on the bottom
of the Sound, are a permanent record of man's
capacity to build structures without fully
understanding the implications of the design.
Poor Engineering Has Disastrous Consequences!
$7 Billion Fireworks: One Bug, One Crash
On 4 June 1996, the maiden flight of the Ariane 5 launcher ended in a failure. Only about 40 seconds after initiation of the flight sequence, at an altitude of about 3700 m, the launcher veered off its flight path, broke up and exploded. The failure of the Ariane 501 was caused by the complete loss of guidance and attitude information 37 seconds after start of the main engine ignition sequence (30 seconds after lift-off). This loss of information was due to specification and design errors in the software of the inertial reference system.

The launcher started to disintegrate at about H0 + 39 seconds because of high aerodynamic loads due to an angle of attack of more than 20 degrees that led to separation of the boosters from the main stage, in turn triggering the self-destruct system of the launcher. This angle of attack was caused by full nozzle deflections of the solid boosters and the Vulcain main engine. These nozzle deflections were commanded by the On-Board Computer (OBC) software on the basis of data transmitted by the active Inertial Reference System (SRI 2). Part of these data at that time did not contain proper flight data, but showed a diagnostic bit pattern of the computer of the SRI 2, which was interpreted as flight data. The reason why the active SRI 2 did not send correct attitude data was that the unit had declared a failure due to a software exception. The OBC could not switch to the back-up SRI 1 because that unit had already ceased to function during the previous data cycle (72-millisecond period) for the same reason as SRI 2.

The internal SRI software exception was caused during execution of a data conversion from 64-bit floating point to 16-bit signed integer value. The floating point number which was converted had a value greater than what could be represented by a 16-bit signed integer. This resulted in an Operand Error. The data conversion instructions (in Ada code) were not protected from causing an Operand Error, although other conversions of comparable variables in the same place in the code were protected.

The error occurred in a part of the software that only performs alignment of the strap-down inertial platform. This software module computes meaningful results only before lift-off. As soon as the launcher lifts off, this function serves no purpose. The alignment function is operative for 50 seconds after starting of the Flight Mode of the SRIs, which occurs at H0 - 3 seconds for Ariane 5. Consequently, when lift-off occurs, the function continues for approx. 40 seconds of flight. This time sequence is based on a requirement of Ariane 4 and is not required for Ariane 5.

The Operand Error occurred due to an unexpected high value of an internal alignment function result called BH, Horizontal Bias, related to the horizontal velocity sensed by the platform. This value is calculated as an indicator for alignment precision over time. The value of BH was much higher than expected because the early part of the trajectory of Ariane 5 differs from that of Ariane 4 and results in considerably higher horizontal velocity values.
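The failure chain above can be sketched in miniature. This is a hedged illustration only: the flight software was Ada, the function names here are invented, and the "protected" variant merely stands in for whatever handling the guarded conversions actually used. It shows an unguarded 64-bit float to 16-bit signed integer conversion blowing up exactly where a guarded one would not:

```python
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1  # 16-bit signed range

def to_int16_unprotected(x: float) -> int:
    # Models the unguarded Ada conversion: it simply assumes the
    # value fits; an out-of-range operand raises (the "Operand Error").
    if not (INT16_MIN <= x <= INT16_MAX):
        raise OverflowError("Operand Error: value exceeds 16-bit range")
    return int(x)

def to_int16_protected(x: float) -> int:
    # Models the protected pattern used for comparable variables
    # elsewhere in the same code: handle the out-of-range case
    # (here, by saturating) instead of faulting.
    return max(INT16_MIN, min(INT16_MAX, int(x)))
```

On an Ariane 4 trajectory the horizontal-bias value fit in 16 bits, so the unprotected path never faulted; the higher horizontal velocities of Ariane 5 are what pushed it out of range.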
Poor Engineering Has Disastrous Consequences!
  • Software Runaways: Lessons Learned from Massive Software Failures, Robert Glass
  • Denver Airport
  • Safeware: System Safety and Computers, Nancy Leveson
  • Therac-25
  • Risks to the Public, Peter Neumann

Poor Engineering Has Disastrous Consequences!
  • Can you think of other examples?

1968: Birth of Software Engineering
  • 1968 NATO Conference on the new field of Software Engineering
  • Virtual Who's Who
  • Dijkstra, Naur, Perlis, Gries

First Software Reuse paper
  • Doug McIlroy

1968 NATO SE
  • The conference must have been tiring.

No Silver Bullet: Essence and Accidents of Software Engineering
  • IEEE Computer, April 1987
  • Brooks was the 1999 Turing Award winner
  • Brooks's Law
  • The Mythical Man-Month
  • UNC

Silver Bullets
  • But as we look to the horizon of a decade hence,
    we see no silver bullet. There is no single
    development, either in technology or management
    technique, which by itself promises even one
    order of magnitude improvement in productivity,
    in reliability, in simplicity. Not only are there
    no silver bullets in view, the very nature of
    software makes it unlikely there will be any.
  • Frederick Brooks, The Mythical Man-Month

No magical cure for the software crisis
In a nutshell:
  • "I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labour of representing it and testing the fidelity of the representation."
SE: A Disciplined Approach (Kill the Snake Oil)
  • The first step toward the management of disease
    was replacement of demon theories and humours
    theories by the germ theory. That very step, the
    beginning of hope, in itself dashed all hopes of
    magical solutions. It told workers that progress
    would be made stepwise, at great effort, and that
    a persistent, unremitting care would have to be
    paid to a discipline of cleanliness. So it is
    with software engineering today.

Essence and accident
  • All software construction involves:
  • essential tasks: the fashioning of the complex conceptual structures that compose the abstract software entity (i.e., the problem space), and
  • accidental tasks: the representation of the abstract entities in programming languages and the mapping of these onto machine languages within space and time constraints.
  • Distinction originally due to Aristotle.

Brooks's thesis
  • Good news!
  • We have made great headway in solving the accidental problems! (Orders of magnitude.)
  • Bad news!
  • Progress on essential problems will be much slower going.
  • There is no royal road, but there is a road.

No Silver Bullet (Fred Brooks)
  • Like physical hardware limits, there are problems with software that won't be solved
  • Inherent difficulties of software production:
  • complexity
  • conformity
  • changeability
  • invisibility

  • Software is complex, even when done right.
  • Many components
  • Many kinds of interactions between components
  • Combinatorially many states; syntax must be just so, or else.
  • 16-bit word in HW → 2^16 states
  • Def-use of variables and state changes
  • Rich structure and interesting dependencies add to complexity
  • Good interfaces and design can lessen externally-visible complexity
  • Accidental (as practised) issues add to it:
  • Efficient code is usually complicated code
  • Use of prefab abstractions (reuse and generic components) means complicated, stateful glue.
  • Complex code is harder to evolve, resulting in design drift and even more complexity.
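The combinatorial claim in the bullets above is just arithmetic, and a short Python sketch (the function name is invented) makes it concrete: independent stateful parts multiply the state count, they don't add to it.

```python
def state_space(component_states):
    # The system's state count is the PRODUCT of each component's
    # state count, which is why complexity grows non-linearly.
    total = 1
    for n in component_states:
        total *= n
    return total

# A single 16-bit word already has 2**16 = 65,536 states;
# three independent 16-bit registers have (2**16)**3 = 2**48 states.
```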

  • Complexity is an inherent property because there really are that many moving parts!
  • Hardware engineering is achieved by replication of relatively simple parts arranged just so to achieve a certain effect.
  • Software is NOT a physical system; it is entirely a design.
  • When two pieces of software perform the same task, we abstract them into one!
  • Thus the size of a well-designed system measures not mass but complexity.
  • Complexity depends non-linearly on size
  • Results:
  • difficult to understand the whole product
  • errors in specification and code
  • hard to manage
  • how can you estimate without understanding?
  • maintenance is a nightmare

  • Invariably, we have to coerce nice problem-space abstractions to fit someone else's prefab solution-space technology.
  • New kid on the block! (The rules were already made.)
  • Perceived as more flexible than hardware
  • Result: a bewildering diversity of requirements
  • Wrapping and unwrapping, data marshalling, etc.
  • Software has to be made to agree to common interfaces, protocols, standards, etc.
  • Interface with an existing system:
  • e.g., a plant is already built; we are asked to create software to control it
  • the software must conform to the plant
  • Interface with a new system:
  • misperception that software is easier to make conform
  • again, the software will be forced to conform
  • Some translations/coercions can be automated and take care of themselves; it's the other ones that will cause our systems to break, often in very subtle ways.

  • "...the software product is embedded in a cultural matrix of applications, users, laws, and machine vehicles. These all change continually, and their changes inexorably force change upon the software product."
  • We change software because we can!
  • Easy to update existing systems in the field (in principle).
  • E.g., the Windows Update web site
  • Can undo changes if desired (in theory).
  • Not like a car recall.
  • To be successful is to be changed!
  • Keep the system flexible, marketable.
  • Try to anticipate and/or react to future unforeseen uses
  • Pressure to change:
  • reality changes
  • useful software will encourage new requests
  • long lifetime (15 yrs) vs. hardware (4 yrs).

  • The reality of software is not inherently embedded in space.
  • Software is invisible and unvisualizable
  • Different from physical laws and math theorems
  • There is no way to represent a complete product:
  • complete views (e.g., code) are incomprehensible
  • partial views are misleading
  • Makes it hard to communicate to:
  • other software professionals
  • users and clients
  • Of course, we can use various techniques to help us visualize aspects of software (control flow, data dependencies, UML, etc.)
  • but that's NOT quite the same thing as, say, the blueprints of a building.

Attacks on accidental problems
  • HLLs
  • Free us from byte-level thinking (accidental complexity). Take us up to generic problem space.
  • e.g., C's sizeof(int) and register allocation vs. pure objects and garbage collection
  • What we need beyond this is common abstractions in problem space
  • i.e., domain modelling: telephony, avionics, flight reservation
  • and in solution space too
  • e.g., frameworks, libraries, components
  • HLLs and OOP: replace Ada by Java
  • Good training/use will make for better systems.
  • When used well, bumps up the abstraction level
  • but still begs the question
  • Won't solve the SE problem, but the abstraction aspects of OO live in problem space too.
  • This has been a HUGE win in attacking essential complexity.

Attacks on accidental problems
  • IDEs (SDEs)
  • This was novel then! Research systems can actually do lots more than VC.
  • Even a smartly tweaked vim/emacs is a big step up.
  • In the old days, correct and reasonably efficient compiling was a notable feat. It was hard to get standalone Unix tools to interoperate.
  • Current: Eclipse

What about these silver bullets?
  • AI and expert systems
  • Mostly not.
  • Sometimes they help, e.g., test oracles
  • Certainly, AI techniques are useful in many
    application domains, but not as a silver bullet
    to solve the SE problem.
  • WWW, faster processors, cheap memory have made
    some AI techniques quite useful, much more so now
    than then.

What about these silver bullets?
  • Automatic programming
  • i.e., state the parameters, turn the crank, and hey presto: a software system!
  • We can do this now in some cases; it works quite well
  • it didn't work very well at the time of writing
  • Some systems generate part of their source code
  • Generative programming, model-based approaches
  • Will never work completely in the general case, but is certainly useful.
  • Really, all this does is bump up the abstraction level by one
  • Great potential in domain-specific environments
  • Parsers: yacc, lex
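"State the parameters, turn the crank" can be made concrete with a toy generator. This is a hypothetical Python sketch, not any real generative tool: it emits the source of a record class from a one-line specification, the same bump-up-one-abstraction-level move that yacc and lex make for parsers.

```python
def generate_record(name, fields):
    # "Automatic programming" in miniature: the spec is (name, fields);
    # the crank emits ordinary source code for a record class.
    lines = [f"class {name}:"]
    params = ", ".join(fields)
    lines.append(f"    def __init__(self, {params}):")
    for f in fields:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

# Generate and load a (hypothetical) flight-reservation record type.
src = generate_record("Booking", ["passenger", "flight", "seat"])
namespace = {}
exec(src, namespace)
booking = namespace["Booking"]("Ada", "AF042", "17C")
```

Note what is gained and what is not: the accidental work of typing out the class is automated, but someone still had to decide that a Booking has a passenger, a flight, and a seat; the essential design work remains.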

What about these silver bullets?
  • Graphical programming, e.g., UML, SDL, et al.
  • Undeniably useful as a tool sometimes
  • Always tempting to try, as certain aspects of software systems do seem inherently visual.
  • A favourite topic for PhD dissertations in SE.
  • In the general case, software is ethereal, unvisualizable (unlike hardware), with truly bizarre dependencies.
  • Surprisingly difficult to do well, as most visual metaphors break down under scale (or bad design).
  • Have you ever looked at a complicated SDL state-machine diagram? A class diagram with lots of complex interdependencies? A use-case scenario diagram with lots of variation?
  • Improvement: domain-specific visual languages

What about these silver bullets?
  • Formal program verification
  • (Dijkstra: formal derivation)
  • Does program P implement specification S?
  • Undecidable in the general case; can be done by hand until scale overwhelms
  • In practice, it has been done with some great successes, but only when the investment seems to be justified.
  • Tremendously expensive; requires experts.
  • The hard part, as Brooks rightly points out, is making sure you have the right specification S!
  • Validation versus verification

What about these silver bullets?
  • Better tools and faster computers
  • Always good. Usually solution-space, though.
  • Technological Peter Principle:
  • "in a hierarchically structured administration, people tend to be promoted up to their 'level of incompetence'"
  • Cheap disks, fast modems? Napster!

Promising attacks on conceptual essence
  • Buy, don't build.
  • Old days (IBM, 1960s): specialists developed highly customized solutions Just For You.
  • Experience led to generic (or at least configurable) products; now we have shrinkwrapped software (one size fits all).
  • Advice to software developers: Learn how to build generic components or systems. This is hard, though. Modularization (next topic) plays a big role.
  • If customization is straightforward, this is a huge leap forward and an attack on the conceptual essence.

Promising attacks on conceptual essence
  • Rapid prototyping
  • Take requirements, build a mock-up, show it to the user, analyze feedback, repeat.
  • Early feedback means less chance of requirements errors (which are the most expensive), and fast turnaround in problem space to narrow misunderstandings and educate the customer.
  • Requirements are the essence of a software system.
  • Focusing on getting the requirements right is a direct attack on essential problems.
  • Relevance to XP?

Promising attacks on conceptual essence
  • Staged delivery
  • a.k.a. organic/incremental development, daily builds
  • Get a skeleton system up and running ASAP; flesh it out as you go.
  • Helps to get developers feeling like the system is real. They add their updates as they finish them. They care about them not breaking.
  • The other extreme: big-bang merging. Often lots of unpleasant surprises.
  • Microsoft (and many others) uses this approach; it works well for them.
  • Need a culture that supports it. Lots of running around, not afraid of change, group buy-in, product-centred, etc.

Promising attacks on conceptual essence
  • Virtuoso designers (a.k.a. cowboy code-slingers)
  • Find good people and keep them.
  • Pay them well; encourage them.
  • Above all, listen to them.
  • Ours is still a very young field; this reliance on magic and gurus is worrisome.
  • However, great software design is mostly pure design (not engineering): it's an act of creativity and innovation balanced against experience and good engineering. It's impossible to teach, per se.
  • Note that great artists still require a grounding education in classical techniques and exposure to best practices.

Other Promising Technologies?
  • AOP?
  • MIC or MDA?
  • WWW? (1998)
  • Extreme programming?
  • Design patterns, software architecture?
  • The emergent evolution of hard interfaces
  • e.g., IMAP, HTTP, TCP/IP
  • Frameworks?
  • EJB, CORBA, COM, componentware?
  • Scripting languages?
  • Lawsuits?
  • Your suggestions??

No Silver Bullet Refired
  • Brooks reflects on the "No Silver Bullet" paper ten years later
  • Lots of people have argued that their methodology is the silver bullet
  • If so, they didn't meet the deadline of 10 years!
  • Other people misunderstood what Brooks calls his obscure writing
  • For instance, when he said "accidental", he did not mean "occurring by chance"

Obtaining the Increase
  • Some people interpreted Brooks as saying that the essence could never be attacked
  • That's not his point, however: he said that no single technique could produce an order-of-magnitude increase by itself
  • He argued that several techniques in tandem could achieve that goal, but that requires industry-wide enforcement and discipline

SE Principles - Parnas
  • Parnas starts this paper with a definition of SE:
  • "multi-person construction of multi-version programs"
  • Do you agree with this definition?

Excludes solo programming
  • The argument, in some ways, makes sense:
  • Dividing the job
  • Specifying exact behavior
  • Team communication
  • All of these things contribute to problems that need to be solved by applying SE principles.

  • Is it not the case that developing a large application, even by yourself, necessitates good SE?
  • Do we not need proper modularization when developing solo programs?
  • If building a house relies on good construction and engineering skills, is it true that building a dog house (a solo effort) does not?

Even building a dog house takes some engineering
Initially started as a "basic" dog house but soon turned into a masterpiece of quality workmanship. Total time spent was 8 hours at a cost of $110 US.

Start with a piece of paper and an idea. Design your dog house to the size and quantity of your dogs. A perfectly built home is worthless if it's too small to properly accommodate your dog.

Framing: The framing should be constructed with 2x4's, or rip them in half for smaller homes. A removable roof should be incorporated to assist future cleaning and maintenance.

Wall Covering: Should be tongue and groove for a tight fit, no warping, and to cut down on cross drafts. For large homes, plywood is an economical material that can be used.

Roof: 30-year home shingles cut down to the proper size. As for this house, an oriental piece was constructed, then topped off with a copper fence-post top. An additional hour's work and $15 in cost were needed.

Trim and Finishing Touches: Trim can add a lot to the aesthetics of your dog house. Trim can be bought in many different variations or, with some craftsmanship, can be made with the use of a router.

Sanding and Paint: Sink all nails below the surface and cover with wood filler. Prepare the surface for painting by sanding wood filler, rough spots, and blemishes.
More SE Principles
  • It seems that the problems identified as 4-6 are not peculiar to solo programming:
  • 4. How to write programs that are easily modifiable. Programs in which a change in one part does not require changes in many other parts.
  • 5. How to write programs with useful subsets. Remove unneeded parts.
  • E.g., Zen and CORBA footprint
  • 6. How to write programs that are easily extended.

  • The connections between program parts are the assumptions that the parts make about each other:
  • Precursor to Design by Contract (pre/post-conditions)
  • the properties that it expects other parts to satisfy
  • the system properties that it is required to provide

  • Parnas writes that these two concerns help when
    we ask
  • What changes can be made to one part without
    involving change to other parts?
  • Or, as Dijkstra has written elsewhere
  • ... program structure should be such as to
    anticipate its adaptations and modifications. Our
    program should not only reflect (by structure)
    our understanding of it, but it should also be
    clear from its structure what sort of adaptations
    can be catered for smoothly. Thank goodness the
    two requirements go hand in hand.

Two Techniques for Controlling Structure
  • Decomposition
  • Technique for dividing systems into modules
  • Well-structured program is one with minimal
    interconnections between its modules
  • More to be said in later lectures
  • Precise Specification
  • precisely describing the assumptions that the
    designers of one module are permitted to make
    about other modules
  • More also to be said on this later
  • Some examples of why it is easier in other
    engineering endeavours

Decomposition and Simple Specification
The prong and receptacle parts of a Lego block have been unchanged since 1932 [Lego, 2002].
Simple Interface Specification
Since around 1850, the standard dimensions for an air-cell masonry brick in the United States have been 2.5 x 3.75 x 8 inches [Chrysler and Escobar].
For next lecture
  • Read chapter 7 of Parnas
  • On the criteria.
  • Perhaps one of the top 5 most cited papers in all
    of software engineering
  • I will also lecture on some things that you do
    not have to read (e.g., cohesion and coupling)
  • Pop quiz some time in semester possible
  • i.e., quiz not on typical Thursday, but Tuesday

A lot of review: On the Criteria (hopefully)
  • CS 620/720
  • Software Engineering
  • January 20, 2004

Basic Definitions of SE
  • "Software engineering is a discipline whose aim is the production of fault-free software, delivered on time and within budget, which satisfies the user's needs." [Schach]

Generic Lifecycle Models
Software Lifecycles
Linear Model
Royce, 1970
Object-Oriented and Classical Software Engineering, Fifth Edition, WCB/McGraw-Hill, 2002, Stephen R. Schach
Verification vs validation
Relative Cost Of Software Development Activities
The Cost of Change
More Overview
  • Abstraction, Information Hiding, Encapsulation,
  • all in 15 minutes!!

  • A means of achieving stepwise refinement by accentuating relevant details (and, by implication, suppressing unnecessary details).
  • Ex.: braking in your car, turning on the lights
  • Other examples?
  • How about examples in the medical field, or other fields?
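The braking example can be sketched as code. A minimal Python illustration, with all names invented: the driver-facing interface accentuates the one relevant detail (slow down) and suppresses the hydraulics behind it.

```python
class Car:
    """Driver-facing abstraction: the relevant detail is 'slow down',
    not the hydraulic machinery that accomplishes it."""

    def __init__(self):
        self._speed_kmh = 50.0

    def brake(self):
        # The abstraction: one verb, no hydraulics visible.
        self._apply_hydraulic_pressure()

    def _apply_hydraulic_pressure(self):
        # Suppressed detail: master cylinder, brake lines, pads...
        # (modelled here as a simple fixed deceleration step).
        self._speed_kmh = max(0.0, self._speed_kmh - 10.0)

    @property
    def speed_kmh(self):
        return self._speed_kmh
```

The driver presses one pedal; whether the car uses drums, discs, or regenerative braking can change without the `brake()` interface changing.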

Abstraction some definitions
  • "A view of a problem that extracts the essential
    information relevant to a particular purpose and
    ignores the remainder of the information."
  • -- IEEE, 1983
  • "The essence of abstraction is to extract essential properties while omitting inessential details." -- Ross et al., 1975
  • "Abstraction is a process whereby we identify the important aspects of a phenomenon and ignore its details." -- Ghezzi et al., 1991
  • "Abstraction is generally defined as 'the process of formulating generalized concepts by extracting common qualities from specific examples.'" -- Blair et al., 1991
  • "Abstraction is the selective examination of
    certain aspects of a problem. The goal of
    abstraction is to isolate those aspects that are
    important for some purpose and suppress those
    aspects that are unimportant."
  • -- Rumbaugh et al, 1991

Abstraction some definitions
  • "The meaning of abstraction given by the Oxford English Dictionary (OED) closest to the meaning intended here is 'The act of separating in thought'. A better definition might be 'Representing the essential features of something without including background or inessential detail.'" -- Graham, 1991
  • "A simplified description, or specification,
    of a system that emphasizes some of the system's
    details or properties while suppressing others. A
    good abstraction is one that emphasizes details
    that are significant to the reader or user and
    suppress details that are, at least for the
    moment, immaterial or diversionary."
  • -- Shaw, 1984
  • "An abstraction denotes the essential
    characteristics of an object that distinguish it
    from all other kinds of object and thus provide
    crisply defined conceptual boundaries, relative
    to the perspective of the viewer."
  • -- Booch, 1991

Abstraction: my favorite definition
  • "Abstraction is doing just what our small minds need: making it possible for us to think about important properties of our program (its behavior) without having to think about the entirety of the machinations." -- Kiczales, 1992

Information Hiding
  • The focus of today's paper
  • Hides the implementation details from other modules

"The second decomposition was made using 'information hiding' ... as a criterion. The modules no longer correspond to steps in the processing. ... Every module in the second decomposition is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings." -- Parnas, 1972b

"... the purpose of hiding is to make inaccessible certain details that should not affect other parts of a system." -- Ross et al., 1975
I'm sure glad I don't have to eat this stuff!
Looks Yummy!
The Restaurant
Messages invoke methods; methods send messages.

Turnstile (object) -- Data: tickets [1]. Methods: isTicketReady, addTicket, removeTicket.
Cook (object) -- Data: name "Arnold", specialties [HamandEggs, Pancakes, FrenchToast]. Private Methods: makeHamandEggs, makePancakes, makeFrenchToast. Public Methods: takeTicketFromTurnstile.
Waiter (object) -- Data: name "Joe", tables [1, 2], tickets [2]. Methods: takeOrder, putOrderOnTurnstile, pickupOrder, serveOrder.
Counter (object) -- Data: ordersAvailable. Methods: isOrderReady, addOrder, removeOrder.
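A hedged Python sketch of two of the restaurant's objects (method and attribute names adapted from the slide, with Python's leading-underscore convention standing in for private methods): the Cook's recipes are hidden, and the turnstile is the only channel other objects use to send it work.

```python
class Turnstile:
    def __init__(self):
        self._tickets = []

    def add_ticket(self, ticket):
        self._tickets.append(ticket)

    def is_ticket_ready(self):
        return bool(self._tickets)

    def remove_ticket(self):
        return self._tickets.pop(0)


class Cook:
    def __init__(self, name):
        self.name = name

    # Private methods: HOW each dish is made is a hidden design decision.
    def _make_pancakes(self):
        return "pancakes"

    def _make_ham_and_eggs(self):
        return "ham and eggs"

    # Public method: the only message other objects send the Cook.
    def take_ticket_from_turnstile(self, turnstile):
        order = turnstile.remove_ticket()
        recipes = {"pancakes": self._make_pancakes,
                   "ham and eggs": self._make_ham_and_eggs}
        return recipes[order]()
```

The Waiter never calls `_make_pancakes` directly; it only posts tickets. The Cook's recipes can change without any other object noticing, which is exactly Parnas's criterion.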
Abstraction vs. Information Hiding
  • However, abstraction ≠ information hiding
  • It is possible to hide implementation details, yet provide a very poor interface into the module such that its key elements are still not easy to use.
  • Abstraction is about providing a representation of some thing which highlights that thing's essential elements

Encapsulation
  • The gathering together into one unit of all aspects of the real-world entity modeled by the abstract data unit.
  • Definitions:

"to enclose in or as if in a capsule" -- Mish, 1988

"The concept of encapsulation as used in an object-oriented context is not essentially different from its dictionary definition. It still refers to building a capsule, in this case a conceptual barrier, around some collection of things." -- Wirfs-Brock et al., 1990
  • Modularity is about separation When we worry
    about a small set of related things, we locate
    them in the same place. This is how thousands of
    programmers can work on the same source code and
    make progress.
  • Gabriel and Goldman, 2000

Information Hiding vs. Encapsulation
  • The two are also not equal
  • It is possible to have an encapsulated module
    that has all of its internal structure visible
    from the outside
  • Commonality of module is collected in one place,
    but the inner guts are not hidden (e.g., all
    members of a class are public)
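A small Python sketch of the distinction, with invented names: the stack is encapsulated (everything stack-related lives in one unit), yet nothing is hidden, so clients can bypass the interface and leave the object inconsistent.

```python
class ExposedStack:
    """Encapsulated but not information-hiding: all the stack's state
    and operations are gathered in one unit, yet every member is
    public, so the 'inner guts' are fully visible to clients."""

    def __init__(self):
        self.items = []   # public internal representation
        self.top = -1     # public bookkeeping, meant to track items

    def push(self, x):
        self.items.append(x)
        self.top += 1

# A client can ignore push() and poke the internals directly,
# silently breaking the items/top invariant:
s = ExposedStack()
s.push(1)
s.items.append(99)   # bypasses push(); top is now stale
```

Encapsulation put everything in one place, but without hiding, nothing stops a client from depending on (or corrupting) the representation.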

  • Two factors that help increase reliability, understandability, efficiency, and maintainability within and between modules:
  • Cohesion - within a module
  • Coupling - between modules
  • Provides some initial objective measure for the question: What makes a good design?

A Brain Teaser: Who's joined to whom, and by what?
Global Data
Modular Cohesion
  • The degree of interaction within a module.
  • OR
  • The measure of the strength of functional
    relatedness of elements (an instruction, group of
    instructions, a data definition, or a call to
    another module) within a module.
  • The term was borrowed from sociology by Larry
    Constantine in the mid-1960s, where it means the
    relatedness of humans within groups.

Scale of Cohesion
  • Stevens, Myers, Constantine, and Yourdon developed the Scale of Cohesion as a measure of the "black-boxness" of a module and, as a result, the maintainability of a module.

Scale of Cohesion
Coincidental Cohesion
  • A module whose elements perform multiple,
    completely unrelated actions.
  • Such modules make systems less understandable and
    less maintainable than systems with no modularity
    at all.
  • GrossPay = PayRate * Hours
  • SalesTax = Cost * SalesTaxRate
  • Close File 1
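The slide's three statements can be dropped into one runnable sketch. A hypothetical "utilities" module (Python here, names invented) that lumps them together is coincidentally cohesive; the corrective action below it is one single-purpose function per action.

```python
# Coincidentally cohesive: payroll, taxation, and file handling have
# nothing in common beyond sharing a module.
def miscellaneous(pay_rate, hours, cost, sales_tax_rate):
    gross_pay = pay_rate * hours
    sales_tax = cost * sales_tax_rate
    return gross_pay, sales_tax  # ("Close File 1" would live here too)

# Corrective action: break the module into smaller, single-purpose ones.
def gross_pay(pay_rate, hours):
    return pay_rate * hours

def sales_tax(cost, sales_tax_rate):
    return cost * sales_tax_rate
```

A caller who only needs payroll now reuses `gross_pay` alone instead of dragging in tax and file-handling logic.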

Coincidental Cohesion cont.
  • Disadvantages of Coincidental Cohesion
  • Severe lack of maintainability of product.
  • Lack of reusability.
  • Corrective action
  • break the module into smaller modules.

Logical Cohesion
  • Occurs when overlapping parts of functions share the same lines of code or the same buffers, but are not even executed at the same time (switch-statement dispatch).
  • function_code = 7
  • New_operation(function_code, d1, d2, d3)
  • // d1, d2, d3 are dummy variables, not used when function_code == 7
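Restated as runnable Python (the operation names are invented): one dispatch routine whose branches are unrelated except for sharing an interface, with most parameters dummies on any given call, plus the corrective separation.

```python
def new_operation(function_code, d1=None, d2=None, d3=None):
    # Logically cohesive: unrelated actions behind one interface,
    # selected by a code; most parameters are dummies per branch.
    if function_code == 7:
        return "rewind tape"        # d1, d2, d3 unused
    if function_code == 3:
        return f"write {d1}"        # d2, d3 unused
    return f"seek {d2},{d3}"        # d1 unused

# Corrective action: separate the functions and rewrite.
def rewind_tape():
    return "rewind tape"

def write_record(record):
    return f"write {record}"
```

With the separated functions, each caller states exactly what it wants and passes only what that action needs; no flag, no dummy arguments.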

Logical Cohesion cont.
  • Disadvantages
  • The interface is difficult to understand.
  • The code for more than one action may be
    intertwined, leading to maintainability problems.
  • The intertwining makes reusability of the module
    difficult, if not impossible.
  • Corrective action
  • separate the functions and rewrite.

Logical (aka Illogical) Cohesion
Not only does a logically cohesive module have
an ugly exterior with maybe a dozen different
parameters fighting to use four accesses, but
its inside resembles a plate of spaghetti mixed
with noodles and worms.
Page-Jones 1988
Temporal Cohesion
  • A module whose elements are involved in
    activities that are related in time.
  • Elements are usually more closely related to
    activities in other modules than they are to one
    another (leads to tight coupling).
  • Disadvantage
  • Lack of reusability in other products.
  • Corrective Action:
  • Take the procedure apart and rewrite the code as separate, functionally related modules.

Procedural Cohesion (skip)
  • A module whose elements are involved in different
    and possibly unrelated activities in which
    control flows from each activity to the next.
  • Related to each other by order of execution
    rather than by any single problem-related
    function (Similar to temporal cohesion).

Communicational Cohesion
  • A module whose elements contribute to activities
    that use the same input or output data.
  • E.g., Update record in database and write it to
    log file
  • Makes the transition into modules more easily
    maintainable, but still not easily reusable.

Informational Cohesion
  • A module whose elements perform a number of
    actions, each with its own entry point, with
    independent code for each action, and all
    performed on the same data structure.
  • Ex. Abstract data types
  • Supports Structured Programming concepts.

Functional Cohesion
  • A module whose elements all contribute to the execution of one and only one problem-related task (but not necessarily one and only one action).
  • Systems built chiefly of normally coupled, functionally cohesive modules are by far the easiest (and thus the cheapest) to maintain.
  • No matter how complicated, the sum of the module's elements is one problem-related function.

Functional Cohesion cont.
  • Advantages
  • Concept of module is easily understood.
  • Easily maintained.
  • Product is more easily updatable
  • or changeable.
  • Supports fault isolation (easily testable).
  • Strongly supports reusability.

Comparisons Between Levels of Cohesion
Modular Coupling
  • The degree of interaction between two modules.

Levels of Coupling
Best (Lowest Interaction): Data, Stamp, Control (the "normal" levels)
Worst (Highest Interaction): Common, Content
Content (alias Pathological) Coupling
  • Two modules exhibit content coupling if one
    refers to the inside of the other in any way.
  • Value being accessed is not passed through the
    parameter list.
  • Ex. if one module alters a statement in another,
    or updates another module's internal state.

(Figure: Module p uses local data a of another module.)
Common (alias Global) Coupling
  • Two modules that refer to the same global data
  • Disadvantages
  • Global areas may sometimes be drastically abused,
    as when different modules use the same area to
    store quite different pieces of information
    (called overloading).
  • Programs using a lot of global data are extremely
    difficult to understand because of the difficulty
    of knowing what data are used by which module
    (very expensive to correct); def-use pairs are
    hard to see.
  • Wulf and Shaw, "Global Variable Considered Harmful" (1973)

Control Coupling
  • Two modules where one passes to the other a piece
    of information intended to control the internal
    logic of the other.
  • Typically through the use of control flags.
  • Disadvantages
  • Leads to indirectness and obscurity.
  • Two modules are not independent.
  • Possibility of reuse is reduced.
  • Generally associated with modules that have
    logical cohesion.

Stamp Coupling
  • Two modules where one passes to the other a
    composite piece of data, that is, a piece of data
    with a meaningful internal structure.
  • Ex. All Employee Personnel Info, instead of just
    the pay rate and SSN.
  • Disadvantages
  • The indirectness can cause a broad interface.
  • Data not actually needed by the module can still
    be accessed by it (creating dependencies between
    otherwise unrelated modules).

Data Coupling
  • Two modules that communicate by parameters, each
    parameter being an elementary piece of data.
  • Communication of data between modules is
    unavoidable and necessary, as long as it is kept
    to a minimum.

Call D Using X, Y
Data Coupling cont.
  • Advantages
  • Avoids sending unnecessary data
  • Is direct.
  • Flexible.
  • Highly reusable.
  • Maintainable.

Data Coupling - Warnings
  • 1. Small is better. Keep the interface as narrow
    as possible.
  • 2. Avoid using tramp data,
  • Data that passes through modules that do not need
    it in order to reach the recipient module
    (AspectJ wormhole example)
  • pieces of information that shuffle aimlessly
    around a system, unwanted by and meaningless to
    most of the modules through which they pass.
    Usually a symptom of poor organization of data.
  • To varying degrees, tramp data violates all five
    of the principles for good coupling.

Comparisons Between Levels of Coupling
The Goal of Good Modularity?
  • High Cohesion
  • Functional or Informational
  • Low Coupling
  • Data, Stamp, Control

When to Use What?
  • Cohesion's Goal
  • To create a procedure that performs one
    functionally-related task.
  • Coupling's Goal
  • To protect global data and local data from being
    used within a procedure without declaring it in
    the procedure's header.

Both significantly affect maintenance. When used
correctly, maintenance can be reduced; when used
incorrectly, maintenance can be a nightmare!
Enough of that.
  • Sample code from last week
  • Parnas paper

Simple C++ Example

  class B {
  public:
    B() {}
    void f() { cout << "B::f()" << endl; }
    virtual void g() { cout << "B::g()" << endl; }
  };

  class D : public B {
  public:
    D() {}
    void f() { cout << "D::f()" << endl; }
    void g() { cout << "D::g()" << endl; }
  };

  int main(int, char**) {
    B* bp;
    D* dp;
    bp = new B; bp->f(); bp->g();
    dp = new D; dp->f(); dp->g();
    bp = dp;    bp->f(); bp->g();
  }

Output: B::f() B::g() D::f() D::g() B::f() D::g()
(After bp = dp, the non-virtual f() dispatches on the static type B, while
the virtual g() dispatches on the dynamic type D.)
A Sketchy Evolution of Software Design
  • 1960s
  • Structured Programming
  • (Go To Statement Considered Harmful, E. W. Dijkstra)
  • Emerged from considerations of formally
    specifying the semantics of programming
    languages, and proving programs satisfy a
    specification.
  • Adopted into programming languages because it's a
    better way to think about programming
  • 1970s
  • Structured Design
  • Methodology/guidelines for dividing programs into
    modules
  • 1980s
  • Modular (object-based) programming
  • Ada, Modula, Euclid, ...
  • Grouping of sub-routines into modules with data.
  • 1990s
  • Object-Oriented Languages started being commonly
    used (60s origin)
  • Object-Oriented Analysis and Design for guidance.

Module Structure
  • David Parnas (birthday anecdote)
  • On the Criteria To Be Used in Decomposing
    Systems into Modules Comm. ACM 15, 12 (Dec.
    1972), 1053-1058
  • Perhaps the most popular paper in SE
  • Initial CACM rejection ("Nobody does it that way")
  • Universal acceptance (Parnas wrote about common practice)
  • Discusses modularization
  • Module: a collection of subroutines and data
  • Critique of Procedural Design
  • Pointing the way to object-based and OO design.
  • Describes two ways to modularize a program that
    generates KWIC (Key Word in Context) indices.
  • Modularization 1 - Based on the sequence of steps
    to perform
  • Modularization 2 - Based on the principle of
    information hiding

Weiss quote
  • The way to evaluate a modular decomposition,
    particularly one that claims to rest on
    information hiding, is to ask what changes it
    accommodates.
  • Hoffman and Weiss, 2001

  • Input
  • Designing Software for Ease of Construction
  • Figs are Good
  • Output
  • are Good Figs
  • for Ease of Construction Designing Software
  • of Construction Designing Software for Ease
  • Construction Designing Software for Ease of
  • Designing Software for Ease of Construction
  • Ease of Construction Designing Software for
  • Figs are Good
  • Good Figs are
  • Software for Ease of Construction Designing

KWIC Modularization 1
(Figure: module structure, from Input medium through Master control to Output medium.)
KWIC Modularization 2
(Figure: module structure, from Input medium through Master control to Output medium.)
Criteria for decomposition
  • Modularization 1
  • Each major step in the processing was a module
  • Modularization 2
  • Information hiding
  • Each module has one or more "secrets"
  • Each module is characterized by its knowledge of
    design decisions which it hides from all others.
  • Lines
  • how characters/lines are stored
  • Circular Shifter
  • algorithm for shifting, storage for shifts
  • Alphabetizer
  • algorithm for alpha, laziness of alpha

General Comparison
  • General
  • Note both systems might share the same data
    structures and the same algorithms
  • Differences are in the way they are divided into
    work assignments
  • Systems are substantially different even if
    identical in the runnable representation
  • Possible because the runnable representation is
    used only for running
  • Other representations are used for
  • Changing
  • Documenting
  • Understanding

Changeability Comparison
  • Design decisions that may change (modules
    affected in design 1, design 2)
  • Input format
  • (1, 1)
  • All lines stored in memory
  • (all, 1)
  • Pack characters 4 to a word
  • (all, 1)
  • Make an index for circular shifts rather than
    store them
  • (3,1)
  • Alphabetize once, rather than either
  • Search for each item as needed
  • Partially alphabetize, partially search
  • (3,1)

Independent Development
  • Modularization 1
  • Must design all data structures before parallel
    work can proceed
  • Complex descriptions needed
  • Modularization 2
  • Must design interfaces before parallel work can
    proceed
  • Simple descriptions only
  • Comprehensibility
  • Modularization 2 is better
  • Parnas subjective judgment

Comparing Rationales

Design Criterion
  Modularization 1: Each major processing step is made into a module.
  Modularization 2: Modules are designed using the principle of information hiding.
Is task-specific?
  Modularization 1: Yes. E.g., the Add module is responsible for directly adding a contact into the address book.
  Modularization 2: Yes. E.g., the Add module is responsible for directly adding a contact into the address book.
Inter-dependence
  Modularization 1: HIGH. All modules are heavily dependent on the Data Storage module.
  Modularization 2: NONE. All modules are independent!
Information Hiding
  • Before decomposing a system into modules, a list
    of all possible design changes is made - Hiding
    Assumption List
  • Each module hides the implementation of an
    important design decision so that only the
    constituents of that module know the details
  • All design decisions are independent of each other

Information Hiding in Modularization 2
  • Modularization 2 used this principle of
    Information Hiding.
  • All of its modules are independent and have
    well-defined interfaces.
  • There is very low coupling between them.

Information Hiding in Modularization 2
  • Each module is very task-specific. All modules
    are highly cohesive.
  • For example, the sorting algorithm is known only
    to the Sort module. Similarly, the format of data
    storage is known only to the Read/Write Interface
    module.

Benefits of Good Modular Design
  • Independent Development
  • Since each module is independent, they can be
    developed independently at the same time:
    Shortened Development Time!

Benefits of Good Modular Design
  • Changeability, Product Flexibility, Reusability
  • Modules can be easily modified without affecting
    the rest of them. Moreover, modules can be easily
    replaced to add, enhance or change product
    features.
Benefits of Good Modular Design
  • Comprehensibility
  • It is easier for programmers to fully understand
    the design of the entire product by individually
    studying the modules.

Comprehensibility Quote
  • In many pieces of code the problem of
    disorientation is acute. People have no idea what
    each component of the code is for and they
    experience considerable mental stress as a
    result.
  • Gabriel, 1995

Historical Content in Present Context
  • Paper is 30 years old, but only some details
    might make this fact apparent
  • Terminology
  • Previous concerns
  • Past design processes (flowcharts)
  • Changing guidelines
  • Code reuse (not a major point)

Parnas uses some terms that are not used anymore,
or are used nowadays with different meanings, such as:
  • CORE: Then, main memory, general storage space.
    Now, internal functionality, internals.
  • JOB: Then, implied batch processing. Now, ???
  • Nowadays, we speak of memory in a more abstract
    way (data structures, etc.). Memory was more
    often understood as referring to physical storage
    (addresses, ...).
Previous Concerns
  • Parnas mentions as major advancements in the
    area of modular programming
  • The development of ASSEMBLERS
  • Nowadays, we could mention higher level
    languages, mainly object-oriented languages that
  • (1) allow one module to be written with little
    knowledge of the code in another module, and
  • (2) allow modules to be reassembled and replaced
    without reassembly of the whole system
  • Aspect Languages

Past Design Processes
  • Use of flowcharts
  • When the paper was written, the use of flowcharts
    by programmers was almost mandatory. With a
    flowchart in hand, the programmer would move
    from there to a detailed implementation. This
    caused modularizations like the first one to be
    common.
  • Parnas could see the problem with this approach
    and condemned it: a flowchart works OK for a
    small system, but not for a larger one.

Changing Guidelines
  • The sequencing of instructions necessary to call
    a given routine and the routine itself are part
    of the same module.
  • This pertains to worries of programmers at the
    time because they were programming in assembly
    and other low-level languages. Concerns such as
    code optimization were very important and
    involved creating smaller sets of machine
    instructions for a given task.
  • The sequence in which certain items will be
    processed should be hidden within a single
    module.
  • It has become irrelevant most times.

Code Reuse
  • Parnas does not emphasize code reuse so much in
    this paper. The reason might be the nature of
    programs written in assembly or lower-level
    languages (not very reusable).
  • If the paper were to be reviewed by Parnas today,
    reuse would certainly be a point he would
    emphasize more.
  • It is important to notice that these points do
    not disturb the current relevance of Parnas'
    paper.
Effects on Current Programming
  • Fathered key ideas of OOP
  • Information hiding
  • Encapsulation before functional relations
  • Easier understandability/maintainability
  • Design more important than implementation
  • Good design leads to good implementation
  • Proper design allows for different
    implementations (easily modifiable)

New forms of Separation
  • Early plug for course next Fall
  • CS 692/792
  • Reflection and Metaprogramming
  • Advanced Separation of Concerns
  • Model-integrated Computing
  • Adaptive Middleware

Intermingled Decisions
Concern Separation
Making Modules Easier to Change
Parnas Transparency (skip 9.6 and 9.7)
OO Design Principles: Open-Closed, Liskov Substitutability, Dependency Inversion
  • CS 620/720
  • Software Engineering
  • January 22, 2004

Next Week
  • Tue
  • Reading Demeter
  • Reading Parnas Extension/Contraction
  • Reading Big Ball of Mud
  • Lead into patterns, frameworks, refactoring
  • May not cover all of this!
  • Thu
  • Finish what is left from Tue lecture
  • Patterns intro
  • Quiz 2
  • HW1 assigned

Parnas Transparency
Top Down Design
  • Also called Outside In Design
  • Describes and creates a system from the highest
    hierarchical level, where the full specifications
    of a design must be known
  • Difficult or infeasible to obtain full
    specifications up front
  • Can result in software that is unnecessarily
    specialized to the first design
  • For these reasons, pure Top Down has problems

Bottom Up Design
  • Create the system Inside Out from a set of
    lower level components (i.e., start at the bottom)
  • Work upwards, solving the entire project
  • Reuse components from other projects
  • More practical to implement internal structures
    first, creating separate modules and joining them
    together
  • Bottom Up is more flexible. Hard to design a
    general purpose system / library using top-down.

Bottom Up Design (cont.)
  • As you move up the system hierarchy, you create
    structural levels
  • Base Machine
  • the lower level of a hierarchy, maybe hardware or
    an intermediate software level
  • Virtual Machine
  • a level above the base machine, it hides the
    complexity of the base machine to make
    interaction with the system easier

Transparency in Bottom Up Design
  • Transparency
  • describes the implementation completeness of the
    virtual machine with respect to the base
    machines functionality
  • Complete transparency
  • the virtual machine has ALL of the functionality
    of the base machine
  • Loss of transparency
  • a lack of functionality with respect to the base
    machine exists in the virtual machine
  • There is some sequence that can be specified in
    the base machine that can not be expressed in the
    virtual machine

Driving with strings and steering wheel
Base machine
New virtual machine
Example positions
What figures suggest a loss of transparency? In
this case, is the loss of transparency ok?
Virtual machine for register access
  • Many possible implementations:
  • register is an array (indexing; shifting for
    insertion and deletion)
  • register is a one-way linked list (linear search)
  • register is an indexed linked list
  • register is a linked list of small arrays

Completeness of the abstraction
  • What operation does Parnas show is not possible
    in the virtual machine, which suggests a loss of
    transparency?

Other Examples of Transparency
  • Hardware
  • Search Engine

Example Graphics Card Transparency
Hierarchical Level    Description
0                     Graphics Card silicon
1                     Driver
2                     API (DirectX, OpenGL)
3                     Application (Game, CAD)
Graphics Card Example (cont.)
  • Positive results of transparency
  • Much easier to program with API than directly
    with driver. Using an API lets an application
    run on different hardware
  • Negative results of transparency
  • Depending on implementation, an application might
    not run as fast on a particular piece of
    hardware, i.e., it won't fully utilize certain
    hardware features.