How Many Ways Can I Screw Up? Causes of Human Error


How Many Ways Can I Screw Up? Causes of Human Error
  • Dean Deeds
  • July 15, 2009
  • ASQ Section 702 Meeting

What's this talk about, anyhow?
  • The role and meaning of "human error" as a cause
    of defects
  • Ways to organize our thinking about the reasons
    for these human errors
  • These thoughts are a work in progress.
  • Please share your comments, ideas, references,
    and experiences as we discuss them.
  • I'm surely not the first person to think about
    these questions.

A starting point
  • Some of these ideas were inspired by The Limits
    of Expertise: Rethinking Pilot Error and the
    Causes of Airline Accidents, R. Key Dismukes,
    Benjamin A. Berman, and Loukia D. Loukopoulos
    (Ashgate Publishing, 2007).

General comments and qualifiers
  • Almost by definition, human errors occur
    inconsistently or randomly. For example,
    dexterity is not an all-or-nothing capability;
    the human may be able to do the job sometimes
    but not with high repeatability.
  • This applies to most of the categories we'll be
    discussing.
  • Example: Shooting basketball free throws
  • Variation between shooters
  • Variation of a shooter from time to time

More general comments and qualifiers
  • Throughout this discussion, "operator" or a
    similar term includes technicians, engineers,
    office workers, managers, drivers, mechanics,
    pilots, musicians, accountants, government
    officials, scientists, reporters, doctors, etc.:
    anybody doing a task that is subject to error.
  • As we review the causes of human error, don't
    overlook the possibility that there may be
    multiple causes at work. This is more common
    than not, and must be understood in order to
    identify the best preventive measures.

Disclaimer (the fine print)
  • All assertions, observations, and opinions in
    this document are solely those of the author and
    not those of his employer, co-workers, friends,
    relatives, pets, or houseplants. Any resemblance
    to specific persons, places, or institutions is
    purely coincidental unless explicitly noted.
  • Some of the examples used deal with aircraft and
    flying. This information is not Boeing-specific.
    (I don't even work on aircraft at Boeing.)
    These just happen to be publicly documented
    examples of human error, and they were brought
    to my mind by reading Dismukes et al. They're
    useful because they deal with complex and highly
    critical tasks.
  • In other words, I'm not speaking for or about
    Boeing! Rather, I'm speaking as a customer of
    all the types of operators mentioned above, and
    as an operator in my own right.

Human error happens
  • Commonly the cause (or one of the causes) of a
    defect will be described as "human error." The
    implication is that any process involving humans
    carries a fixed, unavoidable variability.
  • Of course, any process, whether it involves
    humans or not, carries a potential for
    variability:
  • Machines can fail
  • Materials can yield
  • Environments can change
  • Some of this variability is familiar and
    predictable (and so we can create credible
    estimates of system reliability); some of it is
    not.

How do we deal with errors?
  • For non-human failures, we look for insights
    into predictability and (if possible) control.
  • When a tire fails, we take a more careful look at
    the rest of our tires, and how well we maintain
    them.
  • When a beam breaks, we check it for flaws and we
    review the stresses it was under.
  • When we can't be sure of the weather, we may
    carry an umbrella.
  • In other words, we look for tailored solutions to
    manage the variability.
  • But we typically deal with human error
    variability differently. The approaches tend to
    align into two poles.

Assume the worst!
  • The first approach to dealing with human error is
    to assume humans are hopeless, and take the
    process out of their hands as much as possible.
    In other words, insulate the end product from
    operator variability. This is the poka yoke
    approach, pioneered by Shigeo Shingo.

"Humans are animals that make mistakes."
(Shigeo Shingo, Toyota Motor Company)
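The poka yoke idea above can be sketched in software terms. This is a hypothetical illustration (the batch-code format and the `record_batch` function are invented, not from the talk): instead of trusting the operator to type a valid code, the entry point refuses anything malformed, so the mistake cannot propagate downstream.

```python
# Hypothetical poka yoke sketch: the entry function is the software
# analogue of a fixture that won't let a part be inserted backwards.
import re

BATCH_PATTERN = re.compile(r"^[A-Z]{2}-\d{4}$")  # assumed format, e.g. "QA-2009"

def record_batch(code: str) -> str:
    """Accept a batch code only if it matches the required format.

    Rejecting bad input here blocks the error at its source instead of
    relying on the operator's care or on downstream inspection.
    """
    if not BATCH_PATTERN.match(code):
        raise ValueError(f"batch code {code!r} rejected at entry")
    return code

print(record_batch("QA-2009"))   # accepted
try:
    record_batch("2009-QA")      # wrong "orientation": blocked at the source
except ValueError as err:
    print(err)
```

The design choice mirrors Shingo's point: the check is built into the process, not left to memory or attention.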
Give 'em more rope!
  • The second approach is to assume humans can be
    perfect (for a while) if they're nudged back onto
    the right path.
  • In this approach, the response is typically
    something like:
  • "counseled the operator"
  • "reviewed the incident with staff"
  • "wrote a new rule or law"
  • "updated instructions, added training material or
    requirements, added to lessons-learned
    knowledge base, etc."
  • "added incentives for the desired behavior"
  • "Let's be careful out there!"
  • In other words, the priorities given to the
    operator have been shifted for a while. And
    this operator probably won't make the same
    mistake again, for a while.


Do these approaches work?
  • Are these approaches correct?
  • Do they work?
  • What's your experience?
  • My answer: These approaches are correct only if
    they're addressing the real causes of the error
    (preventing it, or ensuring that it will be
    caught before it creates a defect).
  • They don't always suit the real causes!

Does poka yoke work?
  • Poka yoke can be highly effective, in cases where
    it can really be applied.
  • But I'm sure your experience includes cases where
    it's not possible.
  • Can you suggest examples?

Does poka yoke work? (cont.)
  • Poka yoke works:
  • Best where physical blocking is possible
  • OK if strong visual or audible cues are possible
  • Less well for purely mental tasks
  • Example: Do traffic signals prevent cross traffic
    from running red lights?
  • And we'd like to prevent errors, not just trigger
    a process to catch them.

"If you idiot-proof your process, they'll build a
better idiot."
Does training work?
  • There are also problems with the "nudge 'em back
    onto the path" approach. This is a
    one-size-fits-all solution to a problem with
    multiple dimensions, and it's generally not
    effective because:
  • It doesn't identify and address the real causes
    (which are almost NEVER simply about human
    variability, but about the robustness of the
    process).
  • It relies for ongoing effectiveness on human
    memory and attention span.

"Experience is the ability to recognize a mistake
when we make it again."
So what's the best answer?
  • Find the real causes ...
  • Recognize there may be more than one
  • Then apply appropriate corrective actions ...
  • These may indeed include poka yoke, training, and
    other components.
  • Lets look more carefully at the sorts of causes
    we may find, and the sorts of preventive or
    corrective actions that could be effective.

  • Didn't know the facts
  • Didn't know the correct process to follow
  • Here's a case (one of the few!) where
    counseling/training/mentoring/education can
    actually be the best solution.
  • You should also be asking why the operator didn't
    know the facts or process.
  • This may indicate a communication failure, as
    discussed later.

Malfeasance
  • Sabotage
  • Embezzlement, theft
  • Recklessness
  • These are not strictly errors, since they're
    intentional.
  • If you're sure this is a cause, it's a matter for
    Human Resources.
  • Recklessness may also be related to
    unconcern, as discussed later.

Physical inability (1/2)
  • Inadequate sense capability (sight, smell, etc.)
  • Inadequate muscle control, dexterity
  • Physical size (e.g., too large or small for
    working space)
  • Illness, disease
  • Aging (may be an indicator for loss of sense
    capability or dexterity); also see the bathtub
    curve
  • Affected by medications
  • Affected by alcohol, narcotics, etc.
  • These can affect both physical and mental
    performance.
  • Ability requirements will depend on the task.
  • E.g., hearing is critical for a musician, less so
    for an engineer.
  • Color blindness may reduce awareness of visual
    cues.

Physical inability (2/2)
  • Counseling is not going to help for most of these
    causes.
  • If dexterity is an issue, physical training may
    help.
  • Pilot or practice work may help, if it's a
    repetitive task.
  • For the others, what's required is:
  • A clear understanding of the skills needed for
    the task
  • Good communication and understanding of the
    current skill state of the operator and the
    demands of the task
  • Ultimately, the manager needs to:
  • Assign the right operator to the task (with full
    knowledge of his/her current abilities), and
  • Ensure that the task as defined is in fact
    producible (in other words, there's a design or
    engineering aspect to the solution).

(The manager may be the same person as the
operator.)
The bathtub curve
  • Human error rates can exhibit a bathtub curve
    like that of part failure rates.
  • In the learning stage, error rates are high.
  • After some experience is built up, the error rate
    falls and stabilizes.
  • After a long time, the error rate can rise again.
    This can be because of a combination of
    deteriorating skills, forgetting, failure to keep
    abreast of new requirements and tools, boredom,
    cockiness, running on "autopilot," etc.
  • There is some evidence that this pattern is
    reflected in drivers, for example.
  • To avoid the later rise in error rates, we need
    to understand which causes are at play, and apply
    the appropriate solutions.
  • Solution elements may include:
  • Rotation to new assignments
  • Mentoring newer operators
  • Clear communication of changing needs
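The bathtub shape described above can be made concrete with a toy model. This is purely illustrative (the function and every parameter value are invented, not measured data): a decaying learning term, a constant base rate, and a slowly growing wear-out term together reproduce the fall-then-rise pattern.

```python
# Illustrative-only model of a human "bathtub" error-rate curve.
import math

def error_rate(t, learning=0.30, decay=0.5, base=0.02, wearout=0.001):
    """Sketch of an error rate at experience time t (arbitrary units).

    learning * exp(-decay * t): early errors that fall as skill builds.
    base:                       the stable mid-career rate.
    wearout * t:                the slow late rise (boredom, skill decay,
                                failure to keep up with new tools).
    All parameter values are invented for illustration.
    """
    return learning * math.exp(-decay * t) + base + wearout * t

# The curve falls from its novice high, flattens, then creeps up again.
rates = [round(error_rate(t), 3) for t in range(0, 61, 10)]
print(rates)
```

With these assumed parameters the minimum falls around mid-career; the late rise is gentle, which matches the slide's point that the change is easy to miss.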

Fatigue (1/3)
  • Physical fatigue: overload
  • Mental fatigue: long shift, complex activities,
    etc.
  • These can affect both physical and mental
    performance.
  • We can distinguish between fatigue due to the
    specific task where the failure occurs, and
    fatigue due to working for an extended period.

Fatigue (2/3)
  • If the task physically overstresses the operator,
    there are several possible causes.
  • The wrong operator may be assigned to the task
    (physical strength, coordination, vision).
  • The operator may be doing the task wrong (e.g.,
    working in an unnecessarily awkward posture).
  • The task may be poorly designed, so that it's
    difficult for any operator.
  • If the task (properly performed by a capable
    operator) proves physically or mentally
    fatiguing, then the task is probably too long and
    should be broken up into smaller pieces. This
    may require nothing more than ensuring frequent
    breaks.
  • Example: driving long distances

Fatigue (3/3)
  • If the operator is fatigued due to a combination
    of tasks (a long shift, or outside activities),
    possible preventive measures include:
  • Shorter shifts, more frequent breaks
  • Job rotation
  • Team activities (work sharing)
  • Clear understanding and communication of the
    operator's condition
  • Note that some of these also introduce other
    risks (handoff errors, miscommunication, bad
    assumptions, etc.).
  • Fatigue often occurs in high-pressure
    environments, discussed elsewhere.

Distraction (1/3)
  • Loss of focus
  • Interruption
  • Multi-tasking
  • Shift breaks
  • Surprise or unexpected event
  • Calamity, disaster, force majeure
  • Excessive workload
  • Emotion: "red mist," "fog of war"
  • Let's break these into predictable and
    unpredictable distractions.

Distraction (2/3)
  • Predictable (anticipatable) distractions include
    shift breaks and some task switching.
  • The risk here is that status may be lost
    (forgotten, not communicated or documented), so a
    step may be overlooked. See communication.
  • Among the effective prevention measures are:
  • Poka yoke, i.e., making it impossible (or at
    least difficult) to miss a step
  • Checklisting, i.e., providing a verifiable record
    (for the operator and for any other party) of the
    task completion status
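The checklisting idea can be sketched as a tiny data structure. This is a minimal illustration (the `Checklist` class and the step names are invented): each step's completion is recorded, so after an interruption or shift break the record, not anyone's memory, holds the task state.

```python
# Minimal checklist sketch: a verifiable record of task completion.
class Checklist:
    def __init__(self, steps):
        # Each step starts incomplete; insertion order is preserved.
        self.status = {step: False for step in steps}

    def complete(self, step):
        self.status[step] = True

    def remaining(self):
        return [s for s, done in self.status.items() if not done]

    def is_done(self):
        return not self.remaining()

task = Checklist(["torque fasteners", "apply sealant", "inspect seal"])
task.complete("torque fasteners")
# Operator is interrupted here; anyone can verify exactly where the
# task stands instead of relying on memory.
print(task.remaining())
```

Either the same operator or a different one can resume from `remaining()`, which is exactly the handoff protection the slide describes.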

Distraction (3/3)
  • Unpredictable distractions are interruptions to
    the expected work flow. They can include:
  • Phone calls
  • Instant messages
  • New incoming demands (especially those that
    require on-the-fly resetting of priorities)
  • Abnormal outcomes (anomalous test result, power
    outage, etc.)
  • Whenever we go off the familiar path (standard
    work), we're less likely to have effective
    response plans in place, and unfortunately we're
    also less likely to realize it.
  • Effective preventive measures may include:
  • Breaking tasks into standard work elements
  • Keeping them small enough to complete easily
    without interruption
  • Anticipating contingencies (e.g., disaster
    recovery planning)
  • Applying the "predictable distractions"
    preventive measures

Forgetting (1/3)
  • Occasional
  • Recurring
  • This is the classic explanation given for many
    human errors, and the "try harder next time"
    approach is not very useful against it, perhaps
    because forgetting is a description but not an
    explanation.
  • It's worth asking why the operator forgot;
    there will usually be other factors (such as
    boredom, distraction or fatigue) that can be
    addressed more effectively.

Forgetting (2/3)
  • Information overload can be a significant
    contributor here.
  • Too many tasks, requirements, facts to remember.
  • Related to distraction.

"I consider that a man's brain originally is like
a little empty attic, and you have to stock it
with such furniture as you choose. A fool takes
in all the lumber of every sort that he comes
across, so that the knowledge which might be
useful to him gets crowded out, or at best is
jumbled up with a lot of other things so that he
has a difficulty in laying his hands upon it. Now
the skilful workman is very careful indeed as to
what he takes into his brain-attic. He will have
nothing but the tools which may help him in doing
his work, but of these he has a large assortment,
and all in the most perfect order. It is a
mistake to think that that little room has
elastic walls and can distend to any extent.
Depend upon it there comes a time when for every
addition of knowledge you forget something that
you knew before. It is of the highest importance,
therefore, not to have useless facts elbowing out
the useful ones." (Sherlock Holmes, in A Study
in Scarlet, by Sir Arthur Conan Doyle)
Forgetting (3/3)
  • Approaches such as poka yoke, self-checks and
    buddy checks, and standard work can help control
    this category.
  • These don't prevent the errors, but may help
    catch them.
  • Recurring forgetfulness may be an indication of a
    health problem (discussed earlier), or of
    unconcern (discussed later).

Unconcern
  • Don't see the job as worth my attention, not a
    high priority
  • Lack of engagement
  • Bored
  • Lack of sense of responsibility
  • Proprietary attitude: "don't tell me how to do
    my job," "I already know," "I have more experience
    than you do"
  • No reason to care: tenure/seniority, close to
    retirement, "I don't need this job"
  • The operator doesn't have the same objectives as
    the enterprise. Poka yoke may prevent errors
    from becoming defects, but won't reduce errors,
    and may actually increase them as the lack
    of engagement deepens.
  • The fundamental problem here is building an
    effective team, with the needed support from each
    stakeholder. This isn't easy, but once we
    understand the problem there are tools and
    approaches that may help us solve it.

Disagreement
  • Frustrated
  • Not following established procedures: "I know a
    better way"
  • "This is waste, not value-added"
  • Don't really like the people they work for or
    with
  • This is closely related to the unconcern
    category, but in this case, although the
    objectives are aligned, there is disagreement on
    how to achieve them.
  • Again, the solution lies in team-building. Be
    sure you understand the viewpoints of all
    stakeholders; maybe they're right and your plan
    is not the best! At least work with them toward
    a common understanding of the options and the
    pros and cons.
  • Often, someone who says "this is not value-added"
    really means "I just don't want to do this."
    Help the team reason backward from the customer's
    requirements to identify what really is
    value-added, or at least necessary waste.

Communication (1/2)
  • Requirements not properly described
  • Requirements not understood
  • Language barrier
  • Cultural differences (especially with partner
    companies and suppliers)
  • Status not documented (e.g., information not
    conveyed at task handoff)
  • Task handoff can include handoffs between
    subsystems and different levels of integration;
    for example, between carpenter, electrician,
    plumber, and plasterer.
  • Example: In 1999, NASA lost its Mars Climate
    Orbiter because different subsystems were
    engineered in different unit systems (English
    vs. metric).
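One defense against this style of unit mix-up is to carry the unit along with the value and refuse to combine mismatched quantities at the interface. The sketch below is illustrative only (the `Quantity` class is invented; real projects might use a units library): the point is that the handoff itself, not a human reviewer, catches the mismatch.

```python
# Hypothetical sketch: a value that knows its own unit, so mixing
# newton-seconds with pound-force-seconds fails loudly at the handoff.
class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Refuse to combine mismatched units instead of silently
        # producing a number that looks plausible but is wrong.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

impulse_a = Quantity(12.0, "N*s")
impulse_b = Quantity(3.0, "lbf*s")
try:
    total = impulse_a + impulse_b  # mismatch caught at the interface
except TypeError as err:
    print(err)
```

This is the communication point in code form: the assumption (which unit system?) is made explicit and checked, instead of living silently in each team's head.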

Communication (2/2)
  • The good news is that once these causes are
    identified, it's pretty straightforward to
    correct them.
  • The bad news is that they're hard to identify,
    because they're concealed in assumptions that are
    often unconscious.
  • We don't know what we don't know.
  • It takes careful root cause analysis to ferret
    these out (and careful review to prevent them).

Out of practice
  • Uncommon or infrequent activity
  • No recent practice, training, certification, etc.
  • In these cases refresher training, practice/pilot
    cases, etc., can be effective.
  • Ideally the manager (or the operator) will
    identify these uncommon activities well in
    advance and plan for the needed practice.
  • Example: Southern California drivers have higher
    accident rates during the early part of the rainy
    season.
Unfamiliar task (1/2)
  • No documented response/approach exists
  • Response/approach exists, but operator doesn't
    know it
  • Operator not familiar with newer process, tools,
    etc.
  • If the task is unfamiliar to this particular
    operator, then training, practice, mentoring,
    feedback, etc., can all be useful ways to improve
    performance.
  • Beware of tasks that are "almost the same."
  • Example: An NHTSA/USC Traffic Safety Center study
    found that the average accident-involved
    motorcyclist had about 3 years of experience, but
    only 5 months on the involved motorcycle.

Unfamiliar task (2/2)
  • If there is no known/documented process or
    response to the situation, the best approach is
    slow and methodical.
  • Errors tend to increase when we depart from the
    well-understood path. (See distraction.)
  • If possible:
  • Convert the task to a known one
  • Break it into more manageable (ideally, known)
    pieces
  • Verify that your approach is robust before
    committing to it.
  • See George Pólya, How To Solve It. TRIZ is
    another systematic approach.

Complex task (1/2)
  • Unknowns can affect outcome
  • Overtaxes abilities
  • Rapid response required

"One of the toughest things for a pilot to do if
you have multiple emergencies is trying to
determine which is the most critical one." (Amos
Kardos, veteran US commercial pilot, quoted in
the Los Angeles Times, June 13, 2009, on the Air
France 447 disaster)
Complex task (2/2)
  • These are on the boundary between "human error"
    and "outcome up to chance," i.e., between
    reducible and not.
  • In a complex situation we don't expect perfect
    performance from a given human.
  • On the other hand, there are certainly some
    humans who perform better in a given sort of
    situation than others (e.g., combat pilots), so
    it's possible to some degree to learn the skills,
    or to screen for the better performers.
  • There may be an element of management or planning
    here:
  • Is it possible to break the task down into
    smaller, simpler pieces? To anticipate and
    plan for contingencies?
  • It's probably best to analyze this case into the
    other cause categories that may apply.

Equipment or design flaws
  • Equipment doesn't respond as needed
  • Equipment doesn't give correct feedback/information
  • Can be due to either malfunction or design error
  • Unknowns can prevent breaking complex tasks into
    simpler parts
  • Example: Tentative finding that airspeed data for
    Air France 447 may have been incorrect
  • Were the task elements properly considered in
    equipment design?
  • Was improper maintenance a factor?

Deficient information
  • Misleading or incorrect information
  • Missing information
  • The solution seems obvious, but there are
    questions to ask:
  • What information is really needed?
  • Is it available?
  • Is it correct? If not, why not?
  • It's also possible to have information overload,
    as discussed earlier.
  • Prioritize the information for display: head-up
    displays, alarms, etc.
  • Also see communication.
  • One way to validate a process is with a blind dry
    run (using an operator with no assumptions).

Overreliance on automated functions (1/4)
  • Assumed that safety systems will catch any errors

"The stabilization system for the rear control
surfaces went off line. As that system failed,
the plane's computers automatically reduced the
pilots' ability to move the plane's control
surfaces -- technically known as going to
'alternate flight law.' Pilot, aeronautical
engineer and former US airline executive Robert
Ditchey said alternate flight law is meant to
protect the plane by reducing the ability of
pilots to make a mistake, but ultimately it also
may limit the ability of pilots to save the
plane." (Los Angeles Times, June 13, 2009, on the
Air France flight 447 disaster)
"Pilot Marvin Renslow may have made a deadly
error by leaving the plane on autopilot as ice
built up on the wings, highlighting what critics
say is a failure of the Federal Aviation
Administration to order pilots to disengage the
autopilot in such conditions." (New York Daily
News, Feb. 15, 2009, on the Continental
Connection flight 3407 crash near Buffalo, NY)
Overreliance on automated functions (2/4)
  • The aircraft designers and/or the pilots relied
    on sophisticated designed-in capabilities to make
    the operator's task easier (or even possible).
    But the automation could not, or did not, cover
    all the possibilities.
  • Is this good or bad?
  • Did the design recognize and allow for this lack
    of coverage?
  • Did the application (training, instructions,
    practice) recognize and allow for this lack of
    coverage?
  • There's a paradoxical dividing line here. On one
    hand, we rely on automated functions to make an
    operator's job easier, at least in routine
    circumstances. On the other hand, we expect the
    operator to recognize when the automated system
    is not appropriate, and to be able to take
    control and respond appropriately in the most
    demanding circumstances!
  • Defining where to draw this line, in design and
    in training, is not trivial.

Overreliance on automated functions (3/4)
  • US Airways flight 1549 landed successfully in the
    Hudson River, after both engines failed upon
    ingesting a flock of Canada geese (Jan. 15,
    2009).
  • The pilot and crew were praised for their cool
    handling of the situation.
  • What factors do you think worked for and against
    success in this emergency? How repeatable do you
    think the outcome would be?
  • Captain Chesley Sullenberger has acknowledged
    that luck, i.e., chance, was clearly a factor.

Overreliance on automated functions (4/4)
  • Some lessons and observations from the flight
    1549 example:
  • Extensive practice and certification standards
    helped ensure the crew was capable.
  • The crew had both training (i.e., repetition of
    defined tasks) and knowledge (i.e., judgment
    needed for the many decisions to be made).
  • The entire crew coordinated and performed their
    tasks well.
  • These were possible because of risk management
    planning in the design of the aircraft and the
    education and training of the crew (i.e., there
    were provisions and plans for the eventualities
    that did occur).

Overreliance on other steps in process (1/2)
  • Assumed other steps and/or personnel would catch
    any errors
  • Assumed the process is correct
  • This is a generalization of the "overreliance on
    automated functions" category.
  • This sort of problem is insidious, but you've
    probably seen examples of it yourself, where
    several different sets of eyes overlook the same
    error.
  • Example: typos missed by multiple reviewers

Overreliance on other steps in process (2/2)
  • One factor here is group think: the tendency of
    people with similar backgrounds to perceive
    situations similarly.
  • In statistical terms, these reviewers are not
    independent; there is correlation between their
    judgments.
  • Another factor is threshold shift. One person's
    standard for what is an acceptable level of
    performance may be influenced by the environment
    he or she works in.
  • Example: does your standard for a messy desk
    change as papers pile up?
  • Also see unconcern, pressure and stress, etc.
  • Countermeasures include:
  • When using redundant checks, try to have them
    performed by people with different perspectives,
    backgrounds, and motivations.
  • It may be useful to bring in an outsider for a
    fresh perspective.
  • Periodic review/redesign, or independent
    assessment, can flush out unconscious/undocumented
    assumptions or errors in a process or procedure.
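The statistical point about non-independent reviewers can be made with a back-of-envelope calculation. All the numbers below are invented for illustration, and the correlation model is a deliberately crude sketch: with some probability the reviewers share a blind spot and effectively act as one reviewer.

```python
# Why redundant checks by similar reviewers help less than the
# independence assumption suggests. Numbers are illustrative only.
p_miss = 0.10  # assumed chance any one reviewer misses a given typo

# Fully independent reviewers: miss probabilities multiply.
p_all_miss_independent = p_miss ** 3

# Crude correlated model: with probability c the three reviewers share
# a blind spot (act as one reviewer); otherwise they act independently.
c = 0.5
p_all_miss_correlated = c * p_miss + (1 - c) * p_miss ** 3

print(p_all_miss_independent)  # about 0.001: one typo in a thousand survives
print(p_all_miss_correlated)   # about 0.05: roughly fifty times worse
```

Under these assumptions the correlated team lets through about fifty times as many errors as independence would predict, which is why the slide recommends reviewers with different perspectives and backgrounds.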

Obsolescent or incompatible technology
  • Old process or tools no longer supported or
    available
  • Only a few old hands are able to make the process
    work correctly
  • Defect rates increase as time goes on and needed
    support is not performed
  • The cost can be insidious, because the change is
    gradual and may go unnoticed for a long time.
  • The best strategy may be to anticipate the
    obsolescence:
  • Buy a lifetime supply
  • Buy the technology
  • Identify alternatives and be prepared to switch

Pressure and stress (1/3)
  • Rush job
  • High visibility
  • Uncomfortable or unpleasant work environment
  • Time pressure is a common contributor to error.
  • Rushing affects performance in two ways:
  • It encourages us to omit steps intentionally, as
    a form of risk rebalancing. In the extreme case,
    there may simply not be enough time allotted to
    perform all the needed steps.
  • It increases the chance we will omit steps
    unintentionally.

"Haste makes waste." "Act in haste, repent at
leisure." "More haste, less speed."
Pressure and stress (2/3)
  • Pressure and stress can also originate outside
    the workplace.
  • Personal problems, issues, and concerns can
    certainly create distractions (another category,
    see above).
  • Combining multiple stresses or distractions has a
    compounding, worse-than-the-sum-of-the-parts
    effect.
  • In extreme cases people tend to be overwhelmed by
    competing priorities, and may simply freeze or
    perform extremely poorly.
  • The only effective solution to this is good task
    management:
  • Eliminate competing messages and expectations
  • Break the job into simple, well-defined standard
    work elements
  • Clear the desk and focus on one thing until it's
    done
  • Managers need to filter out conflicting messages.
    It also helps to view the situation from the
    operator's perspective.

Pressure and stress (3/3)
  • The effect of the work environment on operator
    effectiveness is often overlooked or
    underestimated.
  • Noise and other distractions can break focus;
    efficiency and quality predictably suffer.
  • The same is true of a cultural environment where
    it's considered normal to interrupt.
  • Example: Answering cell phones, pagers, or email
    while in a meeting
  • Also see distraction.

Connections among the categories
Another case study: Metrolink 111
  • Metrolink passenger train 111 collided with a
    Union Pacific freight train on a 45-degree curve
    at 4:23 PM, Sept. 12, 2008, in Chatsworth, CA.
    The freight train had the right of way.
  • What was the cause?

Metrolink 111 (2/3)
  • There were multiple causes reported. Among them:
  • The Metrolink engineer was sending text messages
    at the time.
  • The curve, and the landscape to the inside of it,
    blocked the engineer's view down the track.
  • The Metrolink engineer was working a long shift,
    on a tedious and stressful job.
  • The signal facing the Metrolink engineer was not
    working. (The evidence went back and forth on
    this point, and there is disagreement as to the
    facts.)
  • The signal facing the Metrolink engineer was hard
    to see because of the angle of the sunlight.
  • There was no interlock that would have positively
    prevented the Metrolink train from proceeding
    from its parallel track onto the shared track.
    Such interlocks have long been common in some
    rail systems, but in this system they were not
    used. (This led to much public second-guessing
    and recrimination.)

Metrolink 111 (3/3)
  • In other words, there was not a single cause;
    there were multiple factors that contributed to
    the outcome.
  • We can see several of our categories in the
    causes listed above: distraction, boredom,
    fatigue, stress, inadequate information,
    overreliance on automated functions or other
    steps in process, equipment or design flaws, etc.
  • Will addressing only one of these causes solve
    the problem?

How does all this fit into the CAPA process?
  • These categories by themselves are not the root
    causes of errors.
  • They're simply a checklist to help organize our
    thinking.
  • To identify the root causes, we need to dig
    deeper.
  • For example, use the "five WHYs" approach.
  • We can use these categories as the first WHYs.
  • Corrective or preventive actions then need to
    follow from the root causes.
  • Example: "I committed errors because I was rushed."
  • Why was I rushed?
  • Did I not allow myself enough time to do the job?
  • Did I not understand the difficulty of the job?
  • Did I not have the proper resources lined up
    (tools, information, skills, etc.)?
  • Depending on what I find, my actions may include
    changes to planning, scheduling, and workload
    management.
  • "Human error" by itself is not a useful
    explanation for causes of defects.
  • The categories we've listed can serve as a
    checklist for working down to root causes.
  • Remember, there are often multiple causes!
  • Solutions should be tailored to the real causes.
  • The conventional solutions (poka yoke and
    training) for "human error" may not be
    appropriate or effective. If you rely on them,
    you may be fooling yourself.