1
How Many Ways Can I Screw Up? Causes of Human Error
  • Dean Deeds
  • July 15, 2009
  • ASQ Section 702 Meeting

2
What's this talk about, anyhow?
  • The role and meaning of "human error" as a cause of defects
  • Ways to organize our thinking about the reasons
    for these human errors
  • These thoughts are a work in progress.
  • Please share your comments, ideas, references,
    and experiences as we discuss them.
  • I'm surely not the first person to think about this!

3
A starting point
  • Some of these ideas were inspired by The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents, by R. Key Dismukes, Benjamin A. Berman, and Loukia D. Loukopoulos (Ashgate Publishing, 2007).

4
General comments and qualifiers
  • Almost by definition, human errors occur inconsistently or randomly. For example, dexterity is not an all-or-nothing capability; the human may be able to do the job sometimes, but not with high repeatability.
  • This applies to most of the categories we'll be discussing.
  • Example: Shooting basketball free throws
  • Variation between shooters
  • Variation of a shooter from time to time

5
More general comments and qualifiers
  • Throughout this discussion, "operator" or a similar term includes technicians, engineers, office workers, managers, drivers, mechanics, pilots, musicians, accountants, government officials, scientists, reporters, doctors, etc.: anybody doing a task that is subject to error.
  • As we review the causes of human error, don't overlook the possibility that there may be multiple causes at work. This is more common than not, and must be understood in order to identify the best preventive measures.

6
Disclaimer (the fine print)
  • All assertions, observations, and opinions in
    this document are solely those of the author and
    not those of his employer, co-workers, friends,
    relatives, pets, or houseplants. Any resemblance
    to specific persons, places, or institutions is
    purely coincidental unless explicitly noted.
  • Some of the examples used deal with aircraft and flying. This information is not Boeing-specific. (I don't even work on aircraft at Boeing.) These just happen to be publicly documented examples of human error, and they were brought to my mind by reading Dismukes et al. They're useful because they deal with complex and highly critical tasks.
  • In other words, I'm not speaking for or about Boeing! Rather, I'm speaking as a customer of all the types of operators mentioned above, and as an operator in my own right.

7
Human error happens
  • Commonly the cause (or one of the causes) of a defect will be described as "human error." The implication is that any process involving humans carries a fixed, unavoidable variability.
  • Of course, any process, whether it involves
    humans or not, carries a potential for
    variability!
  • Machines can fail
  • Materials can yield
  • Environments can change
  • Some of this variability is familiar and predictable (and so we can create credible estimates of system reliability, as in the sketch below); some of it is unexpected.
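As a rough illustration of the "credible estimates" point in the last bullet (my own sketch, not from the talk), independent component reliabilities multiply for a simple series system; the numbers below are made up.

```python
# Minimal sketch (hypothetical numbers): reliability of a series system
# whose components must all work, assuming independent failures.

def series_reliability(component_reliabilities):
    """Probability that every component works, assuming independence."""
    r = 1.0
    for p in component_reliabilities:
        r *= p
    return r

if __name__ == "__main__":
    # Made-up example: machine, material, and environment each behave as
    # expected 99%, 98%, and 99.5% of the time, respectively.
    components = [0.99, 0.98, 0.995]
    print(f"System reliability: {series_reliability(components):.3f}")  # ~0.965
```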

8
How do we deal with errors?
  • For non-human failures, we look for insights
    into predictability and (if possible) control.
  • When a tire fails, we take a more careful look at
    the rest of our tires, and how well we maintain
    them.
  • When a beam breaks, we check it for flaws and we
    review the stresses it was under.
  • When we can't be sure of the weather, we may carry an umbrella.
  • In other words, we look for tailored solutions to
    manage the variability.
  • But we typically deal with human error
    variability differently. The approaches tend to
    align into two poles.

9
Assume the worst!
  • The first approach to dealing with human error is
    to assume humans are hopeless, and take the
    process out of their hands as much as possible.
    In other words, insulate the end product from
    operator variability. This is the poka yoke
    approach, pioneered by Shigeo Shingo.

"Humans are animals that make mistakes."
- Shigeo Shingo (Toyota Motor Company)
10
Give 'em more rope!
  • The second approach is to assume humans can be perfect (for a while) if they're nudged back onto the right path.
  • In this approach, the response is typically
    something like
  • counseled the operator
  • reviewed the incident with staff
  • wrote a new rule or law
  • updated instructions, added training material or
    requirements, added to lessons learned
    knowledge base, etc.
  • added incentives for the desired behavior
  • "Let's be careful out there!"
  • In other words, the priorities given to the operator have been shifted for a while. And this operator probably won't make the same mistake again for a while.

11
Do these approaches work?
  • Are these approaches correct?
  • Do they work?
  • What's your experience?

12
Do these approaches work?
  • Are these approaches correct?
  • Do they work?
  • What's your experience?
  • My answer: These approaches are correct only if they're addressing the real causes of the error (preventing it, or ensuring that it will be caught before it creates a defect).
  • They don't always suit the real causes!

13
Does poka yoke work?
  • Poka yoke can be highly effective, in cases where
    it can really be applied.
  • But I'm sure your experience includes cases where it's not possible.
  • Can you suggest examples?

14
Does poka yoke work? (cont.)
  • Poka yoke works
  • Best where physical blocking is possible
  • OK if strong visual or audible cues are possible
  • Less well for purely mental tasks
  • Example: Do traffic signals prevent cross traffic from running red lights?
  • And we'd like to prevent errors, not just trigger a process to catch them.

"If you idiot-proof your process, they'll build a better idiot."
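As a software-flavored illustration of the poka yoke idea (my own sketch, not from the talk), the snippet below blocks a bad value at the point of entry so it cannot propagate downstream; the TorqueSetting class and its limits are hypothetical.

```python
# Minimal poka-yoke sketch (hypothetical example): block the error at the
# point of entry instead of relying on the operator to remember the range.

from dataclasses import dataclass

TORQUE_LIMITS_NM = (20.0, 25.0)  # assumed spec range for this fastener

@dataclass(frozen=True)
class TorqueSetting:
    value_nm: float

    def __post_init__(self):
        lo, hi = TORQUE_LIMITS_NM
        if not (lo <= self.value_nm <= hi):
            # The "wrong" setting cannot even be constructed.
            raise ValueError(
                f"Torque {self.value_nm} Nm is outside spec {TORQUE_LIMITS_NM}"
            )

def drive_fastener(setting: TorqueSetting) -> None:
    """Downstream code only ever sees settings that passed the guard."""
    print(f"Driving fastener at {setting.value_nm} Nm")

if __name__ == "__main__":
    drive_fastener(TorqueSetting(22.5))  # fine
    # TorqueSetting(40.0)                # raises ValueError immediately
```

This is the software analogue of physical blocking: the wrong setting cannot even be constructed. As the slide notes, purely mental tasks are much harder to guard this way.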
15
Does training work?
  • There are also problems with the "nudge 'em back onto the path" approach. This is a one-size-fits-all solution to a problem with multiple dimensions, and it's generally not effective because
  • It doesn't identify and address the real causes (which are almost NEVER simply about human variability, but about the robustness of the process)
  • It relies for ongoing effectiveness on human
    memory and attention span.

"Experience is the ability to recognize a mistake when we make it again."
16
So what's the best answer?
  • Find the real causes ...
  • Recognize there may be more than one
  • Then apply appropriate corrective actions ...
  • These may indeed include poka yoke, training, and
    other components.
  • Let's look more carefully at the sorts of causes we may find, and the sorts of preventive or corrective actions that could be effective.

17
Ignorance
  • Didn't know the facts
  • Didn't know the correct process to follow
  • Here's a case (one of the few!) where counseling/training/mentoring/education can actually be the best solution.
  • You should also be asking why the operator didn't know the facts or process.
  • This may indicate a communication failure, as
    discussed later.

18
Malfeasance
  • Sabotage
  • Embezzlement, theft
  • Recklessness
  • These are not strictly errors, since they're intentional.
  • If you're sure this is a cause, it's a matter for Human Resources.
  • Recklessness may also be related to
    unconcern, as discussed later.

19
Physical inability
  • Inadequate sense capability (sight, smell, etc.)
  • Inadequate muscle control, dexterity
  • Physical size (e.g., too large or small for
    working space)
  • Illness, disease
  • Aging (may be an indicator for loss of sense capability or dexterity); also see "bathtub curve"
  • Affected by medications
  • Affected by alcohol, narcotics, etc.
  • These can affect both physical and mental
    activities.
  • Ability requirements will depend on the task.
  • E.g., hearing is critical for a musician, less so for an engineer.
  • Color blindness may reduce awareness of visual
    signals.

20
Physical inability (2/2)
  • Counseling is not going to help for most of these
    causes.
  • If dexterity is an issue, physical training may
    help.
  • Pilot or practice work may help, if it's a repetitive task.
  • For the others, what's required is
  • A clear understanding of the skills needed for
    the task
  • Good communication and understanding of the
    current skill state of the operator and the
    demands of the task.
  • Ultimately, the manager needs to
  • Assign the right operator to the task (with full
    knowledge of his/her current abilities), and
  • Ensure that the task as defined is in fact producible (in other words, there's a design or engineering aspect to the solution).

(the manager may be the same person as the operator)
21
The bathtub curve
  • Human error rates can exhibit a "bathtub curve" like that of part failure rates (a rough numerical sketch follows this list).
  • In the learning stage, error rates are high.
  • After some experience is built up, the error rate falls and stabilizes.
  • After a long time, the error rate can rise again. This can be because of a combination of deteriorating skills, forgetting, failure to keep abreast of new requirements and tools, boredom, cockiness, "autopilot," etc.
  • There is some evidence that this pattern is
    reflected in drivers, for example.
  • To avoid the later rise in error rates, we need
    to understand which causes are at play, and apply
    the appropriate solutions.
  • Solution elements may include
  • Rotation to new assignments
  • Mentoring newer operators
  • Clear communication of changing needs
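As a rough numerical sketch of the bathtub shape (illustrative only; the constants are made up, and the slide's point is qualitative), the error rate can be modeled as a declining learning term plus a constant baseline plus a slowly rising wear-out term.

```python
import math

def error_rate(t, learning=0.20, decay=0.5, baseline=0.02, wearout=0.001):
    """Toy bathtub-shaped error rate at experience time t (arbitrary units).

    learning * exp(-decay * t): high early error rate that falls with practice
    baseline:                   residual, stable error rate
    wearout * t:                slow rise from boredom, skill decay, etc.
    """
    return learning * math.exp(-decay * t) + baseline + wearout * t

if __name__ == "__main__":
    for t in (0, 2, 5, 10, 30, 60):
        print(f"t={t:3d}  rate={error_rate(t):.3f}")  # falls, flattens, then rises
```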

22
Fatigue
  • Physical fatigue: overload
  • Mental fatigue: long shift, complex activities, etc.
  • These can affect both physical and mental
    activities.
  • We can distinguish between fatigue due to the
    specific task where the failure occurs, and
    fatigue due to working for an extended period.

23
Fatigue (2/3)
  • If the task physically overstresses the operator,
    there are several possible causes.
  • The wrong operator may be assigned to the task
    (physical strength, coordination, vision).
  • The operator may be doing the task wrong (e.g.,
    working in an unnecessarily awkward posture).
  • The task may be poorly designed, so that it's difficult for any operator.
  • If the task (properly performed by a capable
    operator) proves physically or mentally
    fatiguing, then the task is probably too long and
    should be broken up into smaller pieces. This
    may require nothing more than ensuring frequent
    breaks.
  • Example: driving long distances

24
Fatigue (3/3)
  • If the operator is fatigued due to a combination
    of tasks (a long shift, or outside activities),
    possible preventive measures include
  • Shorter shifts, more frequent breaks
  • Job rotation
  • Team activities (work sharing)
  • Clear understanding and communication of the operator's condition
  • Note that some of these also introduce other
    risks (handoff errors, miscommunication, bad
    assumptions, etc.).
  • Fatigue often occurs in high-pressure environments, discussed elsewhere.

25
Distraction
  • Loss of focus
  • Interruption
  • Multi-tasking
  • Shift breaks
  • Surprise or unexpected event
  • Calamity, disaster, force majeure
  • Excessive workload
  • Emotion: "red mist," "fog of war"
  • Let's break these into predictable and unpredictable distractions.

26
Distraction (2/3)
  • Predictable (anticipatable) distractions include
    shift breaks and some task switching.
  • The risk here is that status may be lost
    (forgotten, not communicated or documented), so a
    step may be overlooked. See communication.
  • Among the effective prevention measures are
  • Poka yoke, i.e., making it impossible (or at least difficult) to miss a step
  • Checklisting, i.e., providing a verifiable record (for the operator and for any other party) of the task completion status (a minimal sketch follows this list)
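Here is a minimal sketch (hypothetical, not from the talk) of the "verifiable record" idea: a checklist that records who completed each step and when, so that status survives a shift break or a handoff.

```python
# Minimal checklist sketch (hypothetical): a verifiable record of task
# completion status that survives interruptions and handoffs.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Step:
    name: str
    done_by: Optional[str] = None
    done_at: Optional[datetime] = None

@dataclass
class Checklist:
    steps: list = field(default_factory=list)

    def complete(self, name: str, operator: str) -> None:
        for step in self.steps:
            if step.name == name:
                step.done_by = operator
                step.done_at = datetime.now()
                return
        raise KeyError(f"No such step: {name}")

    def open_steps(self):
        """Steps still outstanding -- what the next operator must check."""
        return [s.name for s in self.steps if s.done_at is None]

if __name__ == "__main__":
    cl = Checklist([Step("torque fasteners"), Step("inspect seal"), Step("log serial")])
    cl.complete("torque fasteners", "operator A")
    print(cl.open_steps())  # ['inspect seal', 'log serial']
```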

27
Distraction (3/3)
  • Unpredictable distractions are interruptions to
    the expected work flow. They can include
  • Phone calls
  • Instant messages
  • New incoming demands (especially those that
    require on-the-fly resetting of priorities)
  • Abnormal outcomes (anomalous test result, power
    outage, etc.)
  • Whenever we go off the familiar path (standard work), we're less likely to have effective response plans in place, and unfortunately we're also less likely to realize it.
  • Effective preventive measures may include
  • Breaking tasks into standard work elements
  • Keeping them small enough to complete easily
    without interruption
  • Anticipating contingencies (e.g., disaster
    planning)
  • Applying the "predictable distractions" preventive measures.

28
Forgetting
  • Occasional
  • Recurring
  • This is the classic explanation given for many human errors, and the "try harder next time" approach is not very useful against it, perhaps because "forgetting" is a description, not an explanation.
  • It's worth asking why the operator forgot; there will usually be other factors (such as boredom, distraction, or fatigue) that can be addressed more effectively.

29
Forgetting (2/3)
  • Information overload can be a significant
    contributor here.
  • Too many tasks, requirements, facts to remember.
  • Related to distraction.

"I consider that a man's brain originally is like
a little empty attic, and you have to stock it
with such furniture as you choose. A fool takes
in all the lumber of every sort that he comes
across, so that the knowledge which might be
useful to him gets crowded out, or at best is
jumbled up with a lot of other things so that he
has a difficulty in laying his hands upon it. Now
the skilful workman is very careful indeed as to
what he takes into his brain-attic. He will have
nothing but the tools which may help him in doing
his work, but of these he has a large assortment,
and all in the most perfect order. It is a
mistake to think that that little room has
elastic walls and can distend to any extent.
Depend upon it there comes a time when for every
addition of knowledge you forget something that
you knew before. It is of the highest importance,
therefore, not to have useless facts elbowing out
the useful ones."
- Sherlock Holmes, in A Study in Scarlet, by Sir Arthur Conan Doyle
30
Forgetting (3/3)
  • Approaches such as poka yoke, self-checks and
    buddy checks, and standard work can help control
    this category.
  • These don't prevent the errors, but may help catch them.
  • Recurring forgetfulness may be an indication of a
    health problem (discussed earlier), or of
    unconcern (discussed later).

31
Unconcern
  • "Don't see the job as worth my attention"; not a high priority
  • Lack of engagement
  • Bored
  • Lack of sense of responsibility
  • Proprietary attitude: "don't tell me how to do my job," "I already know," "I have more experience than you do"
  • No reason to care: tenure/seniority, close to retirement, "I don't need this job"
  • The operator doesn't have the same objectives as the enterprise. Poka yoke may prevent errors from becoming defects, but it won't reduce errors, and may actually increase them as engagement drops further.
  • The fundamental problem here is building an effective team, with the needed support from each stakeholder. This isn't easy, but once we understand the problem there are tools and approaches that may help us solve it.

32
Disagreement
  • Frustrated
  • Not following established procedures: "I know a better way"
  • "This is waste, not value-added"
  • Don't really like the people they work for or with
  • This is closely related to the "unconcern" category, but in this case, although the objectives are aligned, there is disagreement on how to achieve them.
  • Again, the solution lies in team-building. Be sure you understand the viewpoints of all stakeholders; maybe they're right and your plan is not the best! At least work with them toward a common understanding of the options and the pros and cons.
  • Often, someone who says "this is not value-added" really means "I just don't want to do this." Help the team reason backward from the customer's requirements to identify what really is value-added, or at least necessary waste.

33
Communication
  • Requirements not properly described
  • Requirements not understood
  • Language barrier
  • Cultural differences (especially with partner
    companies and suppliers)
  • Status not documented (e.g., information not
    conveyed at task handoff)
  • Task handoff can include handoffs between subsystems and different levels of integration; for example, between carpenter, electrician, plumber, and plasterer.
  • Example: In 1999, NASA lost its Mars Climate Orbiter because different subsystems were engineered in different unit systems (English vs. metric). (A minimal software guard against this kind of mismatch is sketched below.)
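The Mars orbiter example is a communication failure that can also be blocked in software. Below is a minimal sketch (hypothetical, and not how the actual flight software worked) in which quantities carry their units, so mixing pound-force seconds with newton-seconds forces an explicit conversion instead of a silent error.

```python
# Minimal sketch (hypothetical): carry units with the number so that a
# unit mismatch fails loudly instead of being silently added.

from dataclasses import dataclass

LBF_S_TO_N_S = 4.448222  # 1 pound-force second in newton-seconds

@dataclass(frozen=True)
class Impulse:
    value: float
    unit: str  # "N*s" or "lbf*s"

    def to_newton_seconds(self) -> float:
        if self.unit == "N*s":
            return self.value
        if self.unit == "lbf*s":
            return self.value * LBF_S_TO_N_S
        raise ValueError(f"Unknown unit: {self.unit}")

def total_impulse_n_s(*impulses: Impulse) -> float:
    """Sum impulses only after converting everything to one unit system."""
    return sum(i.to_newton_seconds() for i in impulses)

if __name__ == "__main__":
    # Two teams report in different units; the conversion is forced, not assumed.
    print(total_impulse_n_s(Impulse(10.0, "N*s"), Impulse(2.0, "lbf*s")))  # ~18.9
```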

34
Communication (2/2)
  • The good news is that once these causes are identified, it's pretty straightforward to correct them.
  • The bad news is that they're hard to identify, because they're concealed in assumptions that are often unconscious.
  • We don't know what we don't know.
  • It takes careful root cause analysis to ferret
    these out (and careful review to prevent them).

35
Out of practice
  • Uncommon or infrequent activity
  • No recent practice, training, certification,
    refresher
  • In these cases refresher training, practice/pilot
    cases, etc., can be effective.
  • Ideally the manager (or the operator) will
    identify these uncommon activities well in
    advance and plan for the needed practice.
  • Example: Southern California drivers have higher accident rates during the early part of the rainy season.

36
Unfamiliar task
  • No documented response/approach exists
  • Response/approach exists but operator doesn't know it
  • Operator not familiar with newer process, tools,
    etc.
  • If the task is unfamiliar to this particular
    operator, then training, practice, mentoring,
    feedback, etc., can all be useful ways to improve
    performance.
  • Beware of tasks that are almost the same.
  • Example: NHTSA/USC Traffic Safety Center study found that the average accident-involved motorcyclist had about 3 years of experience, but only 5 months on the involved motorcycle.

37
Unfamiliar task (2/2)
  • If there is no known/documented process or
    response to the situation, the best approach is
    slow and methodical.
  • Errors tend to increase when we depart from the
    well-understood path. (See distraction.)
  • If possible
  • Convert the task to a known one
  • Break it into more manageable (ideally, known)
    subtasks
  • Verify that your approach is robust before
    committing to it.
  • See George Pólya, How to Solve It. TRIZ is similar.

38
Complex task
  • Unknowns can affect outcome
  • Overtaxes abilities
  • Rapid response required

"One of the toughest things for a pilot to do if you have multiple emergencies is trying to determine which is the most critical one."
- Amos Kardos, veteran US commercial pilot (quoted in the Los Angeles Times, June 13, 2009, on the Air France 447 disaster)
39
Complex task (2/2)
  • These are on the boundary between "human error" and "outcome up to chance," i.e., between reducible and not.
  • In a complex situation we don't expect perfect performance from a given human.
  • On the other hand, there are certainly some humans who perform better in a given sort of situation than others (e.g., combat pilots), so it's possible to some degree to learn the skills, or to screen for the better performers.
  • There may be an element of management or planning
    involved
  • Is it possible to break the task down into
    smaller, simpler pieces? to anticipate and
    prepare?
  • It's probably best to analyze this case into the other cause categories that may apply.

40
Equipment or design flaws
  • Equipment doesn't respond as needed
  • Equipment doesn't give correct feedback/information
  • Can be due to either malfunction or design error
  • Unknowns can prevent breaking complex tasks into
    simpler parts
  • Example: Tentative finding that airspeed data for Air France 447 may have been incorrect.
  • Were the task elements properly considered in
    equipment design?
  • Was improper maintenance a factor?

41
Deficient information
  • Misleading or incorrect information
  • Missing information
  • The solution seems obvious, but there are
    subtleties.
  • What information is really needed?
  • Is it available?
  • Is it correct? If not, why not?
  • It's also possible to have information overload, as discussed earlier.
  • Prioritize the information for display: head-up displays, alarms, etc.
  • Also see communication.
  • One way to validate a process is with a blind dry
    run (using an operator with no assumptions).

42
Overreliance on automated functions
  • Assumed that safety systems will catch any errors

"The stabilization system for the rear control surfaces went off line. As that system failed, the plane's computers automatically reduced the pilots' ability to move the plane's control surfaces -- technically known as going to 'alternate flight law.' Pilot, aeronautical engineer and former US airline executive Robert Ditchey said alternate flight law is meant to protect the plane by reducing the ability of pilots to make a mistake, but ultimately it also may limit the ability of pilots to save the plane."
- Los Angeles Times, June 13, 2009 (on the Air France flight 447 disaster)

"Pilot Marvin Renslow may have made a deadly error by leaving the plane on autopilot as ice built up on the wings, highlighting what critics say is a failure of the Federal Aviation Administration to order pilots to disengage the autopilot in such conditions."
- New York Daily News, Feb. 15, 2009 (on the Continental Connection flight 3407 crash in Buffalo)
43
Overreliance on automated functions (2/4)
  • The aircraft designers and/or the pilots relied on sophisticated designed-in capabilities to make the operator's task easier (or even possible). But the automation could not, or did not, cover all the possibilities.
  • Is this good or bad?
  • Did the design recognize and allow for this lack
    of coverage?
  • Did the application (training, instructions,
    practice) recognize and allow for this lack of
    coverage?
  • There's a paradoxical dividing line here. On one hand, we rely on automated functions to make an operator's job easier, at least in routine circumstances. On the other hand, we expect the operator to recognize when the automated system is not appropriate, and to be able to take control and respond appropriately in the most demanding circumstances!
  • Defining where to draw this line, in design and
    in training, is not trivial.

44
Overreliance on automated functions (3/4)
  • US Airways flight 1549 landed successfully in the
    Hudson River, after both engines failed upon
    ingesting a flock of Canada geese (Jan. 15,
    2009).
  • The pilot and crew were praised for their cool
    handling of the situation.
  • What factors do you think worked for and against
    success in this emergency? How repeatable do you
    think the outcome would be?
  • Captain Chesley Sullenberger has acknowledged
    that luck, i.e., chance, was clearly a factor.

45
Overreliance on automated functions (4/4)
  • Some lessons and observations from the flight
    1549 example
  • Extensive practice and certification standards
    helped ensure the crew was capable.
  • The crew had both training (i.e., repetition of
    defined tasks) and knowledge (i.e., judgment
    needed for the many decisions to be made).
  • The entire crew coordinated and performed their
    tasks well.
  • These were possible because of risk management
    planning in the design of the aircraft and the
    education and training of the crew (i.e., there
    were provisions and plans for the eventualities
    that did occur).

46
Overreliance on other steps in process
  • Assumed other steps and/or personnel would catch
    any errors
  • Assumed process is correct
  • This is a generalization of the "Overreliance on automated functions" category.
  • This sort of problem is insidious, but you've probably seen examples of it yourself, where several different sets of eyes overlook the same fact.
  • Example: typos missed by multiple reviewers

47
Overreliance on other steps in process (2/2)
  • One factor here is "group think": the tendency of people with similar backgrounds to perceive situations similarly.
  • In statistical terms, these reviewers are not independent; there is correlation between their outcomes (see the worked sketch after this list).
  • Another factor is "threshold shift." One person's standard for what is an acceptable level of performance may be influenced by the environment he or she works in.
  • Example: does your standard for a messy desk change as papers pile up?
  • Also see unconcern, pressure and stress, etc.
  • Countermeasures include
  • When using redundant checks, try to have them
    performed by people with different perspectives,
    backgrounds, and motivations.
  • It may be useful to bring in an outsider for a
    fresh perspective.
  • Periodic review/redesign, or independent
    assessment, can flush out unconscious/undocumented
    assumptions or errors in a process or procedure.
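To make the independence point concrete, here is a small worked sketch with illustrative numbers only: two reviewers who each miss 10% of errors jointly miss only 1% if they are truly independent, but considerably more if they share blind spots.

```python
# Illustrative numbers only: joint miss rate for two reviewers.

def joint_miss_independent(p_miss_a, p_miss_b):
    """Both reviewers miss an error, assuming independent reviews."""
    return p_miss_a * p_miss_b

def joint_miss_correlated(p_miss_a, p_miss_b, overlap):
    """Crude model: a fraction `overlap` of A's misses are shared blind
    spots that B misses too; the rest behave independently."""
    return p_miss_a * (overlap + (1 - overlap) * p_miss_b)

if __name__ == "__main__":
    print(joint_miss_independent(0.10, 0.10))      # 0.01  (1%)
    print(joint_miss_correlated(0.10, 0.10, 0.5))  # 0.055 (5.5%)
```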

48
Obsolescent or incompatible technology
  • Old process or tools no longer supported or
    maintained
  • Only a few old hands are able to make the process
    work correctly
  • Defect rates increase as time goes on and needed
    support is not performed
  • The cost can be insidious because the change is
    gradual and may go unnoticed for a long time.
  • Best strategy may be to anticipate the
    obsolescence
  • Buy a lifetime supply
  • Buy the technology
  • Identify alternatives and be prepared to switch
    over

49
Pressure and stress
  • Rush job
  • High visibility
  • Uncomfortable or unpleasant work environment
  • Time pressure is a common contributor to error
    rates.
  • Rushing affects performance in two ways
  • It encourages us to omit steps intentionally, as
    a form of risk rebalancing. In the extreme case,
    there may simply not be enough time allotted to
    perform all the needed steps.
  • It increases the chance we will omit steps
    unintentionally.

"Haste makes waste." "Act in haste, repent at leisure." "More haste, less speed."
50
Pressure and stress (2/3)
  • Pressure and stress can also originate from
    outside.
  • Personal problems, issues, and concerns can
    certainly create distractions (another category,
    see above).
  • Combining multiple stresses or distractions has a
    compounding, worse-than-the-sum-of-the-parts
    effect.
  • In extreme cases people tend to be overwhelmed by
    competing priorities and may simply freeze or
    perform extremely poorly.
  • The only effective solution to this is good task
    management
  • Eliminate competing messages and expectations
  • Break the job into simple, well-defined standard
    tasks
  • Clear the desk and focus on one thing until it's complete.
  • Managers need to filter out conflicting messages. It also helps to view the situation from the operator's perspective.

51
Pressure and stress (3/3)
  • The effect of the work environment on operator
    effectiveness is often overlooked or
    misunderstood.
  • Noise and other distractions can break focus; efficiency and quality predictably suffer.
  • The same is true of a cultural environment where it's considered normal to interrupt.
  • Example: Answering cell phones, pagers, or email while in a meeting
  • Also see distraction.

52
Connections among the categories
53
Another case study: Metrolink 111
  • Metrolink passenger train 111 collided with a Union Pacific freight train on a 45-degree curve at 4:23 PM on Sept. 12, 2008, in Chatsworth, CA. The freight train had the right of way.
  • What was the cause?

54
Metrolink 111 (2/3)
  • There were multiple causes reported. Among them:
  • The Metrolink engineer was sending text messages at the time.
  • The curve, and the landscape to the inside of it, blocked the engineer's view down the track.
  • The Metrolink engineer was working a long shift,
    on a tedious and stressful job.
  • The signal facing the Metrolink engineer was not
    working. (The evidence went back and forth on
    this point, and there is disagreement as to the
    facts.)
  • The signal facing the Metrolink engineer was hard
    to see because of the angle of the sunlight.
  • There was no interlock that would have positively
    prevented the Metrolink train from proceeding
    from its parallel track onto the shared track.
    Such interlocks have long been common in some
    rail systems, but in this system they were not
    used. (This led to much public second-guessing
    and recrimination.)

55
Metrolink 111 (3/3)
  • In other words, there was not a single cause; there were multiple factors that contributed to the outcome.
  • We can see several of our categories in the causes listed above: distraction, boredom, fatigue, stress, inadequate information, overreliance on automated functions or other steps in process, equipment or design flaws, etc.
  • Will addressing only one of these causes solve
    the problem?

56
How does all this fit into the CAPA process?
  • These categories by themselves are not the root
    causes of errors.
  • Theyre simply a checklist to help organize our
    thinking.
  • To identify the root causes, we need to dig
    deeper.
  • For example, use the five WHYs approach.
  • We can use these categories as the first WHYs.
  • Corrective or preventive actions then need to
    follow from the root causes.
  • Example: "I committed errors because I was rushed."
  • Why was I rushed?
  • Did I not allow myself enough time to do the job
    comfortably?
  • Did I not understand the difficulty of the job?
  • Did I not have the proper resources lined up
    (tools, information, skills, etc.)?
  • Depending on what I find, my actions may include changes to planning, scheduling, and workload balancing. (A minimal sketch of recording such a chain of WHYs follows.)
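A minimal sketch (hypothetical, not part of any formal CAPA tool) of recording a chain of WHYs so the corrective action can be traced back from the root cause; the example chain mirrors the "rushed" example above.

```python
# Minimal sketch (hypothetical): record a chain of WHYs from symptom to
# working root cause, using the talk's "rushed" example.

def why_chain(symptom, answers):
    """Pair each 'Why?' with its answer; the last answer is the working root cause."""
    chain = [("Symptom", symptom)]
    for i, answer in enumerate(answers, start=1):
        chain.append((f"Why #{i}", answer))
    return chain

if __name__ == "__main__":
    chain = why_chain(
        "I committed errors because I was rushed.",
        [
            "Why was I rushed? The job took longer than planned.",
            "Why? I did not understand the difficulty of the job.",
            "Why? The needed tools and information were not lined up in advance.",
        ],
    )
    for label, text in chain:
        print(f"{label}: {text}")
```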

57
Summary
  • "Human error" by itself is not a useful explanation for causes of defects.
  • The categories we've listed can serve as a checklist for working down to root causes.
  • Remember, there are often multiple causes!
  • Solutions should be tailored to the real causes.
  • The conventional solutions (poka yoke and
    training) for human error may not be
    appropriate or effective. If you rely on them,
    you may be fooling yourself.