1
ISE 195 Introduction to Industrial Engineering
2
Lecture 4: Decision Analysis
3
Decision Analysis
  • What is the hardest decision you have ever had to
    make?
  • Since we all have to make decisions, we are all
    decision makers of a sort and can benefit from the
    study of decision making.
  • Have you ever had to make a decision and then later
    had to explain or defend that decision?

4
Decision Domains
  • Personal domain
  • Where to live, which college to attend, which car to
    buy, etc.
  • Business domain
  • Whether to introduce the new product, bid on a
    contract, hire
  • Government domain
  • How to allocate money, where to get involved

5
Decision Roles
  • Those who study decisions will be referred to as
    decision analysts, while those who make the decisions
    will be referred to as the decision makers.
  • Why do you think we would want to separate the roles
    of the decision analyst and the decision maker?
  • Proper decision making requires collaboration between
    the decision makers and the decision analysts in
    order to find the best solution based on insight
    rather than position

6
Why Decisions Are Hard
  • Decisions are hard for a number of structural,
    emotional, and organizational reasons
  • Structural: uncertainty, trade-offs, complexity
  • Emotional: anxiety, multiple objectives, competition
  • Organizational: lack of consensus, differing
    perspectives

7
Why Decisions Are Hard
  • Do you think your personal decisions are going to
    be easier or harder than the decisions you might
    be faced with in business (engineering)?
  • What might be some of the reasons, both obvious
    and less obvious, for this difference in level of
    complexity between decisions from the personal
    domain and decisions from the business or
    government domain?

8
Why Decisions Are Hard
  • There are other reasons decisions are hard
  • Consequences
  • Uncertainty
  • Ambiguity

9
Why Decisions Are Hard
[Figure: CAU Model (Skinner) -- decisions rated LOW,
 MEDIUM, or HIGH along the Consequences, Ambiguity, and
 Uncertainty axes]
10
Why Decisions Are Hard
[Figure: CAU Model (Skinner), continued -- positioning a
 decision along the Consequences, Ambiguity, and
 Uncertainty axes]
11
What Makes A Good Decision
  • What is a good decision?
  • What is a good outcome?
  • Does a good decision always lead to a good
    outcome?
  • Name some examples. . .
  • A good decision emerges as the result of a valid
    decision-making process (of which there are a few, as
    we will see)

12
  • "When you come to a fork in the road, take it."
  • -- Yogi Berra

13
History
  • Operational research / quantitative management was
    based on repetitive actions
  • It focused on optimizing objectives and meeting
    constraints
  • It failed to focus on the needs of executive decision
    making
  • In particular, on their more complex, strategic
    problems
  • A technique was needed for logical guidance in
    complex, uncertain situations
  • DA combines systems analysis and statistical decision
    theory

14
History
  • Problems typical of DA application are:
  • Unique
  • Important
  • Contain uncertainty
  • Have long-run implications
  • Contain complex preferences
  • DA arose in the late 1960s and early 1970s and
    balances the following OR considerations:
  • Mathematical modeling
  • Computer implementation
  • Quantitative analysis and decision making

15
History
  • DA also incorporated the following aspects of
    human decision making
  • Management experience
  • Management judgment
  • Management preferences
  • The art of DA involves capturing the above from
    the managers and decision makers
  • The techniques used to capture the above are
    sometimes controversial within the operational
    research / systems engineering field

16
Terminology
  • Decision
  • A conscious, irrevocable allocation of resources with
    the purpose of achieving a desired objective
  • Uncertainty
  • Something that is unknown or not perfectly known
  • Outcomes
  • Depend on the alternative chosen and the
    uncertainties impacting it
  • Value
  • Something the decision maker wants and can trade off

17
Terminology
  • Objective
  • Something specific the decision maker wants to
    achieve
  • Decision Maker
  • Anyone with the authority to allocate the necessary
    resources for the decision being made
  • Subjective Probability
  • The classical approach to probability is the
    frequentist approach
  • The subjective approach, the Bayesian, allows that
    each of us can provide valid probabilities

18
  • Probabilistic Methods
  • (Pay attention in ISE 301!)
  • These assume the possible outcomes (states of nature)
    can be assigned probabilities that represent their
    likelihood of occurrence.
  • Also referred to as methods for decision making
    "under risk"

19
Expected Monetary Value
  • Selects the alternative with the largest expected
    monetary value (EMV)
  • EMV(i) is the average payoff we would receive if we
    faced the same decision problem numerous times and
    always selected alternative i (see the sketch below)
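
As a minimal sketch in Python (the payoff table and probabilities here are hypothetical, not from the slides), the EMV rule reduces to a probability-weighted sum per alternative:

```python
# Pick the alternative with the largest expected monetary value (EMV).

def emv(payoffs, probs):
    """Expected monetary value of one alternative."""
    return sum(p * v for p, v in zip(probs, payoffs))

# Rows = alternatives, columns = states of nature (strong vs. weak market).
payoff_table = {
    "introduce product": [120_000, -30_000],
    "do nothing":        [0, 0],
}
state_probs = [0.4, 0.6]   # assumed prior probabilities of the two states

best = max(payoff_table, key=lambda a: emv(payoff_table[a], state_probs))
for a, pay in payoff_table.items():
    print(f"EMV({a}) = {emv(pay, state_probs):,.0f}")
print("Choose:", best)    # "introduce product" (EMV = 30,000 vs. 0)
```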

20
Decision Trees
  • A graphical means for displaying a decision problem
    that shows, in chronological order:
  • the alternatives available to the decision maker,
  • the futures that could be experienced, and
  • the consequences of choosing between alternatives
  • Trees consist of:
  • Branches: lines representing possible decision paths,
  • Decision forks: nodes which represent choices to be
    made by the decision maker, and
  • Chance forks: nodes which represent possible futures
    that are modeled as selected by nature

21
Decision Trees (continued)
  • To evaluate a tree, one must:
  • assign values of an appropriate evaluation measure to
    each branch (often summarized at the end of the
    branch), and
  • choose branches appropriately at each decision node,
    working from right to left
  • When making decisions under risk, this entails:
  • assigning probabilities to each branch emanating from
    a chance fork,
  • computing expected values at each chance node, and
  • finding the branch that maximizes the expected value
    from among all branches emanating from a decision
    fork (a rollback sketch follows this list).
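
A minimal rollback sketch, assuming a small nested-tuple tree encoding (the encoding and function name are illustrative, not from the slides); as written it evaluates the electronics-firm tree introduced on the next slides:

```python
# Rollback (right-to-left) evaluation of a decision tree.
# A node is either a terminal payoff (a number), or
#   ("decision", [(label, subtree), ...])        -> take the maximum
#   ("chance",   [(label, prob, subtree), ...])  -> take the expected value

def rollback(node):
    if isinstance(node, (int, float)):   # leaf: terminal payoff
        return node
    kind, branches = node
    if kind == "chance":
        return sum(p * rollback(sub) for _, p, sub in branches)
    return max(rollback(sub) for _, sub in branches)   # decision fork

tree = ("decision", [
    ("double solder", -50),
    ("do not", ("chance", [("defective", 0.05, -800),
                           ("good",      0.95,    0)])),
])
print(rollback(tree))   # -40.0 -> "do not double-solder" maximizes value
```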

22
Example Electronics Firm
  • An electronics firm makes components that are sold
    and shipped to an automobile manufacturer.
  • Five percent of all components produced are defective
    due to poor solder connections.
  • The firm can't tell if a component is defective until
    after it is installed on a car.
  • The auto maker will charge the electronics firm $800
    per defective component to cover the cost of repair.
  • A proposal: double-solder each component before it is
    shipped to the automobile maker.
  • It will cost $50 per component to double-solder but
    is sure to eliminate this cause of defective
    components
  • i.e., no double-soldered components will be
    defective.

23
Example Electronics Firm (Continued)
  • Is the proposal worthwhile?
  • Assume the electronics firm seeks to minimize its
    expected cost and consider using our structure
  • Actions: 1 -- double-solder before shipping; 2 -- do
    not double-solder
  • Outcomes: 1 -- component is defective; 2 -- component
    is good
  • Prior probabilities: P1 = 0.05, P2 = 0.95
  • Note that these probabilities apply only if the
    component is not double-soldered!
  • Value function: E11 = -50, E12 = -50, E21 = -800,
    E22 = 0

(Values are negative costs here.)
24
Electronics Firm Decision Tree
  • Expected values: Double-solder: E(V1) = -50;
    Do not: E(V2) = -800(0.05) + 0(0.95) = -40
  • No, it would not be worthwhile to double-solder every
    component since the maximum expected value (minimum
    expected cost) is obtained for action 2 (do not
    double-solder).
  • Decision Tree

[Decision tree:
  Double Solder -> -50 (pruned, X)
  Do Not -> chance node (EV = -40):
    Defective (0.05) -> -800
    Good (0.95) -> 0
  Root value: -40 (choose Do Not)]
25
Example Doing Better?
  • To this point, we have assumed that the firm is
    unable to tell if a component is defective until
    after it is installed on a car.
  • Obviously, if the firm were to know in advance
    that a component was defective, it would
    double-solder that component.
  • A reasonable strategy, then, might be to attempt
    to determine whether or not a component is
    defective before the decision to double-solder or
    not is made.
  • How much should the firm be willing to pay for this
    sort of information?

26
Example Paying for More Information
  • Without any advance info about components, the firm's
    best strategy is to not double-solder any components
  • This has an expected cost of $40 per component.
  • With advance info, however, the firm should
  • double-solder all defective components at a cost of
    $50 each, and
  • not double-solder the rest (the good components).
  • Since 5% of all components are defective, the
    expected cost of this strategy would be
  • $50(0.05) + $0(0.95) = $2.50 per component.
  • Thus, the most the firm should be willing to pay for
    this advance info is the difference between these, or
  • $40 - $2.50 = $37.50 per component (this arithmetic
    is sketched below).

27
Getting Advance Information
  • Advance information can often be obtained by
    performing some sort of test before making a
    decision
  • If so, then the initial choice we must make is
    whether or not to do the testing
  • The ideal situation would be one in which the
    testing enables us to correctly predict the
    future
  • In our example, this would mean that the test is
    100% accurate in classifying components as good or
    defective
  • e.g., if the test classifies a component as
    defective, then that component is indeed
    defective.

28
Decision Tree w/ Perfect Testing
[Decision tree with "perfect" testing:
  Do Not Test (EV = -40):
    Double Solder -> -50 (pruned, X)
    No Action -> chance: Defective (0.05) -> -800;
      Good (0.95) -> 0  [EV = -40]
  Perform Test / Get Advance Info (EV = -2.5, preferred):
    Classify as Defective (0.05):
      Double Solder -> -50 (chosen)
      No Action (pruned, X) -> chance: Defective (1.00)
        -> -800; Good (0.00) -> 0  [EV = -800]
    Classify as Good (0.95):
      Double Solder -> -50 (pruned, X)
      No Action (chosen) -> chance: Defective (0.00)
        -> -800; Good (1.00) -> 0  [EV = 0]]
  • We assume here that testing is "perfect", so that
    all components will be correctly classified and,
    thus, 95% will be classified as good while 5% will be
    classified as defective

29
Bayesian Decision Making
  • A method for accounting for the effects of advance
    testing in decision making
  • Based on Bayes' Theorem, which provides us a way to
    revise our initial "prior" probabilities for the
    occurrence of each possible future given the results
    of testing

30
Example Revisited
  • Suppose now that the firm can choose to test each
    component, at a cost of $20 apiece, to see if the
    component might be defective before the decision to
    double-solder or not is made.
  • The test is not perfect, but it has a track record
  • Based on the results of testing known good and known
    defective components, it is determined that
  • the test will incorrectly classify 15% of all
    defective components as good, and
  • incorrectly classify 10% of all good components as
    defective.
  • Is it worthwhile for the firm to perform this test?

31
Decision Tree w/ Testing
[Decision tree with imperfect testing (test costs $20);
 the probabilities after testing are still unknown (???):
  Do Not Test (EV = -40):
    Double Solder -> -50 (pruned, X)
    No Action -> chance: Defective (0.05) -> -800;
      Good (0.95) -> 0
  Perform Test (-20):
    Classify as Defective (???):
      Double Solder -> -50
      No Action -> chance: Defective (???) -> -800;
        Good (???) -> 0
    Classify as Good (???):
      Double Solder -> -50
      No Action -> chance: Defective (???) -> -800;
        Good (???) -> 0]
  • To evaluate this tree and decide what to do, we
    need to fill in appropriate probabilities at all
    chance forks.

32
Example Description of Test
  • Note that there are two possible results when a
    component is subjected to the proposed test
  • Result 1: The component is classified as defective
  • Result 2: The component is classified as good
  • The particular result to be obtained will depend
    on both
  • the state of nature (the condition of the
    component being tested), and,
  • since the experiment is not perfect, also on
    chance.
  • What we know about the accuracy of the test is
    captured by the conditional probabilities of
    obtaining a particular result given a particular
    state of nature . . .

33
Description of Test Continued
  • That is, we know that our experiment will incorrectly
    classify 15% of all defective components as good and
    10% of all good components as defective.
  • We denote this using the notation
  • P{component classified as defective | it is defective}
    = P{Result 1 | State of nature 1} ≡ Q11 = 0.85
  • P{component classified as good | it is defective}
    = P{Result 2 | State of nature 1} ≡ Q21 = 0.15
  • P{component classified as defective | it is good}
    = P{Result 1 | State of nature 2} ≡ Q12 = 0.10
  • P{component classified as good | it is good}
    = P{Result 2 | State of nature 2} ≡ Q22 = 0.90
  • Unfortunately, these are not the probabilities we
    need to complete the decision tree!

34
Bayes' Theorem
  • To compute the conditional probabilities of
    encountering each possible future given the results
    of the test, we combine previous results to obtain
  • Bayes' Theorem: if B1, B2, . . ., Bn are n mutually
    exclusive and exhaustive events defined over a sample
    space and E is any other event with P(E) > 0, then
    (Bayes' rule)
    P(Bj | E) = P(E | Bj) P(Bj) /
                [P(E | B1) P(B1) + . . . + P(E | Bn) P(Bn)]
    where the denominator equals P(E) by the law of
    total probability
  • Bottom line: the posterior probability of
    encountering state of nature j (j = 1, 2, . . ., m)
    given that the test produces result k (k = 1, 2,
    . . ., r) can be found via
    Pjk = P(state j | result k) = Qkj Pj / Qk,
    where Qk = Qk1 P1 + . . . + Qkm Pm
35
Example Posterior Probabilities
  • For the electronics firm, we can then compute
  • P11 = P{component is defective | classified as
    defective}
  •     = P{State of nature 1 | Result 1}
  •     = P{Result 1 | State of nature 1} P{State of
        nature 1} / P{Result 1}
  •     = Q11 P1 / Q1
  •     = (0.85)(0.05)/(0.1375) ≈ 0.3091
  • Likewise
  • P21 = P{component is good | classified as defective}
  •     = P{State of nature 2 | Result 1}
  •     = Q12 P2 / Q1
  •     = (0.10)(0.95)/(0.1375) ≈ 0.6909

36
Example Posterior Probabilities (Continued)
  • Similarly
  • P12 = P{component is defective | classified as good}
  •     = Q21 P1 / Q2
  •     = (0.15)(0.05)/(0.8625) ≈ 0.0087
  • and
  • P22 = P{component is good | classified as good}
  •     = Q22 P2 / Q2
  •     = (0.90)(0.95)/(0.8625) ≈ 0.9913
  • Terminology: the quantity Qkj Pj formed in the
    preceding computations is often called the joint
    probability of encountering state of nature j and
    obtaining result k from the experiment (since it is
    the probability of both events occurring).
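
A short sketch of these posterior computations (the dictionary encoding is illustrative, not from the slides); it reproduces all four posteriors along with Q1 = 0.1375 and Q2 = 0.8625:

```python
# Posterior probabilities for the electronics firm via Bayes' rule.
P = {"defective": 0.05, "good": 0.95}     # prior probabilities
Q = {                                     # P(result | state)
    ("classified defective", "defective"): 0.85,
    ("classified defective", "good"):      0.10,
    ("classified good",      "defective"): 0.15,
    ("classified good",      "good"):      0.90,
}

for result in ("classified defective", "classified good"):
    q = sum(Q[(result, s)] * P[s] for s in P)   # law of total probability
    for state in P:
        posterior = Q[(result, state)] * P[state] / q
        print(f"P({state} | {result}) = {posterior:.4f}  (P({result}) = {q:.4f})")
```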

37
Example Decision Tree (Revisited)
[Decision tree with imperfect testing, evaluated:
  Do Not Test (EV = -40, pruned, X):
    Double Solder -> -50 (pruned, X)
    No Action -> chance: Defective (0.05) -> -800;
      Good (0.95) -> 0  [EV = -40]
  Perform Test (EV = -12.88 - 20 = -32.88, preferred):
    Classify as Defective (0.1375):
      Double Solder -> -50 (chosen)
      No Action (pruned, X) -> chance: Defective (0.3091)
        -> -800; Good (0.6909) -> 0  [EV = -247.27]
    Classify as Good (0.8625):
      Double Solder -> -50 (pruned, X)
      No Action (chosen) -> chance: Defective (0.0087)
        -> -800; Good (0.9913) -> 0  [EV = -6.96]
  Root value: -32.88, so performing the test is
  worthwhile.]
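
The same rollback as a numeric sketch (variable names are illustrative); it reproduces the -6.96 and -32.88 values on the tree:

```python
# Rolling back the tree with imperfect testing.
P1, P2 = 0.05, 0.95      # prior: defective, good
Q11, Q21 = 0.85, 0.15    # test result given a defective component
Q12, Q22 = 0.10, 0.90    # test result given a good component

Q1 = Q11 * P1 + Q12 * P2   # P(classified defective) = 0.1375
Q2 = Q21 * P1 + Q22 * P2   # P(classified good)      = 0.8625

post_def_bad  = Q11 * P1 / Q1   # 0.3091
post_def_good = Q21 * P1 / Q2   # 0.0087

# At each decision fork, take the larger value (values are negative costs).
v_bad  = max(-50, -800 * post_def_bad)    # -50: double-solder
v_good = max(-50, -800 * post_def_good)   # -6.96: no action

ev_test = -20 + Q1 * v_bad + Q2 * v_good
print(round(ev_test, 2))   # ≈ -32.88 > -40, so testing is worthwhile
```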
38
Discrete Probability Assessment
  • Three methods for assessing a discrete probability:
  • Direct question
  • Assumes familiarity with probability
  • Usually means the decision maker is used to providing
    probabilities based on similar experiences
  • Betting method
  • Most people bet in some fashion
  • Odds provide a perception of the likelihood of the
    outcome
  • Use the odds to derive the probability (see the
    sketch after this list)
  • Reference lottery
  • Find the probability yielding the indifference point
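
A minimal sketch of the betting method's odds-to-probability conversion (the 3:1 odds are a hypothetical elicitation):

```python
# Convert betting odds into a probability.
def odds_to_prob(a, b):
    """Odds of a:b in favor of an outcome imply P = a / (a + b)."""
    return a / (a + b)

# A decision maker willing to offer 3:1 odds implies P = 0.75.
print(odds_to_prob(3, 1))
```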

39
Experts and Assessments
  • Reliance on experts is important in complex problems
  • It is important to avoid bias in assessment and
    collection
  • Protocol for expert assessment:
  • Background
  • Identify and recruit experts
  • Motivate the experts
  • Structure and decompose the problem
  • Probability assessment training
  • Probability elicitation and verification
  • Aggregation of distributions

40
Theoretical Probability Models
  • Subjective probabilities may be difficult to get
  • An alternative is to use some theoretical
    distribution
  • Choosing that distribution is itself a subjective
    assessment (see the sketch after this list)
  • A variety of distributions apply in a variety of
    applications
  • Binomial
  • Normal
  • Exponential
  • Triangular
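
A minimal sketch, assuming a triangular distribution built from hypothetical low / most-likely / high estimates elicited from a decision maker:

```python
import random

# Hypothetical elicited estimates, e.g., for a cost in $1000s.
low, mode, high = 10, 25, 60

samples = [random.triangular(low, high, mode) for _ in range(100_000)]
print(sum(samples) / len(samples))   # mean ≈ (low + mode + high) / 3 ≈ 31.67
```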

41
Other Decision Factors: Risk
  • So far we have not worried about risk
  • Decision makers may actually have differing attitudes
    toward risk
  • We would like a model to map outcomes into measures
    that incorporate attitudes toward risk
  • In decision analysis this is accomplished using
    utility functions
  • The same outcomes, now measured in utilities, may
    lead to different alternative selections than an
    analysis not using utilities (see the sketch after
    this list)
  • The change is due to the incorporation of risk
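
One standard way to build such a model is an exponential utility function; a minimal sketch, assuming the form U(x) = 1 - exp(-x/R) with a hypothetical risk tolerance R (not something specified in the slides):

```python
import math

def utility(x, R=1000):
    """Exponential utility; larger risk tolerance R means less risk-averse."""
    return 1 - math.exp(-x / R)

# A sure $400 versus a 50/50 gamble on $0 or $1000 (EMV = $500):
sure = utility(400)
gamble = 0.5 * utility(0) + 0.5 * utility(1000)
print(sure > gamble)   # True: risk aversion flips the EMV-based choice
```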

42
Other Decision Factors: Multi-attribute Decisions
  • So far we have focused on a single attribute
  • Most decisions are multi-attribute in nature
  • Trade-off between weight and redundancy
  • Trade-off between reliability and maintenance
  • Some multi-attribute models assume independence:
  • Assess each attribute
  • Develop a weighting scheme for each attribute
  • Use the weighted sum of scores
  • This is called an additive model (see the sketch
    after this list)
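
A minimal sketch of the additive model (the weights, attributes, and scores are hypothetical, with each attribute pre-scaled to [0, 1]):

```python
# Additive (weighted-sum) multi-attribute scoring.
weights = {"weight": 0.4, "redundancy": 0.35, "cost": 0.25}

alternatives = {
    "design A": {"weight": 0.9, "redundancy": 0.3, "cost": 0.7},
    "design B": {"weight": 0.5, "redundancy": 0.8, "cost": 0.6},
}

def additive_score(scores):
    return sum(weights[a] * scores[a] for a in weights)

best = max(alternatives, key=lambda k: additive_score(alternatives[k]))
for name, s in alternatives.items():
    print(name, round(additive_score(s), 3))
print("Choose:", best)   # design A (0.64 vs. 0.63)
```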

43
Multiple Attributes
  • In reality, most multi-attribute models require some
    form of interaction
  • Attributes are not independent
  • We need to derive a utility surface
  • Techniques for determining the surface are extensions
    of the independent techniques
  • Complications come during elicitation, as the expert
    is asked to specifically consider dependencies

44
  • ISE 195 Overview of Decision Analysis
  • Questions?