Title: CSCE 580 Artificial Intelligence Ch.5 [P]: Propositions and Inference Sections 5.5-5.7: Complete Knowledge Assumption, Abduction, and Causal Models
1. CSCE 580 Artificial Intelligence, Ch. 5 [P]: Propositions and Inference, Sections 5.5-5.7: Complete Knowledge Assumption, Abduction, and Causal Models
- Fall 2009
- Marco Valtorta
- mgv_at_cse.sc.edu
2. Acknowledgment
- The slides are based on AIMA and other sources, including other fine textbooks.
- David Poole, Alan Mackworth, and Randy Goebel. Computational Intelligence: A Logical Approach. Oxford, 1998. A second edition (by Poole and Mackworth) is under development. Dr. Poole allowed us to use a draft of it in this course.
- Ivan Bratko. Prolog Programming for Artificial Intelligence, Third Edition. Addison-Wesley, 2001. The fourth edition is under development.
- George F. Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Sixth Edition. Addison-Wesley, 2009.
3. Example of Clark's Completion
4. Negation as Failure
5. Non-monotonic Reasoning
6. Example of Non-monotonic Reasoning
7. Bottom-up Negation as Failure Inference Procedure
8. Top-down Negation as Failure Inference Procedure
9. Top-down Negation as Failure Inference Procedure (updated on 2009-10-29)
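The top-down negation-as-failure procedure named above can be illustrated by a minimal propositional sketch. This is not the textbook's pseudocode: the rule representation (a dict from atoms to lists of bodies) and the atom names are assumptions made for the example, and an acyclic knowledge base is assumed so the recursion terminates.

```python
# Minimal sketch of a top-down proof procedure with negation as failure
# for propositional rules. Hypothetical representation: `rules` maps each
# atom to a list of bodies; a body is a list of literals, and a literal
# is either an atom or ("not", atom) for a negation-as-failure literal.
# Assumes an acyclic knowledge base, so the recursion terminates.

def prove(goal, rules):
    """Succeed iff some rule body for `goal` can be proved."""
    return any(all(prove_literal(lit, rules) for lit in body)
               for body in rules.get(goal, []))

def prove_literal(lit, rules):
    if isinstance(lit, tuple) and lit[0] == "not":
        # Negation as failure: `not a` succeeds when proving `a` fails.
        return not prove(lit[1], rules)
    return prove(lit, rules)

# Example KB:  a <- not b.   b <- c.   (there is no rule for c)
rules = {"a": [[("not", "b")]], "b": [["c"]]}
print(prove("a", rules))  # True: c fails, so b fails, so `not b` succeeds
```

Note the non-monotonicity: adding the fact `c` to the knowledge base would make `b` provable and retract the conclusion `a`.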
10. Abduction
- Abduction is a form of reasoning where assumptions are made to explain observations.
- For example, if an agent were to observe that some light was not working, it could hypothesize what is happening in the world to explain why the light was not working.
- An intelligent tutoring system could try to explain why a student is giving some answer in terms of what the student understands and does not understand.
- The term "abduction" was coined by Peirce (1839-1914) to differentiate this type of reasoning from deduction, which is determining what logically follows from a set of axioms, and induction, which is inferring general relationships from examples.
11. Abduction with Horn Clauses and Assumables
12. Abduction Example
13. Another Abduction Example: a Causal Model
A causal network
14. Consistency-based vs. Abductive Diagnosis
- Determining what is going on inside a system based on observations about its behavior is the problem of diagnosis or recognition.
- In abductive diagnosis, the agent hypothesizes diseases and malfunctions, as well as that some parts are working normally, in order to explain the observed symptoms.
- This differs from consistency-based diagnosis (page 187) in that the designer models faulty behavior as well as normal behavior, and the observations are explained rather than added to the knowledge base.
- Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, as the knowledge base has to be able to actually prove the observations.
- It also allows an agent to diagnose systems where there is no normal behavior. For example, in an intelligent tutoring system, by observing what a student does, the tutoring system can hypothesize what the student understands and does not understand, which can then guide the actions of the tutoring system.
15. Example of Abductive Diagnosis
- In abductive diagnosis, we need to axiomatize what follows from faults as well as from normality assumptions. For each atom that could be observed, we axiomatize how it could be produced.
This could be seen in design terms as a way to make sure the light is on: put both switches up or both switches down, and ensure the switches all work. It could also be seen as a way to determine what is going on if the agent observed that l1 is lit: one of these two scenarios must hold.
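The two-scenario reading above can be reproduced with a small brute-force sketch. The circuit below is a simplified, hypothetical version of the light example (the atom names `lit_l1`, `up_s1`, `ok_s1`, etc. are assumptions, not the textbook's exact axiomatization): l1 is lit if both switches are in the same position and both switches work.

```python
from itertools import combinations

# Hypothetical simplified axiomatization: l1 is lit if both switches are
# up (or both down) and both switches are ok. Assumables are the switch
# positions and the ok_* normality atoms.
rules = {
    "lit_l1": [["up_s1", "up_s2", "ok_s1", "ok_s2"],
               ["down_s1", "down_s2", "ok_s1", "ok_s2"]],
}
assumables = {"up_s1", "down_s1", "up_s2", "down_s2", "ok_s1", "ok_s2"}
# A switch cannot be both up and down, so these pairs are inconsistent:
conflicts = [{"up_s1", "down_s1"}, {"up_s2", "down_s2"}]

def proves(hypothesis, goal):
    """Brute-force check: do the hypothesized assumables imply goal?"""
    if goal in hypothesis:
        return True
    return any(all(proves(hypothesis, a) for a in body)
               for body in rules.get(goal, []))

def explanations(goal):
    """Minimal consistent subsets of assumables that prove goal."""
    found = []
    for k in range(len(assumables) + 1):  # smallest sets first
        for subset in combinations(sorted(assumables), k):
            s = set(subset)
            if any(c <= s for c in conflicts):
                continue  # skip inconsistent hypotheses
            if proves(s, goal) and not any(f <= s for f in found):
                found.append(s)
    return found

# Observing that l1 is lit yields exactly the two scenarios on the slide.
for e in explanations("lit_l1"):
    print(sorted(e))
```

Enumerating subsets smallest-first makes the superset check enough to keep only minimal explanations; it is exponential in the number of assumables and only meant to make the definition concrete.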
16. Inference Procedures for Abduction
- The bottom-up and top-down implementations for assumption-based reasoning with Horn clauses (page 190) can both be used for abduction.
- The bottom-up implementation of Figure 5.9 (page 190) computes, in C, the minimal explanations for each atom. Instead of returning A such that ⟨false, A⟩ is in C, return the set of assumptions for each atom. The pruning of supersets of assumptions discussed in the text can also be used.
- The top-down implementation can be used to find the explanations of any g by generating the conflicts and, using the same code and knowledge base, proving g instead of false. The minimal explanations of g are the minimal sets of assumables collected to prove g that are not subsets of conflicts.
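The bottom-up idea can be sketched as follows: for every atom, accumulate the sets of assumables under which it is derivable, pruning supersets as the text suggests. This follows the spirit of the Figure 5.9 modification but is not its exact pseudocode; the data structures and names are assumptions.

```python
# Bottom-up computation of minimal explanation sets per atom, in the
# spirit of the modified Figure 5.9 procedure. `rules` maps each head
# atom to a list of bodies (lists of atoms); assumables explain themselves.

def minimal(sets):
    """Discard any set that strictly contains another set."""
    return [s for s in sets if not any(o < s for o in sets)]

def bottom_up(rules, assumables):
    c = {a: [frozenset([a])] for a in assumables}
    changed = True
    while changed:  # iterate to a fixed point (finite, so it terminates)
        changed = False
        for head, bodies in rules.items():
            for body in bodies:
                if not all(b in c for b in body):
                    continue  # some body atom has no explanation yet
                # Combine one explanation per body atom via set union.
                combos = [frozenset()]
                for b in body:
                    combos = [e | x for e in combos for x in c[b]]
                old = c.get(head, [])
                new = minimal(list(set(old + combos)))
                if set(new) != set(old):
                    c[head] = new
                    changed = True
    return c

# Example KB:  g <- a & b.   g <- d.   Assumables: a, b, d.
c = bottom_up({"g": [["a", "b"], ["d"]]}, {"a", "b", "d"})
print(sorted(map(sorted, c["g"])))  # [['a', 'b'], ['d']]
```

Keeping only minimal sets at each step is the superset pruning mentioned above; it keeps the per-atom collections small and makes the fixed-point iteration converge quickly on definite-clause knowledge bases.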
17. Inference Procedures for Abduction, ctd.
Bottom up
Top down
18. Causal Models
- There are many decisions the designer of an agent needs to make when designing a knowledge base for a domain. For example, consider two propositions a and b, both of which are true. There are many choices of how to write this.
- A designer could have both a and b as atomic clauses, treating both as primitive.
- A designer could have a as primitive and b as derived, stating a as an atomic clause and giving the rule b ← a.
- Alternatively, the designer could specify the atomic clause b and the rule a ← b, treating b as primitive and a as derived.
- These representations are logically equivalent; they cannot be distinguished logically. However, they have different effects when the knowledge base is changed. Suppose a were no longer true for some reason. In the first and third representations, b would still be true, and in the second representation b would no longer be true.
- A causal model is a representation of a domain that predicts the results of interventions. An intervention is an action that forces a variable to have a particular value.
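The difference between these logically equivalent representations can be made concrete with a small sketch (the `holds` helper and the single-premise rule format are hypothetical, chosen just to illustrate the point). Intervening on a means severing whatever makes a true; what happens to b then depends on which representation was chosen.

```python
# A KB is a pair (facts, rules), where rules maps each derived atom to
# its single premise. This representation is an assumption for the demo.

def holds(atom, facts, rules):
    if atom in facts:
        return True
    premise = rules.get(atom)
    return premise is not None and holds(premise, facts, rules)

# Representation 2: a primitive, b derived (b <- a).
facts, rules = {"a"}, {"b": "a"}
print(holds("b", facts, rules))   # True: b follows from a

# Intervene to make a false: remove it from the facts.
facts = facts - {"a"}
print(holds("b", facts, rules))   # False: b loses its support

# Representation 3: b primitive, a derived (a <- b).
facts, rules = {"b"}, {"a": "b"}
# Intervene on a: sever the rule that derives it; b is unaffected.
rules = {}
print(holds("b", facts, rules))   # True: b is still a fact
```

A causal model writes the rules in the direction cause → effect (here, b ← a if a causes b), so that exactly this kind of intervention is predicted correctly.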
19. Causal vs. Evidential Models
- In order to predict the effect of interventions, a causal model represents how the cause implies its effect. When the cause is changed, its effect should be changed. An evidential model represents a domain in the other direction, from effect to cause.
20. Another Causal Model Example
21. Parts of a Causal Model
22. Using a Causal Model