CSCE 580 Artificial Intelligence: The Resolution Refutation Proof Technique for First-Order Logic

Transcript and Presenter's Notes



1
CSCE 580 Artificial Intelligence
The Resolution Refutation Proof Technique for First-Order Logic
  • Fall 2009
  • Marco Valtorta
  • mgv@cse.sc.edu

2
Acknowledgment
  • The slides are based on the textbook AIMA and
    other sources, including other fine textbooks and
    the accompanying slide sets
  • The other textbooks I considered are:
  • David Poole, Alan Mackworth, and Randy Goebel.
    Computational Intelligence: A Logical Approach.
    Oxford, 1998
  • A second edition (by Poole and Mackworth) is
    under development. Dr. Poole allowed us to use a
    draft of it in this course
  • Ivan Bratko. Prolog Programming for Artificial
    Intelligence, Third Edition. Addison-Wesley,
    2001
  • The fourth edition is under development
  • George F. Luger. Artificial Intelligence:
    Structures and Strategies for Complex Problem
    Solving, Sixth Edition. Addison-Wesley, 2009

3
Outline
  • Reducing first-order inference to propositional
    inference
  • Unification
  • Generalized Modus Ponens
  • Forward chaining
  • Backward chaining
  • Resolution

4
Universal instantiation (UI)
  • Every instantiation of a universally quantified
    sentence is entailed by it
  • ∀v α
  • Subst({v/g}, α)
  • for any variable v and ground term g
  • E.g., ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) yields
  • King(John) ∧ Greedy(John) ⇒ Evil(John)
  • King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
  • King(Father(John)) ∧ Greedy(Father(John)) ⇒
    Evil(Father(John))
  • ...

5
Existential instantiation (EI)
  • For any sentence α, variable v, and constant
    symbol k that does not appear elsewhere in the
    knowledge base
  • ∃v α
  • Subst({v/k}, α)
  • E.g., ∃x Crown(x) ∧ OnHead(x,John) yields
  • Crown(C1) ∧ OnHead(C1,John)
  • provided C1 is a new constant symbol, called a
    Skolem constant
  • Logical equivalence is not preserved, because
    skolemization adds new constants to formulas;
    however, the new KB is satisfiable iff the old
    one is satisfiable (s-equivalence)
  • In general, skolemization adds Skolem functions,
    as in
  • ∀x ∃y Is_Father(y,x), which skolemizes to ∀x
    Is_Father(Father(x),x)

6
Reduction to propositional inference
  • Suppose the KB contains just the following
  • ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
  • King(John)
  • Greedy(John)
  • Brother(Richard,John)
  • Instantiating the universal sentence in all
    possible ways, we have
  • King(John) ∧ Greedy(John) ⇒ Evil(John)
  • King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
  • King(John)
  • Greedy(John)
  • Brother(Richard,John)
  • The new KB is propositionalized: the proposition
    symbols are
  • King(John), Greedy(John), Evil(John),
    King(Richard), etc.

7
Reduction ctd.
  • Every FOL KB can be propositionalized so as to
    preserve entailment
  • (A ground sentence is entailed by new KB iff
    entailed by original KB)
  • Idea: propositionalize KB and query, apply
    resolution, return result
  • Problem: with function symbols, there are
    infinitely many ground terms,
  • e.g., Father(Father(Father(John)))

8
Reduction ctd.
  • Theorem (Herbrand, 1930): If a sentence α is
    entailed by an FOL KB, it is entailed by a finite
    subset of the propositionalized KB
  • Note: Herbrand showed that no new constants have
    to be introduced (so, in the example, the only
    constants needed are John and Richard), except
    for one in case the KB contains no constants, in
    which case one constant must be introduced
  • Idea: For n = 0 to ∞ do
  • create a propositional KB by instantiating
    with depth-n terms (a sketch of this term
    generation follows below)
  • see if α is entailed by this KB
  • Problem: works if α is entailed; may loop forever
    if α is not entailed
  • Theorem (Turing, 1936; Church, 1936): Entailment
    for FOL is semidecidable (algorithms exist that
    say yes to every entailed sentence, but no
    algorithm exists that also says no to every
    non-entailed sentence)
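
The depth-bounded instantiation idea can be illustrated with a small sketch
(my own code, assuming one unary function symbol Father and the constants John
and Richard from the earlier example):

# Sketch: enumerate the ground (Herbrand) terms up to a given nesting depth,
# which is what "instantiating with depth-n terms" ranges over.

def herbrand_terms(constants, unary_functions, depth):
    terms = set(constants)
    for _ in range(depth):
        terms |= {(f, t) for f in unary_functions for t in terms}
    return terms

print(sorted(herbrand_terms({'John', 'Richard'}, {'Father'}, 2), key=str))
# depth 2 yields John, Richard, Father(John), Father(Richard),
# Father(Father(John)), Father(Father(Richard))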

9
Problems with propositionalization
  • Propositionalization seems to generate lots of
    irrelevant sentences.
  • E.g., from
  • ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
  • King(John)
  • ∀y Greedy(y)
  • Brother(Richard,John)
  • it seems obvious that Evil(John), but
    propositionalization produces lots of facts such
    as Greedy(Richard) that are irrelevant
  • With p k-ary predicates and n constants, there
    are p·n^k instantiations (e.g., with p = 10
    binary predicates and n = 10 constants, already
    10·10² = 1,000)

10
Unification
  • We can get the inference immediately if we can
    find a substitution θ such that King(x) and
    Greedy(x) match King(John) and Greedy(y)
  • θ = {x/John, y/John} works
  • Unify(α,β) = θ if αθ = βθ
  • p                 q                    θ
  • Knows(John,x)     Knows(John,Jane)
  • Knows(John,x)     Knows(y,OJ)
  • Knows(John,x)     Knows(y,Mother(y))
  • Knows(John,x)     Knows(x,OJ)
  • Standardizing apart eliminates overlap of
    variables, e.g., Knows(z17,OJ)

11
Unification (ctd.)
  • Knows(John,x) and Knows(John,Jane) unify with θ = {x/Jane}

12
Unification (ctd.)
  • Knows(John,x) and Knows(y,OJ) unify with θ = {x/OJ, y/John}

13
Unification (ctd.)
  • Knows(John,x) and Knows(y,Mother(y)) unify with
    θ = {y/John, x/Mother(John)}

14
Unification (ctd.)
  • Knows(John,x) and Knows(x,OJ) fail to unify (x cannot be
    bound to both John and OJ)
  • Standardizing apart eliminates overlap of variables, e.g.,
    renaming the second sentence's x to z17 gives Knows(z17,OJ),
    which unifies with Knows(John,x) under {x/OJ, z17/John}

15
Unification
  • To unify Knows(John,x) and Knows(y,z),
  • θ = {y/John, x/z} or θ = {y/John, x/John,
    z/John}
  • The first unifier is more general than the
    second.
  • There is a single most general unifier (MGU) that
    is unique up to renaming of variables.
  • MGU = {y/John, x/z}

16
The unification algorithm
17
The unification algorithm
Occurs check: when unifying a variable x with a
term T, check whether x occurs in T. Note that
this should be done after the already found
substitutions are applied. For example, unifying
t(x,f(x)) and t(g(y),y) fails the occurs check:
after x/g(y) is applied, one obtains
t(g(y),f(g(y))) and t(g(y),y). Unifying f(g(y))
with y leads to the attempted substitution
y/f(g(y)), which triggers the occurs check, leading
to failure.
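
Since the algorithm figure is not reproduced in this transcript, here is a
compact Python sketch of unification with the occurs check, in the spirit of
the textbook algorithm (the representation of variables as lowercase strings
and compound terms as tuples is an assumption of the sketch):

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def occurs(x, t, theta):
    """Does variable x occur in term t, under the bindings found so far?"""
    if t == x:
        return True
    if is_var(t) and t in theta:
        return occurs(x, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(x, arg, theta) for arg in t[1:])
    return False

def unify(s, t, theta):
    """Return an extension of theta unifying s and t, or None on failure."""
    if theta is None:
        return None
    if s == t:
        return theta
    if is_var(s):
        return unify_var(s, t, theta)
    if is_var(t):
        return unify_var(t, s, theta)
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t) and s[0] == t[0]:
        for a, b in zip(s[1:], t[1:]):
            theta = unify(a, b, theta)
        return theta
    return None

def unify_var(x, t, theta):
    if x in theta:
        return unify(theta[x], t, theta)
    if is_var(t) and t in theta:
        return unify(x, theta[t], theta)
    if occurs(x, t, theta):          # the occurs check discussed above
        return None
    return {**theta, x: t}

print(unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y')), {}))
# -> {'y': 'John', 'x': ('Mother', 'y')}, i.e. x is Mother(John) once y's binding is followed
print(unify(('t', 'x', ('f', 'x')), ('t', ('g', 'y'), 'y'), {}))
# -> None, rejected by the occurs check exactly as in the example above
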
18
Generalized Modus Ponens (GMP)
  • p1', p2', …, pn',  (p1 ∧ p2 ∧ … ∧ pn ⇒ q)
  • qθ
  • E.g.,
  • p1' is King(John)         p1 is King(x)
  • p2' is Greedy(y)          p2 is Greedy(x)
  • θ is {x/John, y/John}     q is Evil(x)
  • qθ is Evil(John)
  • GMP used with KB of definite clauses (exactly one
    positive literal)
  • All variables assumed universally quantified

where pi'θ = piθ for all i
19
Soundness of GMP
  • Need to show that
  • p1', …, pn', (p1 ∧ … ∧ pn ⇒ q) ⊨ qθ
  • provided that pi'θ = piθ for all i
  • Lemma: For any sentence p, we have p ⊨ pθ by UI
  • 1. (p1 ∧ … ∧ pn ⇒ q) ⊨ (p1 ∧ … ∧ pn ⇒ q)θ =
    (p1θ ∧ … ∧ pnθ ⇒ qθ)
  • 2. p1', …, pn' ⊨ p1' ∧ … ∧ pn' ⊨ p1'θ ∧ … ∧ pn'θ
  • From 1 and 2, qθ follows by ordinary Modus Ponens
    (since pi'θ = piθ)

20
Example knowledge base
  • The law says that it is a crime for an American
    to sell weapons to hostile nations. The country
    Nono, an enemy of America, has some missiles, and
    all of its missiles were sold to it by Colonel
    West, who is American.
  • Prove that Col. West is a criminal

21
Example knowledge base ctd.
  • ... it is a crime for an American to sell weapons
    to hostile nations:
  • American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧
    Hostile(z) ⇒ Criminal(x)
  • Nono has some missiles, i.e., ∃x Owns(Nono,x) ∧
    Missile(x):
  • Owns(Nono,M1) and Missile(M1)
  • all of its missiles were sold to it by Colonel
    West:
  • Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
  • Missiles are weapons:
  • Missile(x) ⇒ Weapon(x)
  • An enemy of America counts as "hostile":
  • Enemy(x,America) ⇒ Hostile(x)
  • West, who is American:
  • American(West)
  • The country Nono, an enemy of America:
  • Enemy(Nono,America)
  • Note: This definite clause KB has no functions.
    It is therefore a Datalog KB

22
Forward chaining algorithm
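
The forward chaining figure from AIMA is not reproduced in this transcript.
As an illustration only, here is a naive Datalog-style forward chainer (my own
representation: atoms are tuples, lowercase strings are variables) run on the
crime KB of slide 21:

def match(pattern, fact, theta):
    """Extend theta so that pattern matches fact, or return None."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    theta = dict(theta)
    for p, f in zip(pattern[1:], fact[1:]):
        if p[0].islower():                 # p is a variable
            if theta.get(p, f) != f:
                return None
            theta[p] = f
        elif p != f:                       # p is a constant
            return None
    return theta

def forward_chain(rules, facts):
    """Repeat rule application until no new facts are derived (naive FC)."""
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            thetas = [{}]
            for prem in premises:          # match premises left to right
                extended = []
                for theta in thetas:
                    for fact in facts:
                        t2 = match(prem, fact, theta)
                        if t2 is not None:
                            extended.append(t2)
                thetas = extended
            for theta in thetas:
                derived = tuple(theta.get(a, a) for a in conclusion)
                if derived not in facts:
                    new.add(derived)
        if not new:
            return facts
        facts |= new

rules = [([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
          ('Criminal', 'x')),
         ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),
         ([('Missile', 'x')], ('Weapon', 'x')),
         ([('Enemy', 'x', 'America')], ('Hostile', 'x'))]
facts = [('Owns', 'Nono', 'M1'), ('Missile', 'M1'),
         ('American', 'West'), ('Enemy', 'Nono', 'America')]
print(('Criminal', 'West') in forward_chain(rules, facts))   # True
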
23
Forward chaining proof
24
Forward chaining proof
25
Forward chaining proof
26
Properties of forward chaining
  • Sound and complete for first-order definite
    clauses
  • Datalog = first-order definite clauses + no
    functions
  • FC terminates for Datalog in a finite number of
    iterations
  • May not terminate in general if α is not entailed
  • This is unavoidable: entailment with definite
    clauses is semidecidable

27
Efficiency of forward chaining
  • Incremental forward chaining: no need to match a
    rule on iteration k if a premise wasn't added on
    iteration k-1
  • ⇒ match each rule whose premise contains a newly
    added positive literal
  • The Rete algorithm builds a dataflow network to
    allow for reuse of matchings; it is used in
    production system languages such as OPS-5
  • Matching itself can be expensive
  • Database indexing allows O(1) retrieval of known
    facts
  • e.g., query Missile(x) retrieves Missile(M1)
  • Forward chaining is widely used in deductive
    databases

28
Hard matching example
Diff(wa,nt) ∧ Diff(wa,sa) ∧ Diff(nt,q) ∧
Diff(nt,sa) ∧ Diff(q,nsw) ∧ Diff(q,sa) ∧
Diff(nsw,v) ∧ Diff(nsw,sa) ∧ Diff(v,sa) ⇒
Colorable()
Diff(Red,Blue)  Diff(Red,Green)
Diff(Green,Red) Diff(Green,Blue)
Diff(Blue,Red)  Diff(Blue,Green)
  • Colorable() is inferred iff the CSP has a
    solution
  • CSPs include 3SAT as a special case, hence
    matching is NP-hard

29
Backward chaining algorithm
  • SUBST(COMPOSE(θ1, θ2), p) = SUBST(θ2, SUBST(θ1,
    p))
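
The backward chaining figure itself is not reproduced here, but the COMPOSE
identity above can be checked with a small sketch (the helper names and the
tuple representation of terms are assumptions of the sketch):

def subst(theta, term):
    """Apply substitution theta to a term, following chains of bindings."""
    if isinstance(term, str):
        return subst(theta, theta[term]) if term in theta else term
    return (term[0],) + tuple(subst(theta, a) for a in term[1:])

def compose(theta1, theta2):
    """Substitution equivalent to applying theta1 first, then theta2."""
    composed = {v: subst(theta2, t) for v, t in theta1.items()}
    composed.update({v: t for v, t in theta2.items() if v not in theta1})
    return composed

theta1 = {'x': ('Mother', 'y')}
theta2 = {'y': 'John'}
p = ('Knows', 'x', 'y')
print(subst(compose(theta1, theta2), p))   # ('Knows', ('Mother', 'John'), 'John')
print(subst(theta2, subst(theta1, p)))     # same result, as the identity requires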

30
Backward chaining example
31
Backward chaining example
32
Backward chaining example
33
Backward chaining example
34
Backward chaining example
35
Backward chaining example
36
Backward chaining example
37
Backward chaining example
38
Properties of backward chaining
  • Depth-first recursive proof search: space is
    linear in the size of the proof
  • Incomplete due to infinite loops
  • ⇒ fix by checking current goal against every goal
    on the stack
  • Inefficient due to repeated subgoals (both
    successes and failures)
  • ⇒ fix using caching of previous results (extra
    space)
  • Widely used for logic programming

39
Logic programming Prolog
  • Algorithm = Logic + Control
  • Basis: backward chaining with Horn clauses +
    bells & whistles
  • Widely used in Europe, Japan (basis of 5th
    Generation project)
  • Compilation techniques ⇒ 60 million LIPS
  • Program = set of clauses of the form head :-
    literal1, …, literaln.
  • criminal(X) :- american(X), weapon(Y),
    sells(X,Y,Z), hostile(Z).
  • Depth-first, left-to-right backward chaining
  • Built-in predicates for arithmetic etc., e.g., X
    is Y*Z+3
  • Built-in predicates that have side effects (e.g.,
    input and output predicates, assert/retract
    predicates)
  • Closed-world assumption ("negation as failure")
  • e.g., given alive(X) :- not dead(X).
  • alive(joe) succeeds if dead(joe) fails

40
Prolog
  • Appending two lists to produce a third:
  • append([],Y,Y).
  • append([X|L],Y,[X|Z]) :- append(L,Y,Z).
  • query:   ?- append(A,B,[1,2]).
  • answers: A=[], B=[1,2]
  •          A=[1], B=[2]
  •          A=[1,2], B=[]

41
Backward chaining is incomplete
  • Goal trees are built by backward chaining
  • C is true, as shown in the following (natural
    deduction) proof by cases (split on D)
  • It is impossible to prove C by backward chaining
  • Two additions make backward chaining complete:
  • addition of contrapositives (e.g., ¬C ⇒ D)
  • ancestor checking

42
PTTP A Prolog Technology Theorem Prover
  • Prolog is not a full theorem prover for three
    main reasons
  • It uses an unsound unification algorithm without
    the occurs check
  • Its inference system is complete for Horn
    clauses, but not for more general formulas, as
    shown in the previous slide
  • Its unbounded depth-first search strategy is
    incomplete. Also, it cannot display the proofs it
    finds
  • The Prolog Technology Theorem Prover (PTTP)
    overcomes these limitations by
  • transforming clauses so that head literals have
    no repeated variables and unification without the
    occurs check is valid; the remaining unification
    is done using complete unification with the
    occurs check in the body
  • adding contrapositives of clauses (so that any
    literal, not just a distinguished head literal,
    can be resolved on) and the model-elimination
    procedure's reduction rule, which matches goals
    with the negations of their ancestor goals
  • using a sequence of bounded depth-first searches
    to prove a theorem
  • retaining information on what formulas are used
    for each inference so that the proof can be
    printed
  • Proof of soundness and completeness follows from
    the soundness and completeness of resolution
    refutation

43
Resolution: brief summary
  • Full first-order version:
  • l1 ∨ … ∨ lk,   m1 ∨ … ∨ mn
  • (l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ …
    ∨ mj-1 ∨ mj+1 ∨ … ∨ mn)θ
  • where Unify(li, ¬mj) = θ.
  • The two clauses are assumed to be standardized
    apart so that they share no variables.
  • For example,
  • ¬Rich(x) ∨ Unhappy(x)
  • Rich(Ken)
  • Unhappy(Ken)
  • with θ = {x/Ken}
  • Apply resolution steps to CNF(KB ∧ ¬α): complete
    for FOL
  • Full disclosure: For completeness, one needs
    either
  • general (non-binary) resolution, in which subsets
    of literals that are unifiable are resolved, or
  • factoring, which replaces unifiable literals
    in a clause with a single literal
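
As an illustration of the binary resolution rule, here is a small sketch (my
own code: literals are (sign, atom) pairs and lowercase strings are variables;
for brevity the unifier omits the occurs check and standardizing apart that a
real prover needs):

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, t):
    if is_var(t):
        return subst(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return t

def unify(s, t, theta):
    """Very small unifier (no occurs check); returns an extended theta or None."""
    if theta is None:
        return None
    s, t = subst(theta, s), subst(theta, t)
    if s == t:
        return theta
    if is_var(s):
        return {**theta, s: t}
    if is_var(t):
        return {**theta, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):
            theta = unify(a, b, theta)
        return theta
    return None

def resolve(c1, c2):
    """All binary resolvents of two clauses, each a list of (positive, atom) literals."""
    resolvents = []
    for si, ai in c1:
        for sj, aj in c2:
            if si == sj:
                continue                       # need complementary literals
            theta = unify(ai, aj, {})
            if theta is not None:
                rest = [l for l in c1 if l != (si, ai)] + [l for l in c2 if l != (sj, aj)]
                resolvents.append([(s, subst(theta, a)) for s, a in rest])
    return resolvents

clause1 = [(False, ('Rich', 'x')), (True, ('Unhappy', 'x'))]   # ~Rich(x) v Unhappy(x)
clause2 = [(True, ('Rich', 'Ken'))]                            # Rich(Ken)
print(resolve(clause1, clause2))        # [[(True, ('Unhappy', 'Ken'))]], i.e. Unhappy(Ken)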

44
Conversion to CNF
  • Everyone who loves all animals is loved by
    someone:
  • ∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
  • 1. Eliminate biconditionals and implications
  • ∀x [¬∀y ¬Animal(y) ∨ Loves(x,y)] ∨ [∃y
    Loves(y,x)]
  • 2. Move ¬ inwards: ¬∀x p ≡ ∃x ¬p, ¬∃x p ≡ ∀x ¬p
  • ∀x [∃y ¬(¬Animal(y) ∨ Loves(x,y))] ∨ [∃y
    Loves(y,x)]
  • ∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y
    Loves(y,x)]
  • ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]

45
Conversion to CNF contd.
  • Standardize variables: each quantifier should use
    a different one
  • ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
  • Skolemize: a more general form of existential
    instantiation.
  • Each existential variable is replaced by a Skolem
    function of the enclosing universally quantified
    variables
  • ∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨
    Loves(G(x),x)
  • Drop universal quantifiers
  • [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
  • Distribute ∨ over ∧
  • [Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x))
    ∨ Loves(G(x),x)]

46
Completeness of Resolution
  • Resolution is refutation-complete: if S is an
    unsatisfiable set of clauses, then the
    application of a finite number of resolution
    steps to S will yield a contradiction

47
Resolution proof: definite clauses
  • This is an input resolution proof

48
Resolution proof: general case
  • Curiosity killed the cat (pp. 298-300, AIMA-2)
  • Not a unit resolution proof; not an input
    resolution proof

49
Resolution Strategies
  • Breadth-First Strategy (complete)
  • Set-of-Support Strategy (complete)
  • At least one parent of each resolvent is selected
    from among the clauses resulting from the
    negation of the goal or from their descendants
    (the set of support)
  • Unit-Preference Strategy (complete)
  • Unit Resolution (complete for Horn clauses; cf.
    forward chaining)
  • Each resolvent has a parent that is a unit clause
  • Linear-Input Form Strategy (not complete)
  • Each resolvent has at least one parent belonging
    to the base set (i.e. the set of clauses given as
    input)
  • Complete for Horn clauses
  • Called input resolution in AIMA
  • Ancestry-Filtered Form Strategy (complete)
  • Each resolvent has a parent that is either in the
    base set or an ancestor of the other parent
  • Called linear resolution in AIMA

50
Example KB Nilsson, 1980
  • Whoever can read is literate:
  • (1) ¬R(x) ∨ L(x)
  • Dolphins are not literate:
  • (2) ¬D(x) ∨ ¬L(x)
  • Some dolphins are intelligent:
  • (3a) D(A)
  • (3b) I(A)
  • Goal: Some who are intelligent cannot read
    (negated for the refutation):
  • (4) ¬I(z) ∨ R(z)
  • Whoever can read is literate:
  • ∀x R(x) ⇒ L(x)
  • Dolphins are not literate:
  • ∀x D(x) ⇒ ¬L(x)
  • Some dolphins are intelligent:
  • ∃x D(x) ∧ I(x)
  • Goal: Some who are intelligent cannot read:
  • ∃x I(x) ∧ ¬R(x)

51
An example proof
  • (5) R(A)     resolvent of 3b and 4
  • (6) L(A)     resolvent of 5 and 1
  • (7) ¬D(A)    resolvent of 6 and 2
  • (8) NIL      resolvent of 7 and 3a
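
The refutation above can be checked mechanically. Since grounding on the single
constant A makes the clauses propositional, a tiny saturation loop suffices
(my own sketch; the literal strings and helper names are illustrative only):

# Clauses (1), (2), (3a), (3b), and the negated goal (4), grounded on A.
clauses = {frozenset({'-R(A)', 'L(A)'}),
           frozenset({'-D(A)', '-L(A)'}),
           frozenset({'D(A)'}),
           frozenset({'I(A)'}),
           frozenset({'-I(A)', 'R(A)'})}

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def refutes(clauses):
    """Saturate the clause set under propositional resolution; True if NIL is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for lit in c1:
                    if negate(lit) in c2:
                        resolvent = (c1 - {lit}) | (c2 - {negate(lit)})
                        if not resolvent:
                            return True      # the empty clause NIL, as in step (8)
                        new.add(resolvent)
        if new <= clauses:
            return False
        clauses |= new

print(refutes(clauses))   # True: the clause set is unsatisfiable, so the goal follows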

52
Breadth-first strategy
53
Set-of-Support Strategy
54
Linear-Input Form Strategy
55
Ancestry-Filtered Form Strategy
56
Four more examples
  • Prove by resolution the result of exercise 8.2 in
    AIMA-2
  • The goal is a universal formula!
  • Group theory axioms and some consequences of them
    (Schoening, example on pp. 94-95)
  • (a) Every dragon is happy if all its children can
    fly; (b) green dragons can fly; (c) a dragon is
    green if it is a child of at least one green
    dragon. Show that all green dragons are happy
  • (a) Every barber shaves all persons who do not
    shave themselves; (b) no barber shaves any person
    who shaves himself or herself. Show that there
    are no barbers
  • Shows the need for factoring or non-binary
    resolution (p. 297, AIMA-2)