Inverse Entailment in Nonmonotonic Logic Programs
1
Inverse Entailment in Nonmonotonic Logic Programs
  • Chiaki Sakama
  • Wakayama University, Japan

2
Inverse Entailment
  • B : background theory (Horn program)
  • E : positive example (Horn clause)
  • B ∧ H ⊨ E ⟺ B ⊨ (H → E)
  •           ⟺ B ⊨ (¬E → ¬H)
  •           ⟺ B ∧ ¬E ⊨ ¬H
  • A possible hypothesis H (Horn clause) is thus obtained
    from B and E (a worked instance follows below).
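To make the derivation concrete, here is a minimal worked instance of the standard IE construction (my own illustration; B, E, and H below are invented, not taken from the slides):

  % Worked instance: B = { bird(t) }, E = flies(t).
  \[
    B \land \neg E \;\models\; \mathit{bird}(t) \land \neg\mathit{flies}(t),
  \]
  % so the bottom clause collects exactly the literals refuted by B ∧ ¬E:
  \[
    \bot \;=\; \bigl(\mathit{flies}(t) \leftarrow \mathit{bird}(t)\bigr),
  \]
  % and any H entailing ⊥, e.g. flies(x) ← bird(x), satisfies B ∧ H ⊨ E.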

3
Problems in Nonmonotonic Programs
  • Deduction theorem does not hold.
  • Contraposition is undefined.
  • The present IE technique cannot be used in
    nonmonotonic programs.
  • Reconstruction of the theory is necessary
    in nonmonotonic ILP.

4
Normal Logic Programs (NLP)
  • An NLP is a set of rules of the form
  •   A0 ← A1, ..., Am, not Am+1, ..., not An
  •   (Ai : atom, not : negation as failure)
  • A nested rule is a rule of the form
  •   A ← R, where R is a rule
  • An LP-literal is any L ∈ HB⁺, where
  •   HB⁺ = HB ∪ { not A | A ∈ HB } (HB : the Herbrand base)

5
Stable Model Semantics [Gelfond & Lifschitz, 88]
  • An NLP may have none, one, or multiple stable
    models in general.
  • An NLP having exactly one stable model is called
    categorical.
  • An NLP is consistent if it has a stable model.
  • A stable model coincides with the least model in
    Horn LPs.
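As a sanity check on these definitions, the following Python sketch (my own illustration, not part of the presentation) enumerates the stable models of a ground NLP by guess-and-check over the Gelfond-Lifschitz reduct. Rules are encoded as (head, body) pairs, with head None for constraints and the prefix "not " marking NAF:

  from itertools import chain, combinations

  def reduct(rules, m):
      """Gelfond-Lifschitz reduct P^M: delete each rule whose NAF part
      is blocked by M, then drop the remaining 'not' literals."""
      out = []
      for head, body in rules:
          pos = [l for l in body if not l.startswith("not ")]
          neg = [l[4:] for l in body if l.startswith("not ")]
          if all(a not in m for a in neg):
              out.append((head, pos))
      return out

  def least_model(horn):
      """Least model of a ground definite program (naive fixpoint)."""
      m, changed = set(), True
      while changed:
          changed = False
          for head, body in horn:
              if head and head not in m and all(b in m for b in body):
                  m.add(head)
                  changed = True
      return m

  def stable_models(rules):
      """M is stable iff M is the least model of the definite part of
      the reduct and no constraint of the reduct fires under M."""
      atoms = {l[4:] if l.startswith("not ") else l
               for _, body in rules for l in body}
      atoms |= {head for head, _ in rules if head}
      models = []
      for guess in chain.from_iterable(
              combinations(sorted(atoms), k) for k in range(len(atoms) + 1)):
          m = set(guess)
          red = reduct(rules, m)
          definite = [(h, b) for h, b in red if h is not None]
          constraints = [b for h, b in red if h is None]
          if least_model(definite) == m and \
                  not any(set(b) <= m for b in constraints):
              models.append(m)
      return models

  # Slide 9's example below: (a <-) has the stable model {a}, while
  # the constraint (<- not a) alone has no stable model.
  print(stable_models([("a", [])]))            # [{'a'}]
  print(stable_models([(None, ["not a"])]))    # []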

6
Problems of the Deduction Theorem in NML [Shoham, 87]
  • In classical logic, T ⊨ F holds if F is
    satisfied in every model of T.
  • In NML, T ⊨NML F holds if F is satisfied in
    every preferred model of T.
  • Proposition 3.1: Let P be an NLP. For any rule R,
    P ⊨ R implies P ⊨s R, but not vice versa.

7
Entailment Theorem (1)
  • Theorem 3.2: Let P be an NLP and R a rule
    s.t. P ∪ {R} is consistent. If P ∪ {R} ⊨s A,
    then P ⊨s A ← R for any ground atom A.
  • The converse does not hold in general.
  • ex) P = { a ← not b } has the stable model {a},
    so P ⊨s a ← b. But P ∪ {b} has the stable model {b},
    so P ∪ {b} ⊭s a (checked mechanically below).
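Running the stable_models sketch from slide 5 on this counterexample reproduces it (facts are encoded as rules with empty bodies):

  # P = { a <- not b }: unique stable model {a}, and the rule a <- b
  # is satisfied in it (b is false), so P |=s a <- b.
  P = [("a", ["not b"])]
  print(stable_models(P))                  # [{'a'}]

  # Adding the fact b: unique stable model {b}, in which a fails,
  # so P u {b} |/=s a -- the converse of Theorem 3.2 does not hold.
  print(stable_models(P + [("b", [])]))    # [{'b'}]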

8
Entailment Theorem (2)
  • Theorem 3.2 (cont.): Let P be an NLP and R a rule
    s.t. P ∪ {R} is consistent. If P ⊨s A ← R
    and P ⊨s R, then P ∪ {R} ⊨s A
    for any ground atom A.

9
Contraposition in NLP
  • In NLPs a rule is not a clause, due to the presence
    of NAF.
  • A rule and its contrapositive wrt ← and not are
    semantically different.
  • ex) (a ←) has the stable model {a}, while (← not a)
    has no stable model.

10
Contrapositive Rules
  • Given a rule R of the form
  •   A0 ← A1, ..., Am, not Am+1, ..., not An
  • its contrapositive rule cR is defined as
  •   not A1 ∨ ... ∨ not Am ∨
  •   not not Am+1 ∨ ... ∨ not not An ← not A0
  • where ∨ is disjunction and
    not not A is nested NAF (a constructive sketch follows).
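A direct transcription of this definition (my own sketch, in the string encoding used earlier; the disjunctive head is returned as a list, and nested NAF is simply the prefix "not not "):

  def contrapositive(head, body):
      """cR for R = (head <- body): negate every body literal into the
      head disjunction ('not A' for an atom A, 'not not A' for a NAF
      literal 'not A') and move 'not head' into the body."""
      return ["not " + l for l in body], ["not " + head]

  # R:  a <- b, not c        cR:  not b v not not c <- not a
  print(contrapositive("a", ["b", "not c"]))
  # (['not b', 'not not c'], ['not a'])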

11
Properties of Nested NAF [Lifschitz et al., 99]
  • not A1 ∨ ... ∨ not Am ∨
  •   not not Am+1 ∨ ... ∨ not not An ← not A0
  • is semantically equivalent to
  •   ← A1, ..., Am, not Am+1, ..., not An, not A0
  • In particular,
  •   (not A ←) is equivalent to (← A)
  •   (not not A ←) is equivalent to (← not A)

12
Contrapositive Theorem
  • Theorem 4.1:
  • Let P be an NLP, R a (nested) rule, and cR its
    contrapositive rule. Then,
    P ⊨s R iff P ⊨s cR.

13
Inverse Entailment in NLP
  • Theorem 5.3: Let P be an NLP and R a rule
    s.t. P ∪ {R} is consistent. For any ground
    LP-literal L, if P ∪ {R} ⊨s L and
    P ⊨s not L, then P ∪ {not L} ⊨s not R,
    where P ∪ {not L} is consistent.

14
Induction Problem
  • Given:
  • a background KB P as a categorical NLP,
  • a ground LP-literal L s.t. P ⊨s not L,
  •   where L = A represents a positive example and
    L = not A represents a negative example,
  • a target predicate to be learned.

15
Induction Problem
  • Find:
  • a hypothetical rule R s.t.
  •   P ∪ {R} ⊨s L,
  •   P ∪ {R} is consistent,
  •   R has the target predicate in its head.

16
Computing Hypotheses
  • When P ∪ {R} ⊨s L and P ⊨s not L,
  •   P ∪ {not L} ⊨s not R (by Th. 5.3).
  • By P ⊨s not L, this simplifies to
  •   P ⊨s not R.
  • Put M⁺ = { ℓ | ℓ ∈ HB⁺ and P ⊨s ℓ }. Then,
  •   M⁺ ⊨ not R.
  • Let r0 = (← Γ) where Γ = M⁺. Then,
  •   M⁺ ⊨ not r0 (construction sketched below).
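For a categorical program, M⁺ and r0 are immediate to construct once the stable model is known; a small sketch in the same encoding (m_plus and r0_from are hypothetical helper names, not from the presentation):

  def m_plus(model, herbrand_base):
      """M+ = { l in HB+ : P |=s l }: the atoms of the unique stable
      model plus 'not A' for every other atom of HB."""
      return sorted(a for a in herbrand_base if a in model) + \
             sorted("not " + a for a in herbrand_base if a not in model)

  def r0_from(mp):
      """r0 = (<- Gamma) with Gamma = M+, as a (head, body) pair."""
      return (None, list(mp))

  hb = ["bird(t)", "bird(p)", "penguin(p)",
        "penguin(t)", "flies(t)", "flies(p)"]
  print(m_plus({"bird(t)", "bird(p)", "penguin(p)"}, hb))
  # ['bird(p)', 'bird(t)', 'penguin(p)',
  #  'not flies(p)', 'not flies(t)', 'not penguin(t)']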

17
Relevance
  • Given two ground LP-literals L1 and L2, define
    L1 ~ L2 if L1 and L2 have the same predicate and
    contain the same constants.
  • L1 in a ground rule R is relevant to L2 if (i)
    L1 ~ L2, or (ii) L1 shares a constant with an L3 in R
    s.t. L3 is relevant to L2. Otherwise, L1 is
    irrelevant to L2 (an executable reading follows).
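One executable reading of this definition, in Python (my own sketch; to reproduce both worked examples that follow, the closure treats the example literal itself as relevant, so clause (ii) also admits body literals that merely share a constant with the example):

  import re

  def pred(lit):
      """Predicate of a ground LP-literal: 'not flies(t)' -> 'flies'."""
      return lit.removeprefix("not ").split("(")[0]

  def consts(lit):
      """Constants of a ground LP-literal: 'bird(t)' -> {'t'}."""
      args = re.search(r"\((.*)\)", lit)
      return set(args.group(1).split(",")) if args else set()

  def relevant(body, example):
      """Least set closed under (i) same predicate and constants as the
      example, and (ii) sharing a constant with a relevant literal
      (the example itself included)."""
      rel = {l for l in body
             if (pred(l) == pred(example) and consts(l) == consts(example))
             or consts(l) & consts(example)}
      changed = True
      while changed:
          changed = False
          for l in body:
              if l not in rel and any(consts(l) & consts(r) for r in rel):
                  rel.add(l)
                  changed = True
      return rel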

18
Transformations
  1. When r0 = (← Γ0) contains LP-literals irrelevant
    to the example, drop them and put r1 = (← Γ1).
  2. When r1 = (← Γ1) contains both an atom A and a
    conjunction B s.t. (A ← B) ∈ P, put r2 = (← Γ1 \ {A}).
  3. When r2 = (← Γ2) contains not A with the target pred.,
    put r3 = (A ← Γ2 \ {not A}).
  4. Construct r4 s.t. r4θ = r3 for some substitution θ.
  (Each step is transcribed in code below.)
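The four steps transcribed one function per step (my own sketch; it assumes the pred/consts/relevant helpers and the re import from the previous snippet, a ground program, and the (head, body) rule encoding):

  def step1_drop_irrelevant(body, example):
      """r0 -> r1: keep only the LP-literals relevant to the example."""
      keep = relevant(body, example)
      return [l for l in body if l in keep]

  def step2_drop_implied(body, ground_program):
      """r1 -> r2: remove an atom A whenever some rule (A <- B) of P
      has all of B already in the body."""
      out = list(body)
      for head, b in ground_program:
          if head in out and all(l in out for l in b):
              out.remove(head)
      return out

  def step3_shift_target(body, target):
      """r2 -> r3: move 'not A' with the target predicate to the head."""
      for l in body:
          if l.startswith("not ") and pred(l) == target:
              return l[4:], [x for x in body if x != l]
      return None, list(body)

  def step4_generalize(head, body, const, var="x"):
      """r3 -> r4: replace a constant by a variable (r4 theta = r3)."""
      sub = lambda l: re.sub(rf"\b{const}\b", var, l)
      return sub(head), [sub(l) for l in body]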

19
Inverse Entailment Rule
  • Theorem 5.5: When Rie is a rule obtained by the
    transformation sequence, M⁺ ⊨ not Rie holds.
  • When P ∪ {Rie} is consistent, we say
    that Rie is computed by the inverse
    entailment rule.

20
Example(1)
  • P = { bird(x) ← penguin(x),
  •     bird(tweety) ←,  penguin(polly) ← }
  • L = flies(tweety)
  • target : flies
  • M⁺ = { bird(t), bird(p), penguin(p), not penguin(t),
    not flies(t), not flies(p) }
  •   (t and p abbreviate tweety and polly)

21
Example(1)
  • r0 : ← bird(t), bird(p), penguin(p),
    not penguin(t), not flies(t), not flies(p).
  • Dropping irrelevant LP-literals:
  • r1 : ← bird(t), not penguin(t), not flies(t).
  • Shifting the target to the head:
  • r3 : flies(t) ← bird(t), not penguin(t).
  • Replacing tweety by x:
  • r4 : flies(x) ← bird(x), not penguin(x).
  (This derivation is reproduced in code below.)
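Chaining the step functions from slide 18 reproduces this derivation (step 2 is a no-op here and is skipped, as on the slide):

  example, target = "flies(t)", "flies"
  r0 = ["bird(t)", "bird(p)", "penguin(p)",
        "not penguin(t)", "not flies(t)", "not flies(p)"]
  r1 = step1_drop_irrelevant(r0, example)
  # ['bird(t)', 'not penguin(t)', 'not flies(t)']
  head, rest = step3_shift_target(r1, target)
  # ('flies(t)', ['bird(t)', 'not penguin(t)'])
  print(step4_generalize(head, rest, "t"))
  # ('flies(x)', ['bird(x)', 'not penguin(x)'])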

22
Example(2)
  • P = { flies(x) ← bird(x), not ab(x),
  •     bird(x) ← penguin(x),
  •     bird(tweety) ←,  penguin(polly) ← }
  • L = not flies(polly)
  • target : ab
  • M⁺ = { bird(t), bird(p), penguin(p),
    not penguin(t), flies(t), flies(p),
    not ab(t), not ab(p) }

23
Example(2)
  • r0 : ← bird(t), bird(p), penguin(p),
    not penguin(t), not ab(t), not ab(p).
  • Dropping irrelevant LP-literals:
  • r1 : ← bird(p), penguin(p), not ab(p).
  • Dropping atoms implied in P:
  • r2 : ← penguin(p), not ab(p).
  • Shifting the target and generalizing:
  • r4 : ab(x) ← penguin(x).
  (The same pipeline in code follows.)
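The same step functions reproduce Example (2); step 2 now fires on the ground instance bird(p) ← penguin(p) of the rule in P:

  example, target = "not flies(p)", "ab"
  r0 = ["bird(t)", "bird(p)", "penguin(p)",
        "not penguin(t)", "not ab(t)", "not ab(p)"]
  r1 = step1_drop_irrelevant(r0, example)
  # ['bird(p)', 'penguin(p)', 'not ab(p)']
  r2 = step2_drop_implied(r1, [("bird(p)", ["penguin(p)"])])
  # ['penguin(p)', 'not ab(p)']
  head, rest = step3_shift_target(r2, target)
  print(step4_generalize(head, rest, "p"))
  # ('ab(x)', ['penguin(x)'])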

24
Properties of the IE rule
  • It is not correct for the computation of rules
    satisfying P ∪ {R} ⊨s L.
  •   (∵ the meaning of a rule changes with its
    representation form.)
  • It is not complete for the computation of rules
    satisfying P ∪ {R} ⊨s L.
  •   (∵ the transformation sequence filters out
    useless hypotheses.)

25
Correctness Issue
  • Classical clause:
  •   a ← b, c ≡ ¬b ← ¬a, c
  •        ≡ ¬c ← ¬a, b
  •        ≡ ← ¬a, b, c
  • ⇒ Independence of its written form
  • NLP rule:
  •   a ← b, c ≢ not b ← not a, c
  •        ≢ not c ← not a, b
  •        ≢ ← not a, b, c
  • ⇒ Dependence on its written form

26
Completeness Issue
  • B = { q(a) ←,  r(b) ←,
  •     r(f(x)) ← r(x) }
  • E = p(a)
  • H = p(x) ← q(x)
  •   p(x) ← q(x), r(b)
  •   p(x) ← q(x), r(f(b)), ...
  • ⇒ An infinite number of hypotheses exists in general
    (and many of them are useless).

27
Summary
  • A new inverse entailment rule for nonmonotonic ILP
    is introduced.
  • Its effects on ILP problems are demonstrated by
    some examples.
  • Future research includes:
  • extension to non-categorical programs (having
    multiple stable models),
  • exploiting practical applications.