Preference Reasoning in Logic Programming
Pierangelo Dell'Acqua, Aida Vitória
Dept. of Science and Technology - ITN, Linköping University, Sweden
José Júlio Alferes, Luís Moniz Pereira
Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa, Portugal
  1. Combining updates and preferences
  2. User preference information in query answering
  3. Preferring alternative explanations
  4. Preferring and updating in multi-agent systems

  • JELIA00: J. J. Alferes and L. M. Pereira, Updates plus Preferences. Proc. 7th European Conf. on Logics in Artificial Intelligence (JELIA00), LNAI 1919, 2000.
  • INAP01: P. Dell'Acqua and L. M. Pereira, Preferring and Updating in Logic-Based Agents. Selected Papers from the 14th Int. Conf. on Applications of Prolog (INAP01), LNAI 2543, 2003.
  • JELIA02: J. J. Alferes, P. Dell'Acqua and L. M. Pereira, A Compilation of Updates plus Preferences. Proc. 8th European Conf. on Logics in Artificial Intelligence (JELIA02), LNAI 2424, 2002.
  • FQAS02: P. Dell'Acqua, L. M. Pereira and A. Vitória, User Preference Information in Query Answering. 5th Int. Conf. on Flexible Query Answering Systems (FQAS02), LNAI 2522, 2002.

1. Update reasoning
  • Updates model dynamically evolving worlds
  • knowledge, whether complete or incomplete, can be updated to reflect changes in the world.
  • new knowledge may contradict and override older knowledge.
  • updates differ from revisions, which are about an incomplete static world model.

Preference reasoning
  • Preferences are employed with incomplete knowledge, when several models are possible
  • preferences act by choosing some of the possible models.
  • this is achieved via a partial order among rules. Rules will only fire if they are not defeated by more preferred rules.
  • our preference approach is based on the approach of KR98.
  • KR98: G. Brewka and T. Eiter, Preferred Answer Sets for Extended Logic Programs. KR98, 1998.

Preference and updates combined
  • Despite their differences, preferences and updates display similarities.
  • Both can be seen as wiping out rules
  • in preferences, the less preferred rules, so as to remove undesired models.
  • in updates, the older rules, inclusively for obtaining models of otherwise inconsistent theories.
  • This view helps put them together into a single uniform framework.
  • In this framework, preferences can be updated.

LP framework
  • Atomic formulae:

A          objective atom
not A      default atom

  • Generalized rule:

L0 ← L1 , ... , Ln

  • every Li is an objective or default atom
  • Let N be a set of constants containing a unique name for each generalized rule.

  • Priority rule:

Z ← L1 , ... , Ln

  • Z is a literal r1 < r2 or not (r1 < r2)
  • r1 < r2 means that rule r1 is preferred to rule r2

Def. Prioritized logic program
Let P be a set of generalized rules and R a set of priority rules. Then (P, R) is a prioritized logic program.
Dynamic prioritized programs
  • Let S = {1, ..., s, ...} be a set of states (natural numbers).
  • Def. Dynamic prioritized program
  • Let (Pi, Ri) be a prioritized logic program for every i ∈ S. Then
  • Δ = {(Pi, Ri) : i ∈ S} is a dynamic prioritized program.
  • The meaning of such a sequence results from updating (P1, R1) with the rules from (P2, R2), then updating the result with the rules from (P3, R3), and so on up to (Pn, Rn).

Suppose a scenario where Stefano watches programs on football, tennis, or the news.
(1) In the initial situation, being a typical Italian, Stefano prefers both football and tennis to the news and, in case of international competitions, he prefers tennis over football.

f ← not t, not n     (r1)
t ← not f, not n     (r2)
n ← not f, not t     (r3)

r1 < r3        r2 < r3
r2 < r1 ← us
x < y ← x < z, z < y

In this situation, Stefano has two alternative TV programmes, equally preferable: football and tennis.
(2) Next, suppose that a US-open tennis competition takes place:

us     (r4)

Now, Stefano's favourite programme is tennis.
(3) Finally, suppose that Stefano's preferences change and he becomes interested in international news. Then, in case of breaking news he will prefer news over both football and tennis:

not (r1 < r3) ← bn
not (r2 < r3) ← bn
r3 < r1 ← bn
r3 < r2 ← bn

bn     (r5)
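The three rules r1-r3 form cycles over default negation, so the base program (before any preferences apply) has several stable models. A brute-force sketch in Python, not part of the original slides (the tuple encoding of rules and the function names are my own), enumerates them via the Gelfond-Lifschitz reduct:

```python
from itertools import combinations

# Rules of the base program: (head, positive body, negative body).
rules = [
    ("f", [], ["t", "n"]),  # r1: f <- not t, not n
    ("t", [], ["f", "n"]),  # r2: t <- not f, not n
    ("n", [], ["f", "t"]),  # r3: n <- not f, not t
]
atoms = {"f", "t", "n"}

def reduct(rules, model):
    # Gelfond-Lifschitz reduct: drop rules whose negative body
    # intersects the model; delete negative literals from the rest.
    return [(h, pos) for (h, pos, neg) in rules
            if not (set(neg) & model)]

def least_model(pos_rules):
    # Least model of the positive program, by fixpoint iteration.
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if set(pos) <= m and h not in m:
                m.add(h)
                changed = True
    return m

def stable_models(rules, atoms):
    # A candidate M is stable iff it is the least model of its reduct.
    models = []
    for k in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), k):
            m = set(cand)
            if least_model(reduct(rules, m)) == m:
                models.append(m)
    return models

print(stable_models(rules, atoms))  # -> [{'f'}, {'n'}, {'t'}]
```

Of the three stable models, the priority rules r1 < r3 and r2 < r3 defeat rule r3, leaving {f} and {t} as the two equally preferable alternatives of situation (1).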
Preferred stable models
Let P = {(Pi, Ri) : i ∈ S} be a dynamic prioritized program, Q = {Pi ∪ Ri : i ∈ S}, PR = ∪i (Pi ∪ Ri), and M an interpretation of P.

Def. Default and Rejected rules
Default(PR, M) = { not A : there is no rule (A ← L1, ..., Ln) in PR with M ⊨ L1, ..., Ln }
Reject(s, M, Q) = { r ∈ Pi ∪ Ri : ∃ r' ∈ Pj ∪ Rj, head(r') = not head(r), i < j ≤ s and M ⊨ body(r') }
  • Def. Unsupported and Unpreferred rules
  • Unsup(PR, M) = { r ∈ PR : M ⊨ head(r) and M ⊭ body(r) }
  • Unpref(PR, M) is the least set including Unsup(PR, M) and every rule r such that
  • ∃ r' ∈ (PR − Unpref(PR, M)) with
  • M ⊨ r' < r, M ⊨ body(r') and
  • ( not head(r') ∈ body⁻(r), or ( not head(r) ∈ body⁻(r') and M ⊨ body(r) ) )

  • Def. Preferred stable models
  • Let s be a state, P = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and M a stable model of P.
  • M is a preferred stable model of P at state s iff
  • M = least( (X − Unpref(X, M)) ∪ Default(PR, M) )
  • where PR = ∪i≤s (Pi ∪ Ri)
  • Q = {Pi ∪ Ri : i ∈ S}
  • X = PR − Reject(s, M, Q)
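The Default and Reject operators above can be prototyped directly. A minimal Python sketch (the rule representation and names are my own assumption, not the authors' implementation): a rule is a triple (head, pos, neg) with heads of the form "a" or "not a", and seq[i-1] holds the rules of Pi ∪ Ri:

```python
def satisfies(m, rule):
    # M |= body(r): positive body contained in M, negative body disjoint.
    _, pos, neg = rule
    return set(pos) <= m and not (set(neg) & m)

def default(pr, m):
    # Default(PR, M): "not A" for every atom A heading no rule of PR
    # whose body M satisfies.
    atoms = {h for (h, _, _) in pr if not h.startswith("not ")}
    supported = {r[0] for r in pr if satisfies(m, r)}
    return {"not " + a for a in atoms - supported}

def reject(s, m, seq):
    # Reject(s, M, Q): a rule at state i is rejected if some later
    # state j, i < j <= s, holds a rule with complementary head whose
    # body M satisfies (newer knowledge overrides older knowledge).
    def compl(h):
        return h[4:] if h.startswith("not ") else "not " + h
    return [r for i, prog in enumerate(seq[:s]) for r in prog
            if any(r2[0] == compl(r[0]) and satisfies(m, r2)
                   for j in range(i + 1, s) for r2 in seq[j])]
```

For instance, updating the single fact a with the later fact not a rejects the older rule: reject(2, set(), [[("a", [], [])], [("not a", [], [])]]) returns the rule for a.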

τ(s,P) transformation
  • Let s be a state and P = {(Pi, Ri) : i ∈ S} a dynamic prioritized program.
  • In JELIA02 we gave a transformation τ(s,P) that compiles dynamic prioritized programs into normal logic programs.
  • The preference part of our transformation is modular and incremental wrt. the update part of the transformation.
  • The size of the transformed program τ(s,P) is, in the worst case, quadratic in the size of the original dynamic prioritized program P.
  • An implementation of the transformation is available at
  • http//

Thm. Correctness of τ(s,P)
An interpretation M is a stable model of τ(s,P) iff M, restricted to the language of P, is a preferred stable model of P at state s.
2. User preference information in query answering
  • Query answering systems are often difficult to
    use because they do not attempt to cooperate with
    their users.
  • The use of additional information about the user
    can enhance cooperative behaviour of query
    answering systems FQAS02.

  • Consider a system whose knowledge is formalized by a prioritized logic program
  • (P, R)
  • Extra level of flexibility: the user can provide preference information at query time
  • ?- (G, Pref)
  • Given (P, R), the system has to derive G from P by taking into account the preferences in R, updated by the preferences in Pref.
  • Finally, it is desirable to make the background knowledge (P, R) of the system updatable, so that it can be modified to reflect changes in the world (including preferences).

  • The ability to take user information into account makes the system able to target its answers to the user's goals and interests.
  • Def. Queries with preferences
  • Let G be a goal, Π a prioritized logic program, and
  • Δ = {(Pi, Ri) : i ∈ S} a dynamic prioritized program.
  • Then ?- (G, Π) is a query wrt. Δ.

Joinability function
  • S' = S ∪ { max(S) + 1 }
  • Def. Joinability at state s
  • Let s ∈ S' be a state, Δ = {(Pi, Ri) : i ∈ S} a dynamic prioritized program, and (PX, RX) a prioritized logic program.
  • The joinability function ⊕s at state s is
  • Δ ⊕s (PX, RX) = {(P'i, R'i) : i ∈ S'} where (P'i, R'i) =
  • (Pi, Ri) if 1 ≤ i < s
  • (Pi ∪ PX, Ri ∪ RX) if i = s
  • (Pi−1, Ri−1) if s < i ≤ max(S')
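The case analysis above amounts to splicing (PX, RX) into the sequence at state s. A small Python sketch under my own assumptions (a list of (P, R) pairs of rule sets indexed by state, s restricted to S so that the merge case is defined):

```python
def join_at(delta, px_rx, s):
    # delta[i-1] holds (P_i, R_i) for states 1..max(S); each component
    # is a set of rules. Returns the joined sequence over S'.
    assert 1 <= s <= len(delta)
    px, rx = px_rx
    out = []
    for i in range(1, len(delta) + 2):          # states 1 .. max(S')
        if i < s:
            out.append(delta[i - 1])            # unchanged prefix
        elif i == s:
            p, r = delta[i - 1]
            out.append((p | px, r | rx))        # (Pi U PX, Ri U RX)
        else:
            out.append(delta[i - 2])            # shifted by one state
    return out
```

Joining at a later state makes (PX, RX) newer in the sequence, so under the update semantics its priority rules can override earlier ones; joining at state 1 leaves them overridable by everything that follows.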

Preferred conclusions
Def. Preferred conclusions
Let s ∈ S' be a state and Δ = {(Pi, Ri) : i ∈ S} a dynamic prioritized program. The preferred conclusions of Δ with joinability function ⊕s are
{ (G, Π) : G is included in every preferred stable model of Δ ⊕s Π at state max(S') }
Example: car dealer
Consider the following program that exemplifies the process of quoting prices for second-hand cars:

price(Car,200) ← stock(Car,Col,T), not price(Car,250), not offer     (r1)
price(Car,250) ← stock(Car,Col,T), not price(Car,200), not offer     (r2)
prefer(orange) ← not prefer(black)     (r3)
prefer(black) ← not prefer(orange)     (r4)
stock(Car,Col,T) ← bought(Car,Col,Date), T = today - Date     (r5)

When the company buys a car, the information about the car must be added to the stock via an update.

When the company sells a car, the company must
remove the car from the stock
not bought(volvo,black,d2)

The selling strategy of the company can be formalized as:

r2 < r1 ← stock(Car,Col,T), T < 10
r1 < r2 ← stock(Car,Col,T), T ≥ 10, not prefer(Col)
r2 < r1 ← stock(Car,Col,T), T ≥ 10, prefer(Col)
r4 < r3
Suppose that the company adopts the policy of offering a special price for cars at certain times of the year:

price(Car,100) ← stock(Car,Col,T), offer     (r6)
not offer

Suppose an orange fiat bought on date d1 is in stock and offer does not hold. Independently of the joinability function used:

?- ( price(fiat,P), ({}, {}) )
P = 250 if today - d1 < 10
P = 200 if today - d1 ≥ 10

?- ( price(fiat,P), ({}, {not (r4 < r3), r3 < r4}) )
P = 250
  • For this query it is relevant which joinability function is used:
  • if we use ⊕1, then we do not get the intended answer, since the user preferences are overwritten by the default preferences of the company
  • on the other hand, it is not so appropriate to use ⊕max(S'), since a customer could ask

?- ( price(fiat,P), ({offer}, {}) )
Selecting a joinability function
In some applications the user preferences in Π must have priority over the preferences in Δ. In this case, the joinability function ⊕max(S') must be used. Example: a web-site application of a travel agency whose database Δ maintains information about holiday resorts and preferences among tourist locations. When a user asks a query ?- (G, Π), the system must give priority to Π. Some other applications need the joinability function ⊕1, to give priority to the preferences in Δ.
Open issues
  • Detect inconsistent preference specifications.
  • How to incorporate abduction in our framework: abductive preferences leading to conditional answers, depending on accepting a preference.
  • How to tackle the problems arising when several users query the system together.

3. Preferring abducibles
  • The evaluation of alternative explanations is one of the central problems of abduction.
  • An abductive problem of a reasonable size may have a combinatorial explosion of possible explanations to handle.
  • It is important to generate only the explanations that are relevant.
  • Some proposals involve a global criterion against which each explanation as a whole can be evaluated.
  • A general drawback of those approaches is that global criteria are generally domain independent and computationally expensive.
  • An alternative to global criteria is to allow the theory to contain rules encoding domain-specific information about the likelihood that a particular assumption is true.

  • In our approach we can express preferences among
    abducibles to discard the unwanted assumptions.
  • Preferences over alternative abducibles can be
    coded into cycles over negation, and preferring a
    rule will break the cycle in favour of one
    abducible or another.

  • Consider a situation where an agent Peter drinks either tea or coffee (but not both). Suppose that Peter prefers coffee to tea when sleepy.
  • This situation can be represented by the set Q of generalized rules below, with set of abducibles AQ = {tea, coffee}:
  • drink ← tea
  • drink ← coffee
  • coffee ⊲ tea ← sleepy
  • a ⊲ b means that abducible a is preferred to abducible b

  • In our framework, Q can be coded into the following set P of generalized rules with set of abducibles AP = {abduce}:
  • drink ← tea
  • drink ← coffee
  • coffee ← abduce, not tea, confirm(coffee)     (r1)
  • tea ← abduce, not coffee, confirm(tea)     (r2)
  • confirm(tea) ← expect(tea), not expect_not(tea)
  • confirm(coffee) ← expect(coffee), not expect_not(coffee)
  • expect(tea)
  • expect(coffee)
  • r1 < r2 ← sleepy, confirm(coffee)

  • Having the notion of expectation allows one to express the preconditions for an expectation:
  • expect(tea) ← have_tea
  • expect(coffee) ← have_coffee
  • By means of expect_not one can express situations where something is not expected:
  • expect_not(coffee) ← blood_pressure_high

4. Preferring and updating in multi-agent systems
  • In INAP01 we proposed a logic-based approach
    to agents that can
  • Reason and react to other agents
  • Prefer among possible choices
  • Intend to reason and to act
  • Update their own knowledge, reactions and goals
  • Interact by updating the theory of another agent
  • Decide whether to accept an update depending on
    the requesting agent

Updating agents
  • Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals. It
  • makes observations
  • reciprocally updates other agents with goals and rules
  • thinks a bit (rational)
  • selects and executes an action (reactive)

Preferring agents
  • Preferring agent: an agent that is able to prefer beliefs and reactions when several alternatives are possible.
  • Agents can express preferences about their own rules.
  • Preferences are expressed via priority rules.
  • Preferences can be updated, possibly on advice from others.

Agents' language
  • Atomic formulae:

A          objective atoms
not A      default atoms
i:C        projects
i÷C        updates

  • Generalized/priority rules:

A ← L1 , ... , Ln          not A ← L1 , ... , Ln

where each Li is an atom, an update or a negated update.
  • Integrity constraint:

false ← L1 , ... , Ln , Z1 , ... , Zm

where each Zj is a project.
  • Active rule:

L1 , ... , Ln ⇒ Z
Agents' knowledge states
  • Knowledge states represent dynamically evolving states of the agents' knowledge. They undergo change due to updates.
  • Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
  • Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
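The evaluate-in-current, apply-to-successor discipline can be sketched as follows (representing states as sets of facts and actions as precondition/postcondition pairs is my own simplification):

```python
def successor(state, actions):
    # actions: list of (precondition, postcondition) pairs, both sets
    # of facts. All preconditions are read from the SAME current state;
    # postconditions only touch the successor copy.
    nxt = set(state)
    for pre, post in actions:
        if pre <= state:      # evaluated in the current state
            nxt |= post       # applied to the successor state only
    return nxt
```

Because every precondition is checked against the unchanged current state, a parallel action cannot observe the effects of another action in the same step: from {"a"}, the actions ({"a"}, {"c"}) and ({"c"}, {"d"}) yield {"a", "c"}, not {"a", "c", "d"}.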

Projects and updates
  • A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C.
  • i÷C denotes an update proposed by agent i of the current theory of some agent j with C.
Representation of conflicting information and preferences
  • Preferences may resolve conflicting information.
  • This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities.
  • Let (P, R) be the initial theory of Fabio, where R and P are as follows:

dont(A) ← fa(noA), not do(A)     (r1)
do(A) ← ma(A), not dont(A)     (r2)
false ← do(A), fa(noA)
false ← dont(A), ma(A)
r1 < r2 ← fr
r2 < r1 ← mr

fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility
  • Suppose that Fabio wants to live alone, represented as LA.
  • His mother advises him to do so, but his father advises him not to:
  • U1 = { mother÷ma(LA), father÷fa(noLA) }
  • Fabio accepts both updates, and therefore he is still unable to choose either do(LA) or dont(LA) and, as a result, does not perform any action.

  • Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to the mother:
  • U2 = { judge÷mr }
  • Now the situation changes, since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude do(LA):
  • do live alone.

Updating preferences
  • Within the theory of an agent both rules and
    preferences can be updated.
  • The updating process is triggered by means of
    external or internal projects.
  • Here internal projects of an agent are used to
    update its own priority rules.

  • Let the theory of Stan be characterized by the rules P and the active rules R:
  • workLate ← not party     (r1)
  • party ← not workLate     (r2)
  • money ← workLate     (r3)
  • r2 < r1     (partying is preferred to working until late)
  • beautifulWoman ⇒ stan:wishGoOut
  • wishGoOut, not money ⇒ stan:getMoney
  • wishGoOut, money ⇒ beautifulWoman:inviteOut
  • getMoney ⇒ stan:(r1 < r2)
  • getMoney ⇒ stan:not (r2 < r1)
  • to get money, Stan must update his priority rules