WP7: Empirical Studies

1
WP7: Empirical Studies
Presenters: Paolo Besana, Nardine Osman, Dave Robertson
2
Outline of This Talk
  • Introduce overall framework
  • Identify four key areas:
    • Interaction availability
    • Consistency: interaction-peer
    • Consistency: peer-peer
    • Consistency: with environment

In each of these areas it is impossible to
guarantee the general property we ideally would
require, so the goal of analysis is to identify
viable engineering compromises and explore how
they scale.
3
Basic Conceptual Framework
[Diagram: peers P1 … Pn, each with its own environment EP1 … EPn, interacting through M(P,R)]
P : process (peer) name
R : role of P
M(P,R) : interaction model for P in role R
EP : environment of P
4
Simulation as Clause Rewriting
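The content of this slide was a figure that has not survived the transcript. As a rough, invented illustration of the idea, a clause can be simulated by rewriting its residual expression each time a message event occurs (the tuple encoding and message names below are assumptions, not the project's actual rewriter):

```python
# Toy clause rewriter: an expression is ("msg", m) for a single message
# exchange, ("then", e1, e2) for sequencing, or ("nil",) when finished.

def rewrite(expr, event):
    """Return the residual expression after consuming `event`, or None."""
    kind = expr[0]
    if kind == "msg":
        return ("nil",) if expr[1] == event else None
    if kind == "then":
        residual = rewrite(expr[1], event)
        if residual is None:
            return None
        return expr[2] if residual == ("nil",) else ("then", residual, expr[2])
    return None

clause = ("then", ("msg", "ask(price)"), ("msg", "tell(price)"))
step1 = rewrite(clause, "ask(price)")   # first rewrite leaves the reply
step2 = rewrite(step1, "tell(price)")   # second rewrite completes the clause
```

Each simulation step consumes one event and shrinks the clause, so a full run is just iterated rewriting until ("nil",) is reached.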
5
Ensuring Interactions are Available
∀R ∈ R(P) · ∃ (M(P,R) ∈ MP ∧ (i(M(P,R)) → ◊a(M(P,R))))

[Diagram: peer P selects from the interactions MP known to it; peers P1 … Pn with environments EP1 … EPn]
R(P) : roles P wants to undertake
MP : interactions known to P
i(M(P,R)) : M(P,R) is initiated
◊a(M(P,R)) : M(P,R) is completed successfully
6
Specific Question
  • Suppose that the same interaction patterns are
    being used repeatedly in overlapping peer groups.
  • To what extent can basic statistical information
    about success/failure of interaction models solve
    matchmaking problems?

See Deliverable 7.1 for a discussion of this question.
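One hedged way to read the question as code: peers log the outcome of each run of an interaction model, and matchmaking picks the model with the best smoothed success rate. All names here are illustrative, not taken from Deliverable 7.1:

```python
from collections import defaultdict

# model name -> [successes, failures]
outcomes = defaultdict(lambda: [0, 0])

def record(model, success):
    """Log one success/failure observation for an interaction model."""
    outcomes[model][0 if success else 1] += 1

def best_model(candidates):
    """Pick the candidate with the highest smoothed success rate."""
    def rate(m):
        s, f = outcomes[m]
        # Laplace smoothing avoids ranking an untried model at 0 or 1.
        return (s + 1) / (s + f + 2)
    return max(candidates, key=rate)

for ok in [True, True, False]:
    record("auction_v1", ok)
record("auction_v2", False)
print(best_model(["auction_v1", "auction_v2"]))  # auction_v1
```

With overlapping peer groups reusing the same patterns, such counts accumulate quickly, which is why even this crude statistic can act as a matchmaker.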
7
Consistency Peer - Interaction Model
A ∈ K(P) ∧ (B ∈ K(M(P,R)) ∨ ¬B ∈ K(M(P,R))) → σ(A ∧ B)

[Diagram: as before, now showing K(P) for peer P and K(M(P,R)) for the interaction model]
K(X) : knowledge derivable from X
σ(F) : F is consistent
8
Specific Question
  • Each interaction model imposes temporal
    constraints
  • Peers have deontic constraints
  • What sorts of properties required by peers (e.g.
    trust properties) or by interaction modellers
    (e.g. fairness properties) can we test using this
    information alone?

9
Example
  • In an auction, the auctioneer agent wants an
    interaction protocol that enforces truth telling
    on the bidders' side:
  • A ≡ bid(bidder,V) ∧ win(bidder,PV) ∧ bid(bidder,B) ∧ win(bidder,PB) ∧ B ≠ V → PB ≤ PV
  • where A ∈ K(P)
  • We would like to verify:
    A ∈ K(P) ∧ (B ∈ K(M(P,R)) ∨ ¬B ∈ K(M(P,R))) → σ(A ∧ B)
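Assuming PV and PB denote the bidder's payoffs when bidding the true valuation V and some other value B (this payoff reading is an assumption), the truth-telling property says deviation never pays more. A toy check over logged runs for one bidder, with invented data:

```python
def truth_telling_holds(valuation, runs):
    """runs: list of (bid, payoff) pairs for a single bidder.

    The property requires that no untruthful bid B != valuation
    ever yields a higher payoff than the truthful bid.
    """
    truthful = [p for b, p in runs if b == valuation]
    others = [p for b, p in runs if b != valuation]
    if not truthful:
        return True  # vacuous: the truthful bid was never tried
    return all(p <= min(truthful) for p in others)

ok = truth_telling_holds(10, [(10, 4), (12, 4), (8, 0)])   # deviations never beat 4
bad = truth_telling_holds(10, [(10, 2), (15, 5)])          # overbidding paid more
```

This checks the property on observed runs only; the slides instead verify it against all states of the interaction model, which is strictly stronger.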

10
Verifying σ(A ∧ B)
  • Verify M(P,R) satisfies A
  • Is A satisfied at state 1?
  • If the result is achieved, then terminate
  • else, go to the next state(s) and repeat
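The loop on this slide is essentially a breadth-first reachability check over the interaction's state space. A minimal sketch, with an invented successor relation standing in for the expanded interaction states:

```python
from collections import deque

def eventually(initial, successors, holds):
    """Search from `initial`: test the property at each state; if it does
    not yet hold, expand the successor states and repeat."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if holds(state):
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False  # property never achieved in the explored state space

# Hypothetical four-state interaction: 1 -> {2, 3}, 2 -> {4}
succ = {1: [2, 3], 2: [4], 3: [], 4: []}
reached = eventually(1, lambda s: succ.get(s, []), lambda s: s == 4)
```

Termination here relies on the state space being finite; the `seen` set prevents revisiting states when the transition graph has cycles.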

11
Property Checking Framework
12
Temporal Proof Rules
satisfies(E, tt) ← true
satisfies(E, F1 ∧ F2) ← satisfies(E, F1) ∧ satisfies(E, F2)
satisfies(E, F1 ∨ F2) ← satisfies(E, F1) ∨ satisfies(E, F2)
satisfies(E, ⟨A⟩F) ← ∃E′. trans(E, A, E′) ∧ satisfies(E′, F)
satisfies(E, [A]F) ← ∀E′. trans(E, A, E′) → satisfies(E′, F)
satisfies(E, µZ.F) ← satisfies(E, F)
satisfies(E, νZ.F) ← dual(F, F′) ∧ ¬satisfies(E, F′)
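A minimal sketch of how the fixpoint-free fragment of these rules might execute over an explicitly tabulated transition relation. The states, actions, and formula encoding are invented for illustration, and the µ/ν rules are omitted:

```python
# trans(E, A, E') tabulated as (state, action) -> list of next states
TRANS = {("e0", "a"): ["e1", "e2"], ("e1", "b"): ["e3"], ("e2", "b"): []}

def satisfies(e, f):
    """Check a formula f at state e, following the proof rules."""
    op = f[0]
    if op == "tt":
        return True
    if op == "and":
        return satisfies(e, f[1]) and satisfies(e, f[2])
    if op == "or":
        return satisfies(e, f[1]) or satisfies(e, f[2])
    if op == "dia":   # <A>F: some A-successor satisfies F
        return any(satisfies(e2, f[2]) for e2 in TRANS.get((e, f[1]), []))
    if op == "box":   # [A]F: every A-successor satisfies F
        return all(satisfies(e2, f[2]) for e2 in TRANS.get((e, f[1]), []))
    raise ValueError(f"unknown formula {f!r}")

some_path = satisfies("e0", ("dia", "a", ("dia", "b", ("tt",))))
all_paths = satisfies("e0", ("box", "a", ("box", "b", ("tt",))))
```

Note that `[A]F` is vacuously true at a state with no A-successors (`all` over an empty list), which matches the universally quantified rule.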
13
LCC Transition Rules
trans(E :: D, A, F) ← trans(D, A, F)
trans(E1 or E2, A, F) ← trans(E1, A, F) ∨ trans(E2, A, F)
trans(E1 then E2, A, E2) ← trans(E1, A, nil)
trans(E1 then E2, A, F then E2) ← trans(E1, A, F) ∧ F ≠ nil
trans(E1 par E2, A, F par E2) ← trans(E1, A, F)
trans(E1 par E2, A, E1 par F) ← trans(E2, A, F)
trans(M ⇐ P, in(M), nil) ← true
trans(M ⇒ P, out(M), nil) ← true
trans(E ← C, c(X), E) ← X ∈ C ∧ sat(X) ∧ sat(C)
trans(E ← C, A, F) ← A ≠ c(X) ∧ sat(C) ∧ trans(E, A, F)
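The or/then/par and message rules can be transcribed almost directly. The sketch below enumerates the one-step transitions of an expression; the tuple encoding is an assumption, and the clause and constraint rules are omitted:

```python
NIL = ("nil",)

def trans(e):
    """Yield (action, residual expression) pairs for one step of e."""
    kind = e[0]
    if kind == "out":        # M => P : send, then nothing remains
        yield ("out", e[1]), NIL
    elif kind == "in":       # M <= P : receive, then nothing remains
        yield ("in", e[1]), NIL
    elif kind == "or":       # either branch may move
        yield from trans(e[1])
        yield from trans(e[2])
    elif kind == "then":     # E1 moves; E2 starts once E1 is exhausted
        for a, f in trans(e[1]):
            yield a, (e[2] if f == NIL else ("then", f, e[2]))
    elif kind == "par":      # interleaving: either side may move
        for a, f in trans(e[1]):
            yield a, ("par", f, e[2])
        for a, f in trans(e[2]):
            yield a, ("par", e[1], f)

e = ("then", ("or", ("out", "m1"), ("in", "m2")), ("out", "m3"))
steps = list(trans(e))  # two possible first steps, same residual
```

Chaining `trans` with the `satisfies` rules of the previous slide gives the property checker: each modality queries exactly these one-step transitions.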
14
Consistency Peer - Peer
A ∈ K(P) ∧ Pi ∈ P(M(P,R)) ∧ B ∈ K(Pi) → σ(A ∧ B)

[Diagram: as before, now showing K(P) for peer P and K(P1) for peer P1]
P(M(P,R)) : peers involved in M(P,R)
15
Specific Question
  • Agents in open environments may have different
    ontologies
  • Guaranteeing complete mappings between them is
    infeasible (ontologies can be inconsistent, can
    cover different domains, etc)
  • Agents are interested in performing tasks:
    mapping is required only for the terms contextual
    to the interactions
  • Repetition of tasks provides the basis for
    modelling statistically the contexts of the
    interactions
  • To what extent can interaction models be used
    to focus the ontology mapping on the relevant
    sections of the ontology?

16
Approach
  • Predicting the possible content of a message
    before processing can help to focus the mapping
  • With no knowledge of the context and of the state
    of an interaction, a received message can be
    anything
  • the context can be used to guess the possible
    content of messages, filtering out unrelated
    elements
  • the guessed content is suggested to the ontology
    mapping engine
  • The entities in a received message mi(e1,...,en)
    are bound by the context of the interaction
  • some entities are specific to the interaction
    type (purchase, request of information,...),
  • the set of possible entities is bound by concepts
    previously introduced in the interaction,
  • different entities may appear in a specific
    message with different frequencies

17
Implementation
Two phases:
  • Creating the model
    • Entities appearing in messages are counted,
      obtaining their prior and conditional frequencies
    • Ontological relations between entities in
      different messages are checked and the verified
      relations are counted
  • Predicting the content of a message
    • When a message is received, the probability
      distribution for all the terms is computed using
      the collected information and the current state
      of the interaction
    • The most probable terms form the set of
      suggestions for the ontology mapping engine

The aim is to obtain the smallest possible set
that is most likely to contain the entities
actually used in the message.
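A toy sketch of the two phases, assuming a model that simply counts how often an entity follows a given context entity. The data and names are made up, and the real system also uses ontological relations and the full interaction state:

```python
from collections import Counter, defaultdict

# Phase 1 model: context entity -> Counter of entities seen next
cond = defaultdict(Counter)

def observe(context, entity):
    """Count one (context, next entity) co-occurrence."""
    cond[context][entity] += 1

def suggest(context, k):
    """Phase 2: the k most probable entities given the context."""
    return [e for e, _ in cond[context].most_common(k)]

history = [("ask_price", "laptop"), ("ask_price", "laptop"),
           ("ask_price", "phone"), ("greet", "hello")]
for ctx, ent in history:
    observe(ctx, ent)

print(suggest("ask_price", 2))  # ['laptop', 'phone']
```

Shrinking `k` trades coverage for focus, which is exactly the size-versus-success-rate trade-off evaluated on the following slides.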
18
Mapping Evaluation Framework
19
Testing
  • Interactions are abstract protocols, and agents
    have generated ontologies
  • allows us to simulate different types of
    relations between the messages
  • Community preferences over elements (best
    sellers, etc) are simulated by probability
    distributions
  • Interactions are run automatically hundreds of
    times
  • Results are compared with a uniform distribution
    of the entities (simulates no knowledge about
    context)
  • Equivalent size for same success rate
  • Equivalent success rate for same size of
    suggestion set

20
Provisional Results
  • After 100 interactions, the predictor is able to
    provide a set smaller than 7% of the ontology
    size that contains, 70% of the time, the term
    actually used in message m2
  • If all terms are equiprobable, the probability of
    a hit is directly proportional to the size of the
    (randomly picked) set, as shown above.
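The uniform baseline in the second bullet is easy to check: with N equiprobable terms, a randomly picked suggestion set of k terms scores a hit with probability k/N, so a set covering 7% of the ontology would succeed only about 7% of the time, against the reported 70%:

```python
# Uniform baseline: probability that a random k-term set contains the
# one term actually used, out of N equiprobable terms (N, k illustrative).
N, k = 100, 7
baseline = k / N
print(f"uniform baseline: {baseline:.0%} vs. 70% with the predictor")
```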

21
Consistency Peer - Environment
A ∈ K(P) ∧ B ∈ K(EP) → σ(A ∧ B)

[Diagram: as before, now showing K(P) for peer P and K(EP) for its environment]
22
Specific Question
  • Suppose we have a complex environment with
    adversarial agents
  • For specific goals, how complex do interaction
    models need to be in order to raise group
    performance significantly?

23
Environment Simulation Framework
[Diagram: a coordinating peer running the interaction model, connected to simulated agents inside an environment simulator]

Example hunter clause (LCC):

  a(hunter, Id) ::
    ( sawHimAt(Location) ⇒ a(hunter, RID)
        ← visiblePlayer(Location) and strafeAttempt(Location, Location) )
    or
    ( strafeAttempt(Location, Location)
        ← sawHimAt(Location) ⇐ a(hunter, RID) )
    or
    movementAttempt(random_play)

In words: you can be a hunter if you send a message revealing the location of a visible opponent player upon whom you are making a strafing attack, or make a strafing attack on a location if you have been told a player is there, or otherwise just do what seems right.

[Charts: comparative performance and group convergence, random vs. coordinated strategies]