AAMAS08 Tutorial 2: Computational Trust and Reputation Models
Dr. Guillaume Muller, Dr. Laurent Vercouter

Transcript and Presenter's Notes
1
AAMAS08 Tutorial 2: Computational Trust and Reputation Models
Dr. Guillaume Muller, Dr. Laurent Vercouter
7th International Conference on Autonomous Agents and Multi-Agent Systems
2
Dr. Laurent Vercouter
G2I, Division for Industrial Engineering and Computer Sciences
EMSE, École des Mines de Saint-Étienne
Dr. Guillaume Muller
MAIA, Intelligent Autonomous Machines, INRIA
LORIA, Laboratory of IT Research and its Applications
3
Presentation outline
  • Motivation
  • Approaches to control the interaction
  • Some definitions
  • The computational perspective
  • Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret
  • ART
  • The testbed
  • An example

4
Motivation
5
What we are talking about...
Mr. Yellow
6
What we are talking about...
Two years ago...
Trust based on...
Direct experiences
Mr. Yellow
7
What we are talking about...
Trust based on...
Third party information
Mr. Yellow
8
What we are talking about...
Trust based on...
Third party information
Mr. Yellow
9
What we are talking about...
Trust based on...
Reputation
Mr. Yellow
10
What we are talking about...
Mr. Yellow
11
What we are talking about...
12
Advantages of trust and reputation mechanisms
  • Each agent is a norm enforcer and is also under
    surveillance by the others. No central authority
    needed.
  • Their nature allows them to reach where laws and
    central authorities cannot.
  • Punishment is usually based on ostracism.

13
Problems of trust and reputation mechanisms
  • Bootstrap problem.
  • Exclusion must be a punishment for the outsider.
  • Not all kinds of environments are suitable for
    these mechanisms.

14
Approaches to control the interaction
15
Different approaches to control the interaction
Security approach
16
Different approaches to control the interaction
  • Security approach

Agent identity validation. Integrity,
authenticity of messages. ...
I'm Alice
17
Different approaches to control the interaction
Institutional approach
Security approach
18
Different approaches to control the interaction
  • Institutional approach

19
Different approaches to control the interaction
Social approach
Institutional approach
Security approach
20
Example P2P systems
23
Different approaches to control the interaction
Social approach
Trust and reputation mechanisms are at this level.
Institutional approach
Security approach
They are complementary and cover different
aspects of interaction.
24
Definitions
25
Trust
Some statements we like:
"Trust begins where knowledge ends: trust provides
a basis for dealing with uncertain, complex, and
threatening images of the future." [Luhmann, 1979]
"Trust is the outcome of observations leading to
the belief that the actions of another may be
relied upon, without explicit guarantee, to
achieve a goal in a risky situation." [Elofson, 2001]
"There are no obvious units in which trust can be
measured." [Dasgupta, 2000]
26
Trust
  • There are many ways of considering Trust.
  • Trust as Encapsulated Interest Russell Hardin,
    2002

I trust you because I think it is in your
interest to take my interests in the relevant
matter seriously. And this is because you value
the continuation of our relationship. You
encapsulate my interests in your own interests.
27
Trust
  • There are many ways of considering Trust.
  • Instant trust

Trust is only a matter of the characteristics of
the trusted, characteristics that are not
grounded in the relationship between the truster
and the trusted. Example
Rug merchant in a bazaar
28
Trust
  • There are many ways of considering Trust.
  • Trust as Moral

Trust is expected, and distrust or lack of trust
is seen as a moral fault. "One might argue that
to act as though I do trust someone who is not
evidently (or not yet) trustworthy is to
acknowledge the person's humanity and
possibilities or to encourage the person's
trustworthiness." [Russell Hardin, 2002]
29
Trust
  • There are many ways of considering Trust.
  • Trust as Noncognitive

Trust based on affects, emotions... "To say that
we trust another in a noncognitive way is to
say that we are disposed to be trustful of them
independently of our beliefs or expectations about
their trustworthiness." [Becker, 1996]
  • Trust as Ungrounded Faith

Notice here there is a power relation between the
truster and the trusted.
  • Example
  • infant towards her parents
  • follower towards his leader

30
Trust
There are many ways of considering Trust. And
therefore, many definitions of Trust. A "conceptual
morass" [Barber, 83], a "confusing pot-pourri"
[Shapiro, 87]. Just leave this to philosophers,
psychologists and sociologists...
...but let's keep an eye on it.
31
Reputation
  • Some definitions
  • The estimation of the consistency over time of
    an attribute or entity Herbig et al.
  • Information that individuals receive about the
    behaviour of their partners from third parties
    and that they use to decide how to behave
    themselves Buskens, Coleman...
  • The expectation of future opportunities arising
    from cooperation Axelrod, Parkhe
  • The opinion others have of us

32
Computational perspective
33
Computational trust
  • Castelfranchi & Falcone make a clear distinction
    between
  • Trust as an evaluative belief
  • A truster agent believes that the trustee is
    trustworthy
  • e.g. I believe that my doctor is a good surgeon
  • Trust as a mental attitude
  • A truster agent relies on a trustee for a given
    behaviour
  • e.g. I accept that my doctor performs a surgical
    operation on me

34
Trust as a belief
  • A truster i trusts a trustee j to do an action α
    in order to achieve a goal φ [Castelfranchi &
    Falcone]
  • Agent i has the goal φ
  • Internal attribution of trust
  • i believes that j intends to do α
  • External attribution of trust
  • i believes that j is capable of doing α
  • i believes that j has the power to achieve φ by
    doing α
  • The goal component can be generalized to consider
    norm-obedience. [Demolombe & Lorini]

35
Occurrent trust
  • Occurent trust happens when a truster believes
    that the trustee is going to act here and now
    Herzig et al, 08.

OccTrust(i, j, ?, ?) Goal(i, ?) ? Believes(i,
OccCap(j, ?)) ? Believes(i, OccPower(j, ? , ?))
? Believes(i, OccIntends(j, ?))
36
Dispositional trust
  • Dispositional trust happens when a truster
    believes that the trustee is going to act
    whenever some conditions are satisfied Herzig et
    al, 08.

DispTrust(i, j, α, φ) ≡ PotGoal(i, φ)
∧ Believes(i, CondCap(j, α)) ∧ Believes(i,
CondPower(j, α, φ)) ∧ Believes(i,
CondIntends(j, α))
37
Trust and delegation
  • Trust (as a belief) can lead to delegation, when
    the truster i relies on the trustee j.
  • Weak delegation
  • j is not aware of the fact that i is exploiting
    his action
  • Strong delegation
  • i elicits or induces j's expected behaviour to
    exploit it
  • There can be trust without delegation
    (insufficient trust, prohibitions)
  • There can be delegation without trust (no
    information, obligations)

38
Computational reputation
  • Reputation adds a collective dimension to the
    truster.
  • Reputation is an objective social property that
    emerges from a propagating cognitive
    representation [Conte & Paolucci]. This
    definition includes both
  • a process of transmitting a target's image
  • a cognitive representation resulting from this
    propagation

39
The Functional Ontology of Reputation [Casare &
Sichman, 05]
  • The Functional Ontology of Reputation (FORe) aims
    at defining standard concepts related to
    reputation
  • FORe includes
  • Reputation processes
  • Reputation types and natures
  • Agent roles
  • Common knowledge (information sources, entities,
    time)
  • Facilitate the interoperability of heterogeneous
    reputation models

40
Reputation processes
  • Reputation transmission / reception
  • An agent sends/receives reputation information
    to/from another one
  • Reputation evaluation
  • Production of a reputation measurement that can
    contain several valued attributes (content
    evaluation) or an unexplained estimation (esteem
    level). Values can be quantitative or
    qualitative.
  • Reputation maintenance
  • The reputation alterations over time, which can
    take into account the incremental impact of
    agents' behavior (aggregation) or the history of
    behaviors (historical process)

41
Agent roles [Conte & Paolucci, 02]
42
Reputation types [Mui, 02]
  • Primary reputation
  • Direct reputation
  • Observed reputation
  • Secondary reputation
  • Collective reputation
  • Propagated reputation
  • Stereotyped reputation

43
What is a good trust model?
  • A good trust model should be [Fullam et al, 05]
  • Accurate
  • provide good predictions
  • Adaptive
  • evolve according to the behaviour of others
  • Quickly converging
  • quickly compute accurate values
  • Multi-dimensional
  • Consider different agent characteristics
  • Efficient
  • Compute in reasonable time and cost

44
Why use a trust model in a MAS?
Bob
  • Trust models allow
  • Identifying and isolating untrustworthy agents

45
Why use a trust model in a MAS?
  • Trust models allow
  • Identifying and isolating untrustworthy agents
  • Evaluating an interaction's utility

I can sell you the information you require
Bob
46
Why use a trust model in a MAS?
  • Trust models allow
  • Identifying and isolating untrustworthy agents
  • Evaluating an interaction's utility
  • Deciding whether and with whom to interact

I can sell you the information you require
Charles
I can sell you the information you require
Bob
47
Presentation outline
  • Motivation
  • Approaches to control the interaction
  • Some definitions
  • The computational perspective
  • Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret
  • ART
  • The testbed
  • Example

48
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

49
OpenPGP model [Abdul-Rahman, 97]
  • Context replace the centralized Trusted
    Authorities in Public Key management

[Figure: a Certification Authority certifies Bob's ID and public key;
Alice trusts the Authority and can verify the message Bob signs with
his key]
50
OpenPGP model [Abdul-Rahman, 97]
  • Context replace the centralized Trusted
    Authorities in Public Key management

[Figure: in the Web of Trust, trusted introducers replace the central
Authority: users certify each other's keys and exchange signed messages]
51
OpenPGP model [Abdul-Rahman, 97]
  • 2 kinds of trust
  • Tc: trust in the certificate, in {undefined,
    marginal, complete}
  • Ti: trust as an introducer, in {untrustworthy,
    marginal, full, don't know}
  • OpenPGP computes validity based on transitivity
    along all existing paths
  • ≥ X complete OR ≥ Y marginal ⇒ c (complete)
  • otherwise, > 0 marginal (or complete) ⇒ m (marginal)
  • Humans set all Ti, some Tc, and take decisions
  • Parameters are
  • X: min. number of complete signatures
  • Y: min. number of marginal signatures
  • length of the trust paths

[Figure: Web of Trust with certification links labelled Tc (trust in
the certificate) and introducer links labelled Ti (trust as an
introducer)]
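
As an illustration of the validity rule above, here is a minimal Java
sketch (not the actual OpenPGP/GnuPG implementation; the type names are
hypothetical):

    import java.util.List;

    // Sketch of the OpenPGP-style validity rule described above; class and
    // enum names are illustrative, not part of any real OpenPGP API.
    public class KeyValidity {
        enum IntroducerTrust { UNTRUSTWORTHY, MARGINAL, FULL, DONT_KNOW }
        enum Validity { UNDEFINED, MARGINAL, COMPLETE }

        // x = min. number of fully trusted signers, y = min. number of marginal ones
        static Validity validity(List<IntroducerTrust> signers, int x, int y) {
            long full = signers.stream().filter(s -> s == IntroducerTrust.FULL).count();
            long marginal = signers.stream().filter(s -> s == IntroducerTrust.MARGINAL).count();
            if (full >= x || marginal >= y) return Validity.COMPLETE; // enough signatures: c
            if (full + marginal > 0) return Validity.MARGINAL;        // some signatures: m
            return Validity.UNDEFINED;
        }
    }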
52
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

53
Marsh's model [Marsh, 94]
  • Context: collaborative work
  • Addresses only direct interactions, does not
    consider gossip
  • Two kinds of trust
  • General Trust Tx(y): trust of x in y in general
  • Situational Trust Tx(y, αx): contextualized
    trust
  • Trust is modelled as a probability, in fact a
    value in [0, 1)
  • Computation
  • Tx(y) = average of the Tx(y, αx) over all
    possible contexts
  • Tx(y, αx) = Ux(αx) × Ix(αx) × T̂x(y), where Ux is
    the utility and Ix the importance of the situation
  • Decision to trust
  • Tx(y, αx) ≥ CooperThresholdx(αx) ⇒
    WillCooper(x, y, αx)
  • CooperThreshold depends on the risks, perceived
    competence, importance
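
The computation above reads directly as code; this Java sketch assumes
the utility Ux(αx), importance Ix(αx) and general trust T̂x(y) are
already known for the situation:

    // Minimal sketch of Marsh's situational trust and cooperation decision.
    public class MarshTrust {
        // Tx(y, ax) = Ux(ax) * Ix(ax) * GeneralTrust_x(y)
        static double situationalTrust(double utility, double importance, double generalTrust) {
            return utility * importance * generalTrust;
        }

        // WillCooper(x, y, ax) holds iff situational trust reaches the threshold
        static boolean willCooperate(double situationalTrust, double cooperationThreshold) {
            return situationalTrust >= cooperationThreshold;
        }
    }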

54
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

55
eBay model
  • Context e-commerce
  • Model oriented to support trust between buyer
    and seller
  • Buyer has no physical access to the product of
    interest
  • Seller or buyer may decide not to commit the
    transaction
  • Centralized all information remains on eBay
    Servers

56
eBay model
  • Buyers and sellers evaluate each other after
    transactions
  • The evaluation is not mandatory and will never
    be removed
  • evaluation = a comment + a rating
  • comment: a line of text
  • rating: a numeric evaluation in {-1, 0, +1}
  • Each eBay member has a reputation (feedback
    score) that is the summation of the numerical
    evaluations.

57
eBay model
58
eBay model
59
eBay model
  • Specifically oriented to scenarios with the
    following characteristics
  • A lot of users (we are talking about millions)
  • Few chances of repeating interactions with the
    same partner
  • Human oriented
  • Considers reputation as a global property and
    uses a single value that is not dependent on the
    context.
  • A great number of opinions that dilutes false
    or biased information is the only way to increase
    the reliability of the reputation value.

60
eBay model
  • Advantages
  • Used everyday
  • In a real life application
  • Very simple
  • Limits [Dellarocas, 00; 01] [Steiner, 03]
  • Fear of reciprocity
  • What is the semantics of a high reputation?
  • Problem of electronic commerce: change of
    identity
  • The efficiency relies mainly on the textual
    comments
  • Few public papers, evolves frequently

61
OnSale model
  • OnSale specializes in computer-related goods
  • Newcomers
  • OnSale: no reputation
  • eBay: zero feedback points (lowest reputation)
  • Bidders
  • OnSale: not rated at all, register with a credit
    card
  • eBay: are rated, used internally; eBay bought
    PayPal

62
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

63
SPORAS & HISTOS [Zacharia et al., 99]
  • Context: e-commerce, similar to eBay
  • Reputations are faceted: an individual may enjoy
    a very high reputation in one domain, while she
    has a low reputation in another.
  • Two models are proposed
  • Sporas works even with few ratings
  • Histos assumes abundance of ratings
  • Deterrent for agents to change their IDs
  • Reputation can decrease, but it will never fall
    below a newcomer's value
  • A low-reputed agent can improve its status at
    the same rate as a beginner
  • Ratings given by users with a high reputation
    are weighted more
  • Measure against end-of-game strategies
  • Reputation values are not allowed to increase ad
    infinitum

64
SPORAS
1. Reputations are in [0, 3000]. Newcomers start at
0. Ratings are in [0.1, 1]. 2. Reputations never get
below 0, even in the case of very bad
behaviours. 3. After each rating the reputation
is updated. 4. Two users may rate each other only
once; with more than one interaction, only the most
recent rating is considered. 5. Higher reputations
are updated more moderately.

R_t = R_{t-1} + (1/θ) · Φ(R_{t-1}) · R_rater · (W_t − R_{t-1}/D)

where θ is the memory of the system, Φ the damping
factor, R_rater the reputation of the rater, W_t the
current rating, R_{t-1} the previous reputation and
R_{t-1}/D the normalized previous reputation.
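
A Java sketch of this update rule; the shape of the damping function Φ
and the parameter values are assumptions for illustration:

    // Sketch of the SPORAS update; THETA and SIGMA values are assumptions.
    public class Sporas {
        static final double THETA = 10.0;   // memory of the system (assumed value)
        static final double D = 3000.0;     // maximum reputation
        static final double SIGMA = 300.0;  // damping slope (assumed value)

        // Damping factor: near 1 for low reputations, near 0 close to D,
        // so higher reputations are updated more moderately (rule 5).
        static double damping(double r) {
            return 1.0 - 1.0 / (1.0 + Math.exp(-(r - D) / SIGMA));
        }

        // R_t = R_{t-1} + (1/THETA) * damping(R_{t-1}) * raterRep * (rating - R_{t-1}/D)
        static double update(double previous, double raterRep, double rating) {
            double next = previous
                    + (1.0 / THETA) * damping(previous) * raterRep * (rating - previous / D);
            return Math.max(next, 0.0); // reputations never get below 0 (rule 2)
        }
    }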
65
Histos
  • Aim: compute a global personalized reputation
    (PRp) value for each member
  • personalized reputation is computed by
    transitivity
  • Find all directed paths from A to A_L with length
    ≤ N
  • Keep only the most recent ones (up to a fixed
    number)
  • Start a backward recursion
  • If path length = 1, PRp = rating
  • If path length > 1, PRp = f(rater's PRp, rating)
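
A minimal Java sketch of the backward recursion; the path
representation and the aggregation function f (taken here as a simple
product) are assumptions:

    import java.util.List;

    // Sketch of the Histos backward recursion over one rating path.
    public class Histos {
        record Edge(String rater, double rating) {}

        // path = chain of ratings from the evaluator towards the target
        static double personalizedReputation(List<Edge> path) {
            Edge last = path.get(path.size() - 1); // rating closest to the target
            if (path.size() == 1) {
                return last.rating();              // path length 1: PRp = rating
            }
            // PRp of the last rater is computed recursively on the rest of the path
            double raterPRp = personalizedReputation(path.subList(0, path.size() - 1));
            return raterPRp * last.rating();       // f(rater's PRp, rating), assumed multiplicative
        }
    }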

66
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

67
Trust Net [Schillo & Funk, 99]
  • Model designed to evaluate the agents' honesty
  • Completely decentralized
  • Applied in a game-theory context: the Iterated
    Prisoner's Dilemma (IPD)
  • Each agent announces its strategy and chooses an
    opponent according to its announced strategy
  • If an agent does not follow the strategy it
    announced, its opponent decreases its reputation
  • The trust value of agent A towards agent B is
  • T(A, B) = number of honest rounds / total number
    of rounds

68
Trust Net [Schillo & Funk, 99]
  • Agents can communicate their trust values to
    speed up the convergence of trust models
  • An agent can build a Trust Net of trust values
    transmitted by witnesses
  • The final trust value of an agent towards another
    aggregates direct experiences and testimonies
    with a probabilistic function on the lying
    behaviour of witnesses, which reduces the
    correlated evidence problem.

[Figure: example Trust Net — edges weighted with trust values]
  • Binary evaluation
  • Announced behaviour
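
The direct trust value has an immediate implementation; witness
aggregation with the probabilistic lying model is omitted in this
sketch:

    // Sketch of the TrustNet direct trust value.
    public class TrustNet {
        // T(A, B) = number of honest rounds / total number of rounds
        static double directTrust(int honestRounds, int totalRounds) {
            return totalRounds == 0 ? 0.0 : (double) honestRounds / totalRounds;
        }
    }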

69
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

70
Fuzzy models Rehák, 05
  • Trust modelled as a type-2 fuzzy set
  • Iterative building of the fuzzy set
  • Estimate the subjective utility of the
    cooperation
  • Compute the rating of 1 agent based on this
    utility
  • Flat
  • Proportional distribution: (utility of A) /
    (utility of the average agent)
  • Fuzzy-set membership function on sets of
    ratings

71
Fuzzy models Rehák, 05
Trust Decision
72
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

73
The LIAR model [Muller & Vercouter, 08]
  • Model designed for the control of communications
    in a P2P network
  • Completely decentralized
  • Applied to a peer-to-peer protocol for query
    routings
  • The global functioning of a P2P network relies
    on the expected behaviour of several nodes (or
    agents)
  • Agents' behaviour must be regulated by a social
    control [Castelfranchi, 00]

74
The LIAR model [Muller & Vercouter, 07]
81
LIAR: Social control of agent communications
[Figure: the social control loop — Interactions are represented as
social commitments, checked against a definition of acceptability
(social norms), leading to trust intentions (reputations) and
sanctions]
82
LIAR: Social commitments and norms
[Figure: a social commitment records its debtor (sender), creditor
(receiver), observer, content, utterance time and state]
83
LIAR Social commitments and norms
[Figure: a social norm records a prohibition with its targets,
evaluators, punishers, condition and content]
84
The LIAR agent architecture
[Figure: the LIAR agent architecture, linking observed Interactions to
Reputations]
85
LIAR partial observation
[Figure: agent A sends inform(p) to agent B; agents C and D only
partially observe this interaction]
86
LIAR partial observation
[Figure: the commitment stores CS_AB of agents A, B, C, D now contain
sc(A, B, 8pm, 8pm-9pm, active, p)]
87
LIAR partial observation
[Figure: A sends cancel(p); the commitment sc(A, B, ...) is updated to
cancelled in the observers' commitment stores CS_AB]
88
Detection of violations
[Figure: detection of violations — from observations (ob), the
evaluator updates social commitments, generates and evaluates social
policies, and, on each "proof received" iteration, runs a
Justification Protocol with the propagator]
89
Reputation types in LIAR
  • Rp_target: (facet, dimension, time) → [-1, 1]
    ∪ {unknown}
  • 7 different roles
  • target
  • participant
  • observer
  • evaluator
  • punisher
  • beneficiary
  • propagator
  • 5 reputation types based on
  • direct interaction
  • indirect interaction
  • recommendation about observation
  • recommendation about evaluation
  • recommendation about reputation

90
Reputation computation
  • Direct Interaction based Reputation
  • separate the social policies according to their
    state
  • associate a penalty to each set
  • reputation = weighted average of the penalties
  • Reputation Recommendation based Reputation
  • based on trusted recommendations
  • reputation = weighted average of received values,
    weighted by the reputation of the punisher
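
A Java sketch of the two computations; the penalty values and the use
of NaN to stand for "unknown" are assumptions for illustration:

    import java.util.List;
    import java.util.Map;

    // Sketch of the two LIAR reputation computations described above.
    public class LiarReputation {
        // Direct Interaction based Reputation: weighted average of the
        // penalties associated with each social-policy state.
        // counts: state -> number of social policies in that state
        // penalties: state -> penalty in [-1, 1] for that state (assumed values)
        static double directInteractionRp(Map<String, Integer> counts,
                                          Map<String, Double> penalties) {
            double weighted = 0.0;
            int total = 0;
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                weighted += e.getValue() * penalties.getOrDefault(e.getKey(), 0.0);
                total += e.getValue();
            }
            return total == 0 ? Double.NaN : weighted / total; // NaN plays "unknown"
        }

        record Recommendation(double value, double punisherReputation) {}

        // Reputation Recommendation based Reputation: average of received
        // values, weighted by the reputation of each punisher.
        static double recommendationRp(List<Recommendation> recs) {
            double num = 0.0, den = 0.0;
            for (Recommendation r : recs) {
                num += r.value() * r.punisherReputation();
                den += r.punisherReputation();
            }
            return den == 0.0 ? Double.NaN : num / den;
        }
    }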

91
LIAR decision process
[Figure: the LIAR decision process — the reputation types DIbRp,
ObsRcbRp, EvRcbRp and RpRcbRp are combined with GDtT to derive a trust
or distrust intention; a value of (unknown) is treated as not relevant
or not discriminant]
92
LIAR conclusion
  • LIAR is adapted to P2P infrastructures
  • Partial observations/incomplete information
  • Scalable
  • Applied in a Gnutella-like network where malicious
    nodes are excluded
  • LIAR is fine-grained
  • Different types of reputation maintained
    separately
  • multi-facet and multi-dimension
  • LIAR covers the whole loop of social control,
    from the evaluation of a single behaviour to the
    decision to act in trust.

93
Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • Sporas Histos
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret

94
ReGreT
What is the ReGreT system? It is a modular trust
and reputation system oriented to complex
e-commerce environments where social relations
among individuals play an important role.
95
The ReGreT system
[Figure: ReGreT architecture — the ODB, IDB and SDB databases feed the
Direct Trust, Witness reputation, Neighbourhood reputation and System
reputation modules; witness information is weighted by the Credibility
module, and the Reputation model combines everything into the final
Trust]
97
Outcomes and Impressions
  • Outcome
  • The initial contract
  • to take a particular course of action
  • to establish the terms and conditions of a
    transaction
  • AND
  • The actual result of the contract.

Example
Contract: Price_c = 2000, Quality_c = A, Quantity_c = 300
Fulfillment: Price_f = 2000, Quality_f = C, Quantity_f = 295
98
Outcomes and Impressions
99
Outcomes and Impressions
  • Impression
  • The subjective evaluation of an outcome from a
    specific point of view.

Outcome:
Contract: Price_c = 2000, Quality_c = A, Quantity_c = 300
Fulfillment: Price_f = 2000, Quality_f = C, Quantity_f = 295
102
Witness reputation
  • Reputation that an agent builds on another agent
    based on the beliefs gathered from society
    members (witnesses).
  • Problems of witness information
  • Can be false.
  • Can be incomplete.
  • Correlated evidence problem Pearl, 88.
  • Functioning
  • Find Witnesses
  • Direct relation with target
  • Use of sociograms (cut-points and central points)
  • Weight each recommendation with the credibility
  • Advantages
  • Minimizes the correlated evidence problem.
  • Reduces the number of queries

103
Credibility model
  • Two methods are used to evaluate the credibility
    of witnesses

[Figure: the credibility model (witnessCr)]
105
Neighbourhood reputation
  • The trust in the agents that are in the
    neighbourhood of the target agent, and their
    relations with it, are the elements used to
    calculate what we call the Neighbourhood
    reputation.

ReGreT uses fuzzy rules to model this reputation:
IF DT(a, b) is X AND coop(b, c) is low THEN
NR(a, c) is X
IF DT(a, b) is X AND coop(b, c) is Y
THEN NR(a, c) is T(X, Y)
107
System reputation
  • The idea behind the System reputation is to use
    the common knowledge about social groups and the
    role that the agent is playing in the society as
    a mechanism to assign reputation values to other
    agents.
  • The knowledge necessary to calculate a system
    reputation is usually inherited from the group or
    groups to which the agent belongs.

108
Trust decision
  • If the agent has a reliable direct trust value,
    it will use that as a measure of trust. If that
    value is not so reliable then it will use
    reputation.
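
The decision can be sketched as a simple fallback; the reliability
threshold is an assumed parameter:

    // Sketch of the ReGreT trust decision: use direct trust when its
    // reliability is high enough, otherwise fall back on reputation.
    public class TrustDecision {
        static double trust(double directTrust, double directTrustReliability,
                            double reputation, double reliabilityThreshold) {
            return directTrustReliability >= reliabilityThreshold ? directTrust : reputation;
        }
    }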

109
Conclusions
  • Computational trust and reputation models are an
    essential part of autonomous social agents. It is
    not possible to talk about social agents without
    considering trust and reputation.
  • Current trust and reputation models are still
    far from covering the necessities of an
    autonomous social agent.
  • We have to change the way the trust and
    reputation system is considered in the agent
    architecture.

110
Conclusions
  • Tight integration with the rest of the modules
    of the agent, and proactivity, are necessary to
    transform the trust and reputation system into a
    useful tool able to handle the kinds of
    situations a real social agent will face in
    virtual societies.
  • To achieve that, more collaboration with other
    artificial intelligence areas is needed.

111
Presentation outline
  • Motivation
  • Approaches to control the interaction
  • Some definitions
  • The computational perspective
  • Computational trust and reputation models
  • OpenPGP
  • Marsh
  • eBay/OnSale
  • SPORAS HISTOS
  • TrustNet
  • Fuzzy Models
  • LIAR
  • ReGret
  • ART
  • The testbed
  • Example

112
The Agent Reputation and Trust Testbed
113
Motivation
  • Trust in MAS is a young field of research,
    experiencing breadth-wise growth
  • Many trust-modeling technologies
  • Many metrics for empirical validation
  • Lack of unified research direction
  • No unified objective for trust technologies
  • No unified performance metrics and benchmarks

114
An Experimental and Competition Testbed
  • Presents a common challenge to the research
    community
  • Facilitates solving of prominent research
    problems
  • Provides a versatile, universal site for
    experimentation
  • Employs well-defined metrics
  • Identifies successful technologies
  • Matures the field of trust research
  • Utilizes an exciting domain to attract attention
    of other researchers and the public

115
The ART Testbed
  • A tool for
  • Experimentation: researchers can perform
    easily-repeatable experiments in a common
    environment against accepted benchmarks
  • Competitions: trust technologies compete against
    each other; the most promising technologies are
    identified

116
Testbed Game Rules
Agents function as art appraisers with varying
expertise in different artistic eras.
For a fixed price, clients ask appraisers to
provide appraisals of paintings from various eras.
If an appraiser is not very knowledgeable about a
painting, it can purchase "opinions" from other
appraisers.
Appraisers can also buy and sell reputation
information about other appraisers.
Appraisers whose appraisals are more accurate
receive larger shares of the client base in the
future.
Appraisers compete to achieve the highest
earnings by the end of the game.
117
Step 1: Client and Expertise Assignments
  • Appraisers receive clients who pay a fixed price
    to request appraisals
  • Client paintings are randomly distributed across
    eras
  • As game progresses, more accurate appraisers
    receive more clients (thus more profit)

118
Step 2: Reputation Transactions
  • Appraisers know their own level of expertise for
    each era
  • Appraisers are not informed (by the simulation)
    of the expertise levels of other appraisers
  • Appraisers may purchase reputations, for a fixed
    fee, from other appraisers
  • Reputations are values between zero and one
  • Might not correspond to the appraiser's internal
    trust model
  • Serves as a standardized format for inter-agent
    communication

119
Step 2: Reputation Transactions
Requester sends a request message to a potential
reputation provider, identifying the appraiser whose
reputation is requested
Provider
Requester
  • Potential reputation provider sends accept
    message

Requester sends fixed payment to the provider
Provider sends reputation information, which may
not be truthful
120
Step 3: Certainty & Opinion Transactions
  • For a single painting, an appraiser may request
    opinions (each at a fixed price) from as many
    other appraisers as desired
  • The simulation generates opinions about
    paintings for opinion-providing appraisers
  • Accuracy of an opinion is proportional to the
    opinion provider's expertise for the era and the
    cost it is willing to pay to generate the opinion
  • Appraisers are not required to truthfully reveal
    opinions to requesting appraisers

121
Step 3: Certainty & Opinion Transactions
Potential provider sends a certainty assessment
about the opinion it can provide for this era:
a real number in (0, 1); it is not required to
truthfully report this certainty assessment
Requester sends certainty request message to
potential providers, identifying an era
Provider
Requester
Requester sends opinion request messages to
potential providers, identifying a painting
Provider sends the opinion, which may not be
truthful, and receives a fixed payment
122
Step 4: Appraisal Calculation
  • Upon paying providers and before receiving
    opinions, the requesting appraiser submits to the
    simulation a weight (self-assessed reputation)
    for each other appraiser
  • Simulation collects opinions sent to appraiser
    (appraisers may not alter weights or received
    opinions)
  • Simulation calculates final appraisal as
    weighted average of received opinions
  • The true value of the painting and the calculated
    final appraisal are revealed to the appraiser
  • Appraiser may use revealed information to revise
    trust models of other appraisers
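
The weighted-average computation performed by the simulation can be
sketched as follows (the Opinion type is illustrative, not the testbed
API):

    import java.util.List;

    // Sketch of the final appraisal: weighted average of received opinions,
    // using the weights submitted by the requesting appraiser.
    public class AppraisalCalculation {
        record Opinion(double appraisedValue, double weight) {}

        static double finalAppraisal(List<Opinion> opinions) {
            double num = 0.0, den = 0.0;
            for (Opinion o : opinions) {
                num += o.weight() * o.appraisedValue();
                den += o.weight();
            }
            return den == 0.0 ? 0.0 : num / den;
        }
    }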

123
Analysis Metrics
  • Agent-Based Metrics
  • Money in bank
  • Average appraisal accuracy
  • Consistency of appraisal accuracy
  • Number of each type of message passed
  • System-Based Metrics
  • System aggregate bank totals
  • Distribution of money among appraisers
  • Number of messages passed, by type
  • Number of transactions conducted
  • Evenness of transaction distribution across
    appraisers

124
Conclusions
  • The ART Testbed provides a tool for both
    experimentation and competition
  • Promotes solutions to prominent trust research
    problems
  • Features desirable characteristics that
    facilitate experimentation

125
An example of using ART
  • Building an agent
  • creating a new agent class
  • strategic methods
  • Running a game
  • designing a game
  • running the game
  • Viewing the game
  • Running a game monitor interface

126
Building an agent for ART
  • An agent is described by 2 files
  • a Java class (MyAgent.java)
  • must be in the testbed.participant package
  • must extend the testbed.agent.Agent class
  • an XML file (MyAgent.xml)
  • only specifying the agent Java class in the
    following way:

    <agentConfig>
      <classFile>
        c:\ARTAgent\testbed\participants\MyAgent.class
      </classFile>
    </agentConfig>

127
Strategic methods of the Agent class (1)
  • For the beginning of the game
  • initializeAgent()
  • To prepare the agent for a game
  • For reputation transactions
  • prepareReputationRequests()
  • To request reputation information (gossip) from
    other agents
  • prepareReputationAcceptsAndDeclines()
  • To accept or refuse requests
  • prepareReputationReplies()
  • To reply to confirmed requests

128
Strategic methods of the Agent class (2)
  • For certainty transactions
  • prepareCertaintyRequests()
  • To ask other agents for their certainty about eras
  • prepareCertaintyReplies()
  • To announce its own certainty about eras to
    requesters
  • For opinion transactions
  • prepareOpinionRequests()
  • To request opinions from other agents
  • prepareOpinionCreationOrders()
  • To produce evaluations of paintings
  • prepareOpinionReplies()
  • To reply to confirmed requests
  • prepareOpinionProviderWeights()
  • To weight the opinion of other agents

129
The strategy of this example agent
  • We will implement an agent with a very simple
    reputation model
  • It associates a reputation value to each other
    agent (initialized at 1.0)
  • It only sends opinion requests to agents with
    reputation > 0.5
  • No reputation requests are sent
  • If an appraisal from another agent differs from
    the real value by less than 50%, its reputation
    is increased by 0.03
  • Otherwise it is decreased by 0.03
  • If our agent receives an opinion request from
    another agent with a reputation less than 0.5, it
    provides a bad appraisal (cheaper)
  • Otherwise its appraisal is honest

130
Initialization
The agent class is extended
Reputation values are assigned to every agent
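
The code screenshot from the original slide is not reproduced here; the
following is a minimal sketch of what it describes. How other agents
are enumerated is a testbed API detail, so a plain collection is
assumed:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of MyAgent's initialization; in the real testbed the class
    // extends testbed.agent.Agent and overrides initializeAgent().
    public class MyAgent {
        final Map<String, Double> reputations = new HashMap<>();

        public void initializeAgent(Iterable<String> otherAgentNames) {
            for (String name : otherAgentNames) {
                reputations.put(name, 1.0); // every agent starts at reputation 1.0
            }
        }
    }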
131
Opinion requests
Opinion requests are only sent to agents with a
reputation over 0.5
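
Continuing the MyAgent sketch above, the selection of opinion providers
(the actual request construction in prepareOpinionRequests() is
omitted):

    // Inside MyAgent: keep only providers with reputation above 0.5.
    java.util.List<String> chooseOpinionProviders(java.util.List<String> candidates) {
        java.util.List<String> targets = new java.util.ArrayList<>();
        for (String agent : candidates) {
            if (reputations.getOrDefault(agent, 1.0) > 0.5) { // strategy threshold
                targets.add(agent);
            }
        }
        return targets;
    }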
132
Opinion Creation Order
If a requester has a bad reputation value, a
cheap and bad opinion is created for it;
otherwise it is an expensive and accurate one
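
Continuing the MyAgent sketch, the choice made in
prepareOpinionCreationOrders(); the cost constants are illustrative
assumptions:

    static final double LOW_COST = 1.0;   // cheap -> inaccurate opinion (assumed value)
    static final double HIGH_COST = 10.0; // expensive -> accurate opinion (assumed value)

    // Inside MyAgent: spend little for badly-reputed requesters, more otherwise.
    double opinionGenerationCost(String requester) {
        return reputations.getOrDefault(requester, 1.0) < 0.5 ? LOW_COST : HIGH_COST;
    }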
133
Updating reputations
According to the difference between opinions and
real painting values, reputations are increased
or decreased
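
Continuing the MyAgent sketch, the reputation update applied once the
true painting value is revealed:

    // Inside MyAgent: +0.03 if the provider's appraisal was within 50% of
    // the real value, -0.03 otherwise.
    void updateReputation(String provider, double appraisal, double trueValue) {
        double relativeError = Math.abs(appraisal - trueValue) / trueValue;
        double delta = relativeError < 0.5 ? 0.03 : -0.03;
        reputations.put(provider, reputations.getOrDefault(provider, 1.0) + delta);
    }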
134
Running a game with MyAgent
  • Parameters of the game
  • 3 agents MyAgent, HonestAgent, CheaterAgent
  • 50 time steps
  • 4 painting eras
  • average client share: 5 clients per agent

135
How did my agent behave?
136
References
  • [Abdul-Rahman, 97] A. Abdul-Rahman. The PGP trust
    model. EDI-Forum: the Journal of Electronic
    Commerce, 10(3):27-31, 1997.
  • Barber, 83 B. Barber, The Logic and Limits of
    Trust, The meanings of trust Technical
    competence and fiduciary responsibility, Rutgers
    University Press, Rutgers, NJ, United States of
    America, 1983, p. 7-25.
  • Carbo et al., 03 J. Carbo and J. M. Molina and
    J. Dávila Muro, Trust Management Through Fuzzy
    Reputation, International Journal of Cooperative
    Information Systems, 2003, vol. 12(1), p. 135-155.
  • Casare Sichman, 05 S. J. Casare and J. S.
    Sichman, Towards a functional ontology of
    reputation, Proceedings of AAMAS05, 2005.
  • Castelfranchi, 00 C. Castelfranchi, Engineering
    Social Order, Proceedings of ESAW00, 2000.
  • Castelfranchi Falcone, 98 C. Castelfranchi
    and R. Falcone, Principles of trust for MAS
    Cognitive anatomy, social importance and
    quantification. Proc of ICMAS98, pages 72-79,
    1998.
  • Conte Paolucci, 02 R. Conte and M. Paolucci,
    Reputation in Artificial Societies. Social
    Beliefs for Social Order, Kluwer Academic
    Publishers, G. Weiss (eds), Dordrecht, The
    Netherlands, 2002.
  • Dellarocas, 00 C. Dellarocas, Immunizing online
    reputation reporting systems against unfair
    ratings and discriminatory behavior, p. 150-157,
    Proceedings of the ACM Conference on "Electronic
    Commerce" (EC'00), October, ACM Press, New York,
    NY, United States of America, 2000.
  • Dellarocas, 01 C. Dellarocas, Analyzing the
    economic efficiency of eBay-like online
    reputation reporting mechanisms, p. 171-179,
    Proceedings of the ACM Conference on "Electronic
    Commerce" (EC'01), October, ACM Press, New York,
    NY, United States of America, 2001.
  • Demolombe Lorini, 08 R. Demolombe and E.
    Lorini, Trust and norms in the context of
    computer security: a logical formalization. Proc.
    of DEON'08, LNAI, 2008.

137
References
  • Fullam et al, 05 K. Fullam, T. Klos, G. Muller,
    J. Sabater-Mir, A. Schlosser, Z. Topol, S.
    Barber, J. Rosenschein, L. Vercouter and M. Voss,
    A Specification of the Agent Reputation and Trust
    (ART) Testbed Experimentation and Competition
    for Trust in Agent Societies, Proceedings of
    AAMAS05, 2005.
  • Herzig et al, 08 A. Herzig, E. Lorini, J. F.
    Hubner, J. Ben-Naim, C. Castelfranchi, R.
    Demolombe, D. Longin and L. Vercouter.
    Prolegomena for a logic of trust and reputation,
    submitted to Normas 08.
  • [Luhmann, 79] N. Luhmann, Trust and Power, John
    Wiley & Sons, 1979.
  • McKnight Chervany, 02 D. H. McKnight and N.
    L. Chervany, What trust means in e-commerce
    customer relationship an interdisciplinary
    conceptual typology, International Journal of
    Electronic Commerce, 2002.
  • Mui et al., 02 L. Mui and M. Mohtashemi and A.
    Halberstadt, Notions of Reputation in Multi-agent
    Systems A Review, Proceedings of Autonomous
    Agents and Multi-Agent Systems (AAMAS'02), p.
    280-287, 2002, C. Castelfranchi and W.L. Johnson
    (eds), Bologna, Italy, July, ACM Press, New York,
    NY, United States of America.
  • Muller Vercouter, 05 G. Muller and L.
    Vercouter, Decentralized Monitoring of Agent
    Communication with a Reputation Model, Trusting
    Agents for trusting Electronic Societies, LNCS
    3577, 2005.
  • Pearl, 88 Pearl, J. Probabilistic Reasoning in
    Intelligent Systems Networks of Plausible
    Inference, Morgan Kaufmann, San Francisco, 1988.
  • Rehák et al., 05 M. Rehák and M. Pechoucek and
    P. Benda and L. Foltýn, Trust in Coalition
    Environment Fuzzy Number Approach, Proceedings
    of the Workshop on "Trust in Agent Societies" at
    Autonomous Agents and Multi-Agent Systems
    (AAMAS'05), p. 132-144, 2005, C. Castelfranchi
    and S. Barber and J. Sabater and M. P. Singh
    (eds) Utrecht, The Netherlands, July.
  • [Sabater, 04] J. Sabater, Evaluating the ReGreT
    system, Applied Artificial Intelligence,
    18(9-10):797-813, 2004.
  • [Sabater & Sierra, 05] J. Sabater and C. Sierra,
    Review on computational trust and reputation
    models, Artificial Intelligence Review,
    24(1):33-60, 2005.

138
References
  • Sabater-Mir Paolucci, 06 Repage REPutation
    and imAGE among limited autonomous partners,
    JASSS - Journal of Artificial Societies and
    Social Simulation ,9 (2), 2006.
  • Schillo Funk, 99 M. Schillo and P. Funk,
    Learning from and about other agents in terms of
    social metaphors, Agents Learning About From and
    With Other Agents, 1999.
  • Sen Sajja, 02 S. Sen and N. Sajja, Robustness
    of reputation-based trust Boolean case,
    Proceedings of Autonomous Agents and Multi-Agent
    Systems (AAMAS'02), p. 288-293, 2002, Bologna,
    Italy, M. Gini and T. Ishida and C. Castelfranchi
    and W. L. Johnson (eds), ACM Press, New York, NY,
    United States of America, vol.1.
  • Shapiro, 87 S. P. Shapiro, The social control
    of impersonal trust, American Journal of
    Sociology, 1987, vol. 93, p. 623-658.
  • [Steiner, 03] D. Steiner, Survey: How do Users
    Feel About eBay's Feedback System? January, 2003,
    http://www.auctionbytes.com/cab/abu/y203/m01/abu0087/s02
  • Zacharia et al., 99 G. Zacharia and A. Moukas
    and P. Maes, Collaborative Reputation Mechanisms
    in Electronic Marketplaces, Proceedings of the
    Hawaii International Conference on System
    Sciences (HICSS-32), vol. 08, 1999, p. 8026, IEEE
    Computer Society, Washington, DC, United States
    of America.