1
Eliciting Honest Feedback
  1. Eliciting Honest Feedback: The Peer-Prediction
    Model (Miller, Resnick, Zeckhauser)
  2. Minimum Payments that Reward Honest Feedback
    (Jurca, Faltings)

Nikhil Srivastava
Hao-Yuh Su
3
Eliciting Feedback
  • Fundamental purpose of reputation systems
  • Review general setup
  • Information distributed among individuals about
    value of some item
  • external goods (Netflix, Amazon, Epinions,
    admissions)
  • each other (eBay, PageRank)
  • Aggregated information valuable for individual or
    group decisions
  • How to gather and disseminate information?

5
Challenges
  • Underprovision
  • inconvenience cost of contributing
  • Dishonesty
  • niceness, fear of retaliation
  • conflicting motivations
  • Reward systems
  • motivate participation, honest feedback
  • monetary or non-monetary (prestige, privilege,
    pure competition)

7
Overcoming Dishonesty
  • Need to distinguish good from bad reports
  • explicit reward systems require objective
    outcome, public knowledge
  • stock, weather forecasting
  • But what if the outcome is
  • subjective? (product quality/taste)
  • private? (breakdown frequency, seller
    reputability)
  • Naive solution: reward peer agreement
  • risks information cascades, herding

8
Peer Prediction
  • Basic idea
  • reports determine a probability distribution over
    other reports
  • reward based on the predictive power of a user's
    report for a reference rater's report
  • using proper scoring rules, honest reporting
    becomes a Nash equilibrium

9
Outline
  • Peer Prediction method
  • model, assumptions
  • result
  • underlying intuition through example
  • Extensions/Applications
  • primary practical concerns
  • (weak) assumptions: sequential reporting,
    continuous signals, risk aversion
  • (strong) assumptions
  • Other limitations

10
Information Flow - Model
[Diagram: a PRODUCT of type t emits a signal S to a rater; the rater sends an
announcement a to the CENTER, which pays the rater a transfer τ(a).]
11
Information Flow - Model
[Diagram: the same flow for two raters, each observing a signal S from the
PRODUCT and sending an announcement a to the CENTER for a transfer τ(a).]
12
Information Flow - Example
[Diagram: a PLUMBER of type H or L generates signals h (high) or l (low); three
raters observe signals (h, h, l), announce (h, h, l), and an agreement-based
rule τ(a) pays transfers (1, 1, 0).]
13
Assumptions - Model
PRODUCT type t ∈ {1, …, T}, signal distribution f(s|t)
  • common prior distribution p(t)
  • common knowledge of the distribution f(s|t)
  • linear utility
  • stochastic relevance
  • fixed type
  • finite T
15
Stochastic Relevance
  • Informally
  • same product, so signals are dependent
  • an observation (realization) should change the
    posterior on type p(t), and thus the predicted
    signal distribution f(s)
  • Rolex v. Faux-lex
  • generically satisfied if different types yield
    different signal distributions
  • Formally
  • Si is stochastically relevant for Sj iff the
    distribution of Sj conditional on Si differs
    across realizations of Si
  • i.e., there is sj such that Pr(Sj = sj | Si = s) ≠
    Pr(Sj = sj | Si = s′) for some realizations s ≠ s′

16
Assumptions - Example
  • finite T: plumber is either H or L
  • fixed type: plumber quality does not change
  • common prior p(t)
  • p(H) = p(L) = .5
  • stochastic relevance: a good plumber's signal
    distribution must differ from a bad one's
  • common knowledge of f(s|t)
  • p(h|H) = .85, p(h|L) = .45
  • note this also determines p(h), p(l)
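
A quick check of the implied marginals, and of stochastic relevance, using the
numbers above: p(h) = p(h|H)p(H) + p(h|L)p(L) = .85(.5) + .45(.5) = .65, so
p(l) = .35. Moreover Pr(S2 = h | S1 = h) ≈ .71 while Pr(S2 = h | S1 = l) ≈ .54,
so one rater's signal really does shift the prediction of another's.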

17
Definitions - Model
  • T types, M signals, N raters
  • signals S = (S1, …, SN), where Si ∈ {s1, …, sM}
  • announcements a = (a1, …, aN), where
    ai ∈ {s1, …, sM}
  • transfers τ(a) = (τ1(a), …, τN(a))
  • announcement strategy for player i:
    ai = (ai1, …, aiM)

18
Definitions - Example
[Diagram: PLUMBER of type H or L, signals h or l; three raters observe
(h, h, l), announce (h, h, l), and receive transfers (1, 1, 0) under τ(a).]
  • 2 types, 2 signals, 3 raters
  • signals S = (h, h, l)
  • announcements a = (h, h, l)
  • transfers τ(a) = (τ1(a), τ2(a), τ3(a))
  • announcement strategy for player 2: a2 = (a2h, a2l)
  • total set of strategies: (h, h), (h, l), (l, h),
    (l, l)

19
Best Responses - Model
  • Each player decides an announcement strategy ai
  • ai is a best response to the other strategies a-i
    if, for each signal sm, the announcement ai(sm)
    maximizes E[τi(ai, a-i) | Si = sm]
  • a best-response strategy maximizes the rater's
    expected transfer with respect to the other
    raters' signals, conditional on Si = sm
  • Nash equilibrium if this holds for all i

20
Best Responses - Example
[Diagram: the PLUMBER sends S1 = h to player 1 and S2 = ? to player 2; the
announcements feed τ(a), yielding τ1(a1, a2) and τ2(a1, a2).]
  • Player 1 receives signal h
  • Player 2's strategy is to report a2
  • Player 1 reporting signal h is a best response if
    E[τ1(h, a2) | S1 = h] ≥ E[τ1(l, a2) | S1 = h]

21
Peer Prediction
  • Find reward mechanism that induces honest
    reporting
  • i.e., where ai = Si for all i is a Nash equilibrium
  • Will need Proper Scoring Rules

22
Proper Scoring Rules
  • Definition
  • for two variables Si and Sj, a scoring rule
    assigns to each announcement ai of Si a score for
    each realization of Sj
  • R(sj | ai)
  • proper if the expected score is maximized by
    announcing the true realization of Si
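
One standard strictly proper rule, used in the example slides below, is the
logarithmic rule R(sj | ai) = ln Pr(Sj = sj | Si = ai). Truthfulness follows
from Gibbs' inequality: the expected value of ln q(Sj) under the true
distribution p is uniquely maximized at q = p.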

24
Applying Scoring Rules
  • Before: predictive markets (Hanson)
  • Si = agent's private info, Sj = reality,
    ai = agent's report
  • R(reality | report)
  • Proper scoring rules ensure honest reports ai = Si
  • stochastic relevance of private info for the
    public signal
  • automatically satisfied
  • What if there's no public signal? Use other
    reports
  • Now: predicting peers
  • Si = my signal, Sj = your signal, ai = my report
  • R(your report | my report)

25
How it Works
  • For each rater i, we choose a different reference
    rater r(i)
  • Rater i is rewarded for predicting rater r(i)'s
    announcement
  • τi(ai, ar(i)) = R(ar(i) | ai)
  • based on updated beliefs about r(i)'s
    announcement given i's announcement
  • Proposition: for any strictly proper scoring rule
    R, a reward system with transfers τi makes
    truthful reporting a strict Nash equilibrium
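
A compact sketch of this transfer in Python, assuming the logarithmic scoring
rule from the earlier slide; posterior is a hypothetical helper that maps an
announcement to the center's distribution over the reference rater's report:

  from math import log

  def transfer_i(a_i, a_ref, posterior):
      # tau_i(a_i, a_r(i)) = R(a_r(i) | a_i): score rater i by how well the
      # posterior implied by i's announcement predicts the reference report.
      return log(posterior(a_i)[a_ref])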

26
Proof of Proposition
  • If player i receives signal s, he seeks to
    maximize his expected transfer
    E[R(ar(i) | ai) | Si = s]
  • Since player r(i) reports honestly, his report
    ar(i) equals his signal Sr(i)
  • Si is stochastically relevant for Sr(i), and
    since r(i) reports honestly, Si is stochastically
    relevant for ar(i) = Sr(i)
  • since R is strictly proper, the expected score is
    uniquely maximized by announcing the true
    realization of Si
  • i.e., uniquely maximized for ai = s

27
Peer Prediction Example
[Setup: PLUMBER with p(H) = p(L) = .5, p(h|H) = .85, p(h|L) = .45; player 1
observes S1 = l, player 2 observes S2 = ? and reports honestly, a2 = S2.]
  • Player 1 observes low and must decide a1 ∈ {h, l}
  • Assume logarithmic scoring
  • τ1(a1, a2) = R(a2 | a1) = ln p(a2 | a1)
  • Which announcement maximizes the expected payoff?
  • Note that rewarding peer agreement would
    incentivize dishonesty (h)

28
Peer Prediction Example
[Setup as above: p(H) = p(L) = .5, p(h|H) = .85, p(h|L) = .45; S1 = l, a2 = S2.]
  • Player 1 observes low and must decide a1 ∈ {h, l}
  • Assume logarithmic scoring
  • τ1(a1, a2) = R(a2 | a1) = ln p(a2 | a1)
  • a1 = l (honest) yields expected transfer -.69
  • a1 = h (false) yields expected transfer -.75
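
A minimal numeric check of these two payoffs in Python, assuming the plumber
parameters above and an honest player 2:

  from math import log

  p_t = {"H": 0.5, "L": 0.5}                  # common prior over types
  f = {"H": {"h": 0.85, "l": 0.15},           # signal distributions f(s|t)
       "L": {"h": 0.45, "l": 0.55}}

  def signal_posterior(observed):
      # Bayes update on the type, then mix the per-type signal distributions
      # to get the distribution of the other rater's signal.
      joint = {t: p_t[t] * f[t][observed] for t in p_t}
      norm = sum(joint.values())
      p_type = {t: joint[t] / norm for t in joint}
      return {s: sum(p_type[t] * f[t][s] for t in p_type) for s in ("h", "l")}

  true_dist = signal_posterior("l")           # real distribution of S2 given S1 = l

  for a1 in ("l", "h"):
      center = signal_posterior(a1)           # center's p(a2 | a1) from the report
      value = sum(true_dist[s2] * log(center[s2]) for s2 in ("h", "l"))
      print(f"a1 = {a1}: expected transfer = {value:.2f}")

  # Prints about -0.69 for the honest report and -0.76 (the deck truncates to
  # -.75) for the false one, so honesty is the strict best response.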

29
Things to Note
  • Players don't have to perform complicated
    Bayesian reasoning if they
  • trust the center to accurately compute posteriors
  • believe other players will report honestly
  • Honesty is not the unique equilibrium
  • e.g., collusion

31
Primary Practical Concerns
  • Examples
  • inducing effort: fixed cost c > 0 of reporting
  • better information: users seek multiple samples
  • participation constraints
  • budget balancing
  • Basic idea
  • affine rescaling (a·x + b) to overcome each obstacle
  • preserves the honesty incentive
  • increases the budget (see 2nd paper)
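
Why rescaling is safe (a general fact about strictly proper scoring rules, not
specific to this paper): for a > 0, the rule a·R + b has the same argmax as R,
so honesty remains a strict best response, and a, b can be chosen so that the
honest equilibrium's expected transfer covers the reporting cost, i.e.
a·E[R | honest] + b ≥ c.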

33
Extensions to Model
  • Sequential reporting
  • allows immediate use of reports
  • let rater i predict the report of rater i + 1
  • scoring rule must reflect changed beliefs about
    product type due to reports (1, …, i − 1)
  • Continuous signals
  • analytic comparison of three common rules
  • eliciting coarse reports from exact information
  • for two types, raters will choose the closest bin
    (complicated)

34
Common Prior Assumption
  • Practical concern - how do we know p(t)?
  • needed by the center to compute payments
  • needed by raters to compute posteriors
  • define types with respect to past products,
    signals
  • e.g., types t ∈ {1, …, 9} where f(h|t) = t/10
  • for a new product, p(t) based on past product
    signals
  • update beliefs with new signals
  • note f(s|t) is then given automatically
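
A minimal sketch of this belief update in Python, assuming a uniform starting
prior over the nine types and a hypothetical stream of past signals:

  def update(prior, signal):
      # Bayes update of p(t) after one signal, with f(h|t) = t/10.
      like = {t: t / 10 if signal == "h" else 1 - t / 10 for t in prior}
      post = {t: prior[t] * like[t] for t in prior}
      norm = sum(post.values())
      return {t: post[t] / norm for t in post}

  p = {t: 1 / 9 for t in range(1, 10)}   # uniform prior over types 1..9
  for s in ["h", "h", "l"]:              # hypothetical past signal stream
      p = update(p, s)                   # posterior shifts toward higher types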

35
Common Prior Assumption
  • Practical concern - how do we know p(t)?
  • Theoretical concern - are p(t), f(s|t) public?
  • raters trust the center to compute appropriate
    posterior distributions for the reference rater's
    signal
  • a rater with private information has no guarantee
  • the center's posteriors will not match his true
    beliefs
  • such a rater might skew his report to induce the
    appropriate posteriors
  • Solution: report both private information and an
    announcement
  • two scoring mechanisms, one for the distribution
    implied by private priors, another for the
    distribution implied by the announcement

36
Limitations
  • Collusion
  • could a subset of raters gain higher transfers?
    higher balanced transfers?
  • can such strategies
  • overcome random pairings
  • avoid suspicious patterns
  • Multidimensional signals
  • e.g., an economist with knowledge of computer
    science
  • Understanding/trust in the system
  • complicated Bayesian reasoning, payoff rules
  • rely on experts to ensure public confidence

37
Discussion
  • Is the common priors assumption reasonable?
  • How might we relax it and keep some positive
    results?
  • What are the most serious challenges to
    implementation? (cost of reporting, coarse
    reports, common priors, collusion,
    multidimensional signals, trust in the system)
  • Can you envision a(n online) system that rewards
    feedback?
  • How would the dynamics differ from a reward-less
    system?
  • Is this paper necessary at all?
  • Predominance of honesty from fairness incentive
