From Consensus to Social Learning in Complex Networks
1
From Consensus to Social Learning in Complex Networks
Ali Jadbabaie, Skirkanich Associate Professor of Innovation
Electrical & Systems Engineering and GRASP Laboratory, University of Pennsylvania
With Alireza Tahbaz-Salehi and Victor Preciado
First Year Review, August 27, 2009
2
http://www.cis.upenn.edu/ngns
Theory
  • First principles
  • Rigorous math
  • Algorithms
  • Proofs
Data Analysis
  • Correct statistics
  • Only as good as underlying data
Numerical Experiments
  • Simulation
  • Synthetic, clean data
Lab Experiments
  • Stylized
  • Controlled
  • Clean, real-world data
Field Exercises
  • Semi-controlled
  • Messy, real-world data
Real-World Operations
  • Unpredictable
  • After-action reports in lieu of data

3
Good news: spectacular progress
  • Consensus and information aggregation
  • Random spectral graph theory
  • Synchronization, virus spreading
  • New abstractions beyond graphs
  • Understanding network topology
  • Simplicial homology
  • Computing homology groups

4
Consensus, Flocking and Synchronization
Opinion dynamics, crowd control, synchronization
and flocking
5
Flocking and opinion dynamics
  • Bounded confidence opinion model (Krause, 2000)
  • Nodes update their opinions as a weighted average of the opinions of their friends
  • Friends are those whose opinions are already close
  • When will there be fragmentation, and when will there be convergence of opinions?
  • Dynamics changes topology

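The bounded-confidence update is easy to simulate; below is a minimal sketch (the confidence radius `eps` and the initial opinions are illustrative choices, not values from the slides). With two well-separated opinion clusters, the dynamics fragments instead of reaching consensus:

```python
import numpy as np

def hk_step(x, eps):
    """One bounded-confidence (Hegselmann-Krause) update: every node
    moves to the average opinion of all nodes within distance eps."""
    new = np.empty_like(x)
    for i, xi in enumerate(x):
        new[i] = x[np.abs(x - xi) <= eps].mean()  # "friends" include node i
    return new

# Opinions start in two distant clusters: they fragment, not converge
x = np.array([0.0, 0.1, 0.9, 1.0])
for _ in range(20):
    x = hk_step(x, eps=0.2)
```

Note that the neighborhood of each node is recomputed from the current opinions at every step, which is exactly the "dynamics changes topology" point above.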
6
Consensus in random networks
  • Consider a network with n nodes and a vector of
    initial values, x(0)
  • Consensus using a switching and directed graph
    Gn(t)
  • In each time step, Gn(t) is a realization of a random graph in which each edge appears with probability Pr(a_ij = 1) = p, independently of the others

Consensus dynamics
Stationary behavior
Despite its easy formulation, very little is
known about x and v
7
Random Networks
The graphs could be correlated so long as they
are stationary-ergodic.
8
What about the consensus value?
  • A random graph sequence means that the consensus value is a random variable
  • Question: what is its distribution?
  • A relatively easy case: the distribution is degenerate (a Dirac) if and only if all matrices have the same left eigenvector with probability 1
  • In general, the consensus value is determined by the left eigenvector associated with the largest eigenvalue (the Perron vector)

Can we say more?
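The degenerate case is easy to check numerically: for a fixed row-stochastic W, the consensus value is the Perron left eigenvector applied to x(0). A small sketch (the matrix W below is illustrative):

```python
import numpy as np

def perron_left(W):
    """Left eigenvector of the row-stochastic matrix W for its largest
    eigenvalue (which is 1), normalized to sum to one."""
    vals, vecs = np.linalg.eig(W.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
v = perron_left(W)          # for this W, v = (1/4, 1/2, 1/4)
x0 = np.array([1.0, 0.0, 0.0])
x = x0.copy()
for _ in range(100):
    x = W @ x               # every entry converges to v @ x0
```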
9
E[W_k ⊗ W_k] for Erdős-Rényi graphs
Define
10
Random Consensus
  • For simplicity of exposition, we illustrate the structure of E[W_k ⊗ W_k] using the case n = 4

These entries have the following expressions, where q = 1 - p and H(p, n) is a special function
that can be written in terms of a hypergeometric function (the detailed expression is not
relevant to our exposition)
11
Variance of consensus value for Erdős-Rényi graphs
  • Defining the parameter
  • we can finally write the left eigenvector of the expected Kronecker product as
  • Furthermore, substituting the above eigenvector into our original expression for the variance (after simple algebraic simplifications), we deduce the following final expression as a function of p, n, and x(0)
  • where

12
Random Consensus (plots)
  • var(x) for initial conditions uniformly distributed in [0, 1], n ∈ {3, 6, 9, 12, 15}, and p varying in the range (0, 1]

[Plot: Var(x) as a function of p, one curve for each of n = 3, 6, 9, 12, 15]

What about other random graphs?
13
Static Model with Prescribed Expected Degree
Distribution
  • Degree distributions are useful to the extent
    that they tell us something about the spectral
    properties (at least for distributed
    computation/optimization)
  • Generalized static models (Chung and Lu, 2003): a random graph with a prescribed expected degree sequence
  • We can impose an expected degree w_i on the i-th node

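Sampling from this model takes only a few lines; a minimal sketch, using the standard Chung-Lu recipe of connecting i and j with probability w_i w_j / Σw (the degree sequence below is illustrative):

```python
import numpy as np

def chung_lu(w, rng):
    """Chung-Lu graph: undirected edge {i, j} appears independently with
    probability w_i * w_j / sum(w), so node i's expected degree is ~ w_i."""
    w = np.asarray(w, dtype=float)
    P = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    A = np.triu((rng.random(P.shape) < P).astype(int), k=1)
    return A + A.T                      # symmetric, no self-loops

rng = np.random.default_rng(1)
w = np.full(200, 10.0)                  # every node: expected degree ~ 10
degrees = chung_lu(w, rng).sum(axis=1)
```

With a uniform sequence this reduces to Erdős-Rényi; a heavy-tailed w gives the heterogeneous graphs the next slide examines.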
14
Eigenvalues of Chung-Lu Graph
  • Numerical experiment: plot the histogram of eigenvalues for several realizations of this random graph
  • What is the eigenvalue distribution of the adjacency matrix for very large Chung-Lu random networks?
  • Limiting spectral density: an analytical expression is possible only in very particular cases

Contribution: estimation of the shape of the bulk for a given expected degree sequence (w_1, ..., w_n).
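The histogram experiment is easy to reproduce; the sketch below uses Erdős-Rényi for brevity (the Chung-Lu case only changes how A is sampled), and checks one moment of the bulk: the second spectral moment (1/n) tr(A²) equals the average degree, roughly (n-1)p here.

```python
import numpy as np

rng = np.random.default_rng(2)

def er_adjacency(n, p, rng):
    """Symmetric adjacency matrix of an Erdos-Renyi G(n, p) graph."""
    A = np.triu((rng.random((n, n)) < p).astype(float), k=1)
    return A + A.T

# Pool eigenvalues over several realizations to approximate the bulk
n, p, trials = 300, 0.1, 10
eigs = np.concatenate([np.linalg.eigvalsh(er_adjacency(n, p, rng))
                       for _ in range(trials)])

# Second spectral moment = (1/n) tr(A^2) = average degree ~ (n - 1) * p
m2 = (eigs ** 2).mean()
```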
15
Spectral moments of random graphs and degree
distributions
  • Degree distributions can reveal the moments of
    the spectra of graph Laplacians
  • Determine synchronizability
  • Speed of convergence of distributed algorithms
  • Lower moments do not necessarily fix the support,
    but they fix the shape
  • Analysis of virus spreading (depends on spectral
    radius of adjacency)
  • Non-conservative synchronization conditions on
    graphs with prescribed degree distributions
  • Analytic expressions for spectral moments of
    random geometric graphs

16
Consensus and Naïve Social learning
  • When is consensus a good thing?
  • We need to make sure the update converges to the correct value

17
Naïve vs. Bayesian
Naïve learning: just average with neighbors
Bayesian learning: fuse information with Bayes' rule
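The gap between the two is visible even with two agents and one Gaussian signal each (the numbers below are illustrative): Bayesian fusion weights signals by their precision, while naive averaging does not.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 1.0                            # true state of the world
sigma = np.array([0.5, 2.0])           # agent 2's signal is much noisier
signals = theta + sigma * rng.standard_normal(2)

# Naive learning: just average with your neighbor
naive = signals.mean()

# Bayesian fusion under a flat prior: precision-weighted average
precision = 1.0 / sigma ** 2
bayes = (precision * signals).sum() / precision.sum()
```

The Bayesian estimate always sits closer to the better-informed agent's signal; the naive average gives the noisy agent equal say.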
18
Social learning
  • There is a true state of the world, among countably many
  • We start from a prior distribution and would like to update the distribution (our belief on the true state) as observations accumulate
  • Ideally we would use Bayes' rule to do the information aggregation
  • This works well when there is one agent (Blackwell and Dubins, 1962), but becomes impossible with more than two!

19
Locally Rational, Globally Naïve: Bayesian learning under peer pressure
20
Model Description
21
Model Description
22
Belief Update Rule
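The slide's equations are not in the transcript; the sketch below is an assumption based on the rule studied in this line of work: each agent first Bayes-updates its own belief using its private signal, then averages with its neighbors' beliefs. All numbers (signal likelihoods, weights) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two states (state 0 is true). l[i, s, k] = P(agent i sees signal k | state s)
l = np.array([[[0.7, 0.3], [0.3, 0.7]],   # agent 0's signals are informative
              [[0.5, 0.5], [0.5, 0.5]]])  # agent 1's signals are useless
W = np.array([[0.5, 0.5],                 # row-stochastic averaging weights
              [0.5, 0.5]])                # (strongly connected network)

mu = np.full((2, 2), 0.5)                 # uniform priors over the two states
for _ in range(400):
    k = (rng.random(2) < l[:, 0, 1]).astype(int)  # draw signals (true state 0)
    bu = mu * l[np.arange(2), :, k]               # local Bayes update ...
    bu /= bu.sum(axis=1, keepdims=True)           # ... of each agent's belief
    mu = W @ bu                                   # then naive averaging
```

Even though agent 1's own signals are uninformative, averaging with agent 0 drives both agents' beliefs to the true state, which is the "one can actually learn from others" point of the later slides.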
23
Why this update?
24
Eventually correct forecasts
Eventually-correct estimation of the output!
25
Why strong connectivity?
  • No convergence if different people interpret
    signals differently
  • N is misled by listening to the less informed
    agent B

26
Example
One can actually learn from others
27
Learning from others
Information in the i-th signal is only good for distinguishing
28
Convergence of beliefs and consensus on the correct value!
29
Learning from others
30
Summary
Only one agent needs a positive prior on the true
state!