A Probabilistic Approach to Inference with Limited Information in Sensor Networks

Transcript and Presenter's Notes

1
A Probabilistic Approach to Inference with
Limited Information in Sensor Networks
  • Rahul Biswas, Sebastian Thrun, Leonidas Guibas
  • Stanford University

Modified and presented by Kyoungwon Suh
2
Outline
  • Short tutorial: MCMC and the Metropolis algorithm
  • Motivation for this work
  • Problem definition
  • Bayesian network-based model
  • Probabilistic solution based on MCMC
  • Future work and conclusion

3
Markov chain Monte Carlo (MCMC)
  • The set X and the distribution p(x) we wish to
    sample from
  • Key Idea
  • Suppose it is hard to sample from p(x) directly.
  • If we have a Markov chain whose stationary
    distribution is p(x), then we can use the Markov
    chain to do a random walk and draw random samples
    from p(x)
  • <Q> How do we construct such a Markov chain?
  • <A1> MCMC: the Metropolis algorithm

4
Stationary distribution
  • Consider the Markov chain given [transition
    diagram and matrix shown on slide]
  • The stationary distribution is [shown on slide]
  • Some samples, and their empirical distribution
    (each row is one simulated run over the states
    1, 2, 3; see the sketch below)

1,1,2,3,2,1,2,3,3,2
1,2,2,1,1,2,3,3,3,3
1,1,1,2,3,2,2,1,1,1
1,2,3,3,3,2,1,2,2,3
1,1,2,2,2,3,3,2,1,1
1,2,2,2,3,3,3,2,2,2
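
A minimal sketch of this convergence in Python. The slide's actual transition matrix appears only as an image, so the matrix T below is an assumed stand-in; the point is that the empirical state frequencies of a long run approach the chain's stationary distribution.

    import numpy as np

    # Hypothetical 3-state transition matrix (the slide's real matrix is an
    # image); rows are the current state, columns the next-state probabilities.
    T = np.array([[0.5, 0.4, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])

    # Simulate a long random walk and count state visits.
    rng = np.random.default_rng(0)
    state, counts = 0, np.zeros(3)
    for _ in range(100_000):
        state = rng.choice(3, p=T[state])
        counts[state] += 1
    print("empirical:", counts / counts.sum())

    # The stationary distribution is the left eigenvector of T for eigenvalue 1.
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    print("stationary:", pi / pi.sum())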
5
Metropolis algorithm
  • How do we pick a suitable Markov chain for our
    distribution?
  • We define a Markov chain with the following
    process (see the sketch after this list)
  • Sample a candidate point x(t+1) from a proposal
    distribution q(x(t+1) | x(t)) which is symmetric:
    q(x | y) = q(y | x)
  • Compute the importance ratio r = p(x(t+1)) / p(x(t))
    (usually, it is easy to obtain!)
  • With probability min(r, 1), transition to x(t+1);
    otherwise stay in the same state
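
A minimal sketch of the algorithm in Python, assuming a one-dimensional state space and a symmetric Gaussian random-walk proposal (both are illustrative choices, not from the slides):

    import math
    import random

    def metropolis(p, x0, n_samples, step=1.0):
        """Minimal Metropolis sampler: p is an unnormalized density,
        the proposal is a symmetric Gaussian random walk of width `step`."""
        x, samples = x0, []
        for _ in range(n_samples):
            x_new = x + random.gauss(0.0, step)   # symmetric proposal
            r = p(x_new) / p(x)                   # importance ratio
            if random.random() < min(r, 1.0):     # accept with prob min(r, 1)
                x = x_new                         # otherwise keep current state
            samples.append(x)
        return samples

    # Example: sample from an unnormalized standard normal density.
    draws = metropolis(lambda x: math.exp(-0.5 * x * x), x0=0.0, n_samples=10_000)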

6
Metropolis intuition
  • Why does the Metropolis algorithm work?
  • The proposal distribution can propose anything it
    likes (as long as it can jump back with the same
    probability)
  • A proposal is always accepted if it jumps to a
    more likely state
  • A proposal is accepted with probability equal to
    the importance ratio if it jumps to a less likely
    state
  • The acceptance policy, combined with the
    reversibility of the proposal distribution, makes
    sure that the algorithm explores states in
    proportion to p(x)!

7
Traditional Algorithm Design
[Diagram: an algorithm producing an output]
8
In a Perfect World
[Diagram: sensor data feeding an algorithm that produces an output]
9
In the Real World
[Diagram: sensor data feeding the algorithm]
Sensor data may be noisy, missing, and expensive
10
What to Do?
[Diagram: sensor data feeding "Our Algorithm", which produces a probabilistic output]
11
Scenario
  • Friendly agent at known location LF

12
Sensors
  • N sensors at known locations LS1, …, LSN

13
Enemy Agents
  • M enemies at unknown locations LE1, …, LEM

14
Query
  • Is LF in the convex hull of LE1, …, LEM?

15
Sensor Model
  • Sensors stochastically detect the presence of
    enemy agents (a hypothetical model is sketched
    below)

[Figure: ideal observation vs. sensor observation]
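
The slides specify the sensor model only graphically, so the following is a hypothetical stand-in: detection probability decays with distance to each enemy, with a small false-alarm rate. All parameters (p_max, decay, p_false) are assumptions, not from the paper.

    import math
    import random

    def detect_prob(sensor_loc, enemy_locs, p_max=0.9, decay=0.5, p_false=0.05):
        """Hypothetical P(R_i = 1 | LE): each enemy is detected with probability
        decaying in distance; the sensor also false-alarms at rate p_false."""
        p_miss_all = 1.0
        for ex, ey in enemy_locs:
            d = math.hypot(sensor_loc[0] - ex, sensor_loc[1] - ey)
            p_miss_all *= 1.0 - p_max * math.exp(-decay * d)
        return 1.0 - (1.0 - p_false) * p_miss_all

    def sensor_reading(sensor_loc, enemy_locs):
        """Draw a binary reading R_i from the model above."""
        return random.random() < detect_prob(sensor_loc, enemy_locs)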
16
Problems in Answering Query
  • There may not be a sensor near an enemy agent
  • A sensor near an agent may not have sensed it
  • A sensor may malfunction
  • it may report an enemy when none is present
  • it may fail to report an enemy when one is present

17
Our Approach
  • Capture relations in a Bayesian network
  • Recast the query as a posterior estimate

[Bayesian network: observed sensor measurements R;
unobserved enemy agent locations LE and query variable D]
18
Sensor Measurement Variables
  • Ri is known to us
  • Binary variable: either yes or no

[Bayesian network figure, as above]
19
Enemy Agent Location Variables
  • LEi is not known
  • Continuous variables

[Bayesian network figure, as above]
20
Query Variable
  • D is not known
  • Binary variable: either yes or no

[Bayesian network figure, as above]
21
Prior Probability P(LEi)
  • Uniform distribution across the region

[Bayesian network figure, as above]
22
Conditional Probability P(Rj | LE)
  • As specified by the sensor model

[Figure: ideal observation vs. sensor observation;
Bayesian network fragment]
23
Conditional Probability P(D | LE)
  • Let CH1, …, CHS be the convex hull of LE
  • D is true iff LF lies counterclockwise of every
    directed hull edge (CHi, CHi+1), i.e., inside the
    hull (a sketch of this test follows)

[Figure: counterclockwise vs. clockwise orientation test]
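
The slide shows this test only as a figure; below is a minimal sketch of the standard orientation (cross-product) test it refers to, assuming the hull vertices are given in counterclockwise order:

    def cross(o, a, b):
        """Twice the signed area of triangle (o, a, b); > 0 means b lies
        counterclockwise of the directed edge o -> a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def in_convex_hull(lf, hull):
        """hull: list of (x, y) vertices in counterclockwise order.
        Returns True iff lf is strictly inside the hull."""
        n = len(hull)
        return all(cross(hull[i], hull[(i + 1) % n], lf) > 0 for i in range(n))

    # Example: the unit square contains its center.
    assert in_convex_hull((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)])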
24
Query Reformulation
  • Want to estimate P(D | R)
  • Produces a probabilistic answer
  • D can be replaced by any query, e.g.:
  • Extent and area of the point set
  • Maximally distant point

25
MCMC for Inference
  • MCMC: Markov chain Monte Carlo
  • Approximate inference algorithm
  • Allows us to feed in the R variables
  • Generates a probability for D
  • Uses traditional convex hull methods

26
How MCMC Works
  • Draw T samples LE1, …, LET from P(LE | R)
    (an estimator that uses the samples is sketched
    below)
  • How do we generate samples from P(LE | R)?
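
Once samples are available, the query probability is a Monte Carlo average. A minimal sketch, assuming each sample is a list of (x, y) enemy locations with at least three non-collinear points; the scipy Delaunay trick (find_simplex returns -1 outside the convex hull) is an implementation choice, not from the slides:

    import numpy as np
    from scipy.spatial import Delaunay

    def estimate_p_d_given_r(samples, lf):
        """Monte Carlo estimate of P(D | R): the fraction of posterior samples
        LE^t whose convex hull contains the friendly agent LF."""
        query = np.asarray([lf])
        inside = sum(Delaunay(np.asarray(le)).find_simplex(query)[0] >= 0
                     for le in samples)
        return inside / len(samples)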

27
How MCMC Works (continued)
  • Draw samples from a Markov chain
  • Desired stationary distribution: P(LE | R)
  • Start with a random initial configuration LE1
    (locations of all M enemies)
  • At each step, move LEti to a new location (see the
    sketch below)
  • If more likely, accept:
  • min(1, P(LEt+1, R1, …, RN) / P(LEt, R1, …, RN)) = 1
  • If less likely, sometimes accept:
  • 0 < min(1, P(LEt+1, R1, …, RN) / P(LEt, R1, …, RN)) < 1
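
A minimal sketch of one such step, assuming the hypothetical detect_prob function from the sensor-model sketch above is in scope; the proposal width `step` is an assumed tuning parameter. Because the prior over LE is uniform, the joint P(LE, R1, …, RN) is proportional to the product of the sensor likelihoods.

    import random

    def joint_density(enemy_locs, sensor_locs, readings):
        """Unnormalized P(LE, R): product of sensor likelihoods; the uniform
        prior over LE contributes only a constant factor."""
        p = 1.0
        for loc, r in zip(sensor_locs, readings):
            pd = detect_prob(loc, enemy_locs)   # hypothetical P(R_i = 1 | LE)
            p *= pd if r else 1.0 - pd
        return p

    def mcmc_step(enemy_locs, sensor_locs, readings, step=0.1):
        i = random.randrange(len(enemy_locs))   # move one enemy at a time
        proposal = list(enemy_locs)
        x, y = proposal[i]
        proposal[i] = (x + random.gauss(0, step), y + random.gauss(0, step))
        r = (joint_density(proposal, sensor_locs, readings)
             / joint_density(enemy_locs, sensor_locs, readings))
        return proposal if random.random() < min(r, 1.0) else enemy_locs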

28
Experimental Results
[Figure: five enemy agents, the sensors, and the
friendly agent; color scale runs from low to high]
29
Experimental Results
  • P(D | R) = 51%
  • no sensors have sensed

[Figure: posterior map, low to high]
30
Experimental Results
  • P(D | R) = 44%
  • not useful region

[Figure: posterior map, low to high]
31
Experimental Results
  • P(D | R) = 81%
  • useful region

[Figure: posterior map, low to high]
32
Experimental Results
  • P(D | R) = 80%
  • not important

[Figure: posterior map, low to high]
33
Experimental Results
  • P(D | R) = 86%
  • near edge

[Figure: posterior map, low to high]
34
Experimental Results
  • P(D | R) = 93%
  • negative information

[Figure: posterior map, low to high]
35
Usefulness of More Sensors
36
Future Work
  • Intelligent Sensor Selection
  • Actively choose sensors that will sense
  • Possible to predict entropy reduction
  • Distributed MCMC
  • Run on Crossbow MICA2 Motes

37
Conclusion
  • Presented an interface between
  • noisy, incomplete sensor data, and
  • existing algorithms
  • Used a probabilistic model to analyze the data
  • Approximate inference via MCMC

38
Acknowledgment
  • Slides are borrowed from
  • Teg Grenager's talk on MCMC (Stanford)
  • Rahul Biswas' IPSN talk (Stanford)

39
  • Backup slides

40
Markov chain Monte Carlo (MCMC)
  • The set X and the distribution p(x) we wish to
    sample from
  • Suppose that it is hard to sample p(x) but that
    it is possible to "walk around" in X using only
    local state transitions
  • Insight: we can use a "random walk" to help us
    draw random samples from p(x)

41
Markov Chains for sampling
  • A Markov chain on a space X with transition kernel
    T is a random process (x(0), x(1), …, x(t), …)
    that satisfies the Markov property:
    p(x(t+1) | x(0), …, x(t)) = T(x(t+1) | x(t))
  • In order for a Markov chain to be useful for
    sampling p(x), the stationary distribution of the
    Markov chain must be p(x)
  • If this is the case, we can start in an arbitrary
    state, use the Markov chain to do a random walk
    for a while, and stop and output the current
    state x(t) (see the sketch below)
  • The resulting state will be sampled from p(x)!
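
A minimal sketch of this procedure; `transition` is any function implementing one local step of a chain whose stationary distribution is p(x), and the burn-in length is an assumed tuning parameter:

    def sample_via_chain(transition, x0, burn_in=1000):
        """Start in an arbitrary state x0, random-walk for `burn_in` steps,
        and output the current state; it is then (approximately) a draw from
        the chain's stationary distribution p(x)."""
        x = x0
        for _ in range(burn_in):
            x = transition(x)   # one local state transition
        return x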