Title: A Probabilistic Approach to Inference with Limited Information in Sensor Networks
1. A Probabilistic Approach to Inference with Limited Information in Sensor Networks
- Rahul Biswas, Sebastian Thrun, Leonidas Guibas
- Stanford University
- Modified and presented by Kyoungwon Suh
2. Outline
- Short tutorial: MCMC and the Metropolis algorithm
- Motivation for this work
- Problem definition
- Bayesian network-based model
- Probabilistic solution based on MCMC
- Future work and conclusion
3. Markov chain Monte Carlo (MCMC)
- We have a set X and a distribution p(x) we wish to sample from
- Key idea
  - Suppose it is hard to sample from p(x) directly.
  - If we have a Markov chain whose stationary distribution is p(x), we can use that chain to do a random walk and draw random samples from p(x).
- Q: How do we construct such a Markov chain?
- A1: MCMC, the Metropolis algorithm
4. Stationary distribution
- Consider the Markov chain given (transition diagram on slide)
- The stationary distribution is (shown on slide)
- Some sample runs, and their empirical distribution:
  1,1,2,3,2,1,2,3,3,2   1,2,2,1,1,2,3,3,3,3   1,1,1,2,3,2,2,1,1,1
  1,2,3,3,3,2,1,2,2,3   1,1,2,2,2,3,3,2,1,1   1,2,2,2,3,3,3,2,2,2
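The convergence of the chain's visit frequencies to its stationary distribution can be checked empirically. A minimal sketch in Python, using a hypothetical 3-state transition matrix (the slide's actual chain is not reproduced here, so the matrix below is an assumption):

```python
import random

# Hypothetical transition matrix for a 3-state chain; row = current state,
# column = next state. Each row sums to 1.
T = [
    [0.5, 0.4, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
]

def step(state, rng):
    """Sample the next state from row `state` of T."""
    u = rng.random()
    cumulative = 0.0
    for nxt, p in enumerate(T[state]):
        cumulative += p
        if u < cumulative:
            return nxt
    return len(T) - 1

def empirical_distribution(n_steps, seed=0):
    """Run the chain and count how often each state is visited."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    state = 0
    for _ in range(n_steps):
        state = step(state, rng)
        counts[state] += 1
    return [c / n_steps for c in counts]

print(empirical_distribution(100_000))
```

For a long enough run, the visit frequencies approach the stationary distribution regardless of the starting state.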
5. Metropolis algorithm
- How do we pick a suitable Markov chain for our distribution?
- We define a Markov chain with the following process:
  - Sample a candidate point x(t+1) from a proposal distribution q(x(t+1) | x(t)) which is symmetric: q(x|y) = q(y|x)
  - Compute the importance ratio r = p(x(t+1)) / p(x(t)) (usually easy to obtain, since any normalizing constant cancels!)
  - With probability min(r, 1), transition to x(t+1); otherwise stay in the same state
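The three steps above can be sketched directly. A minimal Python implementation of random-walk Metropolis, using an unnormalized standard normal as an illustrative target (any unnormalized p(x) works, since the constant cancels in the ratio):

```python
import math
import random

def p_unnormalized(x):
    # Target density up to a constant: a standard normal, for illustration.
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step_size=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        # Symmetric proposal: Gaussian random walk, q(y|x) = q(x|y).
        candidate = x + rng.gauss(0.0, step_size)
        # Importance ratio r = p(candidate) / p(current).
        r = p_unnormalized(candidate) / p_unnormalized(x)
        # Accept with probability min(r, 1); otherwise keep current state.
        if rng.random() < min(1.0, r):
            x = candidate
        samples.append(x)
    return samples

samples = metropolis(50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)
```

Note that the rejected steps still contribute a (repeated) sample; dropping them would bias the empirical distribution.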
6. Metropolis intuition
- Why does the Metropolis algorithm work?
  - The proposal distribution can propose anything it likes (as long as it can jump back with the same probability)
  - A proposal is always accepted if it is jumping to a more likely state
  - A proposal is accepted with probability equal to the importance ratio if it is jumping to a less likely state
- The acceptance policy, combined with the reversibility of the proposal distribution, makes sure that the algorithm explores states in proportion to p(x)!
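The claim that the acceptance policy plus reversibility yields exploration in proportion to p(x) is the detailed-balance condition, and it can be verified directly for a small discrete chain. A sketch with an illustrative 3-state target (the values are assumptions, chosen only to make the check concrete):

```python
# Illustrative target distribution on three states.
p = [0.2, 0.5, 0.3]

def q(y, x):
    """Symmetric proposal: pick one of the other two states uniformly."""
    return 0.5 if y != x else 0.0

def transition(x, y):
    """Metropolis transition probability T(x -> y) for y != x:
    propose y, then accept with probability min(1, p(y)/p(x))."""
    return q(y, x) * min(1.0, p[y] / p[x])

# Detailed balance p(x) T(x->y) = p(y) T(y->x) holds for every pair of
# states, which is exactly why p is the chain's stationary distribution.
for x in range(3):
    for y in range(3):
        if x != y:
            assert abs(p[x] * transition(x, y) - p[y] * transition(y, x)) < 1e-12
print("detailed balance holds")
```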
7. Traditional Algorithm Design
[diagram: input, algorithm, output]
8. In a Perfect World
[diagram: sensor data, algorithm, output]
9. In the Real World
[diagram: sensor data, algorithm]
- Sensor data may be noisy, missing, and expensive
10. What to Do?
[diagram: sensor data, our algorithm, probabilistic output]
11. Scenario
- Friendly agent at known location LF
12. Sensors
- N sensors at known locations LS1, ..., LSN
13. Enemy Agents
- M enemies at unknown locations LE1, ..., LEM
14. Query
- Is LF in the convex hull of LE1, ..., LEM?
15. Sensor Model
- Sensors stochastically detect the presence of enemy agents
[plot: ideal observation vs. actual sensor observation]
16. Problems in Answering the Query
- There may be no sensor near an enemy agent
- A sensor near an agent may not have sensed it
- A sensor may malfunction
  - it may report an enemy when none is present
  - it may not report an enemy when one is present
17. Our Approach
- Capture the relations in a Bayesian network
- Recast the query as a posterior estimate
[Bayesian network: observed sensor measurements R; unobserved enemy agent locations LE and query variable D]
18. Sensor Measurement Variables
- Ri is known to us
- Binary variable: either yes or no
19. Enemy Agent Location Variables
- LEi is not known
- Continuous variables
20. Query Variable
- D is not known
- Binary variable: either yes or no
21. Prior Probability P(LEi)
- Uniform distribution across the region
22. Conditional Probability P(Rj | LE)
- As specified by the sensor model
23. Conditional Probability P(D | LE)
- Let CH1, ..., CHS be the convex hull of LE
- D is true iff LF lies counterclockwise (rather than clockwise) of every directed hull edge
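The hull edge test can be implemented with standard computational geometry. A sketch, assuming 2D points as `(x, y)` tuples; the `convex_hull` and `inside_hull` helpers are illustrative, not the paper's code:

```python
def cross(o, a, b):
    """Signed area of the turn o -> a -> b: > 0 means counterclockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in
    counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(lf, hull):
    """LF is strictly inside iff it lies counterclockwise of every
    directed hull edge (the slide's condition)."""
    n = len(hull)
    if n < 3:
        return False
    return all(cross(hull[i], hull[(i + 1) % n], lf) > 0 for i in range(n))
```

With the hull oriented counterclockwise, a single inconsistent (clockwise) edge test is enough to conclude LF is outside.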
24. Query Reformulation
- We want to estimate P(D | R)
- Produces a probabilistic answer
- D can be replaced by any query, e.g.
  - extent and area of the point set
  - maximally distant point
25. MCMC for Inference
- MCMC: Markov chain Monte Carlo
- An approximate inference algorithm
- Allows us to feed in the observed R variables
- Generates a probability for D
- Uses traditional convex hull methods
26. How MCMC Works
- Draw T samples LE(1), ..., LE(T) from P(LE | R)
- Estimate P(D | R) as the fraction of samples in which D holds: P(D | R) ≈ (1/T) Σt P(D | LE(t))
- But how do we generate samples from P(LE | R)?
27. How MCMC Works (continued)
- Draw samples from a Markov chain whose stationary distribution is the desired P(LE | R)
- Start with random initial locations LE(1)
- At each step, propose moving LE(t) to a new location
  - If more likely, always accept: min(1, P(LE(t+1), R1, ..., RN) / P(LE(t), R1, ..., RN)) = 1
  - If less likely, sometimes accept: 0 < min(1, P(LE(t+1), R1, ..., RN) / P(LE(t), R1, ..., RN)) < 1
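The whole pipeline can be sketched end to end. The sketch below is heavily hedged: the region size, sensor layout, and disc sensor model parameters are all invented for illustration, and it fixes M = 3 enemies so the convex hull is simply a triangle. It runs a Metropolis chain over enemy locations and estimates P(D | R) as the fraction of samples whose hull contains LF:

```python
import math
import random

REGION = 10.0                  # side length of the square region (assumed)
LF = (5.0, 5.0)                # known friendly agent location (assumed)
SENSORS = [(2.0, 2.0), (2.0, 8.0), (8.0, 2.0), (8.0, 8.0), (5.0, 5.0)]
SENSOR_RANGE = 3.0             # detection radius (assumed)
P_DETECT = 0.9                 # P(fire | enemy within range) (assumed)
P_FALSE = 0.05                 # P(fire | no enemy within range) (assumed)

def posterior(R, enemies):
    """Unnormalized P(LE | R): uniform prior over the region times the
    sensor likelihood P(R | LE)."""
    for ex, ey in enemies:
        if not (0.0 <= ex <= REGION and 0.0 <= ey <= REGION):
            return 0.0
    prob = 1.0
    for (sx, sy), fired in zip(SENSORS, R):
        near = any(math.hypot(ex - sx, ey - sy) < SENSOR_RANGE
                   for ex, ey in enemies)
        p_fire = P_DETECT if near else P_FALSE
        prob *= p_fire if fired else (1.0 - p_fire)
    return prob

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, tri):
    """With M = 3 the convex hull is the triangle itself: p is inside iff
    it lies on the same side of all three directed edges."""
    a, b, c = tri
    signs = [cross(a, b, p), cross(b, c, p), cross(c, a, p)]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def mcmc_p_d_given_r(R, n_steps=20_000, seed=0):
    """Metropolis over enemy locations; returns the fraction of samples
    in which LF falls inside the enemies' hull, i.e. P(D | R)."""
    rng = random.Random(seed)
    enemies = [(rng.uniform(0, REGION), rng.uniform(0, REGION))
               for _ in range(3)]
    post = posterior(R, enemies)
    hits = 0
    for _ in range(n_steps):
        i = rng.randrange(3)                    # move one enemy at a time
        cand = list(enemies)
        cand[i] = (enemies[i][0] + rng.gauss(0.0, 1.0),
                   enemies[i][1] + rng.gauss(0.0, 1.0))
        cand_post = posterior(R, cand)
        # Symmetric Gaussian proposal, so accept with min(1, ratio);
        # candidates outside the region have zero prior and are rejected.
        if post > 0 and rng.random() < min(1.0, cand_post / post):
            enemies, post = cand, cand_post
        hits += in_triangle(LF, enemies)
    return hits / n_steps

# Posterior probability that LF is surrounded when all five sensors fire.
print(mcmc_p_d_given_r([1, 1, 1, 1, 1]))
```

Because the prior is uniform, proposals that leave the region get posterior zero and are rejected, which keeps the Metropolis acceptance rule valid without clipping.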
28. Experimental Results
[map: five enemy agents, sensors, and the friendly agent; color scale from low to high]
29. Experimental Results
- P(D | R) = 51%
- No sensors have sensed
30. Experimental Results
- P(D | R) = 44%
- Not a useful region
31. Experimental Results
32. Experimental Results
33. Experimental Results
34. Experimental Results
- P(D | R) = 93%
- Negative information
35. Usefulness of More Sensors
36. Future Work
- Intelligent sensor selection
  - Actively choose sensors that are likely to sense
  - Possible to predict entropy reduction
- Distributed MCMC
  - Run on Crossbow MICA2 motes
37. Conclusion
- Presented an interface between
  - noisy, incomplete sensor data
  - existing algorithms
- Used a probabilistic model to analyze the data
- Approximate inference via MCMC
38. Acknowledgment
- Slides are borrowed from
  - Teg Grenager's talk on MCMC (Stanford)
  - Rahul Biswas's IPSN talk (Stanford)
40. Markov chain Monte Carlo (MCMC)
- We have a set X and a distribution p(x) we wish to sample from
- Suppose that it is hard to sample from p(x) directly, but that it is possible to walk around in X using only local state transitions
- Insight: we can use a random walk to help us draw random samples from p(x)
41. Markov Chains for Sampling
- A Markov chain on a space X with transition kernel T is a random process (x(0), x(1), ..., x(t), ...) that satisfies the Markov property: p(x(t+1) | x(0), ..., x(t)) = T(x(t+1) | x(t))
- In order for a Markov chain to be useful for sampling p(x), the stationary distribution of the Markov chain must be p(x)
- If this is the case, we can start in an arbitrary state, use the Markov chain to do a random walk for a while, and then stop and output the current state x(t)
- The resulting state will be sampled from p(x)!