Nonlinear and Non-Gaussian Estimation with a Focus on Particle Filters
1
Nonlinear and Non-Gaussian Estimation with a Focus on Particle Filters
  • Prasanth Jeevan
  • Mary Knox
  • May 12, 2006

2
Background
  • Optimal linear filters
  • Wiener → Stationary
  • Kalman → Gaussian posterior, p(x|y)
  • Filters for nonlinear systems
  • Extended Kalman
  • Particle

3
Extended Kalman Filter (EKF)
  • Locally linearize the non-linear functions
  • Assume p(x_k | y_{1:k}) is Gaussian
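A minimal sketch of what the local linearization means here, assuming the common additive-noise model x_k = f(x_{k-1}) + v_{k-1}, y_k = h(x_k) + w_k (the specific model on the slide is not reproduced):

    F_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{x=\hat{x}_{k-1|k-1}}, \qquad
    H_k = \left.\frac{\partial h}{\partial x}\right|_{x=\hat{x}_{k|k-1}}

The standard Kalman recursion is then run with the Jacobians F_{k-1} and H_k in place of the linear system matrices, so the Gaussian form assumed for p(x_k | y_{1:k}) is only an approximation.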

4
Particle Filter (PF)
  • Weighted point mass or particle representation
    of possibly intractable posterior probability
    density functions, p(x|y)
  • Estimates recursively in time allowing for online
    calculations
  • Attempts to place particles in important regions
    of the posterior pdf
  • O(N) complexity in the number of particles
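The weighted point-mass representation referred to above, in the standard notation (particles x_k^i with normalized weights w_k^i), is:

    p(x_k \mid y_{1:k}) \approx \sum_{i=1}^{N} w_k^i \, \delta(x_k - x_k^i)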

5
Particle Filter Background [Ristic et al. 2004]
  • Monte Carlo Estimation
  • Pick N >> 1 particles distributed according to p(x)
  • Assumption: the samples x^i are independent
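A minimal statement of the Monte Carlo estimate being described, assuming the usual target of an expectation I = E_p[g(x)]:

    I = \int g(x)\, p(x)\, dx \;\approx\; \hat{I}_N = \frac{1}{N} \sum_{i=1}^{N} g(x^i), \qquad x^i \sim p(x)

By the law of large numbers, \hat{I}_N converges to I as N → ∞ when the x^i are independent draws from p(x).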

6
Importance Sampling
  • Cannot sample directly from p(x)
  • Instead sample from a known importance density, q(x), whose support covers that of p(x)
  • Estimate I from the samples and their importance weights, defined below
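A reconstruction of the standard importance-sampling estimate these bullets describe (the slide's own expressions are not in the transcript):

    I = \int g(x)\,\frac{p(x)}{q(x)}\, q(x)\, dx \;\approx\; \frac{1}{N} \sum_{i=1}^{N} g(x^i)\,\tilde{w}(x^i), \qquad x^i \sim q(x)

    \tilde{w}(x^i) = \frac{p(x^i)}{q(x^i)}, \qquad w^i = \frac{\tilde{w}(x^i)}{\sum_{j=1}^{N} \tilde{w}(x^j)}

When p(x) is known only up to a normalizing constant, the normalized weights w^i are used in place of \tilde{w}(x^i).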

7
Sequential Importance Sampling (SIS)
  • Iteratively represent the posterior density function by random samples with associated weights
  • Assumptions: x_k is a hidden Markov process; the y_k are conditionally independent given x_k
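A minimal sketch of the SIS recursion under those assumptions, in the standard notation for an importance density q (the slide's own equations are not in the transcript):

    x_k^i \sim q(x_k \mid x_{k-1}^i, y_k), \qquad
    w_k^i \;\propto\; w_{k-1}^i \,\frac{p(y_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)}{q(x_k^i \mid x_{k-1}^i, y_k)}

with the weights normalized to sum to one after each update.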

8
Degeneracy
  • Variance of the sample weights increases with time if the importance density is not optimal [Doucet 2000]
  • In a few cycles all but one particle will have negligible weight
  • The PF then spends effort updating particles that contribute little to approximating the posterior
  • N_eff, an estimate of the effective sample size [Kong et al. 1994], is used to detect this
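The usual estimate of the effective sample size referenced here, computed from the normalized weights, is:

    \hat{N}_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N} \left(w_k^i\right)^2}

It ranges from 1 (all weight on one particle, severe degeneracy) up to N (uniform weights); resampling is commonly triggered when it drops below a threshold such as N/2.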

9
Optimal Importance Density [Doucet et al. 2000]
  • Minimizes the variance of the importance weights to prevent degeneracy
  • Rarely possible to sample from; a simpler choice such as the transitional prior is often used instead
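For reference, the optimal importance density shown by Doucet et al. (2000), alongside the common substitution:

    q_{\mathrm{opt}}(x_k \mid x_{k-1}^i, y_k) = p(x_k \mid x_{k-1}^i, y_k)
    \qquad\text{vs.}\qquad
    q(x_k \mid x_{k-1}^i, y_k) = p(x_k \mid x_{k-1}^i)

The optimal choice minimizes the variance of w_k^i conditional on x_{k-1}^i and y_k, but sampling from it and evaluating the resulting weight p(y_k | x_{k-1}^i) is rarely tractable.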

10
Resampling
  • Generate a new set of samples by drawing, with replacement, from the current weighted particle set (a code sketch follows this list)
  • Weights are equal (1/N) after this i.i.d. resampling
  • O(N) complexity
  • Coupled with SIS, these are the two key
    components of a PF
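A minimal Python sketch of one O(N) scheme (systematic resampling); the slides do not say which resampling variant was used, so this choice is illustrative:

    import numpy as np

    def systematic_resample(particles, weights, rng=None):
        """Draw N new particles from the weighted set in O(N) time.

        particles: (N, d) array of states; weights: (N,) normalized weights.
        Returns the resampled particles and uniform weights 1/N.
        """
        rng = np.random.default_rng() if rng is None else rng
        N = len(weights)
        # One uniform offset shared by N evenly spaced points on [0, 1)
        positions = (rng.random() + np.arange(N)) / N
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0  # guard against floating-point round-off
        indices = np.zeros(N, dtype=int)
        i = j = 0
        while i < N:  # single pass over positions and cumulative weights
            if positions[i] < cumulative[j]:
                indices[i] = j
                i += 1
            else:
                j += 1
        return particles[indices], np.full(N, 1.0 / N)

After resampling, high-weight particles appear multiple times and low-weight ones disappear, which is the source of the sample impoverishment discussed on the next slide.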

11
Sample Impoverishment
  • Set of particles with low diversity
  • Particles with high weights are selected more
    often

12
Sampling Importance Resampling (SIR) [Gordon et al. 1993]
  • Importance density is the transitional prior, p(x_k | x_{k-1})
  • Resampling at every time step
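With that importance density, the SIS weight update above simplifies, and because resampling at every step resets w_{k-1}^i = 1/N the weights reduce to:

    x_k^i \sim p(x_k \mid x_{k-1}^i), \qquad w_k^i \;\propto\; p(y_k \mid x_k^i)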

13
SIR Pros and Cons
  • Pro: importance density and weight updates are easy to evaluate
  • Con: the observation is not used when transitioning the state to the next time step

14
A Cycle of SIR
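The original slide shows this cycle as a diagram; a minimal Python sketch of one SIR cycle, assuming user-supplied functions sample_transition(x) (a draw from p(x_k | x_{k-1})) and likelihood(y, x) (evaluates p(y_k | x_k)) -- both placeholder names, not from the slides:

    import numpy as np

    def sir_step(particles, y, sample_transition, likelihood, rng=None):
        """One SIR cycle: predict with the transitional prior, weight by the
        likelihood of the new observation y, then resample."""
        rng = np.random.default_rng() if rng is None else rng
        # 1. Propagate each particle through the state transition (prediction)
        particles = np.array([sample_transition(x) for x in particles])
        # 2. Weight each particle by the likelihood of the observation
        weights = np.array([likelihood(y, x) for x in particles])
        weights /= weights.sum()
        # 3. Resample with replacement according to the weights
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]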
15
Auxiliary SIR - Motivation [Pitt and Shephard 1999]
  • Want to use the observation when exploring the state space
  • To have particles in regions of high likelihood
  • Incorporate the observation into resampling at time k-1
  • Looking one step ahead to choose particles

16
ASIR - from SIR
  • From SIR we had the weighted-mixture approximation of the posterior
  • If we move the likelihood inside the mixture, particles can be selected using the observation
  • We don't have the predictive likelihood this requires, though
  • Use a characterization of x_k given x_{k-1}^i in its place,
  • such as the mean of the transitional prior
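A reconstruction of the standard Pitt-Shephard derivation these bullets follow (the slide's own equations are missing from the transcript; the symbol μ_k^i is the conventional choice and is assumed here):

    p(x_k \mid y_{1:k}) \;\propto\; \sum_{i=1}^{N} w_{k-1}^i\, p(y_k \mid x_k)\, p(x_k \mid x_{k-1}^i)

    q(x_k, i \mid y_{1:k}) \;\propto\; w_{k-1}^i\, p(y_k \mid \mu_k^i)\, p(x_k \mid x_{k-1}^i),
    \qquad \mu_k^i = \mathbb{E}\left[x_k \mid x_{k-1}^i\right] \;\text{ or a draw from } p(x_k \mid x_{k-1}^i)

Here p(y_k | μ_k^i) stands in for the unavailable predictive likelihood p(y_k | x_{k-1}^i).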

17
ASIR continued
  • So then we sample pairs (parent index, new state) from this proposal
  • And the new importance weight corrects for using the approximate likelihood
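In the standard formulation, with i^j denoting the parent index drawn for the j-th new particle, that weight is:

    w_k^j \;\propto\; \frac{p(y_k \mid x_k^j)}{p(y_k \mid \mu_k^{i^j})}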

18
ASIR Pros and Cons
  • Pro:
  • Can be less sensitive to peaked likelihoods and outliers, because the observation is used
  • Outliers: model-improbable states that can result in a dramatic loss of high-weight particles
  • Cons:
  • Added computation per cycle
  • If μ_k^i is a poor characterization of p(x_k | x_{k-1}^i) (i.e., large process noise), then resampling suffers and performance can degrade

19
Simulation: Linear
  • System equations: a linear-Gaussian state-space model
  • where v ~ N(0, 6) and w ~ N(0, 5)

20
Simulation: Linear, 10 Samples
21
Simulation: Linear, 50 Samples
22
Simulation: Linear
  • Table 1: Mean squared error per time step, by number of particles

Filter    10        50        100       1000
KF        0.0349    0.0351    0.0350    0.0352
ASIR      0.7792    0.0886    0.0417    0.0350
SIR       0.9053    0.0977    0.0496    0.0354
23
Simulation: Nonlinear
  • System equations: a nonlinear state-space model
  • where v ~ N(0, 6) and w ~ N(0, 5)

24
Simulation: Nonlinear, 10 Samples
25
Simulation: Nonlinear, 50 Samples
26
Simulation: Nonlinear, 100 Samples
27
Simulation: Nonlinear, 1000 Samples
28
Simulation: Nonlinear
  • Table 2: Mean squared error per time step, by number of particles

Filter    10        50        100       1000
EKF       812.08    826.20    827.94    838.75
ASIR      30.14     20.15     18.81     17.86
SIR       37.97     22.62     21.49     19.78
29
Conclusion
  • PF approaches the optimal KF estimates as N → ∞
  • PF better than EKF for nonlinear systems
  • ASIR generates better particles in certain
    conditions by incorporating the observation
  • PF is applicable to a broad class of system
    dynamics
  • Simulation approaches have their own limitations
  • Degeneracy and sample impoverishment

30
Conclusion (2)
  • Particle filters are composed of SIS and resampling
  • Many variations to improve efficiency (both
    computationally and for getting better
    particles)
  • Other PFs: Regularized PF, (EKF/UKF) PF, etc.