1
An Introduction to Kalman Filtering
  • Serge P. Hoogendoorn & Hans van Lint
  • Transport & Planning Department
  • Delft University of Technology

Rudolf E. Kalman
2
Scope of course
  • Introduction to Kalman filters
  • Application of Kalman filters to training ANN
  • Hands-on experience through exercises applied to your
    own problems or a problem we provide
  • Book review format
  • Each week one chapter will be discussed by one of
    the course participants
  • Take care: both the book and the lecture notes use a
    very loose notational convention!
  • Important information resource:
  • http://en.wikipedia.org/wiki/Kalman_filtering

3
Schedule
4
Kalman filter motivation
  • Modeling approaches
  • Mathematical-physical models
  • Black-box / time-series
  • Main advantage of time-series modeling is the ease
    with which on-line estimation is achieved
  • Main disadvantage is that many known relations
    describing the system's behavior are not used
  • Kalman filtering is a nice way to combine the
    advantages of mathematical-physical models and
    time-series models

5
Lord Kalman
  • First publication on Kalman filtering in 1960 by
    R.E. Kalman
  • Combines mathematical models with measurement
    information in a way that is very efficient and
    elegant
  • First important application: navigation of
    spacecraft
  • Data from radar, gyroscopes, visual observations
  • Knowledge regarding the dynamic behavior of the
    spacecraft
  • Many other applications since then

6
Mathematical-physical model
  • x_k: state (vector) that completely describes the
    situation in the system at time k
  • Simulating the system's behavior yields the new state
    x_{k+1}

[Diagram: the model propagates the state from time k-1 to time k]
7
Car-following example
  • Simple car-following model:
  • The speed V(t) of the leader is given (and exogenous)
  • Discretized form:
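  A minimal sketch, assuming a simple linear stimulus-response car-following
  rule (the exact equations of the slide are not reproduced in this
  transcript; the sensitivity γ and time step Δt are illustrative symbols):

    dv/dt = γ ( V(t) - v(t) ),        dd/dt = v(t)

  Discretized with time step Δt (Euler scheme):

    v_{k+1} = v_k + γ Δt ( V_k - v_k ),        d_{k+1} = d_k + Δt v_k

  where d_k is the follower's position, v_k its speed and V_k the exogenous
  leader speed.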

8
Car-following example
  • Now, let the state x_k be defined by
  • Then we can write the discretized car-following
    model in the following form:
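  Continuing the sketch above and assuming the state x_k = (d_k, v_k)^T, the
  discretized model can be written in linear state-space form as

    x_{k+1} = A x_k + B u_k,   with

    A = [ 1   Δt       ]        B = [ 0    ]        u_k = V_k
        [ 0   1 - γΔt  ]            [ γΔt  ]

  i.e. the exogenous leader speed enters as the input u_k.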

9
State-space modeling
  • Useful to determine mathematical properties of the
    system (stability, controllability, observability)
  • Used in optimal control
  • Exercise 1
  • Consider a dynamical process from your own
    research field and formulate it as a (discrete /
    discretized) state-space description

10
Combining model & Kalman filter
  • Express the accuracy of the input / model by
    adding random noise w

[Diagram: model step from time k-1 to time k, with the process noise added]
11
Combining model & Kalman filter (2)
  • Introduction of the Kalman filter

[Diagram: model prediction from time k-1 to time k, corrected by the filter using the measurement at time k]
12
Use of the Kalman filter
  • Main objectives of the Kalman filter
  • Measurement information is used to find and
    eliminate modeling errors, errors in the input
    and errors in the parameters
  • Model information is used to eliminate outliers
    in the measurements
  • For the concept of filtering, the following
    distinctions are important
  • Filtering: aim to reconstruct the state at time
    k, using all available information up to and
    including time k
  • Predicting: based on the measurements up to time
    k, we forecast the system state for times l > k
  • Smoothing: reconstruct the state at time l,
    given measurements up to time k > l

13
Combining model & Kalman filter (3)
  • The Kalman filter uses the available measurements
    to correct the model predictions in the most
    efficient way
  • This makes it possible to correct the input
    quantities, the model parameters and the predicted
    output
  • Example: for an unknown quantity x we have the
    following model and measurements z
  • Choose as estimator:
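  The slide's formulas are not preserved here; a minimal scalar sketch
  consistent with the text (x̂_m, σ_m² and σ_z² are illustrative symbols):
  suppose the model yields a prediction x̂_m of the unknown x with variance
  σ_m², and the measurement is z = x + v with noise variance σ_z². A natural
  estimator is the weighted combination

    x̂ = (1 - k) x̂_m + k z,        0 ≤ k ≤ 1

  with a weight k still to be chosen.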

14
Estimator variance
  • We find the variance of this estimator (a sketch
    follows below)
  • Note: for k = 0 (no weight to the measurement)
  • For k = 1 (no weight to the model)
  • How to use the information from both model and
    measurements in an optimal way?
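  In the sketch above, assuming independent model and measurement errors,
  the variance of the estimator is

    Var(x̂) = (1 - k)² σ_m² + k² σ_z²

  so k = 0 gives Var(x̂) = σ_m² (no weight to the measurement) and k = 1
  gives Var(x̂) = σ_z² (no weight to the model).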

15
Minimum variance estimator
  • Minimum variance estimator: choose the weight k that
    minimizes the variance above (see the sketch below)
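  Minimizing the variance above with respect to the weight k gives, in the
  same sketch,

    k* = σ_m² / ( σ_m² + σ_z² ),        Var(x̂*) = σ_m² σ_z² / ( σ_m² + σ_z² )

  i.e. model and measurement are weighted inversely to their variances, and
  the resulting variance is smaller than both σ_m² and σ_z².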

16
Kalman filter
  • The basics of the general Kalman filter are the same
  • Multi-dimensional generalization based on a
    discrete-time dynamic system and a measurement
    equation
  • Minimum variance estimator
  • The filter will be derived in the remainder of this
    introduction

17
Discrete Kalman filter
  • We consider linear stochastic discrete-time
    systems
  • Model and measurement noise are assumed Gaussian
    and mutually independent; the following holds:
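  In one standard notation (the slide's own symbols are not preserved in the
  transcript), such a system and its noise assumptions read

    x_{k+1} = A_k x_k + B_k u_k + w_k
    z_k     = H_k x_k + v_k

    w_k ~ N(0, Q_k),   v_k ~ N(0, R_k),   E[ w_k v_l^T ] = 0 for all k, l.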

18
Examples of state-space formulation
  • Consider AR(1) model
  • Now consider AR(2) model
  • AR(2) model is not in the correct form
  • How to rewrite AR(2) model?

19
Examples of state-space formulation
  • Consider AR(1) model
  • Now consider AR(2) model
  • Define a new state; then (see the sketch below)
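  A sketch of this rewriting (standard, though the slide's symbols may
  differ): the AR(1) model y_{k+1} = a y_k + w_k is already in state-space
  form with x_k = y_k. For the AR(2) model y_{k+1} = a_1 y_k + a_2 y_{k-1} + w_k,
  define the new state x_k = (y_k, y_{k-1})^T; then

    x_{k+1} = [ a_1  a_2 ] x_k + [ 1 ] w_k
              [ 1    0   ]       [ 0 ]

  which is again of the linear form x_{k+1} = A x_k + w̃_k.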

20
Discrete Kalman filter
  • Initial conditions for the state x_0
  • Note that the time step between two times k and
    k+1 need not be constant!

21
Discrete Kalman filter
  • The aim of the DKF (discrete Kalman filter) is to find
    an estimator for the state that optimally uses
    the model and measurement information
  • Maximum a-posteriori estimator
  • Conditional mean estimator
  • Minimum variance estimator

22
Discrete Kalman filter
  • Important property: if the conditional distribution
    of the state given the measurements is Gaussian,
    then all of the above estimators are identical

23
Example
  • Reconsider our model
  • with
  • Then obviously we have

24
Example
  • The conditional p.d.f. for x given z equals
  • Some tedious computation leaves us with
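  A sketch in the notation of the earlier scalar example (assuming a Gaussian
  model prediction x ~ N(x̂_m, σ_m²) and a measurement z = x + v with
  v ~ N(0, σ_z²)): the conditional p.d.f. of x given z is again Gaussian, with

    E[ x | z ]   = ( σ_z² x̂_m + σ_m² z ) / ( σ_m² + σ_z² )
    Var( x | z ) = σ_m² σ_z² / ( σ_m² + σ_z² )

  which coincides with the minimum variance estimator derived above.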

25
Example
  • The maximum a-posteriori estimate in the case of a
    Gaussian posterior is equal to the mean
  • Note that this is equal to the minimum variance
    estimator we derived earlier
  • Note that we did not make any prior assumptions
    regarding the linear structure of the estimator
  • For general discrete linear processes with
    Gaussian noise
  • Optimal filter is linear
  • Kalman filter is optimal in the sense of minimum
    variance

26
The Kalman Filter
  • Multi-dimensional generalization of scalar
    results
  • Let the minimum variance estimator of x_k given
    measurements z_1, ..., z_l be denoted by x̂_{k|l}
  • And let P_{k|l} denote the covariance matrix of this
    estimator
  • For the initial conditions, we have

27
The Kalman Filter
  • Time-propagation (prediction)
  • Measurement adaptation (correction)
  • Kalman gain
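  In the notation of the earlier sketch (estimates written x̂_{k|l},
  covariances P_{k|l}), the standard discrete Kalman filter recursion is:

    Time propagation (prediction):
      x̂_{k|k-1} = A_{k-1} x̂_{k-1|k-1} + B_{k-1} u_{k-1}
      P_{k|k-1}  = A_{k-1} P_{k-1|k-1} A_{k-1}^T + Q_{k-1}

    Measurement adaptation (correction):
      x̂_{k|k} = x̂_{k|k-1} + K_k ( z_k - H_k x̂_{k|k-1} )
      P_{k|k}  = ( I - K_k H_k ) P_{k|k-1}

    Kalman gain:
      K_k = P_{k|k-1} H_k^T ( H_k P_{k|k-1} H_k^T + R_k )^{-1}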

28
The Kalman Filter
  • Note the clear predictor-corrector structure
  • The covariances P_{k|k-1} and P_{k|k} and the filter gain
    do not depend on the measurements z_k and can
    thus be computed off-line (drawbacks?)
  • Besides the optimal estimate, the filter produces
    the covariance matrix P_{k|k}, which is an important
    measure of the accuracy of the estimate
  • The matrix also shows the effect of changing the
    measurement settings on the accuracy of the
    estimates!

29
Application example
  • Consider a bicycle equipped with GPS, driving at
    10 km/h (approx. constant speed)
  • Due to wind, grades, etc., some variation in the
    speed is present
  • Every hour, we want to determine the speed and
    location of the bicycle
  • We have the following initial conditions

30
Application example
  • Assuming disturbances act on the speed, we decide
    to apply noise to the speed dynamics
  • For the position we have
  • State dynamics with x_k = (d_k, s_k)^T

31
Application example
  • Assume that we have data on the vehicle positions
  • with a measurement error R = 2 km²
  • If we assume that we have the observations z_1 = 9 km,
    z_2 = 19.5 km, z_3 = 29 km, these can be used
    to estimate the location and the speed
  • What are the results of application of a Kalman
    filter?
  • Exercise: code the Kalman filter in Matlab (a Python
    sketch follows below)
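  The exercise asks for Matlab; the following is a minimal sketch in
  Python/NumPy instead. The transition and measurement matrices follow the
  bicycle example (hourly steps, state x_k = (d_k, s_k)^T, only the position
  measured); the initial conditions and the process-noise covariance Q are
  not given in this transcript, so the values marked "assumed" are
  placeholders only.

    import numpy as np

    A = np.array([[1.0, 1.0],      # d_{k+1} = d_k + s_k * (1 hour)
                  [0.0, 1.0]])     # s_{k+1} = s_k
    H = np.array([[1.0, 0.0]])     # only the position d is measured
    Q = np.array([[0.0, 0.0],
                  [0.0, 0.1]])     # assumed: noise on the speed dynamics only
    R = np.array([[2.0]])          # measurement variance, 2 km^2 (from the slides)

    x = np.array([[0.0],           # assumed initial position (km)
                  [10.0]])         # assumed initial speed (km/h)
    P = np.diag([1.0, 1.0])        # assumed initial covariance

    for k, z in enumerate((9.0, 19.5, 29.0), start=1):   # z_1, z_2, z_3 from the slides
        # Time propagation (prediction)
        x = A @ x
        P = A @ P @ A.T + Q
        # Measurement adaptation (correction)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        print(f"k={k}  estimate={x.ravel()}  diag(P)={np.diag(P)}")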

32
Application example
  • Time k = 1
  • Time k = 2

33
Application example
  • Time k = 3
  • Note that the speed can only be estimated
    correctly when two positions at two different
    time-steps are known
  • This becomes apparent from the covariances P

34
Example 2
  • Consider a model where only the measurements are
    subject to noise
  • Let P_0 be the variance of the initial estimate
  • Determine the covariance matrices P_{k|k-1} and P_{k|k}

35
Example 2
  • Consider a model where only the measurements are
    subject to noise
  • Let P_0 be the variance of the initial estimate
  • Determine the covariance matrices P_{k|k-1} and P_{k|k}
  • We can prove that
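  A scalar sketch of such a proof (assuming, for illustration, the
  noise-free model x_{k+1} = x_k with measurements z_k = x_k + v_k and
  Var(v_k) = R): since Q = 0,

    P_{k|k-1} = P_{k-1|k-1},        P_{k|k} = P_{k|k-1} R / ( P_{k|k-1} + R )

  and by induction on k

    P_{k|k} = P_0 R / ( k P_0 + R )

  which decreases monotonically towards zero as more measurements arrive.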

36
Example 2
  • Note that in this case
  • For the filter gain, we have
  • Since
  • we see that the influence of the measurements
    becomes negligible for large k, since K_k → 0

37
Example 2
  • What can you say about the filter behavior when
    there is no information about the initial
    conditions?

38
Example 2
  • What can you say about the filter behavior when
    there is no information about the initial
    conditions?
  • In this case, we have
  • Which in turn implies that
  • This yields
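  In the same scalar sketch, "no information about the initial conditions"
  corresponds to letting P_0 → ∞; then

    P_{k|k} → R / k,        K_k → 1 / k

  and the filtered estimate tends to the sample mean of the measurements,
  x̂_{k|k} → (1/k) ( z_1 + ... + z_k ): the filter is driven entirely by the
  data.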

39
Proof of the Kalman theorem
  • Let us consider the mean value of the state x_{k+1}
  • This describes the conditional mean estimator
    (equal to the minimum variance estimator, and the
    maximum a posteriori estimator)

40
Proof of the Kalman theorem
  • For the covariance matrix we have

41
Proof of the Kalman theorem
  • Important lemma (see notes)
  • Remainder of proof by complete induction

42
Stationary filter
  • Consider the time-invariant discrete-time
    model
  • Recall that
  • and thus
  • Let P_∞ denote the limit of P_{k|k-1} as k → ∞;
  • then we have
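  For a time-invariant system (A, H, Q, R constant) the limit P_∞ of the
  prediction covariance, when it exists, satisfies the discrete algebraic
  Riccati equation; in the notation of the earlier sketch:

    P_∞ = A [ P_∞ - P_∞ H^T ( H P_∞ H^T + R )^{-1} H P_∞ ] A^T + Q

  with the corresponding stationary gain K_∞ = P_∞ H^T ( H P_∞ H^T + R )^{-1}.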

43
Bicycle example
  • Recall the GPS-equipped bicycle
  • For k very large, we get

44
Filter divergence
  • What to do if the model is not a good description
    of reality?
  • In this case, the estimates of the filter can
    be much worse than one would suspect based on the
    covariance matrix (note that the covariance
    matrix does not depend on the actual
    measurements)
  • If the model is incorrect, then both the
    covariance matrix and the filter gain are
    incorrect: the filter "thinks" the prior estimates
    are accurate and will not weight current
    measurements adequately (recall the earlier example)
  • This phenomenon is referred to as
    filter-divergence

45
Filter divergence
  • Different ways to analyze filter divergence
  • Simple approach is based on analyzing the
    residuals
  • Theoretically, we have
  • and
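  A sketch of those residual (innovation) statistics, in the notation used
  above: with residual e_k = z_k - H_k x̂_{k|k-1}, a correctly specified
  filter gives

    E[ e_k ] = 0,        Cov( e_k ) = H_k P_{k|k-1} H_k^T + R_k

  and residuals at different times are uncorrelated (a white sequence).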

46
Filter divergence
  • Since we know the theoretical statistics of the
    residuals, we can compare these to the measured
    statistics of the residuals
  • If these are alike, then the filter will function
    properly
  • If not, we have a case of filter divergence

47
Suppressing filter divergence
  • Two straightforward approaches (for details, see
    literature)
  • Increase the model noise Q_k, possibly each time
    step (adaptive filtering)
  • Increase the weights of recent measurements
    (reduction of the measurement covariance R_k)

48
Extended Kalman filter
  • Consider the following non-linear system
  • Assume that we can somehow determine a reference
    trajectory
  • Then the system can be linearized around this
    reference trajectory (a sketch follows below)
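  In one standard notation (a sketch; the slide's own symbols are not
  preserved), the non-linear system and its linearization around a reference
  trajectory x̄_k read

    x_{k+1} = f( x_k, u_k ) + w_k,        z_k = h( x_k ) + v_k

    x_{k+1} ≈ f( x̄_k, u_k ) + F_k ( x_k - x̄_k ) + w_k
    z_k     ≈ h( x̄_k )      + H_k ( x_k - x̄_k ) + v_k

  where F_k = ∂f/∂x and H_k = ∂h/∂x are the Jacobians evaluated at x̄_k.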

49
Extended Kalman filter
  • For the measurement equation, we have
  • We can then apply the standard Kalman filter to
    the linearized model
  • How to choose the reference trajectory?
  • The idea of the extended Kalman filter is to
    re-linearize the model around the most recent
    state estimate, i.e.

50
The Extended Kalman Filter
  • Time-propagation (prediction)
  • Measurement adaptation (correction)
  • Kalman gain
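  A sketch of the resulting recursion (standard extended Kalman filter form,
  using the Jacobians F and H from the linearization above):

    Time propagation (prediction):
      x̂_{k|k-1} = f( x̂_{k-1|k-1}, u_{k-1} )
      P_{k|k-1}  = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q_{k-1}

    Measurement adaptation (correction):
      x̂_{k|k} = x̂_{k|k-1} + K_k ( z_k - h( x̂_{k|k-1} ) )
      P_{k|k}  = ( I - K_k H_k ) P_{k|k-1}

    Kalman gain:
      K_k = P_{k|k-1} H_k^T ( H_k P_{k|k-1} H_k^T + R_k )^{-1}

  with F_{k-1} evaluated at x̂_{k-1|k-1} and H_k at x̂_{k|k-1}.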

51
Next week
  • Hans will discuss the application of Kalman filters
    to adaptive parameter identification (in
    particular for ANNs)
  • Exercise with state-space modeling (exercise 1),
    implementing a simple filter in Matlab (exercise
    2) and setting up an extended Kalman filter