Case Study: An Information Theoretic Approach to Observer Path Design for Bearings-Only Tracking

Transcript and Presenter's Notes

1
Case Study: An Information Theoretic Approach to
Observer Path Design for Bearings-Only Tracking
(A. Logothetis, A. Isaksson, R. Evans)
  • Sensor Reading Group
  • 30 October 2003
  • Timothy Chung

2
Main contributions
  • Given an array of bearings-only sensors, the
    authors optimally solve the 2D target tracking
    problem, using as cost the mutual information
    between the measurement sequence and
  • (a) the final target state, x(T), and
  • (b) the target trajectory sequence, X_T.

3
Previous work
  • Some of the previous work:
  • Didn't include process noise in the target model
  • Assumed constant measurement noise,
  • i.e., sensor measurements are range-INdependent
  • Metric is a function of the Fisher Information
    Matrix (FIM)
  • CRLB = FIM⁻¹, which lower-bounds the estimation
    error covariance
  • e.g., maximize the determinant, trace, or
    smallest eigenvalue of the FIM
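The FIM-based criteria listed above can be sketched numerically. This is a minimal illustration with a hypothetical 2x2 FIM (the numbers are not from the paper):

```python
import numpy as np

# Illustrative 2x2 Fisher Information Matrix (hypothetical values).
fim = np.array([[4.0, 1.0],
                [1.0, 2.0]])

# Cramer-Rao lower bound: CRLB = FIM^-1.
crlb = np.linalg.inv(fim)

# Classical scalarizations used as design criteria:
d_opt = np.linalg.det(fim)             # maximize det(FIM)  (D-optimality)
a_opt = np.trace(fim)                  # maximize trace(FIM) (A-optimality)
e_opt = np.linalg.eigvalsh(fim).min()  # maximize smallest eigenvalue (E-optimality)

print(d_opt, a_opt, e_opt)
```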

4
Modeling
  • Target
  • Statistical model of the kinematics
  • Constant velocity with zero-mean random
    accelerations [state equation shown as an image
    on the slide]
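The slide's state equation was an image; a standard discrete-time constant-velocity model with zero-mean random accelerations (a common form, not necessarily the authors' exact notation) is, per Cartesian axis with sampling period $\Delta t$:

```latex
\begin{pmatrix} p(k+1) \\ v(k+1) \end{pmatrix}
=
\underbrace{\begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}}_{A}
\begin{pmatrix} p(k) \\ v(k) \end{pmatrix}
+ w(k),
\qquad w(k) \sim \mathcal{N}(0, Q)
```

Here $p$ is position, $v$ is velocity, and $Q$ is the process-noise covariance induced by the random accelerations.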

5
Modeling, cont.
  • Measurement
  • The sensor array has a preference in orientation
    relative to the target!
  • Measurement noise is given by [equation shown
    as an image on the slide]
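A minimal sketch of a bearings-only measurement with range-dependent noise. The linear growth law `sigma0 + kappa * r` and its parameters are hypothetical, chosen only to illustrate range dependence; the paper's exact noise model was on the slide image:

```python
import math

def bearing_noise_std(r, sigma0=0.01, kappa=1e-4):
    """Illustrative range-dependent bearing noise std (hypothetical law):
    noise grows with range r, so measurements are range-DEpendent."""
    return sigma0 + kappa * r

def bearing(target_xy, sensor_xy):
    """Noise-free bearing from sensor to target (radians)."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    return math.atan2(dy, dx)

b = bearing((100.0, 100.0), (0.0, 0.0))
print(b)  # pi/4: target is on the 45-degree diagonal
```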

6
Information Theory Tools
  • Recall the definitions of entropy and mutual
    information for continuous random variables
  • Entropy
  • Conditional Entropy
  • Mutual Information
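The three definitions named above, in their standard form for continuous random variables (differential entropy, natural logarithm):

```latex
h(X) = -\int f(x)\,\ln f(x)\,dx
\qquad
h(X \mid Y) = -\iint f(x,y)\,\ln f(x \mid y)\,dx\,dy
\qquad
I(X;Y) = h(X) - h(X \mid Y)
```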

7
Preliminaries
  • Given a linear Gauss-Markov system
  • Notation: k ∈ [0, T]
  • X_k = ( x(1), x(2), ..., x(k) )
  • Y_k = ( y(1), y(2), ..., y(k) )
  • Statistical properties
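The system equations on this slide were images; a linear Gauss-Markov system in a standard form (possibly differing from the authors' exact notation) is:

```latex
x(k+1) = A(k)\,x(k) + w(k), \qquad w(k) \sim \mathcal{N}\big(0, Q(k)\big),\\
y(k) = C(k)\,x(k) + v(k), \qquad v(k) \sim \mathcal{N}\big(0, R(k)\big),
```

with $x(0) \sim \mathcal{N}\big(\bar{x}_0, P(0 \mid 0)\big)$ and $\{w(k)\}$, $\{v(k)\}$, $x(0)$ mutually independent.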

8
Lemma 3.1: I(x(T); Y_T)
  • The mutual information between the final state
    x(T) and the measurement sequence Y_T is given by
    [formula shown as an image on the slide]
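The lemma's formula was an image in the transcript. For a linear-Gaussian system it takes the standard log-determinant form, consistent with the proof sketched on the following slides:

```latex
I\big(x(T); Y_T\big)
= \frac{1}{2}\,\ln \frac{\det P(T \mid 0)}{\det P(T \mid T)}
```

where $P(T \mid 0)$ is the a priori covariance of $x(T)$ and $P(T \mid T)$ is the a posteriori covariance given $Y_T$.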

9
Proof of Lemma 3.1
  • First, we note that the state x(T) is Gaussian,
    f(x(T)), with mean and covariance given by the
    recursive mean and covariance extrapolation
    equations [shown as images on the slide]
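The extrapolation equations referenced here, written in standard Kalman-filter notation (a reconstruction; the slide showed them as images):

```latex
\mu(k+1) = A(k)\,\mu(k),
\qquad
P(k+1 \mid 0) = A(k)\,P(k \mid 0)\,A(k)^{\!\top} + Q(k)
```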

10
Proof of Lemma 3.1, cont.
  • Similarly, f(x(T) | Y_T) is Gaussian with mean
    and covariance P(T|T) given by the Kalman filter
    update equations
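The Kalman filter measurement update referenced above can be sketched directly; this is the standard form (the slide's notation may differ), shown on a small illustrative example:

```python
import numpy as np

def kalman_update(m, P, y, C, R):
    """Standard Kalman measurement update for prior N(m, P)
    and measurement y = C x + v, v ~ N(0, R)."""
    S = C @ P @ C.T + R                    # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)         # Kalman gain
    m_post = m + K @ (y - C @ m)           # a posteriori mean
    P_post = (np.eye(len(m)) - K @ C) @ P  # a posteriori covariance
    return m_post, P_post

# Illustrative numbers: observe only the first state component.
m = np.zeros(2)
P = np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
m2, P2 = kalman_update(m, P, np.array([1.0]), C, R)
print(m2, P2)
```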

11
Proof of Lemma 3.1, cont.
  • Now, we plug these distributions into the
    definition of the mutual information
  • Recall f(x(T)) ~ N(μ₁, Σ₁) and
    f(x(T) | Y_T) ~ N(μ₂, Σ₂)
  • The entropy of a multivariate Gaussian
    distribution is given by [formula shown as an
    image on the slide]
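The entropy of an $n$-dimensional Gaussian $X \sim \mathcal{N}(\mu, \Sigma)$ is a standard result (natural logarithm, in nats):

```latex
h(X) = \frac{1}{2} \ln\!\big( (2\pi e)^{n} \det \Sigma \big)
```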

12
Proof of Lemma 3.1, cont.
  • The result of Lemma 3.1 follows immediately
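The cancellation behind the lemma can be checked numerically: the $(2\pi e)^n$ factors cancel in the entropy difference, leaving only the log-ratio of determinants. The covariances below are illustrative, not from the paper:

```python
import numpy as np

def gauss_entropy(Sigma):
    """Differential entropy of N(mu, Sigma) in nats."""
    n = Sigma.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(Sigma))

# Illustrative prior covariance P1 and (smaller) posterior covariance P2.
P1 = np.array([[2.0, 0.3], [0.3, 1.5]])
P2 = np.array([[0.5, 0.1], [0.1, 0.4]])

mi_entropy = gauss_entropy(P1) - gauss_entropy(P2)  # h(x(T)) - h(x(T)|Y_T)
mi_logdet = 0.5 * np.log(np.linalg.det(P1) / np.linalg.det(P2))

print(mi_entropy, mi_logdet)  # the two expressions agree
```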

13
Maximization of Mutual Information
  • Formulate this as a constrained optimization
    problem
  • Given dynamics equations (target, observer,
    measurement models)
  • Initial conditions
  • Constraints (e.g. dynamics of observer vehicle)
  • Find the maximizing control sequence [objective
    shown as an image on the slide]
  • where U_{T-1} is the sequence of control inputs
    for k = 1, ..., T-1
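A myopic one-step sketch of the idea (NOT the authors' optimization method, which solves for the whole control sequence): among a few candidate observer positions, pick the one whose linearized bearing measurement yields the largest one-step mutual information gain. All numbers and the candidate set are hypothetical:

```python
import numpy as np

def bearing_jacobian(target, observer):
    """Jacobian of atan2(dy, dx) w.r.t. the 2D target position."""
    dx, dy = target - observer
    r2 = dx * dx + dy * dy
    return np.array([[-dy / r2, dx / r2]])

def greedy_observer_step(target_est, P, candidates, sigma2=1e-3):
    """Pick the candidate observer position maximizing
    I = 0.5 * ln(det P_prior / det P_post) for one bearing measurement."""
    best, best_mi = None, -np.inf
    for obs in candidates:
        C = bearing_jacobian(target_est, np.asarray(obs, float))
        S = C @ P @ C.T + sigma2                      # innovation covariance
        P_post = P - P @ C.T @ np.linalg.inv(S) @ C @ P
        mi = 0.5 * np.log(np.linalg.det(P) / np.linalg.det(P_post))
        if mi > best_mi:
            best, best_mi = obs, mi
    return best, best_mi

P = np.diag([4.0, 4.0])                  # prior target-position covariance
target_est = np.array([10.0, 0.0])       # estimated target position
cands = [(0.0, 0.0), (10.0, -5.0), (5.0, 5.0)]
choice, mi = greedy_observer_step(target_est, P, cands)
print(choice, mi)  # the closest candidate gives the strongest bearing geometry
```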

14
Application to Mobile Robotics
  • Bearings-only problem

15
References
  • Logothetis, EM Algorithms for State and
    Parameter Estimation of Stochastic Dynamical
    Systems, Ph.D. thesis, University of Melbourne,
    Australia, 1997.
  • Anderson and Moore, Optimal Filtering, Prentice
    Hall, New Jersey, 1979.
  • Cover and Thomas, Elements of Information
    Theory, John Wiley & Sons, 1991.