Title: Is the dot product n·K negative?

Transcript:
1
Lecture 15
5
Is the dot product n·K negative?
(Figure: camera-axis unit vector K and cue unit normal n)
8
If not, then the cue cannot be seen by the camera.
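As a minimal sketch of this test (the vectors below are hypothetical examples; the deck defines K as the camera-axis direction and n as the cue's unit normal):

```python
def cue_visible(n, K):
    """Cue-visibility test from the slides: the cue can face the camera
    only if the dot product n.K is negative."""
    return sum(ni * ki for ni, ki in zip(n, K)) < 0.0

# Hypothetical example: camera axis along +x, cue normal pointing back at it.
K = (1.0, 0.0, 0.0)       # unit vector along the camera's optical axis
n = (-1.0, 0.0, 0.0)      # unit normal of the cue
print(cue_visible(n, K))  # True: n.K = -1.0 < 0
```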
9
Direction-cosine matrix
14
Consider the unit normal to the three cues
indicated below.
15
The unit-normal test and c.s. proximity test may
not resolve ambiguity with either of these.
16
Any thoughts for inferring occlusion?
17
The Extended Kalman Filter
18
Outline
  • Illustrate our particular use of the EKF with
    video of experimental systems
  • Develop EKF using related examples for those not
    yet familiar with it
  • Discuss some of the practical ways in which we
    have found it useful as well as some of the
    pitfalls

19
Visual guidance of a forklift
20
Visual guidance of a wheelchair
21
The EKF enables both teaching
and tracking.
22
EKF needed (in part) due to limitations of odometry.
Odometry: the use of differential equations to relate wheel rotation to evolving position/orientation.
With longer trajectories, these odometry-based integrals for position deteriorate.
23
Estimation based on odometry alone is
particularly poor in the presence of
high-wheel-slip pivoting.
24
Odometry only, or Dead Reckoning.
(Figure: wheel-rotation increments Δθ1, Δθ2)
25
Δθ1, Δθ2
Real-time sample of right-wheel increments.
26
Δθ1, Δθ2
Real-time sample of left-wheel increments.
27
Wheel-rotation samples.

Δθ1      Δθ2
0.1050   0.0950
0.1099   0.0901
0.1148   0.0852
0.1195   0.0805
0.1240   0.0760
0.1282   0.0718
0.1322   0.0678
0.1359   0.0641
0.1392   0.0608
0.1421   0.0579
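As a sketch of dead reckoning with these increments (the wheel radius r and track width b below are hypothetical; the deck does not give them):

```python
import math

r = 0.5   # wheel radius (hypothetical)
b = 1.0   # track width between the wheels (hypothetical)

# (Delta-theta1, Delta-theta2) pairs from the table above.
increments = [
    (0.1050, 0.0950), (0.1099, 0.0901), (0.1148, 0.0852), (0.1195, 0.0805),
    (0.1240, 0.0760), (0.1282, 0.0718), (0.1322, 0.0678), (0.1359, 0.0641),
    (0.1392, 0.0608), (0.1421, 0.0579),
]

x, y, phi = 0.0, 0.0, 0.0  # dead-reckoned pose: position and heading
for d1, d2 in increments:
    ds = r * (d1 + d2) / 2.0   # arc length advanced this step
    dphi = r * (d1 - d2) / b   # heading change this step
    # Midpoint integration of the odometry differential equations.
    x += ds * math.cos(phi + dphi / 2.0)
    y += ds * math.sin(phi + dphi / 2.0)
    phi += dphi

print(f"x={x:.4f}  y={y:.4f}  phi={phi:.4f}")
```

Because each step simply integrates the previous pose, any wheel slip or radius error accumulates; this is the deterioration the surrounding slides describe.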
34
Example of stochastic modeling and analysis.
35
Consider the one-dimensional case where both
wheels turn exactly together.
36
Consider the one-dimensional case where both
wheels turn exactly together.
  • θ1 = θ2 = a

Rather than time, a will be our independent variable, since the plant model is kinematics-based.
38
Solution: x(a) = x(0) + R a
40
Unknown term accounts for uncertainty in the model.
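The governing stochastic o.d.e. itself appears only as an image in the deck. A plausible reconstruction, consistent with the solution x(a) = x(0) + Ra above and the variance growth P(a) = P(0) + Qa derived later:

$$\frac{dx}{da} = R + w(a), \qquad E[w(a)] = 0, \qquad E[w(a)\,w(a')] = Q\,\delta(a - a'),$$

where w(a) is the unknown term and Q sets how much uncertainty it injects per unit of wheel rotation.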
43
Smaller Q ⇒ higher confidence in model.
44
Medium Q ⇒ moderate confidence in model.
45
Larger Q ⇒ little confidence in model.
46
We use this stochastic model to produce an
ongoing, Gaussian probability distribution for
the true x.
55
Gaussian probability distribution parameterized by μ and σ.
60
Expectation of x
61
f: Gaussian pdf
68
Expectation of (x − μ)²
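The integrals on the surrounding slides are images; the standard Gaussian results they evaluate are:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/2\sigma^2}, \qquad E[x] = \int_{-\infty}^{\infty} x\,f(x)\,dx = \mu, \qquad E[(x-\mu)^2] = \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\,dx = \sigma^2.$$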
72
Need rate of change of μ and σ².
73
Recall our stochastic o.d.e.
74
Actual or true value of x.
75
Best estimate of x = mean of the pdf for x.
76
Error in the best estimate.
77
Substitute into governing equation.
79
The best estimate or mean advances in accordance with the deterministic equation.
81
Subtract the lower from the upper.
83
Consider the variance of the probability distribution for x: E(Δx²) = P.
92
w is both zero-mean and uncorrelated with x(0).
96
This is a statement of uncorrelated
white noise.
98
Factor of ½ comes from the symmetry of the δ function.
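The intermediate steps are images; a plausible reconstruction of the variance propagation, consistent with the ½-factor remark above and the result P(a) = P(0) + Qa:

$$\frac{d\,\Delta x}{da} = w(a) \;\Rightarrow\; \frac{dP}{da} = \frac{d}{da}E[\Delta x^2] = 2\,E\Big[\Delta x\,\frac{d\,\Delta x}{da}\Big] = 2\,E[\Delta x\,w] = 2\cdot\frac{Q}{2} = Q,$$

since $E[\Delta x(a)\,w(a)] = \int_{a_0}^{a} Q\,\delta(a'-a)\,da' = Q/2$: the δ function sits at the endpoint of the integration interval, so only half of its area is captured.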
100
stochastic equations
103
Note that our level of certainty diminishes with
more and more wheel rotation.
105
E(x(a)) = E(x(0)) + R a = 0.0 + 0.5 a
P(a) = P(0) + Q a = 0.0 + 0.12 a
111
This means that position certainty is diminishing
with distance traveled.
112
The variance of the probability distribution
increases linearly with distance traveled, a.
122
E(x(10)) = 5.0,  P(10) = 1.2
123
Determine the probability that the true value of
x(10) is within plus or minus 0.1 of the mean of
5.0.
125
We could use tables to determine the probability by recognizing that this corresponds to the region within plus or minus 0.1/1.2^(1/2) = 0.09128 standard deviations of the mean.
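Rather than tables, the probability can also be evaluated directly; a small check using the slide's numbers (the 0.0727 result is computed here, not taken from the deck):

```python
import math

mu, var = 5.0, 1.2   # a priori mean and variance at a = 10
half_width = 0.1     # window of plus or minus 0.1 about the mean

k = half_width / math.sqrt(var)      # half-width in standard deviations
prob = math.erf(k / math.sqrt(2.0))  # P(|x - mu| <= half_width) for Gaussian x

print(f"{k:.5f} standard deviations -> probability {prob:.4f}")
# 0.09129 standard deviations -> probability 0.0727
```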
129
How do we use an observation at a = 10 to alter the a priori values of E(x(10)) and P(10)?
130
stochastic observation equation
131
Deterministic portion based on camera model.
132
Random portion: additive and Gaussian (as with the process equations), with E(v) = 0, E(v²) = R.
134
Bayes' theorem
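The theorem itself is shown as an image; in the scalar form used on the following slides, with f denoting a probability density:

$$f\big(x(10)\mid z\big) = \frac{f\big(z\mid x(10)\big)\,f\big(x(10)\big)}{f(z)}.$$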
136
A priori probability density function for x(10), which we already know.
137
This is our a posteriori probability density function because it is conditioned on the observation z.
146
pdf for x(10) conditioned on observation z is our a posteriori pdf.
147
The highest value of this pdf is therefore the mean of our a posteriori pdf.
154
Note that x only appears in the arguments of the exponentials.
159
Return for a moment to the exponential of the a posteriori function.
This must be equal to the exponential part of the updated Gaussian.
161
It follows that:
Compare this to our updated best estimate of x(10):
It follows that we may write:
(The observation-minus-prediction term above is the innovation at a = 10.)
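The update equations themselves are images. Completing the square in the product of the two Gaussian exponentials gives the standard scalar result; as a hedged reconstruction (H is the slope of the linearized observation model):

$$\hat{x}^{+} = \hat{x}^{-} + \frac{P^{-}H}{H^{2}P^{-} + R}\,\big(z - H\hat{x}^{-}\big), \qquad P^{+} = \Big(\frac{1}{P^{-}} + \frac{H^{2}}{R}\Big)^{-1},$$

where $z - H\hat{x}^{-}$ is the innovation at a = 10. This agrees with the P expression on slide 221 below.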
166
In the absence of new samples, the variance of
the probability distribution continues to grow.
167
This means that position certainty is diminishing
with distance traveled.
172
With occasional observations, the rate of
increase of uncertainty may be reduced.
173
With more frequent observations, uncertainty can
be kept near zero.
174
At the same time, best estimates of the mean of
the probability density function for x are being
adjusted with each observation.
181
(Figure sequence through slide 196: the a priori estimate is corrected observation by observation; the data z vs. a are tabulated on the next slide.)
197
a      z
1.0    -0.30766
2.0    -0.27405
3.0    -0.23773
4.0    -0.19840
5.0    -0.15567
6.0    -0.10913
7.0    -0.05827
8.0    -0.00249
9.0     0.05890
10.0    0.12676
198
Note that, since this is a simulation, we may create data consistent with any particular real event.
199
These data happen to be consistent with a vehicle with a leaky tire: R ranges from 0.55 to 0.45 over the course of the maneuver.
200
The data are also consistent with an initial position that is different from the assumed zero. The true (but unknown) initial position is x(0) = -0.1.
201
We can accommodate both of these unmodeled, unknown effects: the initial position error via an initial variance P(0) different from zero, and the changing wheel radius via a nonzero process-noise variance Q.
202
However, the extended Kalman filter also allows us to estimate R together with the current position x, by defining a two-dimensional state vector: x1 = x, x2 = R.
203
In such a case the same data could be applied in the estimation of a two-random-variable joint probability density function.
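A minimal scalar EKF sketch over these data. The deck's process values R = 0.5 and Q = 0.12 are used, but the camera observation model appears only as an image, so the bearing model h(x) below, its linearization, the noise variance R_OBS, and the initial variance P0 are all hypothetical stand-ins:

```python
import math

R_WHEEL = 0.5   # nominal wheel radius (deck: E(x(a)) = 0.5 a)
Q = 0.12        # process-noise intensity (deck: P(a) = 0.12 a)
R_OBS = 0.01    # observation-noise variance E(v^2), assumed
P0 = 0.05       # initial variance, nonzero to absorb initial-position error

def h(x):
    # Hypothetical camera model: bearing to a cue at x = 4.0, lateral offset 10.0.
    return math.atan2(x - 4.0, 10.0)

def H_jac(x):
    # Slope dh/dx of the hypothetical model, for the EKF linearization.
    return 10.0 / ((x - 4.0) ** 2 + 10.0 ** 2)

# Observation table (a, z) from slide 197.
data = [(1.0, -0.30766), (2.0, -0.27405), (3.0, -0.23773), (4.0, -0.19840),
        (5.0, -0.15567), (6.0, -0.10913), (7.0, -0.05827), (8.0, -0.00249),
        (9.0, 0.05890), (10.0, 0.12676)]

x_hat, P, a_prev = 0.0, P0, 0.0
for a, z in data:
    da = a - a_prev
    x_hat += R_WHEEL * da            # predict: deterministic model advances the mean
    P += Q * da                      # predict: variance grows linearly with a
    H = H_jac(x_hat)
    K = P * H / (H * P * H + R_OBS)  # scalar Kalman gain
    x_hat += K * (z - h(x_hat))      # update with the innovation
    P *= (1.0 - K * H)               # a posteriori variance
    a_prev = a
    print(f"a={a:4.1f}  x_hat={x_hat:7.4f}  P={P:7.4f}")
```

Estimating R as well, per slide 202, would replace x_hat and P with a 2-vector and a 2x2 matrix, with Jacobians in place of the scalars.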
221
P = (P₀⁻¹ + HᵀR⁻¹H)⁻¹
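This is the information (inverse-covariance) form of the update; it is algebraically equivalent to the gain form sketched earlier:

$$P = \big(P_0^{-1} + H^{T}R^{-1}H\big)^{-1} = (I - KH)\,P_0, \qquad K = P_0 H^{T}\big(H P_0 H^{T} + R\big)^{-1}.$$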
223
Trajectory reality vs. best estimates.
227
Note that it takes a while for these a posteriori estimates of R to lock on to the underlying reality.
228
For the first couple of corrections, they
actually get worse before improving.
229
Estimates of the state element of interest, x1,
are not necessarily improved by estimating
parameters such as R in real time.
230
Some practical considerations.
231
Departure, w(a), from the ideal no-slip
assumption (i.e. our process model) is
deterministic, but too complicated to model.
232
Unlike w(a), model error is actually deterministic and generally not additive; the c.l.t. may apply anyway.
233
Ensuring that E(v) = 0, however, takes some thought.
234
Here observation biases in teaching
are largely replicated in tracking.
235
Since it is difficult to create really good plant
models a priori, it is tempting to try to
estimate everything.
236
The bad effects of (partially) neglected nonlinearity are generally exacerbated when more random estimation variables are introduced.
237
The use of estimates to create/identify observations (which are in turn used to modify these same estimates):
  • Very tempting and convenient.
  • Requires maintaining high precision (low P).
  • Once this slips away, it can be impossible to recover.

238
For us the EKF has been very useful, and has worked well:
  • It is numerically efficient.
  • It automatically weights observations in a way that takes advantage of the information they contain for each individual element of the state, especially in light of information contained in prior observations.
  • It balances observation error against model or plant error.
  • It is forgiving in the choice of R, Q, and P(0) values.
  • It has always been a very stable estimator.

239
http://www.bastiansolutions.com/products/automated-guided-vehicles/default.asp