9
GENERAL RECURSIVE MINIMUM-VARIANCE GROWING-MEMORY FILTER (BAYES AND KALMAN FILTERS WITHOUT TARGET PROCESS NOISE)
9.1 INTRODUCTION
In Section 6.3 we developed a recursive least-squares growing-memory filter for the case where the target trajectory is approximated by a polynomial. In this chapter we develop a recursive least-squares growing-memory filter that is not restricted to having the target trajectory approximated by a polynomial [5, pp. 461–482]. The only requirement is that Y_{n-i}, the measurement vector at time n-i, be linearly related to X_{n-i} in the error-free situation. The Y_{n-i} can be made up of multiple measurements obtained at the time n-i, as in (4.1-1a), instead of a single measurement of a single coordinate, as was the case in (4.1-20), where Y_{n-1} = [y_{n-1}]. The Y_{n-i} could, for example, be a two-dimensional measurement of the target slant range and Doppler velocity. Extensions to other cases, such as the measurement of three-dimensional polar coordinates of the target, are given in Section 16.2 and Chapter 17.


Assume that at time n we have L+1 observations Y_n, Y_{n-1}, ..., Y_{n-L}, obtained at, respectively, times n, n-1, ..., n-L. These L+1 observations are represented by the matrix Y_{(n)} of (4.1-11a). Next assume that at some later time n+1 we have another observation Y_{n+1} given by

Y_{n+1} = M Φ X_n + N_{n+1}    (9.1-1)

Assume also that at time n we have a minimum-variance estimate X^*_{n,n} based on the past L+1 measurements represented by Y_{(n)}. This estimate is given by (4.1-30) with W_n given by (4.5-4).
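To make the notation concrete, here is a minimal numerical sketch (Python with NumPy; every matrix and value is an illustrative assumption of ours, not from the text) of the measurement model (9.1-1) for a constant-velocity target whose slant range and Doppler velocity are both observed:

```python
import numpy as np

T_s = 1.0                                # sampling interval (illustrative)

# Constant-velocity state X = [range, range-rate]^T and its transition matrix
Phi = np.array([[1.0, T_s],
                [0.0, 1.0]])

# Both slant range and Doppler velocity are observed, so M = I here;
# in general M selects the measured components of the state.
M = np.eye(2)

X_n = np.array([10_000.0, -150.0])       # assumed true state at time n
R1 = np.diag([25.0**2, 2.0**2])          # assumed measurement noise covariance

# One realization of (9.1-1): Y_{n+1} = M Phi X_n + N_{n+1}
N_np1 = np.random.multivariate_normal(np.zeros(2), R1)
Y_np1 = M @ Phi @ X_n + N_np1
```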

In turn, the covariance matrix S^*_{n,n} is given by (4.5-5). Now, to determine the new minimum-variance estimate X^*_{n+1,n+1} from the set of data consisting of Y_{(n)} and Y_{n+1}, one could again use (4.1-30) and (4.5-4) with Y_{(n)} replaced by Y_{(n+1)}, which is Y_{(n)} of (4.1-11a) with Y_{n+1} added to it. Correspondingly, the matrices T and R_{(n)} would then be appropriately changed to account for the enlargement of Y_{(n)} to include Y_{n+1}. This approach, however, has the disadvantage that it does not make use of the extensive computations already carried out to obtain the previous minimum-variance estimate X^*_{n,n} based on the past data Y_{(n)}. Moreover, it turns out that if Y_{n+1} is independent of Y_{(n)}, then the minimum-variance estimate X^*_{n+1,n+1} can be obtained directly from Y_{n+1} and X^*_{n,n} and their respective covariance matrices R_{n+1} and S^*_{n,n}. This is done by obtaining the minimum-variance estimate X^*_{n+1,n+1} using Y_{n+1} and X^*_{n,n} together with their covariances. No use is made of the original data set Y_{(n)}. This says that the estimate X^*_{n,n} and its covariance matrix S^*_{n,n} contain all the information we need about the previous L+1 measurements, that is, about Y_{(n)}. Here, X^*_{n,n} and its covariance matrix are sufficient statistics for the information contained in the past measurement vector Y_{(n)} together with its covariance matrix R_{(n)}. (This is similar to the situation encountered when we developed the recursive equations for the growing- and fading-memory filters in Sections 6.3, 7.2, and 1.2.6.)
9.2 BAYES FILTER
The recursive form of the minimum-variance estimate based on Y_{n+1} and X^*_{n,n} is given by [5, p. 464]

X^*_{n+1,n+1} = X^*_{n+1,n} + H_{n+1} (Y_{n+1} - M X^*_{n+1,n})    (9.2-1)

where

H_{n+1} = S^*_{n+1,n+1} M^T R_1^{-1}    (9.2-1a)

S^*_{n+1,n+1} = [ (S^*_{n+1,n})^{-1} + M^T R_1^{-1} M ]^{-1}    (9.2-1b)

S^*_{n+1,n} = Φ S^*_{n,n} Φ^T    (9.2-1c)

X^*_{n+1,n} = Φ X^*_{n,n}    (9.2-1d)
The above recursive filter is often referred to in the literature as the Bayes filter (because it can also be derived using the Bayes theorem on conditional probabilities [128]). The only requirement for the recursive minimum-variance filter to apply is that Y_{n+1} be independent of Y_{(n)}. When another measurement Y_{n+2} is obtained at a later time n+2 that is independent of the previous measurements, the above equations (indexed up by one) can be used again to obtain the estimate X^*_{n+2,n+2}. If Y_{(n)} and Y_{n+1} are dependent, the Bayes filter can still be used, except that it would then no longer provide the minimum-variance estimate. If the variates are reasonably uncorrelated, though, the estimate should be a good one.
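Here is a minimal sketch of one cycle of (9.2-1) through (9.2-1d) in Python with NumPy; the function name, variable names, and the use of plain matrix inversion are our own illustrative choices, not the book's:

```python
import numpy as np

def bayes_update(x_est, S_est, Y, Phi, M, R1):
    """One Bayes-filter cycle, (9.2-1) through (9.2-1d).

    x_est, S_est -- X*_{n,n} and S*_{n,n} from the previous cycle.
    Returns X*_{n+1,n+1} and S*_{n+1,n+1}.
    """
    # One-step prediction, (9.2-1d) and (9.2-1c)
    x_pred = Phi @ x_est                # X*_{n+1,n}
    S_pred = Phi @ S_est @ Phi.T        # S*_{n+1,n}

    # Updated covariance, (9.2-1b): note the two state-sized inversions
    R1_inv = np.linalg.inv(R1)
    S_new = np.linalg.inv(np.linalg.inv(S_pred) + M.T @ R1_inv @ M)

    # Filter gain, (9.2-1a), and measurement update, (9.2-1)
    H = S_new @ M.T @ R1_inv
    x_new = x_pred + H @ (Y - M @ x_pred)
    return x_new, S_new
```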
9.3 KALMAN FILTER (WITHOUT PROCESS NOISE)
If we apply the inversion lemma given by (2.6-14) to (9.2-1b), we obtain after some manipulation the following equivalent algebraic form for the recursive minimum-variance growing-memory filter estimate [5, p. 465]:

X^*_{n,n} = X^*_{n,n-1} + H_n (Y_n - M X^*_{n,n-1})    (9.3-1)

where

H_n = S^*_{n,n-1} M^T (R_1 + M S^*_{n,n-1} M^T)^{-1}    (9.3-1a)

S^*_{n,n} = (I - H_n M) S^*_{n,n-1}    (9.3-1b)

S^*_{n,n-1} = Φ S^*_{n-1,n-1} Φ^T    (9.3-1c)

X^*_{n,n-1} = Φ X^*_{n-1,n-1}    (9.3-1d)

The preceding Kalman filter equations are the same as those given by (2.4-4a) to (2.4-4j), except that the target model dynamic noise (U_n, or equivalently its covariance matrix Q_n) is not included. Not including the target model dynamic noise can lead to computational problems for the Kalman filter [5, Section 12.4]. For this reason this form of the Kalman filter is not generally used, and it is not the form proposed by Kalman. The Kalman filter with the target process noise included is revisited in Chapter 18.
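A matching minimal sketch of one cycle of (9.3-1) through (9.3-1d), under the same assumptions as the previous sketch (NumPy, our own naming). Note that the only inversion is measurement-sized, and that no Q_n term is added in the covariance prediction:

```python
import numpy as np

def kalman_update(x_est, S_est, Y, Phi, M, R1):
    """One Kalman cycle without process noise, (9.3-1) through (9.3-1d).

    x_est, S_est -- X*_{n-1,n-1} and S*_{n-1,n-1} from the previous cycle.
    Returns X*_{n,n} and S*_{n,n}.
    """
    # One-step prediction, (9.3-1d) and (9.3-1c); no Q_n is added here,
    # which is the source of the computational hazard noted in the text
    x_pred = Phi @ x_est                 # X*_{n,n-1}
    S_pred = Phi @ S_est @ Phi.T         # S*_{n,n-1}

    # Gain, (9.3-1a): the single inversion is only measurement-sized
    H = S_pred @ M.T @ np.linalg.inv(R1 + M @ S_pred @ M.T)

    # Measurement update, (9.3-1) and (9.3-1b)
    x_new = x_pred + H @ (Y - M @ x_pred)
    S_new = (np.eye(len(x_pred)) - H @ M) @ S_pred
    return x_new, S_new
```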
9.4 COMPARISON OF BAYES AND KALMAN FILTERS
As discussed in Sections 2.3, 2.5, and 2.6, the recursive minimum-variance growing-memory filter estimate is a weighted sum of the estimates Y_{n+1} and X^*_{n+1,n}, with the weighting done according to the relative importance of the two estimates; see (2.3-1), (2.5-9), and (2.6-7). Specifically, it can be shown that the recursive minimum-variance estimate can be written in the form [5, p. 385]

X^*_{n+1,n+1} = S^*_{n+1,n+1} [ (S^*_{n+1,n})^{-1} X^*_{n+1,n} + M^T R_1^{-1} Y_{n+1} ]    (9.4-1)
If the covariance matrix of Y_{n+1} depends on n, then R_1 is replaced by R_{n+1}.
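Because (9.3-1) through (9.3-1d) follow from (9.2-1) through (9.2-1d) by the inversion lemma, the two forms can be spot-checked against each other numerically. A quick sketch, assuming the bayes_update and kalman_update functions from the sketches above:

```python
import numpy as np

# Spot check: the Bayes form and the Kalman form produce identical
# estimates and covariances (uses bayes_update / kalman_update above).
rng = np.random.default_rng(0)
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])
M = np.eye(2)
R1 = np.diag([4.0, 0.25])
x = rng.normal(size=2)
A = rng.normal(size=(2, 2))
S = A @ A.T + np.eye(2)            # any symmetric positive-definite covariance
Y = rng.normal(size=2)

xb, Sb = bayes_update(x, S, Y, Phi, M, R1)
xk, Sk = kalman_update(x, S, Y, Phi, M, R1)
assert np.allclose(xb, xk) and np.allclose(Sb, Sk)
```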
The recursive minimum-variance Bayes and Kalman filter estimates are maximum-likelihood estimates when Y_{n+1} and Y_{(n)} are uncorrelated and Gaussian. All the other properties given in Section 4.5 for the minimum-variance estimate also apply. The Kalman filter has the advantage over the Bayes filter of eliminating the need for the two matrix inversions in (9.2-1b), which are of a size equal to that of the state vector X^*_{n,n} [which can be large, e.g., 10 × 10 for the example of (2.4-6)]. The Kalman filter, on the other hand, requires only the single matrix inversion in (9.3-1a), of an order equal to that of the measurement vector Y_{n+1} (a 4 × 4 inversion for the example of Section 2.4, where the target is measured in polar coordinates; see (2.4-7)). It is also possible to incorporate these four measurements one at a time if they are independent of each other; in this case no matrix inversion is needed.
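The last point can be illustrated with a short sketch: assuming R_1 is diagonal (independent measurement components), each component of Y can be folded in by itself, and the measurement-sized inversion of (9.3-1a) reduces to scalar divisions. The function below is our own illustrative construction:

```python
import numpy as np

def sequential_update(x_pred, S_pred, Y, M, r_diag):
    """Fold in independent scalar measurements one at a time.

    r_diag -- the diagonal of R_1 (assumed diagonal, i.e. independent
    measurement components); each gain divides by a scalar, so no
    matrix inversion is needed.
    """
    x, S = x_pred.copy(), S_pred.copy()
    for i in range(len(Y)):
        m = M[i]                           # i-th measurement row of M
        s = float(m @ S @ m) + r_diag[i]   # scalar innovation variance
        h = (S @ m) / s                    # gain vector for this component
        x = x + h * (Y[i] - m @ x)
        S = S - np.outer(h, m @ S)
    return x, S
```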
9.5 EXTENSION TO MULTIPLE MEASUREMENT CASE
In the Bayes and Kalman filters it is not necessary for Y_{n+1} to be just a single measurement made at time t_{n+1}. The term Y_{n+1} can be generalized to consist of L+1 measurements at L+1 times, given by

Y_{n+1}, Y_n, Y_{n-1}, ..., Y_{n-L+1}    (9.4-2)

For this more general case we can express the above L+1 measurements as a single stacked vector given by

Y_{(n+1)} = [ Y_{n+1}^T  Y_n^T  ...  Y_{n-L+1}^T ]^T    (9.4-3)

Then (4.1-11) follows from (4.1-5) through (4.1-10). It then immediately follows that (9.2-1) through (9.2-1d) and (9.3-1) through (9.3-1d) apply with M replaced by T of (4.1-11b) and Y_{n+1} replaced by Y_{(n+1)} of (9.4-3).
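A sketch of building the stacked quantities for this case. The block structure used below for the stacked observation matrix, with blocks M Φ^{-i} referring each earlier measurement back to the latest state, is our reading of the T of (4.1-11b); treat it as an assumption to be checked against Chapter 4:

```python
import numpy as np

def stacked_observation(Phi, M, L):
    """Build a stacked observation matrix with blocks
    M, M Phi^-1, ..., M Phi^-L (our reading of T in (4.1-11b)),
    relating Y_(n+1) of (9.4-3) to the state at the latest time."""
    Phi_inv = np.linalg.inv(Phi)
    blocks, back = [], np.eye(Phi.shape[0])
    for _ in range(L + 1):
        blocks.append(M @ back)   # measurement i steps back sees Phi^-i X
        back = back @ Phi_inv
    return np.vstack(blocks)

# The stacked measurement vector of (9.4-2)/(9.4-3) is then just the
# concatenation of the individual measurements, newest first, e.g.:
#   Y_stack = np.concatenate([Y_np1, Y_n, ..., Y_nmLp1])
# and it can be fed to the update sketches above with M replaced by
# stacked_observation(Phi, M, L) and R1 replaced by the corresponding
# block-diagonal measurement covariance.
```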