
6 EXPANDING-MEMORY (GROWING-MEMORY) POLYNOMIAL FILTERS
6.1 INTRODUCTION
The fixed-memory filter described in Chapter 5 has two important disadvantages. First, all the data obtained over the last L + 1 observations have to be stored. This can result in excessive memory requirements in some instances. Second, at each new observation the last L + 1 data samples have to be reprocessed to obtain the updated estimate, with no use being made of the previous estimate calculations. This can lead to a large computer load. When these disadvantages are not a problem, the fixed-memory filter would generally be used. Two filters that do not have these two disadvantages are the expanding-memory filter and the fading-memory filter. The expanding-memory filter is, as discussed in Section 1.2.10 and later in Section 7.6, suitable for track initiation and will be covered in detail in this chapter. The fading-memory filter, as discussed in Chapter 1, is used for steady-state tracking, as is the fixed-memory filter, and will be covered in detail in Chapter 7.
Before proceeding it is important to highlight the advantages of the fixed-memory filter. First, if bad data is acquired, the effect on the filter will only last for a finite time because the filter has a finite memory of duration L + 1; that is, the fixed-memory filter has a finite transient response. Second, fixed-memory filters of short duration have the advantage of allowing simple processor models to be used when the actual process model is complex or even unknown because simple models can be used over short observation intervals. These two advantages are also obtained when using a short memory for the fading-memory filter discussed in Chapter 7.
6.2 EXTRAPOLATION FROM FIXED-MEMORY FILTER RESULTS
All the results given in Chapter 5 for the fixed-memory filter apply directly to
the expanding-memory filter except now L is increasing with time instead of
being fixed. To allow for the variation of L, it is convenient to replace the
variable L by n and to have the first observation y_{n−L} be designated as y_0. The measurement vector Y^{(n)} of (5.2-1) becomes

    Y^{(n)} = [y_n, y_{n−1}, ..., y_0]^T                            (6.2-1)

where n is now an increasing variable. The filter estimate is now based on all the n + 1 past measurements. All the equations developed in Chapter 5 for the fixed-memory state vector estimate covariance matrix [such as (5.6-4), (5.6-7), and (5.8-1)] and systematic error [such as (5.10-2) and (5.10-3)] apply with L replaced by n. The least-squares polynomial fit equations given by (5.3-11), (5.3-13), and (5.5-3) also apply with L again replaced by n.
In this form the smoothing filter has the disadvantage, as already mentioned,
of generally not making use of any of the previous estimate calculations in
order to come up with the newest estimate calculation based on the latest
measurement. An important characteristic of the expanding-memory filter, for
which n increases, is that it can be put in a recursive form that allows it to make
use of the last estimate plus the newest observation y_n to derive the latest estimate, with the past measurements (y_0, y_1, ..., y_{n−1}) not being needed. This results in considerable savings in computation and memory requirements because the last n measurements do not have to be stored, only the most recent state vector estimate, X*_{n,n−1}. This estimate contains all the information needed relative to the past measurements to provide the next least-squares estimate.
The next section gives the recursive form of the least-squares estimate
orthogonal Legendre filter.
6.3 RECURSIVE FORM
It can be shown [5, pp. 348–362] after quite some manipulation that the filter form given by (5.3-13) can be put in the recursive forms of Table 6.3-1 for the one-step predictor when m = 0, 1, 2, 3. The results are given in terms of the scaled state vector Z*_{n+1,n} [see (5.4-12)]. As indicated before, only the last one-step update vector Z*_{n,n−1} has to be remembered to do the update. This is an amazing result. It says that the last one-step update state vector Z*_{n,n−1} of dimension m + 1 contains all the information about the previous n observations needed to obtain the linear least-squares polynomial fit to the past data Y^{(n−1)} and the newest measurement y_n. Stated another way, the state vector Z*_{n,n−1} is a sufficient statistic [8, 9, 100].
TABLE 6.3-1. Expanding-Memory Polynomial Filter

Define

    (z*_0, z*_1, z*_2, z*_3)^T_{n+1,n} = (x*, T \dot{x}*, (T^2/2!) \ddot{x}*, (T^3/3!) \dddot{x}*)^T_{n+1,n}

    ε_n ≡ y_n − (z*_0)_{n,n−1}

Degree 0:^a

    (z*_0)_{n+1,n} = (z*_0)_{n,n−1} + [1/(n + 1)] ε_n

Degree 1:^a

    (z*_1)_{n+1,n} = (z*_1)_{n,n−1} + [6/((n + 2)(n + 1))] ε_n
    (z*_0)_{n+1,n} = (z*_0)_{n,n−1} + (z*_1)_{n+1,n} + [2(2n + 1)/((n + 2)(n + 1))] ε_n

Degree 2:^a

    (z*_2)_{n+1,n} = (z*_2)_{n,n−1} + [30/((n + 3)(n + 2)(n + 1))] ε_n
    (z*_1)_{n+1,n} = (z*_1)_{n,n−1} + 2(z*_2)_{n+1,n} + [18(2n + 1)/((n + 3)(n + 2)(n + 1))] ε_n
    (z*_0)_{n+1,n} = (z*_0)_{n,n−1} + (z*_1)_{n+1,n} − (z*_2)_{n+1,n} + [3(3n^2 + 3n + 2)/((n + 3)(n + 2)(n + 1))] ε_n

Degree 3:^a

    (z*_3)_{n+1,n} = (z*_3)_{n,n−1} + [140/((n + 4)(n + 3)(n + 2)(n + 1))] ε_n
    (z*_2)_{n+1,n} = (z*_2)_{n,n−1} + 3(z*_3)_{n+1,n} + [120(2n + 1)/((n + 4)(n + 3)(n + 2)(n + 1))] ε_n
    (z*_1)_{n+1,n} = (z*_1)_{n,n−1} + 2(z*_2)_{n+1,n} − 3(z*_3)_{n+1,n} + [20(6n^2 + 6n + 5)/((n + 4)(n + 3)(n + 2)(n + 1))] ε_n
    (z*_0)_{n+1,n} = (z*_0)_{n,n−1} + (z*_1)_{n+1,n} − (z*_2)_{n+1,n} + (z*_3)_{n+1,n} + [8(2n^3 + 3n^2 + 7n + 3)/((n + 4)(n + 3)(n + 2)(n + 1))] ε_n

^a In all cases, n starts at zero.
Source: From Morrison [5].
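To make the recursion concrete, the following is a minimal sketch in Python of the degree-1 (constant-velocity) rows of Table 6.3-1. It is illustrative only; the function and variable names (em_filter_degree1, z0, z1) are assumptions, not notation from the text.

    def em_filter_degree1(measurements, T=1.0, z0=0.0, z1=0.0):
        """Degree-1 (m = 1) expanding-memory recursion of Table 6.3-1.

        z0, z1 hold the scaled one-step prediction (z*_0, z*_1)_{n,n-1};
        their starting values are arbitrary because the filter is
        self-starting (see Section 6.5).  Returns the one-step position
        predictions x*_{n+1,n} and velocity estimates for each n.
        """
        positions, velocities = [], []
        for n, y_n in enumerate(measurements):      # n starts at zero
            eps = y_n - z0                          # eps_n = y_n - (z*_0)_{n,n-1}
            z1 = z1 + 6.0 * eps / ((n + 2) * (n + 1))
            z0 = z0 + z1 + 2.0 * (2 * n + 1) * eps / ((n + 2) * (n + 1))
            positions.append(z0)                    # x*_{n+1,n} = (z*_0)_{n+1,n}
            velocities.append(z1 / T)               # since z*_1 = T * xdot*
        return positions, velocities

Note that each pass uses only the previous scaled prediction and the newest measurement y_n, which is precisely the memory and computation saving discussed in Section 6.2.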
The filter equations given in Table 6.3-1 for m = 1 are exactly the same as those of the g–h growing-memory filter originally given in Section 1.2.10 for track initiation; compare (1.2-38a) and (1.2-38b) with the expressions for g and h given in Table 6.3-1 for m = 1; see problem 6.5-2. The filters of Section 1.2.10 and of Table 6.3-1 for m = 1 are for a target characterized as having a constant-velocity dynamics model. The equations in Table 6.3-1 for m = 2 are for when the target dynamics have a constant acceleration and correspond to the g–h–k growing-memory filter. The equations for m = 3 are the corresponding equations for when the target dynamics have a constant jerk, that is, a constant rate of change of acceleration. Practically, filters of order higher than m = 3 are not warranted. Beyond jerk are the yank and snatch, respectively the fourth and fifth derivatives of position. The equation for m = 0 is for a stationary target. In this case the filter estimate of the target position is simply the average of the n + 1 measurements, as it should be; see (4.2-23) and the discussion immediately before it. Thus we have developed the growing-memory g–h filter and its higher and lower order forms from the theory of least-squares estimation. In the next few sections we shall present results for the growing-memory filter with respect to its stability, track initiation, estimate variance, and systematic error.
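Since the text identifies the ε_n coefficients in the m = 1 rows of Table 6.3-1 as the g and h gains of the growing-memory filter, a short helper (illustrative names; a sketch, not from the text) can evaluate them for any n:

    def growing_memory_gh_gains(n):
        """g and h gains read off the m = 1 rows of Table 6.3-1."""
        denom = (n + 2) * (n + 1)
        g = 2.0 * (2 * n + 1) / denom   # coefficient of eps_n in the (z*_0) update
        h = 6.0 / denom                 # coefficient of eps_n in the (z*_1) update
        return g, h

For example, growing_memory_gh_gains(0) returns (1.0, 3.0); in the standard g–h update form a gain of g = 1 makes the filtered position estimate at n = 0 equal to the first measurement, as one would expect when no prior information is available.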
6.4 STABILITY
Recursive difference equations such as those of Table 6.3-1 are called stable if any transient responses induced in them eventually die out. (Stated more rigorously, a difference equation is stable if its natural modes, when excited, eventually die out.) It can be shown that all the recursive expanding-memory filter equations of Table 6.3-1 are stable.
6.5 TRACK INITIATION
The track initiation of the expanding-memory filters of Table 6.3-1 needs an
initial estimate of Z
Ã
n;nÀ1
for some starting n. If no a prori estimate is available,
then the first m þ 1 data points could be used to obtain an estimate for Z
Ã
m;mÀ1
,
where m is the order of the expanding-memory filter being used. This could be

done by simply fitting an mth-order polynomial filter through the first m þ 1
data points, using, for example, the Lagrange interpolation method [5].
However, an easier and better method is available. It turns out that we can
pick any arbitrary value for Z
Ã
0;À1
and the growing memory filter will yield the
right value for the scaled state vector Z
Ã
mþ1;m
at time m. In fact the estimate
Z
Ã
mþ1;m
will be least-squares mth-order polynomial fit to the first m þ 1 data
samples independent of the value chosen for Z
Ã
0;À1
; see problems 6.5-1 and
6.5-2. This is what we want. Filters having this property are said to be self-
starting.
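The self-starting property is easy to verify numerically. The following self-contained sketch (illustrative; the measurement values and starting vectors are arbitrary choices, not from the text) runs the m = 1 recursion of Table 6.3-1 from two very different values of Z*_{0,−1} and shows that the predictions agree once the first m + 1 = 2 measurements have been processed:

    def run_m1(measurements, z0, z1):
        """Degree-1 recursion of Table 6.3-1; returns (z*_0, z*_1)_{n+1,n}."""
        for n, y_n in enumerate(measurements):
            eps = y_n - z0
            z1 = z1 + 6.0 * eps / ((n + 2) * (n + 1))
            z0 = z0 + z1 + 2.0 * (2 * n + 1) * eps / ((n + 2) * (n + 1))
        return z0, z1

    ys = [1.0, 3.0]                      # first m + 1 = 2 measurements
    print(run_m1(ys, 0.0, 0.0))          # start from Z*_{0,-1} = (0, 0)
    print(run_m1(ys, 123.0, -45.0))      # start from a wildly different guess
    # Both calls return (5.0, 2.0): the straight line through the two samples,
    # extrapolated one step ahead, independent of the starting vector.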
6.6 VARIANCE REDUCTION FACTOR
For large n the variance reduction factor (VRF) for the expanding-memory filter can be obtained using (5.8-1) and Tables 5.8-1 and 5.8-2 with L replaced by n. Expressions for the VRF for arbitrary n are given in Table 6.6-1 for the one-step predictor when m = 0, 1, 2, 3. Comparing the one-step predictor variance of Table 6.6-1 for m = 1 with that given in Section 1.2.10 for the growing-memory filter indicates that they are identical, as they should be; see (1.2-42). Also note that the same variance is obtained from (5.6-5) for the least-squares fixed-memory filter.
TABLE 6.6-1. VRF for Expanding-Memory One-Step Predictors^a (Diagonal Elements of S*_{n+1,n})

Degree m = 0:
    x*_{n+1,n}:          1/(n + 1)^(1)

Degree m = 1:
    \dot{x}*_{n+1,n}:    12/[T^2 (n + 2)^(3)]
    x*_{n+1,n}:          2(2n + 3)/(n + 1)^(2)

Degree m = 2:
    \ddot{x}*_{n+1,n}:   720/[T^4 (n + 3)^(5)]
    \dot{x}*_{n+1,n}:    (192n^2 + 744n + 684)/[T^2 (n + 3)^(5)]
    x*_{n+1,n}:          (9n^2 + 27n + 24)/(n + 1)^(3)

Degree m = 3:
    \dddot{x}*_{n+1,n}:  100,800/[T^6 (n + 4)^(7)]
    \ddot{x}*_{n+1,n}:   (25,920n^2 + 102,240n + 95,040)/[T^4 (n + 4)^(7)]
    \dot{x}*_{n+1,n}:    (1200n^4 + 10,200n^3 + 31,800n^2 + 43,800n + 23,200)/[T^2 (n + 4)^(7)]
    x*_{n+1,n}:          (16n^3 + 72n^2 + 152n + 120)/(n + 1)^(4)

^a Recall that x^(m) = x(x − 1)(x − 2)···(x − m + 1); see (5.3-4a).
Source: From Morrison [5].
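As an illustration of how the table entries might be evaluated (a sketch with assumed function names, not from the text), the following computes the m = 1 one-step-predictor VRFs for arbitrary n using the falling-factorial notation x^(m) defined in the footnote; multiplying a VRF by the measurement variance gives the corresponding prediction variance.

    def falling_factorial(x, m):
        """x^(m) = x (x - 1) (x - 2) ... (x - m + 1); see (5.3-4a)."""
        out = 1
        for k in range(m):
            out *= (x - k)
        return out

    def vrf_m1_position(n):
        """VRF of x*_{n+1,n} for the degree-1 filter: 2(2n + 3)/(n + 1)^(2)."""
        return 2.0 * (2 * n + 3) / falling_factorial(n + 1, 2)

    def vrf_m1_velocity(n, T=1.0):
        """VRF of xdot*_{n+1,n} for the degree-1 filter: 12/[T^2 (n + 2)^(3)]."""
        return 12.0 / (T ** 2 * falling_factorial(n + 2, 3))

For example, vrf_m1_position(10) is about 0.418, showing how quickly the one-step position prediction variance drops below the raw measurement variance as the memory grows.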
