
7 FADING-MEMORY (DISCOUNTED LEAST-SQUARES) FILTER

7.1 DISCOUNTED LEAST-SQUARES ESTIMATE
The fading-memory filter introduced in Chapter 1 is similar to the fixed-memory filter in that it has essentially a finite memory and is used for tracking a target in steady state. As indicated in Section 1.2.6, for the fading-memory filter the data vector is semi-infinite and given by

$$Y^{(n)} = [\,y_n, y_{n-1}, \ldots\,]^T \qquad (7.1\text{-}1)$$
The filter realizes essentially finite memory for this semi-infinite data set by having, as indicated in Section 1.2.6, a fading memory. As for the case of the fixed-memory filter in Chapter 5, we now want to fit a polynomial $p^* = [\,p^*(r)\,]_n$ [see, e.g., (5.2-3)] to the semi-infinite data set given by (7.1-1). Here, however, it is essential that the old, stale data not play as great a role in determining the polynomial fit to the data, because we now have a semi-infinite set of measurements. For example, if the latest measurement is at time n and the target made a turn at data sample n − 10, then we do not want the samples prior to n − 10 to affect the polynomial fit as much. The least-squares polynomial fit for the fixed-memory filter minimized the sum of the squares of the errors given by (5.3-7). If we applied this criterion to our filter, then the same importance (or weight) would be given to an error resulting from the most recent measurement as to one resulting from an old measurement. To circumvent this undesirable feature, we now weight the errors due to the old data less than those due to recent data. This is achieved using a discounted least-squares weighting as done in (1.2-34); that is, we
Tracking and Kalman Filtering Made Easy. Eli Brookner
Copyright © 1998 John Wiley & Sons, Inc.
ISBNs: 0-471-18407-1 (Hardback); 0-471-22419-7 (Electronic)

minimize

$$e_n = \sum_{r=0}^{\infty} \{\,y_{n-r} - [\,p^*(r)\,]_n\,\}^2\, \theta^r \qquad (7.1\text{-}2)$$
where positive r now runs backward in time and

$$0 < \theta < 1 \qquad (7.1\text{-}2a)$$

The parameter $\theta$ here determines the discounting of the old data errors, as done in Section 1.2.6. For the most recent measurement $y_n$, $r = 0$ in (7.1-2) and $\theta^0 = 1$, so the error based on the most recent measurement is given maximum weight. For the one-time-interval-old data $y_{n-1}$, $r = 1$ and $\theta^r = \theta$, so that these one-time-interval-old data are not given as much weight (because $0 < \theta < 1$), with the result that the error of the polynomial fit at this data point can be greater than at the most recent data point in obtaining the best estimating polynomial, which satisfies (7.1-2). For the two-time-interval-old data point $y_{n-2}$, $r = 2$ and $\theta^r = \theta^2$, and the error at this time sample can be bigger still, and so forth. Thus with this weighting the errors relative to the fitting polynomial are discounted more and more as the data get older and older. The minimum of (7.1-2) gives us what we called in Section 1.2.6 a discounted least-squares fit of the polynomial to the semi-infinite data set. The memory of the resulting filter depends on $\theta$: the smaller $\theta$ is, the shorter the filter memory, because the faster the filter discounts the older data. This filter is also called the fading-memory filter. It is a generalization of the fading-memory g–h filter of Section 1.2.6. The g–h filter of Section 1.2.6 is of degree m = 1; here we fit a polynomial $p^*$ of arbitrary degree m.
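As a numerical illustration of the discounting in (7.1-2) (a sketch, not from the text; the track values and θ choices below are made up), the weight $\theta^r$ lets a fitted polynomial ignore a stale outlier when $\theta$ is small:

```python
import numpy as np

def discounted_cost(y, p_star, theta):
    """Discounted least-squares cost e_n of (7.1-2).

    y[r] is the measurement y_{n-r} (y[0] is the most recent, y_n),
    and p_star(r) is the fitted polynomial evaluated r samples back.
    """
    r = np.arange(len(y))
    resid = y - p_star(r)
    return float(np.sum(resid**2 * theta**r))

# Hypothetical data: a straight-line track with one stale outlier far back.
y = np.array([10.0, 9.0, 8.0, 7.0, 6.0, 0.0])  # y[5] is old, stale data
fit = lambda r: 10.0 - 1.0 * r                  # exact fit to the recent data

# With heavy discounting (small theta) the stale point at r = 5 contributes
# almost nothing to the cost; with theta near 1 it is weighted almost fully.
print(discounted_cost(y, fit, theta=0.5))       # 25 * 0.5**5 = 0.78125
print(discounted_cost(y, fit, theta=0.99))      # 25 * 0.99**5, about 23.77
```

The same fit is thus judged nearly perfect under heavy discounting but poor when all errors are weighted almost equally, which is exactly the behavior the text motivates.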
To find the polynomial fit $p^*$ of degree m that minimizes (7.1-2), an orthogonal polynomial representation of $p^*$ is used, just as was done for the fixed-memory filter when minimizing (5.3-7) by the use of the Legendre polynomial; see (5.3-1). Now, however, because the data is semi-infinite and because of the discount weighting by $\theta^r$, a different orthogonal polynomial is needed. The discrete orthogonal polynomial used now is the discrete Laguerre polynomial described in the next section.
7.2 ORTHOGONAL LAGUERRE POLYNOMIAL APPROACH
The discounted least-squares estimate polynomial $p^*$ that minimizes (7.1-2) is represented by the sum of normalized orthogonal polynomials as done in (5.3-1), except that the orthonormal discrete Laguerre polynomial $\phi_j(r)$ of degree j is defined by the equations [5, pp. 500–501]

$$\phi_j(r) = K_j\, p_j(r) \qquad (7.2\text{-}1)$$
where

$$K_j = \frac{1}{c_j} = \left(\frac{1-\theta}{\theta^{\,j}}\right)^{1/2} \qquad (7.2\text{-}1a)$$

$$c_j = c(j; \theta) \qquad (7.2\text{-}1b)$$

$$[\,c(j;\theta)\,]^2 = \frac{\theta^{\,j}}{1-\theta} \qquad (7.2\text{-}1c)$$

$$p_j(r) = p(r; j; \theta) = \theta^{\,j} \sum_{\nu=0}^{j} (-1)^{\nu} \binom{j}{\nu} \left(\frac{1-\theta}{\theta}\right)^{\nu} \binom{r}{\nu} \qquad (7.2\text{-}1d)$$
where pðr; j;Þ is the orthogonal discrete Laguerre polynomial, which obeys the
following discrete orthogonal relationship:
X
1
r¼0

pðr; i;Þpðr; j;Þ¼
0 j 6¼ i
½c ð j;Þ
2
j ¼ i

ð7:2-2Þ
Table 7.2-1 gives the first four discrete Laguerre polynomials. The orthonormal Laguerre polynomial $\phi_j(r)$ obeys the orthonormality relationship

$$\sum_{r=0}^{\infty} \phi_i(r)\, \phi_j(r)\, \theta^r = \delta_{ij} \qquad (7.2\text{-}3)$$
TABLE 7.2-1. First Four Orthogonal Discrete Laguerre Polynomials

$$p(x; 0; \theta) = 1$$

$$p(x; 1; \theta) = \theta\left[\,1 - \frac{1-\theta}{\theta}\,x\,\right]$$

$$p(x; 2; \theta) = \theta^2\left[\,1 - 2\,\frac{1-\theta}{\theta}\,x + \left(\frac{1-\theta}{\theta}\right)^2 \frac{x(x-1)}{2!}\,\right]$$

$$p(x; 3; \theta) = \theta^3\left[\,1 - 3\,\frac{1-\theta}{\theta}\,x + 3\left(\frac{1-\theta}{\theta}\right)^2 \frac{x(x-1)}{2!} - \left(\frac{1-\theta}{\theta}\right)^3 \frac{x(x-1)(x-2)}{3!}\,\right]$$

$$p(x; 4; \theta) = \theta^4\left[\,1 - 4\,\frac{1-\theta}{\theta}\,x + 6\left(\frac{1-\theta}{\theta}\right)^2 \frac{x(x-1)}{2!} - 4\left(\frac{1-\theta}{\theta}\right)^3 \frac{x(x-1)(x-2)}{3!} + \left(\frac{1-\theta}{\theta}\right)^4 \frac{x(x-1)(x-2)(x-3)}{4!}\,\right]$$
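As a numerical check (a sketch, not part of the text), the general formula (7.2-1d) can be evaluated directly and the orthogonality relationship (7.2-2) verified by truncating the infinite sum; the truncation length and the value θ = 0.8 below are arbitrary choices:

```python
import math
import numpy as np

def laguerre(x, j, theta):
    """Orthogonal discrete Laguerre polynomial p(x; j; theta) of (7.2-1d)."""
    a = (1.0 - theta) / theta
    return theta**j * sum(
        (-1)**nu * math.comb(j, nu) * a**nu * math.comb(int(x), nu)
        for nu in range(j + 1)
    )

theta = 0.8
r = np.arange(2000)                  # truncation of the infinite sum in (7.2-2)
P = np.array([[laguerre(k, j, theta) for k in r] for j in range(4)])
w = theta**r                         # discount weights theta^r

# Gram matrix of (7.2-2): off-diagonal entries vanish and the diagonal
# equals [c(j; theta)]^2 = theta^j / (1 - theta), per (7.2-1c).
gram = (P * w) @ P.T
expected = np.diag([theta**j / (1 - theta) for j in range(4)])
print(np.allclose(gram, expected))   # True
```

The low-degree outputs of `laguerre` also match the explicit expressions in Table 7.2-1, e.g. $p(x;1;\theta) = \theta[1 - ((1-\theta)/\theta)x]$.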
Substituting (7.2-1) into (5.3-1), and this in turn into (7.1-2), and performing the minimization yields [5, p. 502]

$$(\beta_j)_n = \sum_{k=0}^{\infty} y_{n-k}\, \phi_j(k)\, \theta^k \qquad 0 \le j \le m \qquad (7.2\text{-}4)$$
However, the above solution is not recursive. After some manipulation [5, pp. 504–506], it can be shown that the discounted least-squares mth-degree polynomial filter estimate for the ith derivative of x, designated as $D^i x^*$, is given by the recursive solution

$$(D^i x^*)_{n-r,n} = \left(-\frac{1}{T}\right)^{i} \sum_{j=0}^{m} \left[\frac{d^i}{dr^i}\,\phi_j(r)\right] \left\{\frac{K_j\, \theta^{\,j} (1-q)^j}{(1-\theta q)^{\,j+1}}\right\} y_n \qquad (7.2\text{-}5)$$
where q is the backward-shifting operator given by

$$q^k y_n = y_{n-k} \qquad (7.2\text{-}5a)$$

for k an integer, and q has the following properties:

$$(1-q)^2 = 1 - 2q + q^2 \qquad (7.2\text{-}6)$$

$$(1-q)^{-1} = 1 + q + q^2 + \cdots = \sum_{k=0}^{\infty} q^k \qquad (7.2\text{-}7)$$

$$(\theta - q)^{-1} = \theta^{-1}\left(1 - \frac{q}{\theta}\right)^{-1} = \theta^{-1}\left[\,1 + \frac{q}{\theta} + \left(\frac{q}{\theta}\right)^2 + \cdots\right] \qquad (7.2\text{-}8)$$
It is not apparent at first that (7.2-5) provides a recursive solution for $D^i x^*$. To verify this the reader should write out (7.2-5) for i = 0 and m = 1. Using (7.2-5) the recursive equations of Table 7.2-2 for the fading-memory filters are obtained for m = 0, …, 4 [5, pp. 506–507].
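A numerical sketch (not from the text) of the simplest case i = 0, m = 0: here $\phi_0(r) = K_0$ with $K_0^2 = 1 - \theta$, so (7.2-5) reduces to the operator $(1-\theta)/(1-\theta q)$ acting on $y_n$, i.e. the non-recursive sum $(1-\theta)\sum_k \theta^k y_{n-k}$, which the geometric expansion (7.2-7)/(7.2-8) turns into the one-term recursion $x^* \leftarrow \theta x^* + (1-\theta) y_n$. The two forms can be checked against each other on made-up data:

```python
import numpy as np

theta = 0.9
rng = np.random.default_rng(1)
y = rng.normal(size=200)         # made-up measurement sequence

# Non-recursive form implied by (7.2-5) for i = 0, m = 0:
# x*_{n,n} = (1 - theta) * sum_k theta^k y_{n-k}
n = len(y) - 1
k = np.arange(n + 1)
direct = (1 - theta) * np.sum(theta**k * y[n - k])

# Recursive form: x* <- theta * x* + (1 - theta) * y_n, zero initial condition.
x = 0.0
for yn in y:
    x = theta * x + (1 - theta) * yn

print(abs(direct - x) < 1e-10)   # the two forms agree
```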

The filter equations for m = 1 are identical to the fading-memory g–h filter of Section 1.2.6. Specifically, compare g and h of (1.2-35a) and (1.2-35b) with those of Table 7.2-2 for m = 1. Thus we have developed the fading-memory g–h filter from the least-squares estimate as desired. In the next sections we shall discuss the fading-memory filter stability, variance, track initiation, and systematic error, as well as the issue of balancing systematic and random prediction errors, and compare this filter with the fixed-memory filter. Note that the recursive fading-memory filters given by Table 7.2-2 only depend on the
TABLE 7.2-2. Fading-Memory Polynomial Filter

Define

$$\begin{pmatrix} z_0^* \\ z_1^* \\ z_2^* \\ z_3^* \\ z_4^* \end{pmatrix}_{n+1,n} = \begin{pmatrix} x^* \\ T\,Dx^* \\ \dfrac{T^2}{2!}\,D^2 x^* \\ \dfrac{T^3}{3!}\,D^3 x^* \\ \dfrac{T^4}{4!}\,D^4 x^* \end{pmatrix}_{n+1,n}$$

$$\varepsilon_n = y_n - (z_0^*)_{n,n-1}$$

Degree 0:

$$(z_0^*)_{n+1,n} = (z_0^*)_{n,n-1} + (1-\theta)\,\varepsilon_n$$

Degree 1:

$$(z_1^*)_{n+1,n} = (z_1^*)_{n,n-1} + (1-\theta)^2\,\varepsilon_n$$
$$(z_0^*)_{n+1,n} = (z_0^*)_{n,n-1} + (z_1^*)_{n+1,n} + (1-\theta^2)\,\varepsilon_n$$

Degree 2:

$$(z_2^*)_{n+1,n} = (z_2^*)_{n,n-1} + \tfrac{1}{2}(1-\theta)^3\,\varepsilon_n$$
$$(z_1^*)_{n+1,n} = (z_1^*)_{n,n-1} + 2(z_2^*)_{n+1,n} + \tfrac{3}{2}(1-\theta)^2(1+\theta)\,\varepsilon_n$$
$$(z_0^*)_{n+1,n} = (z_0^*)_{n,n-1} + (z_1^*)_{n+1,n} - (z_2^*)_{n+1,n} + (1-\theta^3)\,\varepsilon_n$$

Degree 3:

$$(z_3^*)_{n+1,n} = (z_3^*)_{n,n-1} + \tfrac{1}{6}(1-\theta)^4\,\varepsilon_n$$
$$(z_2^*)_{n+1,n} = (z_2^*)_{n,n-1} + 3(z_3^*)_{n+1,n} + (1-\theta)^3(1+\theta)\,\varepsilon_n$$
$$(z_1^*)_{n+1,n} = (z_1^*)_{n,n-1} + 2(z_2^*)_{n+1,n} - 3(z_3^*)_{n+1,n} + \tfrac{1}{6}(1-\theta)^2(11 + 14\theta + 11\theta^2)\,\varepsilon_n$$
$$(z_0^*)_{n+1,n} = (z_0^*)_{n,n-1} + (z_1^*)_{n+1,n} - (z_2^*)_{n+1,n} + (z_3^*)_{n+1,n} + (1-\theta^4)\,\varepsilon_n$$

Degree 4:

$$(z_4^*)_{n+1,n} = (z_4^*)_{n,n-1} + \tfrac{1}{24}(1-\theta)^5\,\varepsilon_n$$
$$(z_3^*)_{n+1,n} = (z_3^*)_{n,n-1} + 4(z_4^*)_{n+1,n} + \tfrac{5}{12}(1-\theta)^4(1+\theta)\,\varepsilon_n$$
$$(z_2^*)_{n+1,n} = (z_2^*)_{n,n-1} + 3(z_3^*)_{n+1,n} - 6(z_4^*)_{n+1,n} + \tfrac{5}{24}(1-\theta)^3(7 + 10\theta + 7\theta^2)\,\varepsilon_n$$
$$(z_1^*)_{n+1,n} = (z_1^*)_{n,n-1} + 2(z_2^*)_{n+1,n} - 3(z_3^*)_{n+1,n} + 4(z_4^*)_{n+1,n} + \tfrac{5}{12}(1-\theta)^2(5 + 7\theta + 7\theta^2 + 5\theta^3)\,\varepsilon_n$$
$$(z_0^*)_{n+1,n} = (z_0^*)_{n,n-1} + (z_1^*)_{n+1,n} - (z_2^*)_{n+1,n} + (z_3^*)_{n+1,n} - (z_4^*)_{n+1,n} + (1-\theta^5)\,\varepsilon_n$$

Source: From Morrison [5].
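The degree-1 recursion of Table 7.2-2 (equivalently the fading-memory g–h filter with $g = 1 - \theta^2$, $h = (1-\theta)^2$) can be sketched in a few lines of Python; the track below is a made-up noiseless constant-velocity example, for which a degree-1 polynomial filter drives the one-step prediction error toward zero:

```python
def fading_memory_m1(ys, theta, z0=0.0, z1=0.0):
    """Degree-1 (m = 1) fading-memory filter recursion from Table 7.2-2.

    z0 is the one-step position prediction z0*_{n,n-1}; z1 is the scaled
    velocity T*Dx*. Returns the list of predictions z0*_{n+1,n}.
    """
    preds = []
    for y in ys:
        eps = y - z0                          # innovation: y_n - z0*_{n,n-1}
        z1 = z1 + (1 - theta)**2 * eps        # h = (1 - theta)**2
        z0 = z0 + z1 + (1 - theta**2) * eps   # g = 1 - theta**2
        preds.append(z0)
    return preds

# Made-up noiseless constant-velocity track x_n = 5 + 2n.
ys = [5.0 + 2.0 * n for n in range(60)]
preds = fading_memory_m1(ys, theta=0.5)
print(abs(preds[-1] - 125.0))   # prediction of x_60 = 125; error is tiny
```

Since the smoothed state is carried as $z_1 = T\,Dx^*$, the update needs no explicit sample period T, consistent with the scaled form of the table.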