
5
FIXED-MEMORY POLYNOMIAL FILTER
5.1 INTRODUCTION
In Section 1.2.10 we presented the growing-memory g–h filter. For n fixed this filter becomes a fixed-memory filter, with the n most recent samples of data being processed by the filter in sliding-window fashion. In this chapter we derive a higher order form of this filter. We develop this higher order fixed-memory polynomial filter by applying the least-squares results given by (4.1-32). As in Section 1.2.10 we assume that only measurements of the target range, designated as $x(t)$, are available; that is, the measurements are one-dimensional, hence $r = 0$ in (4.1-1a). The state vector is given by (4.1-2). We first use a direct approach that involves representing $x(t)$ by an arbitrary mth-degree polynomial and applying (4.1-32) [5, pp. 225–228]. This approach is given in Section 5.2. This direct approach unfortunately requires a matrix inversion. In Section 4.3 we developed the voltage-processing approach, which did not require a matrix inversion. In Section 5.3 we present another approach that does not require a matrix inversion. This approach also has the advantage of leading to the development of a recursive form, to be given in Section 6.3, for the growing-memory filter. The approach of Section 5.3 involves using the discrete-time orthogonal Legendre polynomial (DOLP) representation for the polynomial fit. As indicated previously, the approach using the Legendre orthogonal polynomial representation is equivalent to the voltage-processing approach. We shall prove this equivalence in Section 14.4. In so doing, better insight into the Legendre orthogonal polynomial fit approach will be obtained.
Tracking and Kalman Filtering Made Easy. Eli Brookner
Copyright © 1998 John Wiley & Sons, Inc.
ISBNs: 0-471-18407-1 (Hardback); 0-471-22419-7 (Electronic)
5.2 DIRECT APPROACH (USING NONORTHOGONAL mTH-DEGREE POLYNOMIAL FIT)
Assume a sequence of $L + 1$ one-dimensional measurements given by

$$Y^{(n)} = [\,y_n \;\; y_{n-1} \;\; \cdots \;\; y_{n-L}\,]^T \qquad (5.2\text{-}1)$$
with n being the last time a measurement was made. We assume that the underlying process $x(t)$ that generated these data can be approximated by a polynomial of degree m, as indicated by (4.1-44), which we rewrite here as
$$x(t) = p^*(t) = [\,p^*(t)\,]_n = \sum_{j=0}^{m} (a_j)_n\, t^j \qquad (5.2\text{-}2)$$
What we want is a least-squares estimate for the coefficients $(a_j)_n$ of this polynomial fit. The subscript n on the coefficient $(a_j)_n$ for the jth polynomial term is used because the estimate of these coefficients will depend on n, the last observation time at which a measurement was made. The subscript n on $[\,p^*(t)\,]_n$ similarly is used to indicate that n is the last time a measurement was made.
Let $t = rT$. Then (5.2-2) becomes
$$p^* = p^*(rT) = [\,p^*(r)\,]_n = \sum_{j=0}^{m} (a_j)_n\, r^j T^j \qquad (5.2\text{-}3)$$
or
$$p^* = [\,p^*(r)\,]_n = \sum_{j=0}^{m} (z_j)_n\, r^j \qquad (5.2\text{-}4)$$
where
$$(z_j)_n = (a_j)_n\, T^j \qquad (5.2\text{-}4a)$$
where r becomes a new integer time index for the polynomial $p^*$. Physically r represents the measurement time index just as n does; it is just referenced to a different starting time. The origin for r is the time at which the first measurement $y_{n-L}$ is made for the fixed-memory filter; see Figure 5.2-1.
We want the above polynomial to provide a least-squares fit to the measured data. This can be achieved by applying the results of Section 4.1 directly. To do this, we must find T of (4.1-32). We can choose for the state vector $X_n$ the coefficients of the polynomial fit given by
$$X_n \equiv X'_n = \begin{bmatrix} (z_0)_n \\ (z_1)_n \\ \vdots \\ (z_m)_n \end{bmatrix} \qquad (5.2\text{-}5)$$
[Note that here $X_n$ is given by an $(m+1)$-state matrix, as was the case for (4.1-2), instead of m, as in (4.1-1b).] It then follows that the matrix T is given by
T  T
0
¼
L
0
L
1
L
2
... L
m
ðL  1Þ
0
ðL  1Þ
1
ðL  1Þ

2
... ðL  1Þ
m
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0
0
¼ 10
1
0
2
.
.
.
0
m
2

6
6
6
6
4
3
7
7
7
7
5
ð5:2-6Þ
(The prime is used on the matrices T and $X_n$ above because we shall shortly develop an alternate, more standard form for the process state vector that uses different expressions for T and $X_n$.)
It is now a straightforward matter to substitute (5.2-5) and (5.2-6) into (4.1-32) to obtain the least-squares estimate weight $W^*$. Substituting this value for the weight into (4.2-30) then yields the least-squares estimate $X^*_{n,n}$ in terms of the coefficients $(z_i^*)_n$ of (5.2-5). Knowing these coefficients, we can use (5.2-3) to estimate $X^*_{n,n}$ at time n by choosing $r = L$. By choosing $r = L + h$, we obtain the prediction estimate $X^*_{n+h,n}$. This approach has the disadvantage of requiring a matrix inversion in evaluating $(T^T T)^{-1}$ of (4.1-32). Except for when $T^T T$ is $2 \times 2$, this matrix inversion has to be done numerically on the computer, an algebraic solution not being conveniently obtained [5, p. 228].

Figure 5.2-1 Polynomial fit $p^* = [\,p^*(r)\,]_n$ to range measurements $y_i$. (From Morrison [5, p. 226].)
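The direct approach just described can be sketched numerically. The following is a minimal illustration (all variable names and values are illustrative, not from the text): it forms the matrix T of (5.2-6) for a window of $L + 1$ range samples and solves the normal equations of (4.1-32), including the matrix inversion of $T^T T$ that the text warns about.

```python
import numpy as np

# Sketch of the direct approach of Section 5.2 (illustrative values):
# fit an mth-degree polynomial to L+1 noisy range samples by forming T
# per (5.2-6) and inverting T^T T per (4.1-32).

L, m, T_s = 10, 2, 0.5          # window size L+1, degree m, sample period (assumed)
rng = np.random.default_rng(0)
r = np.arange(L + 1)            # time index r = 0, ..., L
truth = 5.0 + 3.0 * (r * T_s) + 0.5 * (r * T_s) ** 2
y = truth + 0.1 * rng.standard_normal(L + 1)

# Rows of T run from r = L (latest) down to r = 0, matching (5.2-6);
# Y^(n) likewise lists the latest measurement first, per (5.2-1).
T_mat = np.vander(np.arange(L, -1, -1), m + 1, increasing=True).astype(float)
Y = y[::-1]

# Least-squares estimate: X* = (T^T T)^{-1} T^T Y -- the explicit matrix
# inversion that the orthogonal-polynomial approach of Section 5.3 avoids.
X_hat = np.linalg.inv(T_mat.T @ T_mat) @ T_mat.T @ Y   # scaled coefficients (z_j)_n

p_fit = np.polynomial.polynomial.polyval(r, X_hat)     # fitted polynomial over window
print(np.max(np.abs(p_fit - truth)))                   # small residual vs. true track
```

Note that the recovered coefficients are the scaled coefficients $(z_j)_n = (a_j)_n T^j$ of (5.2-4a), since the fit is carried out in the integer index r rather than in t.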
An approach is developed in the next section that uses an orthogonal
polynomial representation for the polynomial fit given by (5.2-3) and as a result
does not require a matrix inversion. This new approach also gives further insight
into the polynomial least-squares fit. This new approach, as we indicated, is the
same as the voltage-processing method described in Section 4.3, which also
does not require a matrix inversion. We shall prove this equivalence in Section
14.4.
Before proceeding we will relate the coefficients $a_j$ and $z_j$ to $D^j x(t)$. The subscript n on these coefficients has been dropped for simplicity. By differentiating the jth term of (5.2-2), we obtain
differentiating the jth term of (5.2-2), we obtain
a
j
¼
1
j!
D
j
xðtÞ¼
1
j!

d
j
dt
j
xðtÞð5:2-7Þ
Hence
$$z_j = a_j T^j = \frac{T^j}{j!}\, D^j x(t) = \frac{T^j}{j!} \frac{d^j}{dt^j}\, x(t) \qquad (5.2\text{-}8)$$
The parameter $z_j$ is a constant times $D^j x(t)$. Hence in the literature it is called the scaled jth-state derivative [5]. We shall discuss it further shortly.
5.3 DISCRETE ORTHOGONAL LEGENDRE POLYNOMIAL APPROACH
As indicated above, an approach is developed in this section that leads to a
simple analytical expression for the least-squares polynomial that does not
require a matrix inversion. It involves expressing the polynomial fit of (5.2-3) in
terms of the discrete orthonormal Legendre polynomials [5, pp. 228–235].
Specifically, the estimating polynomial is expressed, as done in (4.1-45), as
$$[\,p^*(r)\,]_n = \sum_{j=0}^{m} (\beta_j)_n\, \phi_j(r) \qquad (5.3\text{-}1)$$
where $\phi_j(r)$ is the normalized discrete Legendre polynomial of degree j. Specifically, $\phi_j(r)$ is a polynomial in r of degree j. It is given by

$$\phi_j(r) = \frac{1}{c_j}\, p_j(r) \qquad (5.3\text{-}1a)$$
where $p_j(r)$ is the unnormalized discrete Legendre polynomial of degree j, and the $\phi_i(r)$ and $\phi_j(r)$ for $r = 0, \ldots, L$ are orthogonal for $i \neq j$; that is, they obey the relation

$$\sum_{r=0}^{L} \phi_i(r)\, \phi_j(r) = \delta_{ij} \qquad (5.3\text{-}2)$$
where $\delta_{ij}$ is the Kronecker delta function defined by

$$\delta_{ij} = \begin{cases} 1 & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases} \qquad (5.3\text{-}2a)$$
Because (5.3-2) equals 1 for $i = j$, the $\phi_i(r)$ and $\phi_j(r)$ are called orthonormal. A least-squares estimate for the coefficients $(\beta_j)_n$ will shortly be determined, from which $[\,p^*(r)\,]_n$ is determined.
In turn the discrete Legendre polynomial is given by

$$p_j(r) = p(r; j, L) \qquad (5.3\text{-}3)$$
with

$$p(r; j, L) = \sum_{\nu=0}^{j} (-1)^{\nu} \binom{j}{\nu} \binom{j+\nu}{\nu} \frac{r^{(\nu)}}{L^{(\nu)}} \qquad (5.3\text{-}4)$$
where

$$x^{(m)} = x(x-1)(x-2)\cdots(x-m+1) \qquad (5.3\text{-}4a)$$
The normalizing constant of (5.3-1a) is given by

$$c_j = c(j, L) \qquad (5.3\text{-}5)$$
where

$$[\,c(j, L)\,]^2 = \sum_{r=0}^{L} [\,p(r; j, L)\,]^2 = \frac{(L+j+1)^{(j+1)}}{(2j+1)\, L^{(j)}} \qquad (5.3\text{-}5a)$$
From (5.3-3), (5.3-1a), and (5.3-2) it follows that the discrete Legendre polynomials satisfy the orthogonality condition

$$\sum_{r=0}^{L} p(r; i, L)\, p(r; j, L) = 0 \qquad i \neq j \qquad (5.3\text{-}6)$$
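The definitions above are easy to check numerically. The following sketch (function names are illustrative) implements $p(r; j, L)$ per (5.3-4), the generalized factorial of (5.3-4a), and the normalization of (5.3-5a), then verifies the orthonormality relation (5.3-2):

```python
import math
import numpy as np

def falling(x, m):
    """Generalized factorial x^(m) = x(x-1)...(x-m+1) of (5.3-4a)."""
    out = 1.0
    for k in range(m):
        out *= x - k
    return out

def p_legendre(r, j, L):
    """Unnormalized discrete Legendre polynomial p(r; j, L) of (5.3-4)."""
    return sum((-1) ** nu * math.comb(j, nu) * math.comb(j + nu, nu)
               * falling(r, nu) / falling(L, nu) for nu in range(j + 1))

def c_norm(j, L):
    """Normalizing constant c(j, L) from (5.3-5a)."""
    return math.sqrt(falling(L + j + 1, j + 1) / ((2 * j + 1) * falling(L, j)))

def phi(r, j, L):
    """Orthonormal discrete Legendre polynomial phi_j(r) of (5.3-1a)."""
    return p_legendre(r, j, L) / c_norm(j, L)

# Gram matrix of the phi_j over r = 0, ..., L: should be the identity, per (5.3-2).
L, m = 10, 3
G = np.array([[sum(phi(r, i, L) * phi(r, j, L) for r in range(L + 1))
               for j in range(m + 1)] for i in range(m + 1)])
print(np.allclose(G, np.eye(m + 1)))   # True: Kronecker delta of (5.3-2)
```

The Gram matrix coming out as the identity confirms both the orthogonality (5.3-6) and the normalization (5.3-5a) at once.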
Table 5.3-1 gives the first four discrete Legendre polynomials. Note that $p_j(r)$ and $p(r; i, L)$ are used here to represent the Legendre polynomial, whereas $p$, $p^*$, $p(t)$, and $p(r)$ are used to represent a general polynomial. The presence of the subscript or the three variables in the argument of the Legendre polynomial makes it distinguishable from the general polynomials, such as given by (4.1-44), (4.1-45), and (5.3-1).
The error given by (4.1-31) becomes
$$e_n = \sum_{r=0}^{L} \bigl\{\, y_{n-L+r} - [\,p^*(r)\,]_n \,\bigr\}^2 \qquad (5.3\text{-}7)$$
Substituting (5.3-1) into (5.3-7), differentiating with respect to $(\beta_i)_n$, and setting the result equal to 0 yield, after some straightforward manipulation [5],
$$\sum_{k=0}^{L} \sum_{j=0}^{m} (\beta_j)_n\, \phi_i(k)\, \phi_j(k) = \sum_{k=0}^{L} y_{n-L+k}\, \phi_i(k) \qquad i = 0, 1, \ldots, m \qquad (5.3\text{-}8)$$
where for convenience r is replaced by k. Changing the order of the summation
yields in turn
$$\sum_{j=0}^{m} (\beta_j)_n \sum_{k=0}^{L} \phi_i(k)\, \phi_j(k) = \sum_{k=0}^{L} y_{n-L+k}\, \phi_i(k) \qquad (5.3\text{-}9)$$
Using (5.3-2) yields the least-squares estimate for $(\beta_j)_n$ given by
$$(\beta_j^*)_n = \sum_{k=0}^{L} y_{n-L+k}\, \phi_j(k) \qquad j = 0, 1, \ldots, m \qquad (5.3\text{-}10)$$
TABLE 5.3-1. First Four Discrete Orthogonal Legendre Polynomials

$$p(x; 0, L) = 1$$
$$p(x; 1, L) = 1 - 2\,\frac{x}{L}$$
$$p(x; 2, L) = 1 - 6\,\frac{x}{L} + 6\,\frac{x(x-1)}{L(L-1)}$$
$$p(x; 3, L) = 1 - 12\,\frac{x}{L} + 30\,\frac{x(x-1)}{L(L-1)} - 20\,\frac{x(x-1)(x-2)}{L(L-1)(L-2)}$$
$$p(x; 4, L) = 1 - 20\,\frac{x}{L} + 90\,\frac{x(x-1)}{L(L-1)} - 140\,\frac{x(x-1)(x-2)}{L(L-1)(L-2)} + 70\,\frac{x(x-1)(x-2)(x-3)}{L(L-1)(L-2)(L-3)}$$
Substituting the above in (5.3-1) yields finally
$$[\,p^*(r)\,]_n = \sum_{j=0}^{m} \left[\, \sum_{k=0}^{L} y_{n-L+k}\, \phi_j(k) \right] \phi_j(r) \qquad (5.3\text{-}11)$$
The above final answer requires no matrix inversion for the least-squares estimate solution. This results from the use of the orthogonal polynomial representation. Equation (5.3-11) gives an explicit general functional expression for the least-squares estimate polynomial fit directly in terms of the measurements $y_i$. This was not the case when using the direct approach of Section 5.2, which did not involve the orthogonal polynomial representation. There a matrix inversion was required. If the entries in this matrix are in algebraic or functional form, then, as mentioned before, this matrix inversion cannot be carried out algebraically easily except when the matrix to be inverted is $2 \times 2$ [5, p. 228]. For large matrices the matrix entries need to be specific numerical values with the inverse obtained on a computer; the inverse then is not in algebraic or functional form.
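The inversion-free character of (5.3-10) and (5.3-11) can be seen directly in code: the coefficient estimates are plain inner products of the data window with the $\phi_j$. The sketch below (illustrative names; window length and degree are arbitrary choices) fits a window of samples and evaluates the fitted polynomial, including a one-step prediction at $r = L + 1$:

```python
import math

# Sketch of the fixed-memory filter of (5.3-10)/(5.3-11): no matrix inversion,
# only inner products with the orthonormal discrete Legendre polynomials.

def falling(x, m):
    out = 1.0
    for k in range(m):
        out *= x - k
    return out

def phi(r, j, L):
    # Orthonormal discrete Legendre polynomial, per (5.3-4) and (5.3-5a).
    p = sum((-1) ** nu * math.comb(j, nu) * math.comb(j + nu, nu)
            * falling(r, nu) / falling(L, nu) for nu in range(j + 1))
    c2 = falling(L + j + 1, j + 1) / ((2 * j + 1) * falling(L, j))
    return p / math.sqrt(c2)

def dolp_fit(window, m):
    """Return r -> [p*(r)]_n for the L+1 samples window[k] = y_{n-L+k}."""
    L = len(window) - 1
    beta = [sum(window[k] * phi(k, j, L) for k in range(L + 1))     # (5.3-10)
            for j in range(m + 1)]
    return lambda r: sum(beta[j] * phi(r, j, L) for j in range(m + 1))  # (5.3-11)

# A noiseless quadratic is reproduced exactly, including prediction at r = L + 1:
L = 8
y = [2.0 + 0.5 * k + 0.25 * k * k for k in range(L + 1)]
p = dolp_fit(y, 2)
print(abs(p(L + 1) - (2.0 + 0.5 * 9 + 0.25 * 81)) < 1e-9)   # True
```

Evaluating the returned polynomial at $r > L$ gives the prediction estimate and at $r < L$ a smoothed (retrodicted) estimate, as discussed next.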
The least-squares estimate polynomial solution given by (5.3-11) can be used to estimate the process values at times in the future ($r > L$) or past ($r < L$). Estimates of the derivatives of the process can also be obtained by differentiating (5.3-11) [5]. To do this, we let $t = rT$, where T is the time between the measurements $y_i$. Then $dt = T\,dr$ and
$$D = \frac{d}{dt} = \frac{1}{T} \frac{d}{dr} \qquad (5.3\text{-}12)$$
Applying this to (5.3-11) yields the following expression for the least-squares polynomial fit estimate $p^*$ and its derivatives [5, p. 231]:
$$[\,D^i p^*(r)\,]_n = \frac{1}{T^i} \sum_{j=0}^{m} \left[\, \sum_{k=0}^{L} y_{n-L+k}\, \phi_j(k) \right] \frac{d^i}{dr^i}\, \phi_j(r) \qquad (5.3\text{-}13)$$
At time $n + 1$, when the measurement $y_{n+1}$ is received, the $L + 1$ measurements $Y^{(n)}$ of (5.2-1) can be replaced by the $L + 1$ measurements of

$$Y^{(n+1)} = (\,y_{n+1},\; y_n,\; \ldots,\; y_{n-L+1}\,)^T \qquad (5.3\text{-}14)$$
A new fixed-memory polynomial filter least-squares estimate is now obtained. Equation (5.3-11) is again used to obtain the estimating polynomial $[\,p^*(r)\,]_{n+1}$. This new polynomial estimate is based on the latest $L + 1$ measurements. But now time has moved one interval T forward, and the measurements used are those made at times $n - L + 1$ to $n + 1$ to give the new coefficient estimates $(\beta_j^*)_{n+1}$ in (5.3-10) for the time-shifted data. This process is then repeated at time $n + 2$ when $Y^{(n+2)}$ is obtained, and so on.
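The sliding-window operation just described can be sketched as a loop: at each new measurement the oldest sample drops out, the newest enters, and the degree-m least-squares fit is recomputed over the latest $L + 1$ samples. Here `np.polynomial.Polynomial.fit` is used as a stand-in for (5.3-11), which yields the same least-squares polynomial; all values are illustrative:

```python
import numpy as np

# Sliding-window fixed-memory filtering (illustrative values): refit the
# latest L+1 samples at each step and take the smoothed estimate at r = L.

L, m = 9, 2
rng = np.random.default_rng(1)
t = np.arange(40)
y = 1.0 + 0.2 * t + 0.01 * t ** 2 + 0.05 * rng.standard_normal(t.size)

estimates = []
for n in range(L, t.size):                    # n = time of latest measurement
    window = y[n - L : n + 1]                 # y_{n-L}, ..., y_n
    r = np.arange(L + 1)
    fit = np.polynomial.Polynomial.fit(r, window, deg=m)
    estimates.append(fit(L))                  # smoothed estimate x*_{n,n} at r = L

estimates = np.array(estimates)
truth = 1.0 + 0.2 * t[L:] + 0.01 * t[L:] ** 2
print(np.max(np.abs(estimates - truth)))      # smoothed track error stays small
```

Evaluating `fit(L + h)` instead would give the h-step prediction estimate of each window, per the discussion following (5.2-6).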
5.4 REPRESENTATION OF POLYNOMIAL FIT IN TERMS OF ITS DERIVATIVES (STATE VARIABLE REPRESENTATION OF POLYNOMIAL FIT IN TERMS OF PROCESS DERIVATIVES)
We begin by developing in this section a very useful alternate representation for the polynomial function of time. This representation lets us obtain the transformation matrix for the process, which provides an alternate way to obtain the process state variable estimate at other times. Instead of expressing the polynomial process in terms of the mth-degree polynomial $[\,p^*(r)\,]_n$, it is possible to express the process in terms of its first m derivatives at any time, as shall now be shown.
For a process given by an mth-degree polynomial, its state vector at any time n can be expressed in terms of its first m derivatives by

$$X(t_n) = X_n = \begin{bmatrix} x \\ Dx \\ \vdots \\ D^m x \end{bmatrix} = \begin{bmatrix} x_n \\ Dx_n \\ \vdots \\ D^m x_n \end{bmatrix} \qquad (5.4\text{-}1)$$
where D is defined by (5.3-12). Let
$$y_n = x_n + \nu_n \qquad (5.4\text{-}2)$$
If we assume that only $x_n$ is measured and not its derivatives, then

$$Y_n = M X_n + N_n \qquad (5.4\text{-}3)$$
where, because range is the only measurement,

$$M = [\,1 \;\; 0 \;\; 0 \;\; \cdots \;\; 0\,] \qquad (5.4\text{-}3a)$$
and $Y_n$ and $N_n$ are $1 \times 1$ matrices given by

$$Y_n = [\,y_n\,] \qquad \text{and} \qquad N_n = [\,\nu_n\,] \qquad (5.4\text{-}3b)$$
For definiteness and convenience in the ensuing discussion, let us assume that the polynomial fit $p^*$ is of degree 2, that is, $m = 2$, and is given by

$$x_n = (a_0)_n + (a_1)_n\, rT + \frac{1}{2!}\,(a_2)_n\, r^2 T^2 \qquad (5.4\text{-}4)$$
where we have now shifted the origin of r so that $r = 0$ at time n, the time at which the estimate $X^*_{n,n}$ is to be obtained. Then
$$Dx_n = \frac{1}{T}\,(a_1 T + a_2\, r T^2) = a_1 \qquad \text{for } r = 0 \qquad (5.4\text{-}5)$$
$$D^2 x_n = a_2 \qquad (5.4\text{-}6)$$
where for simplicity the subscript n on $a_i$ has been dropped. It is next easy to show for $m = 2$ that the transition matrix $\Phi$ that goes from the state $X_n$ to $X_{n+1}$ is given by
$$\Phi = \begin{bmatrix} 1 & T & \tfrac{1}{2} T^2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix} \qquad (5.4\text{-}7)$$
The reader can verify this by substituting (5.4-4) to (5.4-6) into (5.4-1) and multiplying by (5.4-7). The transition matrix that goes from $X_n$ to $X_{n+h}$ is $\Phi^h$.
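The verification suggested above is quick to do numerically. A small sketch (coefficient values illustrative): for a quadratic $x(t) = a_0 + a_1 t + \tfrac{1}{2} a_2 t^2$, the state $[x, Dx, D^2 x]^T$ at $t + T$ equals $\Phi$ times the state at t, and the h-step matrix is $\Phi^h$:

```python
import numpy as np

# Numerical check of the transition matrix (5.4-7) for a quadratic process.
T = 0.5
a0, a1, a2 = 2.0, -1.0, 0.4    # illustrative quadratic coefficients

def state(t):
    # Derivative state vector [x, Dx, D^2 x] of (5.4-1) for m = 2.
    return np.array([a0 + a1 * t + 0.5 * a2 * t * t, a1 + a2 * t, a2])

Phi = np.array([[1.0, T, 0.5 * T * T],
                [0.0, 1.0, T],
                [0.0, 0.0, 1.0]])

print(np.allclose(Phi @ state(1.0), state(1.0 + T)))            # True: one step
print(np.allclose(np.linalg.matrix_power(Phi, 4) @ state(1.0),  # True: h = 4 steps
                  state(1.0 + 4 * T)))
```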
For the case where measurements are available at $L + 1$ times as given by (5.2-1),
$$Y^{(n)} = \begin{bmatrix} M X_n \\ M X_{n-1} \\ \vdots \\ M X_{n-L} \end{bmatrix} + \begin{bmatrix} \nu_n \\ \nu_{n-1} \\ \vdots \\ \nu_{n-L} \end{bmatrix} \qquad (5.4\text{-}8)$$
which in turn can be written as
$$Y^{(n)} = \begin{bmatrix} M X_n \\ M \Phi^{-1} X_n \\ \vdots \\ M \Phi^{-L} X_n \end{bmatrix} + \begin{bmatrix} \nu_n \\ \nu_{n-1} \\ \vdots \\ \nu_{n-L} \end{bmatrix} \qquad (5.4\text{-}9)$$
or
$$Y^{(n)} = \begin{bmatrix} M \\ M \Phi^{-1} \\ \vdots \\ M \Phi^{-L} \end{bmatrix} X_n + \begin{bmatrix} \nu_n \\ \nu_{n-1} \\ \vdots \\ \nu_{n-L} \end{bmatrix} \qquad (5.4\text{-}10)$$
or
$$Y^{(n)} = T X_n + N^{(n)} \qquad (5.4\text{-}11)$$
where
$$T = \begin{bmatrix} M \\ M \Phi^{-1} \\ \vdots \\ M \Phi^{-L} \end{bmatrix} \qquad N^{(n)} = \begin{bmatrix} \nu_n \\ \nu_{n-1} \\ \vdots \\ \nu_{n-L} \end{bmatrix} \qquad (5.4\text{-}11a)$$
Note that the above results [(5.4-8) to (5.4-11a) with (5.2-1)] are a special case of (4.1-5) to (4.1-11b). Here the $Y_i$ and $N_i$ are $1 \times 1$ column matrices [see (5.4-3b)] instead of $(r+1) \times 1$ column matrices [as in (4.1-1a) and (4.1-1c)].
Sometimes in the literature, instead of representing the state vector $X_n$ by its derivatives, as done in (5.4-1) for a process represented by an mth-degree polynomial, the scaled derivatives $z_j$ of (5.2-8) are used to form the scaled state vector $Z_n$ given by
$$Z_n = \begin{bmatrix} z_0 \\ z_1 \\ z_2 \\ \vdots \\ z_m \end{bmatrix} = \begin{pmatrix} x \\ T\,Dx \\ \dfrac{T^2}{2!}\, D^2 x \\ \vdots \\ \dfrac{T^m}{m!}\, D^m x \end{pmatrix}_n \qquad (5.4\text{-}12)$$
where, as before, T is the time between the measurements. The transition matrix for the scaled state vector $Z_n$ when $m = 2$ is given by
$$\Phi_z = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix} \qquad (5.4\text{-}13)$$
where the subscript z is used here to emphasize that $\Phi_z$ is the transition matrix for the scaled state vector $Z_n$.
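The scaled transition matrix (5.4-13) follows from (5.4-7) and the scaling $z_j = (T^j/j!)\, D^j x$ of (5.2-8): with $S = \mathrm{diag}(1, T, T^2/2!)$ we have $Z_n = S X_n$ and hence $\Phi_z = S\, \Phi\, S^{-1}$, which is independent of T. A quick numerical check (T chosen arbitrarily):

```python
import numpy as np

# Check that Phi_z = S Phi S^{-1} reproduces (5.4-13) for any T.
T = 0.7
Phi = np.array([[1.0, T, 0.5 * T * T],
                [0.0, 1.0, T],
                [0.0, 0.0, 1.0]])
S = np.diag([1.0, T, T * T / 2.0])   # scaling z_j = (T^j / j!) D^j x, per (5.2-8)

Phi_z = S @ Phi @ np.linalg.inv(S)
print(np.allclose(Phi_z, [[1, 1, 1], [0, 1, 2], [0, 0, 1]]))   # True
```

That $\Phi_z$ contains no T is one practical attraction of the scaled state vector: the same integer-valued transition matrix serves for any sample period.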
5.5 REPRESENTATION OF LEAST-SQUARES ESTIMATE IN TERMS OF DERIVATIVE STATE VECTOR
The least-squares estimate of the process state vector given by (5.4-1) can be written at time $r = L + h$ as [with now the origin of r again at $n - L$, as given in Figure 5.2-1]
$$X^*_{n+h,n} = \begin{bmatrix} p^*(L+h) \\ D\, p^*(L+h) \\ \vdots \\ D^m p^*(L+h) \end{bmatrix}_n \qquad (5.5\text{-}1)$$
For simplicity the subscript n has been dropped from $p^*(r)$. Sometimes we shall drop the r variable as well, just leaving $p^*$.
In terms of the scaled state vector one obtains
$$Z^*_{n+h,n} = \begin{bmatrix} p^*(L+h) \\ T\, D\, p^*(L+h) \\ \dfrac{T^2}{2!}\, D^2 p^*(L+h) \\ \vdots \\ \dfrac{T^m}{m!}\, D^m p^*(L+h) \end{bmatrix}_n \qquad (5.5\text{-}2)$$
Using the orthogonal polynomial least-squares solution given by (5.3-11) and (5.3-13), it can be shown that the least-squares solution for $Z_{n+h}$ is [5, p. 237]
$$Z^*_{n+h,n} = W_z(h)\, Y^{(n)} \qquad (5.5\text{-}3)$$
TABLE 5.5-1. Elements of Associate Stirling Matrix S of First Kind for i, j = 0, ..., 10

i\j    0   1   2   3    4    5     6      7       8        9        10
0      1   0   0   0    0    0     0      0       0        0         0
1      0   1   1   2    6   24   120    720   5,040   40,320   362,880
2      0   0   1   3   11   50   274  1,764  13,068  109,584 1,026,576
3      0   0   0   1    6   35   225  1,624  13,132  118,124 1,172,700
4      0   0   0   0    1   10    85    735   6,769   67,284   723,680
5      0   0   0   0    0    1    15    175   1,960   22,449   269,325
6      0   0   0   0    0    0     1     21     322    4,536    63,273
7      0   0   0   0    0    0     0      1      28      546     9,450
8      0   0   0   0    0    0     0      0       1       36       870
9      0   0   0   0    0    0     0      0       0        1        45
10     0   0   0   0    0    0     0      0       0        0         1

Note: Used to obtain weights of optimum least-squares fixed-memory filter.
Source: From Morrison [5, p. 85].