
10
VOLTAGE LEAST-SQUARES
ALGORITHMS REVISITED
10.1 COMPUTATION PROBLEMS
The least-squares estimates and minimum-variance estimates described in Sections 4.1 and 4.5 and Chapter 9 all require the inversion of one or more matrices. Computing the inverse of a matrix can lead to computational problems due to standard computer round-offs [5, pp. 314–320]. To illustrate this, assume that

$$s = 1 + \varepsilon \tag{10.1-1}$$
Assume a six-decimal digit capability in the computer. Thus, if s = 1.000008, then the computer would round this off to 1.00000. If, on the other hand, s = 1.000015, then the computer would round this off to 1.00001. Hence, although the change in $\varepsilon$ is large, a reduction of 33.3% for the second case (i.e., 0.000005/0.000015), the change in s is small, 5 parts in $10^6$ (i.e., 0.000005/1.000015). This small error in s would seem to produce negligible effects on the computations. However, in carrying out a matrix inversion it can lead to serious errors, as indicated in the example to be given now. Assume the nearly singular matrix [5]
$$A = \begin{bmatrix} s & 1 \\ 1 & 1 \end{bmatrix} \tag{10.1-2}$$
where $s = 1 + \varepsilon$. Inverting A algebraically gives

$$A^{-1} = \frac{1}{s - 1}\begin{bmatrix} 1 & -1 \\ -1 & s \end{bmatrix} \tag{10.1-3}$$
If " ¼ 0:000015, then from (10.1-3) we obtain the following value for A
À1
without truncation errors:
A
À1
¼
1 À1
À11:000015

0:000015
¼ 10
4
6:66 À6:66
À6:66 6:66

ð10:1-4Þ
However, if $\varepsilon$ is truncated to 0.00001, then (10.1-3) yields

$$A^{-1} = \frac{1}{0.00001}\begin{bmatrix} 1 & -1 \\ -1 & 1.00001 \end{bmatrix} \approx 10^4 \begin{bmatrix} 10 & -10 \\ -10 & 10 \end{bmatrix} \tag{10.1-5}$$
Thus the 5 parts in $10^6$ error in s results in a 50% error in each of the elements of $A^{-1}$.
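This sensitivity is easy to reproduce numerically. The following sketch (an illustration, not from the text) inverts A for the exact and the truncated values of $\varepsilon$ and compares the results:

```python
import numpy as np

def inverse_of_A(eps):
    """Invert the nearly singular matrix A of (10.1-2) with s = 1 + eps."""
    A = np.array([[1.0 + eps, 1.0],
                  [1.0,       1.0]])
    return np.linalg.inv(A)

print(inverse_of_A(0.000015))  # ~1e4 * [[ 6.66, -6.66], [-6.66,  6.66]]
print(inverse_of_A(0.00001))   # ~1e4 * [[10.0, -10.0], [-10.0, 10.0]]: ~50% error
```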
Increasing the computation precision can help. This, however, can be costly in computer hardware and/or computer time. There are, however, alternative ways to cope with this problem. For a least-squares estimation problem this involves the use of voltage least-squares algorithms, also called square-root algorithms, which are not as sensitive to computer round-off errors. This method was introduced in Section 4.3 and will be described in greater detail in Section 10.2 and Chapters 11 to 14. Section 10.2.3 discusses a measure, called the condition number, for determining the accuracy needed to invert a matrix.
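As a preview, the condition number of the matrix A above can be computed directly. A minimal sketch using NumPy (illustrative; the digits-lost rule of thumb in the comment is the usual one, not a claim from the text):

```python
import numpy as np

A = np.array([[1.000015, 1.0],
              [1.0,      1.0]])

# The condition number bounds how much relative errors in the input can
# be magnified when inverting A; log10 of it estimates the digits lost.
print(np.linalg.cond(A))  # ~2.7e5, so roughly 5 digits can be lost
```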
The matrices to be inverted in (4.1-32) and (4.5-4) will be singular or nearly singular when the time between measurements is very small, that is, when the time between measurements T of (4.1-18) or (4.1-28) is small. Physically, if only range measurements are being made and they are too close together in time, then the velocity of the state vector $X^*_{n,n}$ cannot be accurately estimated. Mathematically, the rows of the matrix T become nearly dependent when the measurements are too close together in time. When this happens, the matrices of the least-squares and minimum-variance estimates tend to be singular. When the columns of T are dependent, the matrix is said to not have full column rank. Full column rank is required for estimating $X^*_{n,n}$ [5, Section 8.8]. The matrix T has full column rank when its columns are independent. It does not have full rank if one of its columns is equal to zero.
The examples of the matrix T given by (4.1-18) for a constant-velocity target and (4.1-28) for a constant-accelerating target show that T will not have full rank when the time between measurements T is very small. When the time between measurements T is small enough, the second column of (4.1-18) becomes rounded off to zero, and the second and third columns of (4.1-28) likewise become rounded off to zero. Hence the matrices T do not have full rank when the time T between measurements is very small, as the sketch below illustrates.
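The loss of rank can be checked numerically. In the following sketch the row form $[1, -iT]$ is an assumed rendering of the constant-velocity observation matrix of (4.1-18), relating each earlier measurement back to the state at the latest time:

```python
import numpy as np

def obs_matrix(num_meas, T):
    """Constant-velocity observation matrix: the measurement i intervals
    back sees x_n - i*T*xdot_n (assumed form of (4.1-18))."""
    return np.array([[1.0, -i * T] for i in range(num_meas)])

for T in (1.0, 1e-4, 1e-9):
    Tm = obs_matrix(5, T).astype(np.float32)   # single-precision storage
    print(f"T = {T:g}: rank = {np.linalg.matrix_rank(Tm)}")
    # rank drops from 2 to 1 as the second column rounds off toward zero
```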
This singularity situation is sometimes improved if, in addition to range, another parameter is measured, such as the Doppler velocity or the target angular position. Consider a target moving along the x axis as shown in Figure 10.1-1. Assume that the radar is located as indicated and that it is only making slant range measurements of the target's position. At the time when the target passes through the origin, the tracker will have difficulty estimating the target's velocity and acceleration. This is because the target range only changes slightly during this time, so that the target behaves essentially like a stationary target even though it could be moving rapidly. If, in addition, the radar measured the target aspect angle $\theta$, it would be able to provide good estimates of the velocity and acceleration as the target passed through the origin. In contrast, if the target were being tracked far from the origin, way off to the right on the x axis in Figure 10.1-1, range-only measurements would then provide a good estimate of the target's velocity and acceleration.

If the radar only measured target azimuth, then the radar measurements would convey more information when the target passed through the origin than when it was far from the origin. Thus it is desirable to make two essentially independent parameter measurements on the target, with these being essentially orthogonal to each other. Doing this would prevent the matrix inversion from tending toward singularity or, equivalently, prevent T from not having full rank.
Methods are available to help minimize the sensitivity to the computer round-off error problem discussed above. They are called square-root filtering [79] or voltage-processing filtering. This type of technique was introduced in Section 4.3; specifically, the Gram–Schmidt method was used to introduce it. In this chapter we first give further general details on the technique, followed by detailed discussions of the Givens, Householder, and Gram–Schmidt methods in Chapters 11 to 13. For completeness, clarity, and convenience, and in order that this chapter stand on its own, some of the results given in Section 4.3 will be repeated. However, it is highly recommended that if Sections 4.2 and 4.3 are not fresh in the reader's mind, he or she reread them before reading the rest of this chapter.

Figure 10.1-1 Geometry for example of target flying by radar. (From Morrison [5, p. 319].)
10.2 ORTHOGONAL TRANSFORMATION OF
LEAST-SQUARES ESTIMATE ERROR
We proceed initially by applying an orthonormal transformation to $e(X^*_{n,n})$ of (4.1-31) [79]. Let F be an orthonormal transformation matrix. It then follows from (4.3-9) to (4.3-11), and also from (4.3-16) and (4.3-17) and reference 79 (p. 57), that

$$F^T F = I = F F^T \tag{10.2-1}$$

and

$$F^{-1} = F^T \tag{10.2-2}$$

Also

$$\|FY\| = \|Y\| \tag{10.2-3}$$

where $\|\cdot\|$ is the Euclidean norm defined by (4.2-40) and repeated here [79, 101]:

$$\|y\| = (y^T y)^{1/2} \tag{10.2-4}$$
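Property (10.2-3) is easy to verify numerically. A minimal sketch (illustrative, not from the text) builds an orthonormal F from the QR factorization of a random matrix and checks that it preserves the Euclidean norm:

```python
import numpy as np

rng = np.random.default_rng(0)
F, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # Q of a QR is orthonormal
Y = rng.standard_normal(5)

print(np.allclose(F.T @ F, np.eye(5)))            # F^T F = I   (10.2-1)
print(np.linalg.norm(F @ Y), np.linalg.norm(Y))   # equal norms (10.2-3)
```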
Thus $e(X^*_{n,n})$ of (4.1-31) is the square of the Euclidean norm of

$$E = T X^*_{n,n} - Y_{(n)} \tag{10.2-5}$$

or

$$e(X^*_{n,n}) = \|E\|^2 = e_T \tag{10.2-6}$$

where $e_T$ was first used in (1.2-33).
Applying an $s \times s$ orthonormal transformation F to E, it follows from (4.3-21), and also reference 79 (p. 57), that

$$e(X^*_{n,n}) = \|FE\|^2 = \|F T X^*_{n,n} - F Y_{(n)}\|^2 = \|(FT) X^*_{n,n} - (F Y_{(n)})\|^2 \tag{10.2-7}$$
Assume here that $X^*_{n,n}$ is an $m' \times 1$ matrix, that T is $s \times m'$, and that $Y_{(n)}$ is $s \times 1$. As indicated in Section 4.3 [see, e.g., (4.3-31) and (4.3-59)] and to be further indicated in the next section, F can be chosen so that the transformed matrix $T' = FT$ is given by

$$T' = FT = \begin{bmatrix} U \\ 0 \end{bmatrix} \tag{10.2-8}$$
where U is an $m' \times m'$ upper triangular matrix and the zero block below it has $s - m'$ rows. For example, U is of the form

$$U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{bmatrix} \tag{10.2-9}$$

for $m' = 4$. In turn
$$F T X^*_{n,n} = \begin{bmatrix} U X^*_{n,n} \\ 0 \end{bmatrix} \tag{10.2-10}$$
and

$$F Y_{(n)} = \begin{bmatrix} Y'_1 \\ Y'_2 \end{bmatrix} \tag{10.2-11}$$

where $Y'_1$ contains the first $m'$ elements of $F Y_{(n)}$ and $Y'_2$ the remaining $s - m'$ elements.
On substituting (10.2-10) and (10.2-11) into (10.2-7) for $e(X^*_{n,n})$, it is a straightforward matter to show that

$$e(X^*_{n,n}) = e(U X^*_{n,n} - Y'_1) + e(Y'_2) \tag{10.2-12}$$
or equivalently

$$e(X^*_{n,n}) = \|U X^*_{n,n} - Y'_1\|^2 + \|Y'_2\|^2 \tag{10.2-13}$$
This was shown in Section 4.3 for the special case where $s = 3$, $m' = 2$; see (4.3-49). We shall now show that it is true for arbitrary s and $m'$. Equations (10.2-12) and (10.2-13) follow directly from the fact that $F T X^*_{n,n}$ and $F Y_{(n)}$ are column matrices, so that $F T X^*_{n,n} - F Y_{(n)}$ is a column matrix, E being given by (10.2-5). Let the elements of FE be designated $\varepsilon'_i$, $i = 1, 2, \ldots, s$. Hence, from (10.2-5), (10.2-10), and (10.2-11),
$$FE = E' = \begin{bmatrix} \varepsilon'_1 \\ \varepsilon'_2 \\ \vdots \\ \varepsilon'_{m'} \\ \varepsilon'_{m'+1} \\ \vdots \\ \varepsilon'_s \end{bmatrix} = \begin{bmatrix} U X^*_{n,n} - Y'_1 \\ -Y'_2 \end{bmatrix} \tag{10.2-14}$$

where the first $m'$ elements form $U X^*_{n,n} - Y'_1$ and the last $s - m'$ elements form $-Y'_2$.
From (10.2-3), (10.2-4), (10.2-6), and (10.2-14) it follows that

$$e(X^*_{n,n}) = \|E\|^2 = \|FE\|^2 = (FE)^T (FE) = \sum_{i=1}^{m'} {\varepsilon'_i}^2 + \sum_{i=m'+1}^{s} {\varepsilon'_i}^2 \tag{10.2-15}$$

which yields (10.2-12) and (10.2-13) for arbitrary s and $m'$, as we wished to show.
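The decomposition (10.2-15) can be confirmed numerically. The sketch below (illustrative; the matrices are random stand-ins) obtains F from a full QR factorization of T, so that FT has the form (10.2-8), and verifies that the two sums of squares add up to the original residual:

```python
import numpy as np

rng = np.random.default_rng(1)
s, m = 6, 2                        # s measurements, m' states
T = rng.standard_normal((s, m))    # stand-in observation matrix
Y = rng.standard_normal(s)         # stand-in measurement vector
X = rng.standard_normal(m)         # an arbitrary trial estimate X*

Q, R = np.linalg.qr(T, mode="complete")   # R = [U; 0], so F = Q^T
F = Q.T
U, Y1, Y2 = R[:m, :], (F @ Y)[:m], (F @ Y)[m:]

e_direct = np.linalg.norm(T @ X - Y) ** 2
e_split = np.linalg.norm(U @ X - Y1) ** 2 + np.linalg.norm(Y2) ** 2
print(np.isclose(e_direct, e_split))      # True: (10.2-13) holds
```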
The least-squares estimate $X^*_{n,n}$ now becomes the $X^*_{n,n}$ that minimizes (10.2-13). Here, $X^*_{n,n}$ does not appear in the second term of the above equation, so this term is independent of $X^*_{n,n}$. Only the first term can be affected by varying $X^*_{n,n}$. The minimum $e(X^*_{n,n})$ is achieved by making the first term equal to zero by setting $\varepsilon'_1 = \cdots = \varepsilon'_{m'} = 0$, as done in Section 4.3, to yield

$$U X^*_{n,n} = Y'_1 \tag{10.2-16}$$
The $X^*_{n,n}$ that satisfies (10.2-16) is the least-squares estimate being sought. Because U is an upper triangular matrix, it is trivial to solve for $X^*_{n,n}$ using (10.2-16). To illustrate, assume that U is given by (10.2-9) and that
$$X^*_{n,n} = \begin{bmatrix} x^*_1 \\ x^*_2 \\ x^*_3 \\ x^*_4 \end{bmatrix} \tag{10.2-17}$$
and

$$Y'_1 = \begin{bmatrix} y'_1 \\ y'_2 \\ y'_3 \\ y'_4 \end{bmatrix} \tag{10.2-18}$$
We start with the bottom equation of (10.2-16) to solve for $x^*_4$ first. This equation is

$$u_{44} x^*_4 = y'_4 \tag{10.2-19}$$

and trivially

$$x^*_4 = \frac{y'_4}{u_{44}} \tag{10.2-20}$$
We next use the second equation from the bottom of (10.2-16), which is

$$u_{33} x^*_3 + u_{34} x^*_4 = y'_3 \tag{10.2-21}$$

Because $x^*_4$ is known, we can readily solve for the only unknown $x^*_3$ to yield

$$x^*_3 = \frac{y'_3 - u_{34} x^*_4}{u_{33}} \tag{10.2-22}$$
In a similar manner the third equation from the bottom of (10.2-16) can be used to solve for $x^*_2$, and in turn the top equation is then used to solve for $x^*_1$.
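Written as code, this recursion generalizes to any upper triangular U. A minimal sketch (an illustration, not from the text):

```python
import numpy as np

def back_substitute(U, y):
    """Solve U x = y for upper triangular U, working from the bottom row up."""
    m = len(y)
    x = np.zeros(m)
    for i in range(m - 1, -1, -1):
        # Subtract the contributions of the already-solved components.
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 0.5],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
y = np.array([1.0, 2.0, 8.0])
print(back_substitute(U, y))   # agrees with np.linalg.solve(U, y)
```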
The above technique for solving (10.2-16) when U is an upper triangular matrix is called the back-substitution method. It avoids the need to solve (10.2-16) for $X^*_{n,n}$ using

$$X^*_{n,n} = U^{-1} Y'_1 \tag{10.2-23}$$

with its attendant computation of the inverse of U. The transformation of T to the upper triangular matrix $T'$, followed by the use of the back-substitution method to solve (10.2-16) for $X^*_{n,n}$, is called voltage least-squares filtering or square-root processing. Voltage least-squares filtering is less sensitive to computer round-off errors than is the technique using (4.1-30) with W given by (4.1-32). (When an algorithm is less sensitive to round-off errors, it is said to be more accurate [79, p. 68].) The above algorithm is also more stable; that is, accumulated round-off errors will not cause it to diverge [79, p. 68].
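The accuracy claim can be illustrated end to end. The sketch below (illustrative numbers, not from the text; the nearly rank-deficient matrix is the classic Läuchli example) solves a least-squares problem both by the normal equations, which square the condition number when $T^T T$ is formed, and by an orthonormal transformation followed by a triangular solve:

```python
import numpy as np

eps = 1e-7                         # small offset: T is nearly rank 1
T = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])         # Laeuchli-type matrix
x_true = np.array([1.0, 1.0])
Y = T @ x_true

# Normal-equations route: T^T T = [[1+eps^2, 1], [1, 1+eps^2]] is nearly
# singular once eps^2 approaches machine precision.
x_normal = np.linalg.solve(T.T @ T, T.T @ Y)

# Voltage least-squares route: QR yields U and Y'_1 of (10.2-16); the
# triangular solve stands in for explicit back-substitution.
Q, R = np.linalg.qr(T)
x_voltage = np.linalg.solve(R, Q.T @ Y)

print("normal equations:", x_normal)   # typically only a few correct digits
print("voltage (QR)    :", x_voltage)  # accurate to near machine precision
```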
In Section 4.3 we introduced the Gram–Schmidt method for performing the orthonormal transformation F. In the three ensuing chapters we shall detail this method and introduce two additional orthonormal transformations F that can put T into the upper triangular form of (10.2-8).

Before proceeding, we shall develop further the physical significance of the orthonormal transformation and the matrix U, something that we started in Section 4.3. We shall also give some feel for why the square-root method is more accurate and then, finally, some additional physical feel for why and when inaccuracies occur. First, let us revisit a physical interpretation of the orthonormal transformation.