An Introduction to Stochastic Filtering Theory
OXFORD GRADUATE TEXTS IN MATHEMATICS
Books in the series
1. Keith Hannabuss: An introduction to quantum theory
2. Reinhold Meise and Dietmar Vogt: Introduction to functional
analysis
3. James G. Oxley: Matroid theory
4. N.J. Hitchin, G.B. Segal, and R.S. Ward: Integrable systems:
twistors, loop groups, and Riemann surfaces
5. Wulf Rossmann: Lie groups: An introduction through linear
groups
6. Qing Liu: Algebraic geometry and arithmetic curves
7. Martin R. Bridson and Simon M. Salamon (eds): Invitations to
geometry and topology
8. Shmuel Kantorovitz: Introduction to modern analysis
9. Terry Lawson: Topology: A geometric approach
10. Meinolf Geck: An introduction to algebraic geometry and
algebraic groups
11. Alastair Fletcher and Vladimir Markovic: Quasiconformal maps
and Teichmüller theory
12. Dominic Joyce: Riemannian holonomy groups and calibrated
geometry
13. Fernando Villegas: Experimental Number Theory
14. Péter Medvegyev: Stochastic Integration Theory
15. Martin Guest: From Quantum Cohomology to Integrable Systems
16. Alan Rendall: Partial Differential Equations in General Relativity
17. Yves Félix, John Oprea and Daniel Tanré: Algebraic Models in
Geometry
18. Jie Xiong: An Introduction to Stochastic Filtering Theory
An Introduction to
Stochastic Filtering
Theory
Jie Xiong
Department of Mathematics
University of Tennessee
Knoxville, TN 37996-1300, USA
Great Clarendon Street, Oxford OX2 6DP
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide in
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and in certain other countries
Published in the United States
by Oxford University Press Inc., New York
© Jie Xiong 2008
The moral rights of the author have been asserted
Database right Oxford University Press (maker)
First published 2008
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press,
or as expressly permitted by law, or under terms agreed with the appropriate
reprographics rights organization. Enquiries concerning reproduction
outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above
You must not circulate this book in any other binding or cover
and you must impose the same condition on any acquirer
British Library Cataloguing in Publication Data
Data available
Library of Congress Cataloging in Publication Data
Data available
Typeset by Newgen Imaging Systems (P) Ltd., Chennai, India
Printed in Great Britain
on acid-free paper by
Biddles Ltd., King’s Lynn, Norfolk
ISBN 978–0–19–921970–4
10 9 8 7 6 5 4 3 2 1
To Jingli, Jerry and Michael
Preface
The object of stochastic filtering is to use probability tools to estimate
unobservable stochastic processes that arise in many applied fields including
communication, target tracking, and mathematical finance.
Stochastic filtering theory has seen a rapid development in recent years.
First, the (branching) particle-system representation of the optimal filter has been studied by many authors seeking more effective numerical approximations of the optimal filter. It turns out that such a representation can be utilized to prove the uniqueness of the solution to the filtering equation itself, thereby broadening the class of tractable models.
Secondly, the stability of the filter with an "incorrect" initial state, as well as the long-time behavior of the optimal filter, has attracted the attention of many researchers. This direction of research has become extremely challenging after a gap in a widely cited paper was discovered.
Finally, many problems in mathematical finance, for example the stochastic volatility model, lead to singular filtering models. More specifically, the magnitude of the observation noise may depend on the signal, which makes the optimal filter singular. Some progress in this direction has been made recently.
It is the belief of this author that the time is ripe for a new textbook reflecting these recent developments. The main theme of this book is to recapitulate these advances in a succinct and efficient manner. The book can serve as a text for graduate and motivated undergraduate students in mathematics and engineering. It can also serve as a reference for practitioners in various fields of application. As noted, the aim of this book is to take students to this exciting field of research by the shortest route possible. To achieve this goal, we completely avoid the chaos decomposition used in classical filtering theory (e.g. Kallianpur [81]).
The main approach of this book is based on the particle representation for stochastic partial differential equations developed by Kurtz and Xiong ([97–99]). The methods used here can be applied to more general stochastic partial differential equations. The book therefore provides a bridge for readers interested in studying the general theory of stochastic partial differential equations.
We should mention that the propagation-of-chaos decomposition and the multiple-stochastic-integral method provide another type of numerical scheme for approximating the optimal filter. The advantage of this approach is that most of the computations are done "offline" in advance. This important development is not covered in this book because we want to limit its prerequisites. We only assume that the reader has a basic knowledge of probability theory, for example, the material in the book of Billingsley [11].
Acknowledgements
The author hopes that the readers find this book enjoyable and informative
as they seek to enter the research field of stochastic filtering. He had much
assistance in making the book a reality and wishes to thank all of those who
helped along the way. In particular, he would like to note that his friend and colleague, Balram Rajput at the University of Tennessee, has read the entire manuscript and has helped the author to correct many grammatical and presentational problems. Tom Kurtz from the University of Wisconsin, Wei Sun from Concordia University at Montreal, Ofer Zeitouni from the University of Minnesota, and Yong Zeng from the University of Missouri at Kansas City have read the manuscript and made many constructive suggestions. Don Dawson from Carleton University and Leonid Mytnik from the Israel Institute of Technology have had discussions with the author and made very important observations that helped to improve this book substantially. Zhiqiang Li, his graduate student, has read the entire manuscript and asked questions that helped the author to clarify many points from a student's viewpoint.
This book is based on a course taught by the author in the Fall semester of
2005 at the University of Tennessee. After the book was almost finished, the
author also presented it in a short course at the Summer School in Beijing
Normal University in 2007. The author wishes to thank the audience in
both classes who raised many interesting questions. He also wishes to thank
Zenghu Li for the invitation to visit Beijing Normal University and to give
the noted course in the Summer School there.
As much of the material is based on the author’s collaborative research
in this field, he would like to thank his collaborators on stochastic filtering for this enjoyable co-operation and for allowing him to include
the joint research in this book. These collaborators include Dan Crisan
(Imperial College in London), Mike Kouritzin (the University of Alberta at
Edmonton), Tom Kurtz (the University of Wisconsin at Madison), Wei Sun
(Concordia University at Montreal), and Xunyu Zhou (the Chinese University of Hong Kong). He also would like to thank the National Security
Agency for the support of his research during the last five years.
Finally, the author would like to thank the staff of Oxford University Press: editor Alison Jones and her assistant Dewi Jackson, as well as her former assistant Jessica Churchman, for their co-operation and help.
Contents

1 Introduction
   1.1 Examples
   1.2 Basic definitions and the filtering equation
   1.3 An overview

2 Brownian motion and martingales
   2.1 Martingales
   2.2 Doob–Meyer decomposition
   2.3 Meyer's processes
   2.4 Brownian motion

3 Stochastic integrals and Itô's formula
   3.1 Predictable processes
   3.2 Stochastic integral
   3.3 Itô's formula
   3.4 Martingale representation in terms of Brownian motion
   3.5 Change of measures
   3.6 Stratonovich integral

4 Stochastic differential equations
   4.1 Basic definitions
   4.2 Existence and uniqueness of a solution
   4.3 Martingale problem
   4.4 A stochastic flow
   4.5 Markov property

5 Filtering model and Kallianpur–Striebel formula
   5.1 The filtering model
   5.2 The optimal filter
   5.3 Filtering equation
   5.4 Particle-system representation
   5.5 Notes

6 Uniqueness of the solution for Zakai's equation
   6.1 Hilbert space
   6.2 Transformation to a Hilbert space
   6.3 Some useful inequalities
   6.4 Uniqueness for Zakai's equation
   6.5 A duality representation
   6.6 Notes

7 Uniqueness of the solution for the filtering equation
   7.1 An interacting particle system
   7.2 The uniqueness of the system
   7.3 Uniqueness for the filtering equation
   7.4 Notes

8 Numerical methods
   8.1 Monte-Carlo method
   8.2 A branching particle system
   8.3 Convergence of V_t^n
   8.4 Convergence of V^n
   8.5 Notes

9 Linear filtering
   9.1 Gaussian system
   9.2 Kalman–Bucy filtering
   9.3 Discrete-time approximation of the Kalman–Bucy filtering
   9.4 Some basic facts for a related deterministic control problem
   9.5 Stability for Kalman–Bucy filtering
   9.6 Notes

10 Stability of non-linear filtering
   10.1 Markov property of the optimal filter
   10.2 Ergodicity of the optimal filter
   10.3 Finite memory property
   10.4 Asymptotic stability for non-linear filtering with compact state space
   10.5 Exchangeability of union intersection for σ-fields
   10.6 Notes

11 Singular filtering
   11.1 A special example
   11.2 A general singular filtering model
   11.3 Optimal filter with discrete support
   11.4 Optimal filter supported on manifolds
   11.5 Filtering model with Ornstein–Uhlenbeck noise
   11.6 Notes

Bibliography
List of Notations
Index
1
Introduction
In this chapter, we first give a few motivating examples for stochastic
filtering. Then, we introduce some basic definitions and state the main
equations that arise in non-linear filtering theory. Finally, we give an
overview of the topics to be covered in this book.
1.1 Examples
In this section, we study four examples arising from different fields of application. The first example comes from wireless communication, which was in fact the main motivation for filtering theory in its early stage. The second example comes from mathematical finance, where the random factors affecting the stock prices are not completely observed; instead, only the stock prices themselves are observed. The selection of a portfolio must be based on the information provided by the movement of the stock prices. The third example comes from the field of environmental protection. In this example, we estimate the distribution of undesired chemicals in a river using the data obtained from a few observation stations along the river. Finally, in the last example, we study the filtering problem when the observation noise is given by an Ornstein–Uhlenbeck process, which is an approximation of white noise; the latter exists only in the sense of generalized functions.
1.1.1 Wireless communication
A signal process X_t taking values in a space S is to be transmitted to a receiver. Because of random noise, this signal is not directly observable. Instead, we observe a function h(X_t) of this signal (taking values in R^m) plus an m-dimensional white noise n_t. The original observation model is then

y_t = h(X_t) + n_t,    (1.1)

where y_t is called the observation process.
Note that white noise exists only in the sense of generalized functions: it is the derivative (again, in the sense of generalized functions) of a Brownian motion, which exists in the ordinary sense (we refer the interested reader to the book of Kuo [96] for an introduction to white noise theory). Therefore, it is natural for us to consider the accumulated observation process

Y_t = \int_0^t y_s \, ds
as the source of our information. The observation equation is then written as

Y_t = \int_0^t h(X_s) \, ds + W_t,    (1.2)

where W_t is an m-dimensional Brownian motion.
The aim of filtering theory is to estimate the signal based on the observation σ-field

G_t ≡ σ(Y_s : 0 ≤ s ≤ t)

generated by the (accumulated) observation process {Y_s : s ≤ t}.
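For readers who want to experiment, the observation model (1.2) is easy to simulate. The sketch below discretizes time with an Euler scheme; the scalar signal dX_t = -X_t dt + dB_t and the choice h(x) = x are purely illustrative assumptions, not part of the model above.

```python
import numpy as np

def simulate_observation(T=1.0, n=1000, h=lambda x: x, seed=0):
    """Euler discretization of the accumulated observation
    Y_t = int_0^t h(X_s) ds + W_t from equation (1.2).

    The signal is a hypothetical scalar diffusion dX = -X dt + dB;
    B and W are independent Brownian motions.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.zeros(n + 1)
    Y = np.zeros(n + 1)
    for k in range(n):
        dB = np.sqrt(dt) * rng.standard_normal()
        dW = np.sqrt(dt) * rng.standard_normal()
        X[k + 1] = X[k] - X[k] * dt + dB        # signal step
        Y[k + 1] = Y[k] + h(X[k]) * dt + dW     # observation increment h(X) dt + dW
    return X, Y

X, Y = simulate_observation()
```

The filtering problem is then to recover the conditional law of X_t from the path {Y_s : s ≤ t} alone.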
1.1.2 Portfolio optimization
We consider a market consisting of a bond and d stocks whose prices are stochastic processes S_t^i, i = 0, 1, …, d, governed by the following stochastic differential equations (SDEs):

dS_t^i = S_t^i \left( X_t^i \, dt + \sum_{j=1}^m \tilde{\sigma}_t^{ij} \, d\tilde{W}_t^j \right),  i = 1, 2, …, d,
dS_t^0 = S_t^0 X_t^0 \, dt,  t ≥ 0,    (1.3)

where \tilde{W} := (\tilde{W}^1, …, \tilde{W}^m)^* is a standard Brownian motion defined on a stochastic basis (Ω, F, P; {F_t}_{t≥0}) satisfying the usual conditions, X_t^i, i = 1, 2, …, d, are the appreciation-rate processes of the stocks, X_t^0 is the interest-rate process, and the d × m matrix-valued process \tilde{Σ}_t := (\tilde{σ}_t^{ij}) is the volatility process. Here and throughout this book, A^* denotes the transpose of a matrix A.
Let

G_t := σ(S_s^i : s ≤ t, i = 0, 1, 2, …, d),  t ≥ 0.

In our model G_t, rather than F_t^{\tilde{W}} (the filtration generated by \tilde{W}), is the only information available to the investors at time t.

One of the objectives of mathematical finance is to study how to choose a suitable portfolio such that the terminal wealth is optimized. Let u_t^i be the worth of an agent's wealth (dollar amount) in the ith stock, i = 1, 2, …, d. Our decision must be based on the available information. Let L^2_G(0, T; R^d) be the collection of square-integrable processes that are predictable with respect to the σ-fields G_t.

Definition 1.1 A d-dimensional process u_t ≡ (u_t^1, …, u_t^d)^* is an admissible portfolio if u ∈ L^2_G(0, T; R^d).
For the portfolio to be self-financed, the change in the wealth should be equal to the value change due to that of the stocks and the bond. Let \mathcal{W}_t be the wealth process. Then

d\mathcal{W}_t = \left( \mathcal{W}_t - \sum_{i=1}^d u_t^i \right) \frac{dS_t^0}{S_t^0} + \sum_{i=1}^d u_t^i \frac{dS_t^i}{S_t^i}
             = \left( X_t^0 \mathcal{W}_t + \sum_{i=1}^d (X_t^i - X_t^0) u_t^i \right) dt + \sum_{i=1}^d \sum_{j=1}^m \tilde{\sigma}_t^{ij} u_t^i \, d\tilde{W}_t^j.    (1.4)
Applying Itô's formula to equation (1.3), we have

d \log S_t^i = \left( X_t^i - \frac{1}{2} a_t^{ii} \right) dt + \sum_{j=1}^m \tilde{\sigma}_t^{ij} \, d\tilde{W}_t^j,  i = 1, 2, …, d,    (1.5)

where

a_t^{ij} := \sum_{k=1}^m \tilde{\sigma}_t^{ik} \tilde{\sigma}_t^{jk},  i, j = 1, 2, …, d.
It is easy to show that the quadratic covariation process between \log S_t^i and \log S_t^j, which coincides with Meyer's process in this case, is given by \int_0^t a_s^{ij} \, ds. Therefore, the matrix-valued process A_t ≡ (a_t^{ij}) is G_t-adapted. Let Σ_t ≡ (σ_t^{ij}) be the square root of A_t. We will prove in Chapter 11 that σ_t^{ij} is G_t-adapted, i.e. it is completely observable. As we shall see in equation (1.8) below, the stock price S_t^i satisfies an equivalent stochastic differential equation that depends on (σ_t^{ij}) instead of (\tilde{σ}_t^{ij}). Moreover, X_t^0 = \frac{d}{dt} \log S_t^0 is also G_t-adapted.
However, the stochastic process X_t := (X_t^1, …, X_t^d)^* is not necessarily G_t-adapted and, hence, its value is not available to the investors. We need to estimate X_t based on the available information G_t.
From equation (1.5), we see that

\log S_t^i - \log S_0^i - \int_0^t \left( X_s^i - \frac{1}{2} a_s^{ii} \right) ds = \sum_{j=1}^m \int_0^t \tilde{\sigma}_s^{ij} \, d\tilde{W}_s^j,  i = 1, 2, …, d,

are martingales with Meyer's process \int_0^t A_s \, ds = \int_0^t Σ_s^2 \, ds. By the martingale representation theorem, there exists a d-dimensional standard Brownian motion W ≡ (W^1, …, W^d) on (Ω, F, P) such that

\sum_{j=1}^m \tilde{\sigma}_t^{ij} \, d\tilde{W}_t^j = \sum_{j=1}^d \sigma_t^{ij} \, dW_t^j,  i = 1, …, d.    (1.6)
Thus,

d \log S_t^i = \left( X_t^i - \frac{1}{2} a_t^{ii} \right) dt + \sum_{j=1}^d \sigma_t^{ij} \, dW_t^j,  i = 1, …, d.    (1.7)

Equivalently, the stock prices satisfy the following modified stochastic differential equations:

dS_t^i = S_t^i \left( X_t^i \, dt + \sum_{j=1}^d \sigma_t^{ij} \, dW_t^j \right),  i = 1, …, d.    (1.8)
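The claim that the volatility is observable from the prices alone can be illustrated numerically: the realized quadratic variation of \log S, computed from the price path, recovers \int_0^T a_s \, ds. A minimal sketch for one stock with a constant volatility σ (an assumption made here only so that the target value is known in closed form):

```python
import numpy as np

def realized_variance(log_prices):
    """Sum of squared increments of log S; on a fine grid this
    approximates the quadratic variation int_0^T a_s ds."""
    return float(np.sum(np.diff(log_prices) ** 2))

# One stock with assumed constant appreciation rate mu and volatility sigma,
# so that int_0^T a_s ds = sigma**2 * T is known exactly.
rng = np.random.default_rng(1)
T, n, mu, sigma = 1.0, 200_000, 0.05, 0.3
dt = T / n
# increments of log S from equation (1.5): (mu - sigma^2/2) dt + sigma dW
dlogS = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
logS = np.concatenate(([0.0], np.cumsum(dlogS)))
est = realized_variance(logS)   # close to sigma**2 * T = 0.09
```

The estimate uses only the observed prices, which is why A_t (and its square root Σ_t) is G_t-adapted, while the drift X_t^i cannot be recovered this way.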
We assume that Σ_t is invertible. Let \tilde{S}_t be defined by

d\tilde{S}_t := Σ_t^{-1} \, d \log S_t.

We can write the observation equation (1.7) as

\tilde{S}_t = \tilde{S}_0 + \int_0^t Σ_s^{-1} \left( X_s - \frac{1}{2} \tilde{A}_s \right) ds + W_t,    (1.9)

where

\tilde{A}_s = (a_s^{11}, …, a_s^{dd})^*.

If Σ_t is non-random, then F_t^{\tilde{S}} = G_t. Let Y_t = \tilde{S}_t - \tilde{S}_0. The observation model can be written as

Y_t = \int_0^t h_s(X_s) \, ds + W_t,

where

h_s(x) = Σ_s^{-1} \left( x - \frac{1}{2} \tilde{A}_s \right).

1.1.3 Environment pollution
Suppose that there is a source of pollution at a location θ at which undesired chemicals are dumped into the river [0, ℓ]. We assume that the dumping times follow a Poisson process with parameter λ and that the amounts are independent R_+-valued random variables ξ_1, ξ_2, …, with a common distribution. Denote the dumping times by τ_1 < τ_2 < ⋯. Then, the chemical distribution X_t in the river at time t is an M_F([0, ℓ])-valued stochastic process, where M_F([0, ℓ]) denotes the collection of finite measures on [0, ℓ].

For τ_j ≤ t < τ_{j+1}, the process X_t satisfies the following partial differential equation:

\frac{d}{dt} ⟨X_t, f⟩ = ⟨X_t, Lf⟩,

where ⟨μ, f⟩ represents the integral of a function f with respect to a measure μ,

Lf(x) = D f''(x) + V f'(x) - α f(x),

D is the dispersion coefficient, V is the river velocity, and α is the leakage rate. At time t = τ_j, there is a random jump for X given by

X_t - X_{t-} = ξ_j δ_θ,

where δ_θ is the Dirac measure at θ.

Suppose that m observation stations x_1, …, x_m are set up along the river. The chemical concentrations near these stations are observed subject to random error:

Y_t^i = \int_0^t X_s\left( \left[ x_i - \tfrac{1}{2}, \, x_i + \tfrac{1}{2} \right] \right) ds + W_t^i,  i = 1, 2, …, m.

Let G_t = σ(Y_s : 0 ≤ s ≤ t). Then G_t is the information available, and we need to estimate X_t based on G_t. It is also desirable to estimate the parameters θ and λ.
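Taking f ≡ 1 in the equation above kills the dispersion and velocity terms, so between dumps the total mass m_t = ⟨X_t, 1⟩ simply decays at the leakage rate α, and it jumps by ξ_j at each dumping time. The following sketch simulates m_T under these dynamics (ignoring any loss of mass at the ends of the river, and with an exponential dump-size distribution chosen only for illustration):

```python
import numpy as np

def total_mass(T, lam, alpha, xi_sampler, rng):
    """Simulate m_T = <X_T, 1>: exponential decay at leakage rate alpha
    between dumps, plus a jump of size xi_j at each Poisson(lam) dumping time."""
    t, m = 0.0, 0.0
    while True:
        gap = rng.exponential(1.0 / lam)         # waiting time to the next dump
        if t + gap > T:
            return m * np.exp(-alpha * (T - t))  # decay over the final stretch
        t += gap
        m = m * np.exp(-alpha * gap) + xi_sampler(rng)

rng = np.random.default_rng(0)
runs = [total_mass(5.0, 2.0, 1.0, lambda r: r.exponential(1.0), rng)
        for _ in range(20_000)]
mean_mass = float(np.mean(runs))
# E[m_T] = lam * E[xi] * (1 - exp(-alpha*T)) / alpha, here about 1.99
```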
1.1.4 Filtering with OU process as noise
As we indicated in Section 1.1.1, white noise does not exist in the ordinary sense. We will demonstrate below that the Ornstein–Uhlenbeck process provides a natural approximation of white noise.

Let β > 0 and consider the process O_t^β governed by the following SDE:

dO_t^β = -β O_t^β \, dt + β \, dW_t,

where W_t is an m-dimensional Brownian motion. O_t^β is called the Ornstein–Uhlenbeck process with parameter β.

Applying Itô's formula (to be given in Chapter 3), we get

d\left( e^{βt} O_t^β \right) = β e^{βt} \, dW_t,

and hence,

O_t^β = O_0 e^{-βt} + β e^{-βt} \int_0^t e^{βs} \, dW_s.

It follows from Theorem 3.6 that for t ≥ s ≥ 0,

Cov(O_t^β, O_s^β) = \frac{β}{2} \left( e^{-β(t-s)} - e^{-β(t+s)} \right) → \begin{cases} ∞ & \text{if } t = s, \\ 0 & \text{if } t ≠ s. \end{cases}

Thus, O_t^β approximates white noise as β → ∞. More precisely, we can prove that its integral converges to the Brownian motion W_t. In fact, as

\int_0^t O_r^β \, dr = W_t - \frac{1}{β} O_t^β,

it is easy to see that

\int_0^t O_r^β \, dr → W_t,  as β → ∞.

For simplicity of notation, we take β = 1 and denote O_t^1 by O_t. We will consider the filtering problem with the following observation model:

y_t = h(X_t) + O_t.    (1.10)

Since the law of y is not absolutely continuous with respect to the law of the OU-process O, the filtering problem with equation (1.10) as the observation model is singular. We will study a general singular filtering model with equation (1.10) as a special case in Chapter 11.
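The convergence of the integrated OU process to Brownian motion is easy to see numerically. The sketch below builds O^β from the same driving Brownian path W by an Euler scheme (the step must satisfy β·dt < 1 for stability; β = 50 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, beta = 1.0, 100_000, 50.0
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Euler scheme for dO = -beta*O dt + beta*dW, started at O_0 = 0
O = np.zeros(n + 1)
for k in range(n):
    O[k + 1] = O[k] - beta * O[k] * dt + beta * dW[k]

# Left-endpoint Riemann sum for int_0^T O_r dr
integral = dt * np.sum(O[:-1])
# For this scheme the identity int_0^T O dr = W_T - O_T / beta holds exactly
# (sum the recursion), and O_T / beta is of order 1/sqrt(2*beta), so the
# integral is already close to W_T for moderately large beta.
gap = abs(integral - W[-1])
```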
1.2 Basic definitions and the filtering equation
As we have seen from the examples in the previous section, the filtering problem involves two processes: the signal process, which is what we want to estimate, and the observation process, which provides the information we can use.
In this book, we will model the signal by a d-dimensional diffusion process X_t governed by the following stochastic differential equation:

dX_t = b(X_t) \, dt + c(X_t) \, dW_t + σ(X_t) \, dB_t,    (1.11)

where W and B are two independent Brownian motions taking values in R^m and R^d, respectively. The mappings b : R^d → R^d, c : R^d → R^{d×m}, and σ : R^d → R^{d×d} are continuous.
The observation process is an m-dimensional process satisfying the following stochastic differential equation:

Y_t = \int_0^t h(X_s) \, ds + W_t,    (1.12)

where h : R^d → R^m is a continuous mapping. Let

G_t = σ(Y_s : 0 ≤ s ≤ t)

be the information available to us.
Note that such a setup does not cover the model in the pollution example, which needs an infinite-dimensional state space. Its solution is beyond the scope of this book.

As we will demonstrate in Chapter 5, the non-linear optimal filter is a P(R^d)-valued process π_t that is the conditional distribution of X_t given G_t, where P(R^d) denotes the collection of all Borel probability measures on R^d.
The key to the development of non-linear filtering theory is the Kallianpur–Striebel formula, which represents the optimal filter π_t in terms of an unnormalized filter V_t:

⟨π_t, f⟩ = \frac{⟨V_t, f⟩}{⟨V_t, 1⟩},  ∀ f ∈ C_b^2(R^d),    (1.13)

where

⟨V_t, f⟩ = \hat{E}(M_t f(X_t) \mid G_t),

and \hat{E} refers to the expectation with respect to a probability measure \hat{P}, which is equivalent to P and satisfies

\left. \frac{dP}{d\hat{P}} \right|_{F_t} = M_t.

The process V_t takes values in M_F(R^d), the space of finite Borel measures on R^d.
The main advantage of using \hat{P} is that Y becomes a Brownian motion that is independent of B, and X_t is governed by a stochastic differential equation driven by B and Y. Based on this fact, a stochastic differential equation on M_F(R^d) is derived:

⟨V_t, f⟩ = ⟨V_0, f⟩ + \int_0^t ⟨V_s, Lf⟩ \, ds + \int_0^t ⟨V_s, ∇^* f c + f h^*⟩ \, dY_s,    (1.14)

where

Lf = ∇^* f b + \frac{1}{2} \operatorname{tr}\left( c^* ∂^2 f \, c + σ^* ∂^2 f \, σ \right) = \frac{1}{2} \sum_{i,j=1}^d a^{ij} ∂_{ij}^2 f + \sum_{i=1}^d b^i ∂_i f,

and a = cc^* + σσ^*. The equation above is called Zakai's equation for the unnormalized filter. Applying Itô's formula to the Kallianpur–Striebel formula and Zakai's equation, we can obtain the filtering equation for π_t:

⟨π_t, f⟩ = ⟨π_0, f⟩ + \int_0^t ⟨π_s, Lf⟩ \, ds + \int_0^t \left( ⟨π_s, ∇^* f c + f h^*⟩ - ⟨π_s, f⟩⟨π_s, h^*⟩ \right) dν_s,    (1.15)

where

ν_t = Y_t - \int_0^t ⟨π_s, h⟩ \, ds

is a Brownian motion with respect to the original probability measure P. The process ν_t is called the innovation process, and the filtering equation is called the Kushner–Stratonovich equation, or the FKK equation (which stands for Fujisaki–Kallianpur–Kunita).
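As a preview of the numerical methods of Chapter 8, the Kallianpur–Striebel normalization (1.13) can be implemented directly with weighted Monte Carlo samples. The sketch below treats the scalar case with a static signal (b = σ = 0 and h(x) = x), an assumption made so that the exact answer is known: for an N(0, 1) initial law and Y_t = X t + W_t, the conditional mean at time T is Y_T / (1 + T).

```python
import numpy as np

def weighted_filter(Y, dt, n_particles, b, sigma, h, seed=3):
    """Weighted-particle approximation of the optimal filter pi_t.

    Particles move with the signal dynamics dX = b(X)dt + sigma(X)dB and carry
    the unnormalized weights exp(sum h(X) dY - 0.5 sum h(X)^2 dt); the filter is
    the self-normalized weighted empirical measure, mirroring formula (1.13).
    Scalar signal and observation, N(0,1) initial law.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n_particles)
    logw = np.zeros(n_particles)
    for k in range(len(Y) - 1):
        hX = h(X)
        logw += hX * (Y[k + 1] - Y[k]) - 0.5 * hX**2 * dt   # weight update
        X = X + b(X) * dt + sigma(X) * np.sqrt(dt) * rng.standard_normal(n_particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                            # normalization, as in (1.13)
    return X, w

# Static signal, h(x) = x: the conditional mean is Y_T / (1 + T) in closed form.
rng = np.random.default_rng(4)
T, n, x_true = 1.0, 100, 0.8
dt = T / n
Y = np.concatenate(([0.0],
                    np.cumsum(x_true * dt + np.sqrt(dt) * rng.standard_normal(n))))
X, w = weighted_filter(Y, dt, 100_000, b=lambda x: 0.0 * x,
                       sigma=lambda x: 0.0 * x, h=lambda x: x)
posterior_mean = float(np.sum(w * X))   # close to Y[-1] / 2
```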
1.3 An overview
In this section, we give an outline of the results that will be studied in this book. In Chapters 2–4, we introduce the basic material of stochastic analysis that will be used in the rest of the book. We refer the reader who is interested in a more detailed treatment of this topic to the following books: Ikeda and Watanabe [76], Revuz and Yor [135], and Protter [134]. Then, in Chapter 5, we derive the Kallianpur–Striebel formula as well as the filtering equation (1.15) and Zakai's equation (1.14). Now we sketch the results of Chapters 6–11.
In Chapter 6, we study Zakai's equation (1.14) as a linear stochastic partial differential equation (SPDE). We make a linear transformation from M_F(R^d) to a Hilbert space H_0 such that equation (1.14) is transformed into an equation on H_0. We then use Hilbert-space techniques to derive various estimates for equations on H_0. As a consequence, we prove that equation (1.14) has a unique solution.
In Chapter 7, we study the filtering equation (1.15) as a non-linear SPDE. To get the uniqueness of the solution, we consider a particle system

X_t^i = X_0^i + \int_0^t σ(X_s^i) \, dB_s^i + \int_0^t \tilde{b}(X_s^i, μ_s) \, ds + \int_0^t c(X_s^i) \, dν_s,
A_t^i = A_0^i + \int_0^t A_s^i β^*(X_s^i, μ_s) \, dν_s,    (1.16)
μ_t = \lim_{n→∞} \frac{1}{n} \sum_{i=1}^n A_t^i δ_{X_t^i},

where ν, B^i, i = 1, 2, …, are independent Brownian motions, and

β(x, μ) = ⟨μ, h⟩ - h(x)  and  \tilde{b}(x, μ) = b(x) - c(x)β(x, μ)

for x ∈ R^d and μ ∈ P(R^d). It can be proved that π_t is a solution to equation (1.16), i.e. equation (1.16) holds with μ_t replaced by π_t for suitable (X_t^i, A_t^i), i = 1, 2, ….

Theorem 1.2 Under suitable conditions, the infinite system (1.16) has a unique solution (X, A, μ).
Next, we proceed to provide an intuitive proof of the uniqueness of the solution to equation (1.15). Let {μ_t} be another solution to equation (1.15). Then {μ_t} is a solution to the following linear SPDE:

⟨η_t, φ⟩ = ⟨π_0, φ⟩ + \int_0^t ⟨η_s, Lφ⟩ \, ds + \int_0^t ⟨η_s, β_s^* φ + ∇^* φ c⟩ \, dν_s,    (1.17)

where

β_s(x) = β(x, μ_s).
With μ_t given, we consider a system of the form of equation (1.16) as follows:

X_t^i = X_0^i + \int_0^t σ(X_s^i) \, dB_s^i + \int_0^t \tilde{b}(X_s^i, μ_s) \, ds + \int_0^t c(X_s^i) \, dν_s,
A_t^i = A_0^i + \int_0^t A_s^i β^*(X_s^i, μ_s) \, dν_s.    (1.18)

We define a measure-valued process

\tilde{μ}_t = \lim_{n→∞} \frac{1}{n} \sum_{i=1}^n A_t^i δ_{X_t^i}.
Then, \tilde{μ}_t is a solution of equation (1.17). Since the linear SPDE has a unique solution, we get that \tilde{μ}_t = μ_t and, hence, μ_t is a solution to the system (1.16). By the uniqueness of the solution for the system (1.16), we see that μ_t = π_t. Thus, we have "proved" the following:

Theorem 1.3 Under suitable conditions, the filtering equation (1.15) has a unique solution.
Next, in Chapter 8, we study the numerical approximation of the optimal filter using branching particle systems. Let {x_n^i, i = 1, 2, …, n} be i.i.d. random vectors in R^d with common distribution π_0 ∈ P(R^d). Then

π_0^n ≡ \frac{1}{n} \sum_{i=1}^n δ_{x_n^i} → π_0  in P(R^d).

Let δ = n^{-2α}, 0 < α < 1. For j = 0, 1, 2, …, we suppose that there are m_j^n particles alive at time t = jδ. During the time interval (jδ, (j+1)δ), the particles move according to the following diffusions: for i = 1, 2, …, m_j^n,

X_t^i = X_{jδ}^i + \int_{jδ}^t σ(X_s^i) \, dB_s^i + \int_{jδ}^t \hat{b}(X_s^i) \, ds + \int_{jδ}^t c(X_s^i) \, dY_s,    (1.19)

where \hat{b} = b - ch.
At the end of the interval, the ith particle (i = 1, 2, …, m_j^n) branches (independently of the others) into a random number ξ_{j+1}^i of offspring satisfying

ξ_{j+1}^i = \begin{cases} [\tilde{M}_{j+1}^n(X^i)] & \text{with probability } 1 - \{\tilde{M}_{j+1}^n(X^i)\}, \\ [\tilde{M}_{j+1}^n(X^i)] + 1 & \text{with probability } \{\tilde{M}_{j+1}^n(X^i)\}, \end{cases}

where {x} = x - [x] is the fractional part of x,

\tilde{M}_{j+1}^n(X^i) = \frac{M_{j+1}^n(X^i)}{\frac{1}{m_j^n} \sum_{\ell=1}^{m_j^n} M_{j+1}^n(X^\ell)},    (1.20)

and

M_{j+1}^n(X^i) = \exp\left( \int_{jδ}^{(j+1)δ} h^*(X_t^i) \, dY_t - \frac{1}{2} \int_{jδ}^{(j+1)δ} |h(X_t^i)|^2 \, dt \right).    (1.21)
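The branching rule above is a residual (randomized-rounding) scheme: it produces an integer number of offspring while keeping the conditional expectation exactly equal to the weight \tilde{M}. A minimal sketch:

```python
import numpy as np

def branch_offspring(M, rng):
    """Offspring number for a particle with weight M: [M] with probability
    1 - {M}, and [M] + 1 with probability {M}, where {x} = x - [x].
    Hence E[offspring] = [M](1 - {M}) + ([M] + 1){M} = M exactly."""
    base = int(np.floor(M))
    frac = M - base
    return base + (rng.random() < frac)

rng = np.random.default_rng(5)
samples = [branch_offspring(1.7, rng) for _ in range(100_000)]
# every draw is 1 or 2, and the empirical mean is close to 1.7
```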
Now we define the approximate filter as follows:

π_t^n = \frac{1}{m_j^n} \sum_{i=1}^{m_j^n} \tilde{M}_j^n(X^i, t) δ_{X_t^i},  jδ ≤ t < (j+1)δ,