

P1: JZP
0 521 86300 7 Printer: cupusbw 0521863007pre May 17, 2006 18:4 Char Count= 0
CAMBRIDGE STUDIES IN
ADVANCED MATHEMATICS 100
MARKOV PROCESSES, GAUSSIAN
PROCESSES, AND LOCAL TIMES
Written by two of the foremost researchers in the field, this book stud-
ies the local times of Markov processes by employing isomorphism theo-
rems that relate them to certain associated Gaussian processes. It builds
to this material through self-contained but harmonized “mini-courses”
on the relevant ingredients, which assume only knowledge of measure-
theoretic probability. The streamlined selection of topics creates an easy
entrance for students and experts in related fields.
The book starts by developing the fundamentals of Markov process
theory and then of Gaussian process theory, including sample path prop-
erties. It then proceeds to more advanced results, bringing the reader to
the heart of contemporary research. It presents the remarkable isomor-
phism theorems of Dynkin and Eisenbaum and then shows how they
can be applied to obtain new properties of Markov processes by using
well-established techniques in Gaussian process theory. This original,
readable book will appeal to both researchers and advanced graduate
students.
Cambridge Studies in Advanced Mathematics
Editorial Board:


Béla Bollobás, William Fulton, Anatole Katok, Frances Kirwan, Peter Sarnak,
Barry Simon, Burt Totaro
All the titles listed below can be obtained from good booksellers or from
Cambridge University Press. For a complete series listing, visit
www.cambridge.org.

Recently published
71 R. Blei Analysis in Integer and Fractional Dimensions
72 F. Borceux & G. Janelidze Galois Theories
73 B. Bollobás Random Graphs 2nd Edition
74 R. M. Dudley Real Analysis and Probability 2nd Edition
75 T. Sheil-Small Complex Polynomials
76 C. Voisin Hodge Theory and Complex Algebraic Geometry I
77 C. Voisin Hodge Theory and Complex Algebraic Geometry II
78 V. Paulsen Completely Bounded Maps and Operator Algebras
79 F. Gesztesy & H. Holden Soliton Equations and Their Algebra-Geometric
Solutions I
81 S. Mukai An Introduction to Invariants and Moduli
82 G. Tourlakis Lectures in Logic and Set Theory I
83 G. Tourlakis Lectures in Logic and Set Theory II
84 R. A. Bailey Association Schemes
85 J. Carlson, S. Müller-Stach & C. Peters Period Mappings and Period Domains
86 J. J. Duistermaat & J. A. C. Kolk Multidimensional Real Analysis I
87 J. J. Duistermaat & J. A. C. Kolk Multidimensional Real Analysis II
89 M. C. Golumbic & A. N. Trenk Tolerance Graphs
90 L. H. Harper Global Methods for Combinatorial Isoperimetric Problems
91 I. Moerdijk & J. Mrčun Introduction to Foliations and Lie Groupoids
92 J. Kollár, K. E. Smith & A. Corti Rational and Nearly Rational Varieties
93 D. Applebaum Lévy Processes and Stochastic Calculus
95 M. Schechter An Introduction to Nonlinear Analysis
96 R. Carter Lie Algebras of Finite and Affine Type
97 H. L. Montgomery & R. C. Vaughan Multiplicative Number Theory

98 I. Chavel Riemannian Geometry
99 D. Goldfeld Automorphic Forms and L-Functions for the Group GL(n,R)
MARKOV PROCESSES, GAUSSIAN
PROCESSES, AND LOCAL TIMES
MICHAEL B. MARCUS
City College and the CUNY Graduate Center
JAY ROSEN
College of Staten Island and the CUNY Graduate Center
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 2RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521863001

© Michael B. Marcus and Jay Rosen 2006

This publication is in copyright. Subject to statutory exception and to the provision of
relevant collective licensing agreements, no reproduction of any part may take place
without the written permission of Cambridge University Press.

First published in print format 2006

ISBN-13 978-0-511-24696-8 eBook (NetLibrary)
ISBN-10 0-511-24696-X eBook (NetLibrary)
ISBN-13 978-0-521-86300-1 hardback
ISBN-10 0-521-86300-7 hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs
for external or third-party internet websites referred to in this publication, and does not
guarantee that any content on such websites is, or will remain, accurate or appropriate.
To our wives
Jane Marcus
and
Sara Rosen

Contents
1 Introduction page 1
1.1 Preliminaries 6
2 Brownian motion and Ray–Knight Theorems 11
2.1 Brownian motion 11
2.2 The Markov property 19
2.3 Standard augmentation 28
2.4 Brownian local time 31
2.5 Terminal times 42
2.6 The First Ray–Knight Theorem 48
2.7 The Second Ray–Knight Theorem 53
2.8 Ray’s Theorem 56
2.9 Applications of the Ray–Knight Theorems 58
2.10 Notes and references 61
3 Markov processes and local times 62

3.1 The Markov property 62
3.2 The strong Markov property 67
3.3 Strongly symmetric Borel right processes 73
3.4 Continuous potential densities 78
3.5 Killing a process at an exponential time 81
3.6 Local times 83
3.7 Jointly continuous local times 98
3.8 Calculating u_{T_0} and u_{τ(λ)} 105
3.9 The h-transform 109
3.10 Moment generating functions of local times 115
3.11 Notes and references 119
4 Constructing Markov processes 121
4.1 Feller processes 121
4.2 Lévy processes 135
4.3 Diffusions 144
4.4 Left limits and quasi left continuity 147
4.5 Killing at a terminal time 152
4.6 Continuous local times and potential densities 162
4.7 Constructing Ray semigroups and Ray processes 164
4.8 Local Borel right processes 178
4.9 Supermedian functions 182
4.10 Extension Theorem 184
4.11 Notes and references 188

5 Basic properties of Gaussian processes 189
5.1 Definitions and some simple properties 189
5.2 Moment generating functions 198
5.3 Zero–one laws and the oscillation function 203
5.4 Concentration inequalities 214
5.5 Comparison theorems 227
5.6 Processes with stationary increments 235
5.7 Notes and references 240
6 Continuity and boundedness of Gaussian processes 243
6.1 Sufficient conditions in terms of metric entropy 244
6.2 Necessary conditions in terms of metric entropy 250
6.3 Conditions in terms of majorizing measures 255
6.4 Simple criteria for continuity 270
6.5 Notes and references 280
7 Moduli of continuity for Gaussian processes 282
7.1 General results 282
7.2 Processes on R^n 297
7.3 Processes with spectral densities 317
7.4 Local moduli of associated processes 324
7.5 Gaussian lacunary series 336
7.6 Exact moduli of continuity 347
7.7 Squares of Gaussian processes 356
7.8 Notes and references 361
8 Isomorphism Theorems 362
8.1 Isomorphism theorems of Eisenbaum and Dynkin 362
8.2 The Generalized Second Ray–Knight Theorem 370
8.3 Combinatorial proofs 380
8.4 Additional proofs 390

8.5 Notes and references 394
9 Sample path properties of local times 396
9.1 Bounded discontinuities 396
9.2 A necessary condition for unboundedness 403
9.3 Sufficient conditions for continuity 406
9.4 Continuity and boundedness of local times 410
9.5 Moduli of continuity 417
9.6 Stable mixtures 437
9.7 Local times for certain Markov chains 441
9.8 Rate of growth of unbounded local times 447
9.9 Notes and references 454
10 p-variation 456
10.1 Quadratic variation of Brownian motion 456
10.2 p-variation of Gaussian processes 457
10.3 Additional variational results for Gaussian processes 467
10.4 p-variation of local times 479
10.5 Additional variational results for local times 482
10.6 Notes and references 495
11 Most visited sites of symmetric stable processes 497
11.1 Preliminaries 497
11.2 Most visited sites of Brownian motion 504
11.3 Reproducing kernel Hilbert spaces 511
11.4 The Cameron–Martin Formula 516
11.5 Fractional Brownian motion 519
11.6 Most visited sites of symmetric stable processes 523
11.7 Notes and references 526
12 Local times of diffusions 530
12.1 Ray’s Theorem for diffusions 530
12.2 Eisenbaum’s version of Ray’s Theorem 534

12.3 Ray’s original theorem 537
12.4 Markov property of local times of diffusions 543
12.5 Local limit laws for h-transforms of diffusions 549
12.6 Notes and references 550
13 Associated Gaussian processes 551
13.1 Associated Gaussian processes 552
13.2 Infinitely divisible squares 560
13.3 Infinitely divisible squares and associated processes 570
13.4 Additional results about M-matrices 578
13.5 Notes and references 579
14 Appendix 580
14.1 Kolmogorov’s Theorem for path continuity 580
14.2 Bessel processes 581
14.3 Analytic sets and the Projection Theorem 583
14.4 Hille–Yosida Theorem 587
14.5 Stone–Weierstrass Theorems 589
14.6 Independent random variables 590
14.7 Regularly varying functions 594
14.8 Some useful inequalities 596
14.9 Some linear algebra 598
References 603
Index of notation 611
Author index 613
Subject index 616
1
Introduction
We found it difficult to choose a title for this book. Clearly we are not
covering the theory of Markov processes, Gaussian processes, and local
times in one volume. A more descriptive title would have been “A Study

of the Local Times of Strongly Symmetric Markov Processes Employ-
ing Isomorphisms That Relate Them to Certain Associated Gaussian
Processes.” The innovation here is that we can use the well-developed
theory of Gaussian processes to obtain new results about local times.
Even with the more restricted title there is a lot of material to cover.
Since we want this book to be accessible to advanced graduate students,
we try to provide a self-contained development of the Markov process
theory that we require. Next, since the crux of our approach is that we
can use sophisticated results about the sample path properties of Gaus-
sian processes to obtain similar sample path properties of the associated
local times, we need to present this aspect of the theory of Gaussian
processes. Furthermore, interesting questions about local times lead us
to focus on some properties of Gaussian processes that are not usually
featured in standard texts, such as processes with spectral densities or
those that have infinitely divisible squares. Occasionally, as in the study
of the p-variation of sample paths, we obtain new results about Gaussian
processes.
Our third concern is to present the wonderful, mysterious isomor-
phism theorems that relate the local times of strongly symmetric Markov
processes to associated mean zero Gaussian processes. Although some
inkling of this idea appeared earlier in Brydges, Fröhlich and Spencer
(1982) we think that credit for formulating it in an intriguing and usable
format is due to E. B. Dynkin (1983), (1984). Subsequently, after our ini-
tial paper on this subject, Marcus and Rosen (1992d), in which we use
Dynkin’s Theorem, N. Eisenbaum (1995) found an unconditioned iso-
morphism that seems to be easier to use. After this Eisenbaum, Kaspi,
Marcus, Rosen and Shi (2000) found a third isomorphism theorem, which
we refer to as the Generalized Second Ray–Knight Theorem, because it
is a generalization of this important classical result.
Dynkin’s and Eisenbaum’s proofs contain a lot of difficult combina-
torics, as does our proof of Dynkin’s Theorem in Marcus and Rosen
(1992d). Several years ago we found much simpler proofs of these theo-
rems. Being able to present this material in a relatively simple way was
our primary motivation for writing this book.
The classical Ray–Knight Theorems are isomorphisms that relate lo-
cal times of Brownian motion and squares of independent Brownian mo-
tions. In the three isomorphism theorems we just referred to, these the-
orems are extended to give relationships between local times of strongly
symmetric Markov processes and the squares of associated Gaussian pro-
cesses. A Markov process with symmetric transition densities is strongly
symmetric. Its associated Gaussian process is the mean zero Gaussian
process with covariance equal to its 0-potential density. (If the Markov
process, say X, does not have a 0-potential, one can consider X̃, the
process X killed at the end of an independent exponential time with
mean 1/α. The 0-potential density of X̃ is the α-potential density of
X.)
As an example of how the isomorphism theorems are used and of the
kinds of results we obtain, we mention that we show that there exists
a jointly continuous version of the local times of a strongly symmet-
ric Markov process if and only if the associated Gaussian process has
a continuous version. We obtain this result as an equivalence, without
obtaining conditions that imply that either process is continuous. How-
ever, conditions for the continuity of Gaussian processes are known, so
we know them for the joint continuity of the local times.

M. Barlow and J. Hawkes obtained a sufficient condition for the joint
continuity of the local times of Lévy processes in Barlow (1985) and
Barlow and Hawkes (1985), which Barlow showed, in Barlow (1988), is
also necessary. Gaussian processes do not enter into the proofs of their
results. (Although they do point out that their conditions are also nec-
essary and sufficient conditions for the continuity of related stationary
Gaussian processes.) This stimulating work motivated us to look for a
more direct link between Gaussian processes and local times and led us
to Dynkin’s isomorphism theorem.
We must point out that the work of Barlow and Hawkes just cited ap-
plies to all Lévy processes, whereas the isomorphism theorem approach
that we present applies only to symmetric Lévy processes. Neverthe-
less, our approach is not limited to Lévy processes and also opens up
the possibility of using Gaussian process theory to obtain many other
interesting properties of local times.
Another confession we must make is that we do not really under-
stand the actual relationship between local times of strongly symmetric
Markov processes and their associated Gaussian processes. That is, we
have several functional equivalences between these disparate objects and
can manipulate them to obtain many interesting results, but if one asks
us, as is often the case during lectures, to give an intuitive description of
how local times of Markov processes and Gaussian processes are related, we
must answer that we cannot. We leave this extremely interesting ques-
tion to you. Nevertheless, there now exist interesting characterizations
of the Gaussian processes that are associated with Markov processes.
We say more about this in our discussion of the material in Chapter 13.
The isomorphism theorems can be applied to very general classes of
Markov processes. In this book, with the exception of Chapter 13, we
consider Borel right processes. To ease the reader into this degree of

generality, and to give an idea of the direction in which we are going,
in Chapter 2 we begin the discussion of Markov processes by focusing
on Brownian motion. For Brownian motion these isomorphisms are old
stuff but because, in the case of Brownian motion, the local times of
Brownian motion are related to the squares of independent Brownian
motion, one does not really leave the realm of Markov processes. That
is, we think that in the classical Ray–Knight Theorems one can view
Brownian motion as a Markov process, which it is, rather than as a
Gaussian process, which it also is.
Chapters 2–4 develop the Markov process material we need for this
book. Naturally, there is an emphasis on local times. There is also
an emphasis on computing the potential density of strongly symmetric
Markov processes, since it is through the potential densities that we
associate the local times of strongly symmetric Markov processes with
Gaussian processes. Even though Chapter 2 is restricted to Brownian
motion, there is a lot of fundamental material required to construct the
σ-algebras of the probability space that enables us to study local times.
We do this in such a way that it also holds for the much more general
Markov processes studied in Chapters 3 and 4. Therefore, although
many aspects of Chapter 2 are repeated in greater generality in Chapters
3 and 4, the latter two chapters are not independent of Chapter 2.
In the beginning of Chapter 3 we study general Borel right processes
with locally compact state spaces but soon restrict our attention to
strongly symmetric Borel right processes with continuous potential den-
sities. This restriction is tailored to the study of local times of Markov
processes via their associated mean zero Gaussian processes. Also, even
though this restriction may seem to be significant from the perspective
of the general theory of Markov processes, it makes it easier to intro-
duce the beautiful theory of Markov processes. We are able to obtain

many deep and interesting results, especially about local times, relatively
quickly and easily. We also consider h-transforms and generalizations of
Kac’s Theorem, both of which play a fundamental role in proving the
isomorphism theorems and in applying them to the study of local times.
Chapter 4 deals with the construction of Markov processes. We first
construct Feller processes and then use them to show the existence of
Lévy processes. We also consider several of the finer properties of Borel
right processes. Lastly, we construct a generalization of Borel right
processes that we call local Borel right processes. These are needed in
Chapter 13 to characterize associated Gaussian processes. This requires
the introduction of Ray semigroups and Ray processes.
Chapters 5–7 are an exposition of sample path properties of Gaussian
processes. Chapter 5 deals with structural properties of Gaussian pro-
cesses and lays out the basic tools of Gaussian process theory. One of the
most fundamental tools in this theory is the Borell, Sudakov–Tsirelson
isoperimetric inequality. As far as we know this is stated without a com-
plete proof in earlier books on Gaussian processes because the known
proofs relied on the Brunn–Minkowski inequality, which was deemed to be
too far afield to include its proof. We give a new, analytical proof of the
Borell, Sudakov–Tsirelson isoperimetric inequality due to M. Ledoux in
Section 5.4.
Chapter 6 presents the work of R. M. Dudley, X. Fernique and M. Ta-
lagrand on necessary and sufficient conditions for continuity and bound-
edness of sample paths of Gaussian processes. This important work
has been polished throughout the years in several texts, Ledoux and
Talagrand (1991), Fernique (1997), and Dudley (1999), so we can give
efficient proofs. Notably, we give a simpler proof of Talagrand’s neces-
sary condition for continuity involving majorizing measures, also due to
Talagrand, than the one in Ledoux and Talagrand (1991). Our presen-
tation in this chapter relies heavily on Fernique’s excellent monograph,

Fernique (1997).
Chapter 7 considers uniform and local moduli of continuity of Gaus-
sian processes. We treat this question in general in Section 7.1. In most
of the remaining sections in this chapter, we focus our attention on real-
valued Gaussian processes with stationary increments, {G(t), t ∈ R^1},
for which the increments variance, σ²(t − s) := E(G(t) − G(s))², is rela-
tively smooth. This may appear old fashioned to the Gaussian purist but
it is exactly these processes that are associated with real-valued Lévy
processes. (And Lévy processes with values in R^n have local times only
when n = 1.) Some results developed in this section and its applications
in Section 9.5 have not been published elsewhere.
Chapters 2–7 develop the prerequisites for the book. Except for Sec-
tion 3.7, the material at the end of Chapter 4 relating to local Borel right
processes, and a few other items that are referenced in later chapters,
they can be skipped by readers with a good background in the theory
of Gaussian and Markov processes.
In Chapter 8 we prove the three main isomorphism theorems that we
use. Even though we are pleased to be able to give simple proofs that
avoid the difficult combinatorics of the original proofs of these theorems,
in Section 8.3 we give the combinatoric proofs, both because they are
interesting and because they may be useful later on.

Chapter 9 puts everything together to give sample path properties
of local times. Some of the proofs are short, simply a reiteration of
results that have been established in earlier chapters. At this point
in the book we have given all the results in our first two joint papers
on local times and isomorphism theorems (Marcus and Rosen, 1992a,
1992d). We think that we have filled in all the details and that many of
the proofs are much simpler. We have also laid the foundation to obtain
other interesting sample path properties of local times, which we present
in Chapters 10–13.
In Chapter 10 we consider the p-variation of the local times of sym-
metric stable processes, 1 < p ≤ 2 (this includes Brownian motion).
To use our isomorphism theorem approach we first obtain results on
the p-variation of fractional Brownian motion that generalize results of
Dudley (1973) and Taylor (1972) that were obtained for Brownian mo-
tion. These are extended to the squares of fractional Brownian motion
and then carried over to give results about the local times of symmetric
stable processes.
Chapter 11 presents results of Bass, Eisenbaum and Shi (2000) on the
range of the local times of symmetric stable processes as time goes to
infinity and shows that the most visited site of such processes is transient.
Our approach is different from theirs. We use an interesting bound for
the behavior of stable processes in a neighborhood of the origin due to
Molchan (1999), which itself is based on properties of the reproducing
kernel Hilbert spaces of fractional Brownian motions.
In Chapter 12 we reexamine Ray’s early isomorphism theorem for the
h-transform of a transient regular symmetric diffusion, Ray (1963), and
give our own, simpler version. We also consider the Markov properties
of the local times of diffusions.
In Chapter 13, which is based on recent work of N. Eisenbaum and

H. Kaspi that appears in Eisenbaum (2003), Eisenbaum (2005), and
Eisenbaum and Kaspi (2006), we take up the problem of characteriz-
ing associated Gaussian processes. To obtain several equivalencies we
must generalize Borel right processes to what we call local Borel right
processes. In Theorem 13.3.1 we see that associated Gaussian processes
are just a little less general than the class of Gaussian processes that
have infinitely divisible squares. Gaussian processes with infinitely di-
visible squares are characterized in Griffiths (1984) and Bapat (1989).
We present their results in Section 13.2.
We began our joint research that led to this book over 19 years ago.
In the course of this time we received valuable help from R. Adler, M.
Barlow, H. Kaspi, E. B. Dynkin, P. Fitzsimmons, R. Getoor, E. Giné,
M. Talagrand, and J. Zinn. We express our thanks and gratitude to
them. We also acknowledge the help of P. A. Meyer.
In the preparation of this book we received valuable assistance and
advice from O. Daviaud, S. Dhamoon, V. Dobric, N. Eisenbaum, S.
Evans, P. Fitzsimmons, C. Houdré, H. Kaspi, W. Li, and J. Rosinski.
We thank them also.
We are also grateful for the continued support of the National Science
Foundation and PSC–CUNY throughout the writing of this book.
1.1 Preliminaries
In this book Z denotes the integers, both positive and negative, and IN or
sometimes N denotes the positive integers including 0. R^1 denotes the
real line and R_+ the positive half line (including zero). R̄ denotes the
extended real line [−∞, ∞]. R^n denotes n-dimensional space and
|·| denotes Euclidean distance in R^n. We say that a real number a is
positive if a ≥ 0. To specify that a > 0, we might say that it is strictly
positive. A similar convention is used for negative and strictly negative.
Measurable spaces:
A measurable space is a pair (Ω, F), where Ω is a set
and F is a sigma-algebra of subsets of Ω. If Ω is a topological space, we
use B(Ω) to denote the Borel σ-algebra of Ω. Bounded B(Ω) measurable
functions on Ω are denoted by B_b(Ω).
Let t ∈ R_+. A filtration of F is an increasing family of sub σ-algebras
F_t of F, that is, for 0 ≤ s < t < ∞, F_s ⊂ F_t ⊂ F with F = ∪_{0≤t<∞} F_t.
(Sometimes we describe this by saying that F is filtered.) To emphasize
a specific filtration F_t of F, we sometimes write (Ω, F, F_t).
Let M and N denote two σ-algebras of subsets of Ω. We use M ∨ N
to denote the σ-algebra generated by M ∪ N.
Probability spaces:
A probability space is a triple (Ω, F, P), where (Ω, F)
is a measurable space and P is a probability measure on Ω. A random
variable, say X, is a measurable function on (Ω, F, P). In general we
let E denote the expectation operator on the probability space. When
there are many random variables defined on (Ω, F, P), say Y, Z, . . . , we
use E_Y to denote expectation with respect to Y. When dealing with a
probability space, when it seems clear what we mean, we feel free to use
E or even expressions like E_Y without defining them. As usual, we let
ω denote the elements of Ω. As with E, we often use ω in this context
without defining it.
When X is a random variable we call a number a a median of X if

P(X ≤ a) ≥ 1/2 and P(X ≥ a) ≥ 1/2. (1.1)
Note that a is not necessarily unique.
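Since (1.1) is just a pair of inequalities, it is easy to test mechanically. The sketch below is our own illustration (the helper names are ours, not from the text): for a fair coin on {0, 1}, every point of [0, 1] satisfies (1.1), so a median is indeed not unique.

```python
from fractions import Fraction

def is_median(a, pmf):
    """Check (1.1): P(X <= a) >= 1/2 and P(X >= a) >= 1/2."""
    half = Fraction(1, 2)
    p_le = sum(p for x, p in pmf.items() if x <= a)
    p_ge = sum(p for x, p in pmf.items() if x >= a)
    return p_le >= half and p_ge >= half

# Fair coin: P(X = 0) = P(X = 1) = 1/2.
coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Every a in [0, 1] is a median; nothing outside [0, 1] is.
assert is_median(Fraction(1, 2), coin)
assert is_median(0, coin) and is_median(1, coin)
assert not is_median(2, coin)
```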

A stochastic process X on (Ω, F, P) is a family of measurable functions
{X_t, t ∈ I}, where I is some index set. In this book, t usually represents
“time” and we generally consider {X_t, t ∈ R_+}. σ(X_r; r ≤ t) denotes
the smallest σ-algebra for which {X_r; r ≤ t} is measurable. Sometimes
it is convenient to describe a stochastic process as a random variable
on a function space, endowed with a suitable σ-algebra and probability
measure.
In general, in this book, we reserve (Ω, F,P) for a probability space.
We generally use (S, S,µ) to indicate more general measure spaces. Here
µ is a positive (i.e., nonnegative) σ-finite measure.
Function spaces:
Let f be a measurable function on (S, S, µ). The L^p(µ)
(or simply L^p), 1 ≤ p < ∞, spaces are the families of functions f for
which ∫_S |f(s)|^p dµ(s) < ∞ with

‖f‖_p := (∫_S |f(s)|^p dµ(s))^{1/p}. (1.2)

Sometimes, when we need to be precise, we may write ‖f‖_{L^p(S)} instead
of ‖f‖_p. As usual we set

‖f‖_∞ = sup_{s∈S} |f(s)|. (1.3)
These definitions have analogs for sequence spaces. For 1 ≤ p < ∞, ℓ_p
is the family of sequences {a_k}_{k=0}^∞ of real or complex numbers such that
∑_{k=0}^∞ |a_k|^p < ∞. In this case, ‖{a_k}‖_p := (∑_{k=0}^∞ |a_k|^p)^{1/p} and
‖{a_k}‖_∞ := sup_{0≤k<∞} |a_k|. We use ℓ_p^n to denote sequences in ℓ_p
with n elements.
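These norms are straightforward to compute directly. A small sketch (ours; the helper names are not from the text) evaluates ‖·‖_p and ‖·‖_∞ for a short sequence and illustrates that ‖a‖_p decreases toward ‖a‖_∞ as p grows:

```python
def lp_norm(a, p):
    """||a||_p = (sum |a_k|^p)^(1/p), for 1 <= p < infinity."""
    return sum(abs(x) ** p for x in a) ** (1.0 / p)

def sup_norm(a):
    """||a||_infinity = sup |a_k|."""
    return max(abs(x) for x in a)

a = [3.0, -4.0]
assert lp_norm(a, 2) == 5.0   # Euclidean norm of (3, -4)
assert lp_norm(a, 1) == 7.0
assert sup_norm(a) == 4.0
# ||a||_p approaches ||a||_infinity as p grows.
assert abs(lp_norm(a, 50) - sup_norm(a)) < 0.1
```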
Let m be a measure on a topological space (S, S). By an approxi-
mate identity or δ-function at y, with respect to m, we mean a family
{f_{ε,y}; ε > 0} of positive continuous functions on S such that
∫ f_{ε,y}(x) dm(x) = 1 and each f_{ε,y} is supported on a compact
neighborhood K_ε of y with K_ε ↓ {y} as ε → 0.

Let f and g be two real-valued functions on R^1. We say that f is
asymptotic to g at zero and write f ∼ g if lim_{x→0} f(x)/g(x) = 1. We
say that f is comparable to g at zero and write f ≈ g if there exist
constants 0 < C_1 ≤ C_2 < ∞ such that C_1 ≤ lim inf_{x→0} f(x)/g(x) and
lim sup_{x→0} f(x)/g(x) ≤ C_2. We use essentially the same definitions at
infinity.
Let f be a function on R^1. We use the notation lim_{y↑↑x} f(y) to be the
limit of f(y) as y increases to x, for all y < x, that is, the left-hand (or
simply left) limit of f at x.
Metric spaces:

Let (S, τ) be a locally compact metric or pseudo-metric
space. A pseudo-metric has the same properties as a metric except that
τ(s, t) = 0 does not imply that s = t. Abstractly, one can turn a pseudo-
metric into a metric by making the zeros of the pseudo-metric into an
equivalence class, but in the study of stochastic processes pseudo-metrics
are unavoidable. For example, suppose that X = {X(t),t ∈ [0, 1]} is a
real-valued stochastic process. In studying sample path properties of X
it is natural to consider (R^1, |·|), a metric space. However, X may be
completely determined by an L^2 metric, such as

d(s, t) := d_X(s, t) := (E(X(s) − X(t))²)^{1/2} (1.4)

(and an additional condition such as EX²(t) = 1). Therefore, it is
natural to also consider the space (R^1, d). This may be a pseudo-metric
space since d need not be a metric on R^1.
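To see concretely why d in (1.4) can fail to be a metric, consider a toy example of our own (not from the text): the degenerate process X(t) ≡ ξ with ξ standard normal has d(s, t) = 0 for every pair s ≠ t, while for Brownian motion d(s, t) = |t − s|^{1/2}:

```python
import math

def d_bm(s, t):
    # Canonical metric of Brownian motion: E(W_s - W_t)^2 = |t - s|.
    return math.sqrt(abs(t - s))

def d_constant(s, t):
    # X(t) = xi for all t: E(X(s) - X(t))^2 = 0 identically.
    return 0.0

assert d_bm(0.0, 1.0) == 1.0
assert d_bm(1.0, 1.25) == 0.5
# d_constant vanishes off the diagonal, so it is a pseudo-metric
# on R^1 but not a metric.
assert d_constant(0.0, 1.0) == 0.0
```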
If A ⊂ S, we set

τ(s, A) := inf_{u∈A} τ(s, u). (1.5)
We use C(S) to denote the continuous functions on S, C_b(S) to de-
note the bounded continuous functions on S, and C_b^+(S) to denote the
positive bounded continuous functions on S. We use C_κ(S) to denote
the continuous functions on S with compact support; C_0(S) denotes the
functions on S that go to 0 at infinity. Nevertheless, C_0^∞(S) denotes in-
finitely differentiable functions on S with compact support (whenever S
is a space for which this is defined). In all these cases we mean continuity
with respect to the metric or pseudo-metric τ.
We say that a function is locally uniformly continuous on a measurable
set in (S, τ) if it is uniformly continuous on all compact subsets of (S, τ ).
We say that a sequence of functions converges locally uniformly on (S, τ )
if it converges uniformly on all compact subsets of (S, τ ).
Separability:
Let T be a separable metric space, and let X = {X(t), t ∈
T} be a stochastic process on (Ω, F, P) with values in R^n. X is said to
be separable if there is a countable set D ⊂ T and a P-null set Λ ⊂ F
such that, for any open set U ⊂ T and closed set A ⊂ R^n,

{X(t) ∈ A, t ∈ D ∩ U} \ {X(t) ∈ A, t ∈ U} ⊂ Λ. (1.6)

If X is separable and U ⊂ T is an open set and Λ is as above, then
ω ∉ Λ implies

sup_{t∈D∩U} |X(t, ω)| = sup_{t∈U} |X(t, ω)| (1.7)
inf_{t∈D∩U} |X(t, ω)| = inf_{t∈U} |X(t, ω)|.

If T is a separable metric space, every stochastic process X = {X(t),
t ∈ T} with values in R^n has a separable version X̃ = {X̃(t), t ∈ T},
that is, P(X̃(t) = X(t)) = 1, for all t ∈ T, and X̃ is separable for some
D and Λ.
If X is stochastically continuous, that is, lim_{t→t_0} P(|X(t) − X(t_0)| >
ε) = 0, for every ε > 0 and t_0 ∈ T, then any countable dense set V ⊂ T
serves as the set D in the separability condition (sometimes called the
separability set). The P-null set Λ generally depends on the choice of
V.
Fourier transform:
We often give results with precise constants, so we
need to describe what version of the Fourier transform we are using. Let
f ∈ L^2(R^1). Consistent with the standard definition of the characteristic
function, the Fourier transform f̂ of f is defined by

f̂(λ) = ∫_{−∞}^{∞} e^{iλx} f(x) dx, (1.8)

where the integral exists in the L^2 sense. The inverse Fourier transform
is given by

f(x) = (1/2π) ∫_{−∞}^{∞} e^{−iλx} f̂(λ) dλ. (1.9)

With this normalization, Parseval's Theorem is

∫_{−∞}^{∞} f(x) \overline{g(x)} dx = (1/2π) ∫_{−∞}^{∞} f̂(λ) \overline{ĝ(λ)} dλ. (1.10)
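The normalization (1.8)–(1.10) can be checked numerically. The sketch below (ours, not from the text) takes f = g to be the standard normal density, whose transform under (1.8) is e^{−λ²/2}, and verifies Parseval's Theorem by quadrature; both sides equal 1/(2√π):

```python
import math

def normal_density(x):
    # Standard normal density.
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def trapezoid(fn, a, b, n):
    # Composite trapezoid rule on [a, b] with n panels.
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n))
    return s * h

def f_hat(lam):
    # (1.8) for a real, even f reduces to a cosine transform.
    return trapezoid(lambda x: math.cos(lam * x) * normal_density(x),
                     -10.0, 10.0, 4000)

# The transform of the standard normal density is e^{-lambda^2/2}.
for lam in (0.0, 0.5, 1.0, 2.0):
    assert abs(f_hat(lam) - math.exp(-lam * lam / 2.0)) < 1e-6

# Parseval (1.10) with f = g: both sides equal 1/(2 sqrt(pi)).
lhs = trapezoid(lambda x: normal_density(x) ** 2, -10.0, 10.0, 4000)
rhs = trapezoid(lambda l: math.exp(-l * l), -10.0, 10.0, 4000) / (2.0 * math.pi)
assert abs(lhs - 1.0 / (2.0 * math.sqrt(math.pi))) < 1e-9
assert abs(lhs - rhs) < 1e-9
```

The trapezoid rule is extremely accurate here because the integrands and all their derivatives are negligible at the truncation points ±10.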
2
Brownian motion and Ray–Knight Theorems
In this book we develop relationships between the local times of strongly
symmetric Markov processes and corresponding Gaussian processes. This
was done for Brownian motion over 40 years ago in the famous Ray–
Knight Theorems. In this chapter, which gives an overview of significant
parts of the book, we discuss Brownian motion, its local times, and
the Ray–Knight Theorems with an emphasis on those definitions and
properties which we generalize to a much larger class of processes in
subsequent chapters. Much of the material in this chapter is repeated
in greater generality in subsequent chapters.
2.1 Brownian motion
A normal random variable with mean zero and variance t, denoted by
N(0, t), is a random variable with a distribution function that has den-
sity

p_t(x) = e^{−x²/2t}/√(2πt),   x ∈ R^1 (2.1)

with respect to Lebesgue measure. (It is easy to check that a random
variable with density p_t(x) does have mean zero and variance t.) In
anticipation of using p_t as the transition density of a Markov process,
we sometimes use p_t(x, y) to denote p_t(y − x).
We give some important calculations involving p_t.
Lemma 2.1.1
(1) The Fourier transform of p_t(x) is

p̂_t(λ) := ∫_{−∞}^{∞} e^{iλx} p_t(x) dx = e^{−tλ²/2}. (2.2)

Equivalently, if ξ is N(0, t),

E(e^{iλξ}) = e^{−tλ²/2}. (2.3)

(2) If ξ is N(0, t) and ζ is N(0, s) and ξ and ζ are independent, then
ξ + ζ is N(0, s + t).
(3)

∫_{−∞}^{∞} p_s(x, y) p_t(y, z) dy = p_{s+t}(x, z). (2.4)

This equation is called the Chapman–Kolmogorov equation.
(4) For α > 0,

u^α(x) := ∫_0^∞ e^{−αt} p_t(x) dt = e^{−√(2α)|x|}/√(2α). (2.5)

(We see below that u^α is the α-potential density of standard Brownian motion.)
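The identities in Lemma 2.1.1 lend themselves to numerical verification. The following sketch (ours, not from the text; the function names are our own) checks the Chapman–Kolmogorov equation (2.4) by trapezoid quadrature:

```python
import math

def p(t, x, y):
    """Transition density p_t(x, y) = p_t(y - x) from (2.1)."""
    return math.exp(-(y - x) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def convolve(s, t, x, z, a=-15.0, b=15.0, n=6000):
    """Trapezoid approximation of int p_s(x, y) p_t(y, z) dy."""
    h = (b - a) / n
    total = 0.5 * (p(s, x, a) * p(t, a, z) + p(s, x, b) * p(t, b, z))
    total += sum(p(s, x, a + i * h) * p(t, a + i * h, z) for i in range(1, n))
    return total * h

# (2.4): integrating out y reproduces p_{s+t}(x, z).
for (s, t, x, z) in [(1.0, 2.0, 0.0, 1.0), (0.5, 0.5, -1.0, 2.0)]:
    assert abs(convolve(s, t, x, z) - p(s + t, x, z)) < 1e-8
```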
Proof For (2.2), we write iλx − x²/2t = −(x − iλt)²/2t − tλ²/2 so that

p̂_t(λ) = e^{−tλ²/2} (1/√(2πt)) ∫_{−∞}^{∞} e^{−(x−iλt)²/2t} dx (2.6)
       = e^{−tλ²/2} (1/√(2πt)) ∫_{−∞}^{∞} e^{−x²/2t} dx = e^{−tλ²/2}.

For the second equality note that (2πt)^{−1/2} ∫_{C_N} exp(−z²/(2t)) dz = 0,
where C_N is the rectangle in the complex plane determined by {x | −N ≤
x ≤ N} and {x − iλt | −N ≤ x ≤ N} (since exp(−z²/(2t)) is analytic),
and then take the limit as N goes to infinity.
Equation (2.3) is simply a rewriting of (2.2). It immediately gives
(2). Equation (2.4), for z = 0, follows from (2) and the fact that the
density of ξ + ζ, the sum of the independent random variables ξ and ζ,
is given by the convolution of the densities of ξ and ζ. For general z we
need only note that by a change of variables,

∫_{−∞}^{∞} p_s(x, y) p_t(y, z) dy = ∫_{−∞}^{∞} p_s(x − z, y) p_t(y, 0) dy.

For a slightly more direct proof of (2.4) consider p_t(x, y) = p_t(y − x)
as a function of y for some fixed x. The Fourier transform of p_t(y − x)
is e^{iλx} p̂_t(λ). Similarly, the Fourier transform of p_t(z − y) is e^{iλz} p̂_t(λ).
By Parseval's Theorem (1.10), the left-hand side of (2.4) is

(1/2π) ∫_{−∞}^{∞} e^{iλ(x−z)} p̂_t(λ) p̂_s(λ) dλ = (1/2π) ∫_{−∞}^{∞} e^{iλ(x−z)} e^{−(t+s)λ²/2} dλ
                                        = p_{s+t}(x, z) (2.7)

by (1.8), (1.9), and (2.2).
To prove (2.5) we note that by Fubini's Theorem and (2.2) the Fourier
transform of u^α(x) is

û^α(λ) = ∫_0^∞ e^{−αt} p̂_t(λ) dt = 1/(α + λ²/2). (2.8)

Taking the inverse Fourier transform we have

u^α(x) = (1/2π) ∫_{−∞}^{∞} e^{−iλx} (α + λ²/2)^{−1} dλ. (2.9)

Evaluating this integral in the complex plane using residues we get (2.5).
(For x ≥ 0, use the contour (−ρ, ρ) ∪ (ρe^{−iθ}, 0 ≤ θ ≤ π) in the clockwise
direction and for x < 0 use the contour (−ρ, ρ) ∪ (ρe^{iθ}, π ≤ θ ≤ 2π) in
the counterclockwise direction.)
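The closed form (2.5) can be confirmed the same way; this sketch (ours, not from the text) integrates e^{−αt} p_t(x) numerically over t and compares the result with e^{−√(2α)|x|}/√(2α):

```python
import math

def p(t, x):
    # Normal density (2.1) with variance t.
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def u_alpha_numeric(alpha, x, T=60.0, n=240000):
    """Trapezoid approximation of int_0^infty e^{-alpha t} p_t(x) dt."""
    h = T / n
    # The integrand vanishes as t -> 0 for x != 0 and decays like e^{-alpha t},
    # so truncating at T and skipping t = 0 loses a negligible amount.
    total = 0.5 * math.exp(-alpha * T) * p(T, x)
    total += sum(math.exp(-alpha * (i * h)) * p(i * h, x) for i in range(1, n))
    return total * h

def u_alpha_exact(alpha, x):
    # Right-hand side of (2.5).
    return math.exp(-math.sqrt(2.0 * alpha) * abs(x)) / math.sqrt(2.0 * alpha)

for alpha, x in [(1.0, 0.5), (2.0, -1.0)]:
    assert abs(u_alpha_numeric(alpha, x) - u_alpha_exact(alpha, x)) < 1e-4
```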
We define Brownian motion starting at 0 to be a stochastic process
W = {W_t; t ∈ R_+} that satisfies the following three properties:
(1) W has stationary and independent increments.
(2) W_t = N(0, t) in law, for all t ≥ 0. (In particular W_0 ≡ 0.)
(3) t → W_t is continuous.
Theorem 2.1.2 The three conditions defining Brownian motion are
consistent, that is, Brownian motion is well defined.
Proof We construct a Brownian motion starting at 0. We first con-
struct a probability P̃ on R^{R_+}, the space of real-valued functions {f(t),
t ∈ [0, ∞)} equipped with the Borel product σ-algebra B(R^{R_+}). Let X_t
be the natural evaluation X_t(f) = f(t). We first define P̃ on sets of the
form {X_{t_1} ∈ A_1, . . . , X_{t_n} ∈ A_n} for all Borel measurable sets
A_1, . . . , A_n in R and 0 = t_0 < t_1 < ··· < t_n by setting

P̃(X_{t_1} ∈ A_1, . . . , X_{t_n} ∈ A_n) = ∫ ∏_{i=1}^{n} p_{t_i − t_{i−1}}(z_{i−1}, z_i) ∏_{i=1}^{n} 1_{A_i}(z_i) dz_i. (2.10)

Here 1_{A_i} is the indicator function of A_i and we set z_0 = 0. That this con-
struction is consistent follows from the Chapman–Kolmogorov equation
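The finite-dimensional distributions (2.10) say precisely that the increments W_{t_i} − W_{t_{i−1}} are independent N(0, t_i − t_{i−1}) random variables, which is how one simulates a Brownian path in practice. A sketch of ours (seed and sample sizes are arbitrary choices):

```python
import math
import random

def brownian_path(times, rng):
    """Sample (W_{t_1}, ..., W_{t_n}) via independent increments; W_0 = 0."""
    w, prev, path = 0.0, 0.0, []
    for t in times:
        w += rng.gauss(0.0, math.sqrt(t - prev))  # increment is N(0, t - prev)
        path.append(w)
        prev = t
    return path

rng = random.Random(7)
times = [0.25, 0.5, 0.75, 1.0]
samples = [brownian_path(times, rng)[-1] for _ in range(50000)]

# W_1 should be N(0, 1): sample mean near 0, sample variance near 1.
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
assert abs(mean) < 0.05
assert abs(var - 1.0) < 0.05
```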