
Lecture Notes in Mathematics
Editors:
J.-M. Morel, Cachan
F. Takens, Groningen
B. Teissier, Paris

1881


S. Attal · A. Joye · C.-A. Pillet (Eds.)

Open Quantum
Systems II
The Markovian Approach


Editors
Stéphane Attal
Institut Camille Jordan
Université Claude Bernard Lyon 1
21 av. Claude Bernard
69622 Villeurbanne Cedex
France
e-mail:

Alain Joye
Institut Fourier
Université de Grenoble 1
BP 74


38402 Saint-Martin d'Hères Cedex
France
e-mail:

Claude-Alain Pillet
CPT-CNRS, UMR 6207
Université du Sud Toulon-Var
BP 20132
83957 La Garde Cedex
France
e-mail:
Library of Congress Control Number: 2006923432
Mathematics Subject Classification (2000): 37A60, 37A30, 47A05, 47D06, 47L30, 47L90,
60H10, 60J25, 81Q10, 81S25, 82C10, 82C70
ISSN print edition: 0075-8434
ISSN electronic edition: 1617-9692
ISBN-10 3-540-30992-6 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-30992-5 Springer Berlin Heidelberg New York
DOI 10.1007/b128451
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permission for use must always be obtained from Springer. Violations
are liable for prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
springer.com
© Springer-Verlag Berlin Heidelberg 2006
Printed in The Netherlands
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typesetting: by the authors and SPI Publisher Services using a Springer LaTeX package
Cover design: design & production GmbH, Heidelberg

Printed on acid-free paper

SPIN: 11602620    41/3100/SPI    5 4 3 2 1 0


Preface

This volume is the second in a series of three dedicated to the lecture notes of the summer school “Open Quantum Systems”, which took place at the Institut Fourier in Grenoble from June 16th to July 4th, 2003. The contributions presented in these volumes are revised and expanded versions of the notes provided to the students during the school. After the first volume, which developed the Hamiltonian approach to open quantum systems, this second volume is dedicated to the Markovian approach. The third volume presents both approaches, but at the level of recent research.
Open quantum systems
An open quantum system is a quantum system which interacts with another one. This is a general definition, but it is usually understood that one of the systems is rather “small” or “simple” compared to the other one, which is supposed to be huge: the environment, a gas of particles, a beam of photons, a heat bath... The aim of open quantum system theory is to study the behaviour of this coupled system and in particular the dissipation of the small system in favour of the large one.

One expects behaviours of the small system such as convergence to an equilibrium state, thermalization... The main questions one tries to answer are: Is there a unique invariant state for the small system (or for the coupled system)? Does one always converge towards this state (whatever the initial state is)? What speed of convergence can we expect? What are the physical properties of this equilibrium state?
One can distinguish two schools in the way of studying such a situation. This is true in physics as well as in mathematics. They represent, in general, different groups of researchers with, up to now, rather few contacts and collaborations. We call these two approaches the Hamiltonian approach and the Markovian approach.
In the Hamiltonian approach, one tries to give a full description of the coupled system. That is, both quantum systems are described, with their state spaces and their own Hamiltonians, and their interaction is described through an explicit interaction Hamiltonian. On the tensor product of the Hilbert spaces we end up with a total Hamiltonian, and the goal is then to study the behaviour of the system under this dynamics. This approach is presented in detail in Volume I of this series.



In the Markovian approach, one gives up trying to describe the large system.
The idea is that it may be too complicated, or more realistically we do not know
it completely. The study then concentrates on the effective dynamics which is induced on the small system. This dynamics is not a usual reversible Hamiltonian
dynamics, but is described by a particular semigroup acting on the states of the
small system.
Before entering into the heart of the Markovian approach and all its developments in the next courses, let us have an informal discussion of what this approach exactly is.
The Markovian approach
We consider a simple quantum system H which evolves as if it were in contact with an exterior quantum system. We do not try to describe this exterior system: it may be too complicated, or more realistically we do not quite know it. We observe from the evolution of the system H that it behaves as if it were in contact with something else, like an open system (as opposed to the usual notion of a closed Hamiltonian system in quantum mechanics). But we do not quite know what is effectively acting on H. We have to deal with the effective dynamics which is observed on H.
By such a dynamics, we mean that we look at the evolution of the states of the
system H. That is, for an initial density matrix ρ0 at time 0 on H, we consider the
state ρt at time t on H. The main assumption here is that this evolution
ρt = Pt (ρ0 )
is given by a semigroup. This is to say that the state ρt at time t determines the future
states ρt+h , without needing to know the whole past (ρs )s≤t .
Each of the mappings Pt is a general state transform ρ0 → ρt . Such a map should in particular be trace-preserving and positivity-preserving. Actually these assumptions are not quite enough, and the positivity-preserving property should be slightly extended to the natural notion of a completely positive map (see R. Rebolledo’s course). We end up with a semigroup (Pt )t≥0 of completely positive maps. Under some continuity conditions, the famous Lindblad theorem (see R. Rebolledo’s course) shows that the infinitesimal generator of such a semigroup is of the form
L(ρ) = i[H, ρ] + Σi ( Li ρ L∗i − ½ L∗i Li ρ − ½ ρ L∗i Li )

for some bounded self-adjoint operator H on H and some bounded operators Li on H. The evolution equation for the states of the system can then be summarized as
dρt /dt = L(ρt ) .
This is the so-called quantum master equation in physics. It is actually the starting point of many physical articles on open quantum systems: a specific system to be studied is described by its master equation with a given explicit Lindblad generator L.
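As a concrete numerical illustration (ours, not taken from the courses of this volume), the following Python sketch integrates such a master equation for a single qubit. The Hamiltonian H = σz/2, the single jump operator L1 = √γ σ− and the rate γ = 1 are assumptions chosen purely for illustration.

```python
import numpy as np

# Euler integration of the master equation d(rho)/dt = L(rho) with a
# Lindblad generator of the form above, for one qubit.  H, L1 and gamma
# are illustrative choices, not anything prescribed by the text.
H = 0.5 * np.diag([1.0, -1.0]).astype(complex)                   # sigma_z / 2
gamma = 1.0
L1 = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_-

def lindblad(rho):
    ham = 1j * (H @ rho - rho @ H)                               # i[H, rho]
    diss = (L1 @ rho @ L1.conj().T
            - 0.5 * L1.conj().T @ L1 @ rho
            - 0.5 * rho @ L1.conj().T @ L1)
    return ham + diss

rho = np.array([[0, 0], [0, 1]], dtype=complex)                  # excited state
dt, steps = 1e-3, 5000                                           # evolve to t = 5
for _ in range(steps):
    rho = rho + dt * lindblad(rho)                               # Euler step

print(np.trace(rho).real)   # trace is preserved (= 1)
print(rho[1, 1].real)       # excited population decays like e^{-gamma t}
```

The preserved trace and the exponential decay of the excited population are exactly the kind of dissipative behaviour the semigroup (Pt )t≥0 encodes.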



The specific form of the generator L has to be understood as follows. It is similar to the decomposition of a Feller process generator (see L. Rey-Bellet’s first course) into a first order differential part plus a second order differential part. Indeed, the first term
i[H, · ]
is typical of a derivation on an operator algebra. If L were reduced to that term only,
then Pt = etL is easily seen to act as follows:
Pt (X) = eitH Xe−itH .
That is, this semigroup extends into a group of automorphisms and describes a usual Hamiltonian evolution. In particular it describes a closed quantum system: there is no exterior system interacting with it.
The second type of term has to be understood as follows. If L = L∗ then

L X L∗ − ½ L∗L X − ½ X L∗L = −½ [L, [L, X]] .
It is a double commutator, a typical second order differential operator on the operator algebra. It carries the diffusive part of the dissipation of the small system in favor of the exterior, like a Laplacian term in a Feller process generator.
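This identity is easy to check numerically. The following sketch (the dimension 4 and the random matrices are arbitrary choices of ours) verifies it for a randomly generated self-adjoint L.

```python
import numpy as np

# Numerical check: for self-adjoint L,
#   L X L* - (1/2) L*L X - (1/2) X L*L = -(1/2) [L, [L, X]].
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
L = A + A.conj().T          # self-adjoint: L = L*
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

def comm(a, b):
    return a @ b - b @ a

lhs = L @ X @ L - 0.5 * (L @ L @ X) - 0.5 * (X @ L @ L)
rhs = -0.5 * comm(L, comm(L, X))
print(np.allclose(lhs, rhs))    # True
```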
When L does not satisfy L = L∗ we are left with a more complicated term

which is more difficult to interpret in classical terms. It has to be compared with the
jumping measure term in a general Feller process generator.
Now that the semigroup and the generator are given, the quantum noises (see S. Attal’s course) enter the game in order to provide a dilation of the semigroup (F. Fagnola’s course). That is, one can construct an appropriate Hilbert space F on which quantum noises daij (t) live, and one can solve a differential equation on the space H ⊗ F which is of the form of a Schrödinger equation perturbed by quantum noise terms:

dUt = L Ut dt + Σi,j Kji Ut daij (t) .      (1)

This equation is an evolution equation whose solutions are unitary operators on H ⊗ F, so it describes a closed system (in the interaction picture, actually). Furthermore it dilates the semigroup (Pt )t≥0 in the sense that there exists a (pure) state Ω on F such that if ρ is any state on H then

⟨Ω , Ut (ρ ⊗ I)Ut∗ Ω⟩ = Pt (ρ) .

This is to say that the effective dynamics (Pt )t≥0 we started with on H, without knowing which exact exterior system was its cause, is obtained as follows: the small system H is actually coupled to another system F and they interact according to the evolution equation (1). That is, F acts like a source of (quantum) noises on H. The effective dynamics on H is then obtained when averaging over the noises through a certain state Ω.




This is exactly the same situation as the one of Markov processes with respect to
stochastic differential equations (L. Rey-Bellet’s first course). A Markov semigroup
is given on some function algebra. This is a completely deterministic dynamics which
describes an irreversible evolution. The typical generator, in the diffusive case say,
contains two types of terms.
First order differential terms carry the ordinary part of the dynamics. If the generator contains only such terms, the dynamics is carried by an ordinary differential equation and extends to a reversible dynamics.
Second order differential operator terms carry the dissipative part of the dynamics. These terms represent the negative part of the generator, the loss of energy in favor of some exterior.
But in such a description of a dissipative system, the environment is not described. The semigroup only focuses on the effective dynamics induced on some system by an environment. With the help of stochastic differential equations one can give a model of the action of the environment. It is possible to solve an adequate stochastic differential equation, involving Brownian motions, such that the resulting stochastic process is a Markov process with the same semigroup as the one given at the beginning. Such a construction is nowadays natural, and one often uses it without thinking about what it really means. To the state space where the function algebra acts, we have to add a probability space which carries the noises (the Brownian motions). We have enlarged the initial space; the noise does not come naturally with the function algebra. The resolution of the stochastic differential equation gives rise to a solution living in this extended space (it is a stochastic process, a function of the Brownian motions). It is only when averaging over the noise (taking the expectation) that one recovers the action of the semigroup on the function algebra.
We have described exactly the same situation as for quantum systems, as above.
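To make the analogy concrete, here is a small numerical sketch (ours; the Ornstein-Uhlenbeck equation and all parameters are illustrative choices) showing that averaging the solution of a stochastic differential equation over the Brownian noise recovers the Markov semigroup.

```python
import numpy as np

# Euler-Maruyama simulation of dX_t = -X_t dt + sqrt(2) dB_t.
# For f(x) = x the semigroup is known in closed form:
#   P_t f(x) = E[f(X_t) | X_0 = x] = x e^{-t},
# and the Monte Carlo average over the noise should reproduce it.
rng = np.random.default_rng(0)

def euler_maruyama(x0, t, dt=1e-3, n_paths=20_000):
    x = np.full(n_paths, float(x0))
    for _ in range(int(t / dt)):
        x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(n_paths)
    return x

x0, t = 1.5, 1.0
mc = euler_maruyama(x0, t).mean()   # averaging over the noise
exact = x0 * np.exp(-t)             # P_t f(x0) with f(x) = x
print(mc, exact)
```

Replacing f or the dynamics changes nothing in the principle: the action of the semigroup is always recovered by taking the expectation over the noise.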
Organization of the volume
The aim of this volume is to present this quantum theory in detail, together with its classical counterpart.
The volume actually starts with a first course by L. Rey-Bellet which presents the
classical theory of Markov processes, stochastic differential equations and ergodic
theory of Markov processes.
The second course by L. Rey-Bellet applies these techniques to a family of classical open systems. The associated stochastic differential equation is derived from a Hamiltonian description of the model.

The course by S. Attal presents an introduction to the quantum theory of noises and their connections with classical ones. It constructs the quantum stochastic integrals and proves the quantum Itô formula, which are the cornerstones of quantum Langevin equations.
R. Rebolledo’s course presents the theory of completely positive maps, their representation theorems and the semigroup theory attached to them. This ends up with the celebrated Lindblad theorem and the notion of quantum master equations.



Finally, F. Fagnola’s course develops the theory of quantum Langevin equations
(existence, unitarity) and shows how quantum master equations can be dilated by
such equations.

Lyon, Grenoble, Toulon
September 2005

Stéphane Attal
Alain Joye
Claude-Alain Pillet


Contents

Ergodic Properties of Markov Processes
Luc Rey-Bellet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3 Markov Processes and Ergodic Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.1 Transition probabilities and generators . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.2 Stationary Markov processes and Ergodic Theory . . . . . . . . . . . . . . . . 7
4 Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5 Stochastic Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
6 Control Theory and Irreducibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7 Hypoellipticity and Strong-Feller Property . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8 Liapunov Functions and Ergodic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 28
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Open Classical Systems
Luc Rey-Bellet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2 Derivation of the model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.1 How to make a heat reservoir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.2 Markovian Gaussian stochastic processes . . . . . . . . . . . . . . . . . . . . . . . 48
2.3 How to make a Markovian reservoir . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3 Ergodic properties: the chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1 Irreducibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2 Strong Feller Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.3 Liapunov Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4 Heat Flow and Entropy Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.1 Positivity of entropy production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Fluctuation theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3 Kubo Formula and Central Limit Theorem . . . . . . . . . . . . . . . . . . . . . . 75
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77




Quantum Noises
Stéphane Attal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2 Discrete time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.1 Repeated quantum interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.2 The Toy Fock space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.3 Higher multiplicities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3 Itô calculus on Fock space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.1 The continuous version of the spin chain: heuristics . . . . . . . . . . . . . . 93
3.2 The Guichardet space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3 Abstract Itô calculus on Fock space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.4 Probabilistic interpretations of Fock space . . . . . . . . . . . . . . . . . . . . . . 105
4 Quantum stochastic calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.1 An heuristic approach to quantum noise . . . . . . . . . . . . . . . . . . . . . . . . 110
4.2 Quantum stochastic integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.3 Back to probabilistic interpretations . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5 The algebra of regular quantum semimartingales . . . . . . . . . . . . . . . . . . . . . . 123
5.1 Everywhere defined quantum stochastic integrals . . . . . . . . . . . . . . . . 124
5.2 The algebra of regular quantum semimartingales . . . . . . . . . . . . . . . . . 127
6 Approximation by the toy Fock space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.1 Embedding the toy Fock space into the Fock space . . . . . . . . . . . . . . . 130
6.2 Projections on the toy Fock space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.3 Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.4 Probabilistic interpretations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.5 The Itô tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7 Back to repeated interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.1 Unitary dilations of completely positive semigroups . . . . . . . . . . . . . . 140
7.2 Convergence to Quantum Stochastic Differential Equations . . . . . . . . 142
8 Bibliographical comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

Complete Positivity and the Markov structure of Open Quantum
Systems
Rolando Rebolledo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
1 Introduction: a preview of open systems in Classical Mechanics . . . . . . . . . 149
1.1 Introducing probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
1.2 An algebraic view on Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
2 Completely positive maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
3 Completely bounded maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4 Dilations of CP and CB maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5 Quantum Dynamical Semigroups and Markov Flows . . . . . . . . . . . . . . . . . . 168
6 Dilations of quantum Markov semigroups . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.1 A view on classical dilations of QMS . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.2 Towards quantum dilations of QMS . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181



Quantum Stochastic Differential Equations and Dilation of Completely
Positive Semigroups
Franco Fagnola . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
2 Fock space notation and preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
3 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
4 Unitary solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5 Emergence of H-P equations in physical applications . . . . . . . . . . . . . . . . . . 193
6 Cocycle property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
7 Regularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

8 The left equation: unbounded G^α_β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
9 Dilation of quantum Markov semigroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
10 The left equation with unbounded G^α_β : isometry . . . . . . . . . . . . . . . . . . . . 213
11 The right equation with unbounded F^α_β . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Index of Volume II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Information about the other two volumes
Contents of Volume I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Index of Volume I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Contents of Volume III . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Index of Volume III . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236


List of Contributors

Stéphane Attal
Institut Camille Jordan
Université Claude Bernard Lyon 1
21 av. Claude Bernard
69622 Villeurbanne Cedex, France
email:
Franco Fagnola
Politecnico di Milano
Dipartimento di Matematica “F. Brioschi”
Piazza Leonardo da Vinci 32
20133 Milano, Italy
email:

Rolando Rebolledo
Facultad de Matemáticas
Universidad Católica de Chile
Casilla 306 Santiago 22, Chile
email:

Luc Rey-Bellet
Department of Mathematics and
Statistics
University of Massachusetts
Amherst, MA 01003, USA
email:


Ergodic Properties of Markov Processes
Luc Rey-Bellet
Department of Mathematics and Statistics, University of Massachusetts,
Amherst, MA 01003, USA
e-mail:

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3 Markov Processes and Ergodic Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.1 Transition probabilities and generators . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.2 Stationary Markov processes and Ergodic Theory . . . . . . . . . . . . . . . . 7
4 Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5 Stochastic Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
6 Control Theory and Irreducibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7 Hypoellipticity and Strong-Feller Property . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8 Liapunov Functions and Ergodic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 28
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

1 Introduction
In these notes we discuss Markov processes, in particular stochastic differential equations (SDEs), and develop some tools to analyze their long-time behavior. There are several ways to analyze such properties, and our point of view will be to systematically use Liapunov functions, which allow a nice characterization of the ergodic properties. In this we follow, at least in spirit, the excellent book of Meyn and Tweedie [7].

In general a Liapunov function W is a positive function which grows at infinity and satisfies an inequality involving the generator L of the Markov process: roughly speaking we have the implications (α and β are positive constants)
1. LW ≤ α + βW implies existence of solutions for all times.
2. LW ≤ −α implies the existence of an invariant measure.
3. LW ≤ α − βW implies exponential convergence to the invariant measure.
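For instance (our illustration, not an example from the notes), for the one-dimensional Ornstein-Uhlenbeck process dx = −x dt + √2 dBt, whose generator is (Lf)(x) = −x f′(x) + f′′(x), the function W(x) = 1 + x² satisfies a bound of type 3:

```python
import numpy as np

# W(x) = 1 + x^2 gives (L W)(x) = -x * 2x + 2 = 4 - 2 W(x),
# i.e. LW <= alpha - beta * W with alpha = 4, beta = 2, which by
# implication 3 above yields exponential convergence to the
# (Gaussian) invariant measure.
x = np.linspace(-10.0, 10.0, 2001)
W = 1 + x**2
LW = -x * (2 * x) + 2               # -x W'(x) + W''(x)
print(np.allclose(LW, 4 - 2 * W))   # the bound holds, with equality
```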



For (2) and (3), one should assume in addition, for example, smoothness of the transition probabilities (i.e. the semigroup etL is smoothing) and irreducibility of the process (ergodicity of the motion). The smoothing property for generators of SDEs is naturally linked with hypoellipticity of L, and the irreducibility is naturally expressed in terms of control theory.
In sufficiently simple situations one might just guess a Liapunov function. For interesting problems, however, proving the existence of a Liapunov function requires both a good guess and a quite substantial understanding of the dynamics. In these notes we will discuss simple examples only, and in the companion lecture [11] we will apply these techniques to a model of heat conduction in anharmonic lattices. A simple set of equations that the reader should keep in mind here are the Langevin equations
equations
dq = pdt ,

dp = (−∇V (q) − λp)dt + 2λT dBt ,
where, p, q ∈ Rn , V (q) is a smooth potential growing at infinity, and Bt is Brownian motion. This equation is a model a particle with Hamiltonian p2 /2 + V (q) in
contact with a thermal reservoir at temperature T . In our lectures on open classical
systems [11] we will show how to derive similar and more general equations from
Hamiltonian dynamics. This simple model already has the feature that the noise is
degenerate by which we mean that the noise is acting only on the p variable. Degeneracy (usually even worse than in these equations) is the rule and not the exception
in mechanical systems interacting with reservoirs.
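A quick Euler-Maruyama simulation (ours, with the illustrative choices V(q) = q²/2, n = 1 and arbitrary values of λ, T) shows the thermal behaviour: in the stationary Gibbs state one has E[p²] = T, which the long-time average reproduces.

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equations above for
# V(q) = q^2 / 2; lam, T and the step size are illustrative choices.
rng = np.random.default_rng(2)
lam, T = 1.0, 0.5
dt, steps, burn = 1e-2, 800_000, 100_000
noise = np.sqrt(2 * lam * T * dt) * rng.standard_normal(steps)

q, p, p2_sum = 0.0, 0.0, 0.0
for k in range(steps):
    q += p * dt                              # dq = p dt
    p += (-q - lam * p) * dt + noise[k]      # dp = (-V'(q) - lam p) dt + noise
    if k >= burn:
        p2_sum += p * p

p2_avg = p2_sum / (steps - burn)
print(p2_avg)    # close to T = 0.5
```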

The notes served as a crash course in stochastic differential equations for an audience consisting mostly of mathematical physicists. Our goal was to provide the reader with a short guide to the theory of stochastic differential equations, with an emphasis on long-time (ergodic) properties. Some proofs are given here, which will, we hope, give a flavor of the subject, but many important results are simply mentioned without proof.
Our list of references is brief and does not do justice to the very large body of literature on the subject, but simply reflects some ideas we have tried to convey in these lectures. For Brownian motion, stochastic calculus and Markov processes we recommend the books of Oksendal [10], Kunita [15], and Karatzas and Shreve [3], and the lecture notes of Varadhan [13, 14]. For Liapunov functions we recommend the books of Has’minskii [2] and Meyn and Tweedie [7]. For hypoellipticity and control theory we recommend the articles of Kliemann [4], Kunita [6], Norris [8], and Stroock and Varadhan [12], and the book of Hörmander [1].

2 Stochastic Processes
A stochastic process is a parametrized collection of random variables
{xt (ω)}t∈T      (1)



defined on a probability space (Ω̃, B, P). In these notes we will take T = R+ or T = R. To fix the ideas we will assume that xt takes values in X = Rn equipped with the Borel σ-algebra, but much of what we will say has a straightforward generalization to more general state spaces. For a fixed ω ∈ Ω̃ the map
t → xt (ω)      (2)

is a path or a realization of the stochastic process, i.e. a random function from T into
Rn . For fixed t ∈ T

ω → xt (ω)      (3)

is a random variable (“the state of the system at time t”). We can also think of xt (ω) as a function of two variables (t, ω) and it is natural to assume that xt (ω) is jointly measurable in (t, ω). We may identify each ω with the corresponding path t → xt (ω), and so we can always think of Ω̃ as a subset of the set Ω = (Rn )T of all functions from T into Rn . The σ-algebra B will then contain the σ-algebra F generated by sets of the form

{ω ; xt1 (ω) ∈ F1 , · · · , xtn (ω) ∈ Fn } ,      (4)

where the Fi are Borel sets of Rn . The σ-algebra F is simply the Borel σ-algebra on Ω equipped with the product topology. From now on we take the point of view that a stochastic process is a probability measure on the measurable (function) space (Ω, F).
One can seldom describe explicitly the full probability measure of a stochastic process. Usually one gives the finite-dimensional distributions of the process xt , which are probability measures µt1 ,··· ,tk on Rnk defined by

µt1 ,··· ,tk (F1 × · · · × Fk ) = P {xt1 ∈ F1 , · · · , xtk ∈ Fk } ,      (5)

where t1 , · · · , tk ∈ T and the Fi are Borel sets of Rn .
A useful fact, known as the Kolmogorov Consistency Theorem, allows us to construct a stochastic process given a family of compatible finite-dimensional distributions.
Theorem 2.1. (Kolmogorov Consistency Theorem) For t1 , · · · , tk ∈ T and k ∈ N
let µt1 ,··· ,tk be probability measures on Rnk such that
1. For all permutations σ of {1, · · · , k}
µtσ(1) ,··· ,tσ(k) (F1 × · · · × Fk ) = µt1 ,··· ,tk (Fσ−1 (1) × · · · × Fσ−1 (k) ) .      (6)

2. For all m ∈ N
µt1 ,··· ,tk (F1 × · · · × Fk ) = µt1 ,··· ,tk+m (F1 × · · · × Fk × Rn × · · · × Rn ) . (7)
Then there exists a probability space (Ω, F, P) and a stochastic process xt on Ω
such that
µt1 ,··· ,tk (F1 × · · · × Fk ) = P {xt1 ∈ F1 , · · · , xtk ∈ Fk } ,      (8)

for all ti ∈ T and all Borel sets Fi ⊂ Rn .



3 Markov Processes and Ergodic Theory
3.1 Transition probabilities and generators
A Markov process is a stochastic process which satisfies the condition that the future
depends only on the present and not on the past, i.e., for any s1 ≤ · · · ≤ sk ≤ t and
any measurable sets F1 , · · · , Fk , and F
P{xt (ω) ∈ F | xs1 (ω) ∈ F1 , · · · , xsk (ω) ∈ Fk } = P{xt (ω) ∈ F | xsk (ω) ∈ Fk } .      (9)

More formally, let F^t_s be the subalgebra of F generated by all events of the form {xu (ω) ∈ F }, where F is a Borel set and s ≤ u ≤ t. A stochastic process xt is a Markov process if for all Borel sets F and all 0 ≤ s ≤ t we have, almost surely,

P{xt (ω) ∈ F | F^s_0 } = P{xt (ω) ∈ F | F^s_s } = P{xt (ω) ∈ F | xs (ω)} .      (10)

We will use later an equivalent way of describing the Markov property. Let us consider 3 subsequent times t1 < t2 < t3 . The Markov property means that for any g
bounded measurable
E[g(xt3 )|Ftt22 × Ftt11 ] = E[g(xt3 )|Ftt22 ] .

(11)

The time reversed Markov property that for any bounded measurable function f
E[f (xt1 )|Ftt33 × Ftt22 ] = E[f (xt1 )|Ftt22 ] ,

(12)

which says that the past depends only on the present and not on the future. These two
properties are in fact equivalent, since we will show that they are both equivalent to
the symmetric condition
E[g(xt3 )f (xt1 )|Ftt22 ] = E[g(xt3 )|Ftt22 ]E[f (xt1 )Ftt22 ] ,

(13)

which asserts that given the present, past and future are conditionally independent.
By symmetry it is enough to prove

Lemma 3.1. The relations (11) and (13) are equivalent.

Proof. Let us fix f and g and set x_{t_i} = x_i and F_{t_i}^{t_i} ≡ F_i for i = 1, 2, 3. Let us assume that Eq. (11) holds and denote by ĝ(x_2) the common value of both sides of (11). Then we have

E[g(x_3) f(x_1) | F_2] = E[ E[g(x_3) f(x_1) | F_2 × F_1] | F_2 ]
  = E[ f(x_1) E[g(x_3) | F_2 × F_1] | F_2 ] = E[ f(x_1) ĝ(x_2) | F_2 ]
  = E[ f(x_1) | F_2 ] ĝ(x_2) = E[ f(x_1) | F_2 ] E[g(x_3) | F_2] ,   (14)

which is Eq. (13). Conversely, let us assume that Eq. (13) holds and let us denote by g(x_1, x_2) and by ĝ(x_2) the left side and the right side of (11). Let h(x_2) be any bounded measurable function. We have


Ergodic Properties of Markov Processes


E[f(x_1) h(x_2) g(x_1, x_2)] = E[ f(x_1) h(x_2) E[g(x_3) | F_2 × F_1] ]
  = E[ f(x_1) h(x_2) g(x_3) ] = E[ h(x_2) E[f(x_1) g(x_3) | F_2] ]
  = E[ h(x_2) (E[g(x_3) | F_2]) (E[f(x_1) | F_2]) ]
  = E[ h(x_2) ĝ(x_2) E[f(x_1) | F_2] ] = E[ f(x_1) h(x_2) ĝ(x_2) ] .   (15)

Since f and h are arbitrary this implies that g(x_1, x_2) = ĝ(x_2) a.s.
A natural way to construct a Markov process is via a transition probability function

P_t(x, F) ,   t ∈ T , x ∈ R^n , F a Borel set ,   (16)

where (t, x) → P_t(x, F) is a measurable function for any Borel set F and F → P_t(x, F) is a probability measure on R^n for all (t, x). One defines

P{x_t(ω) ∈ F | F_0^s} = P{x_t(ω) ∈ F | x_s(ω)} = P_{t−s}(x_s(ω), F) .   (17)

The finite-dimensional distributions for a Markov process starting at x at time 0 are then given by

P{x_{t_1} ∈ F_1} = P_{t_1}(x, F_1) ,

P{x_{t_1} ∈ F_1, x_{t_2} ∈ F_2} = ∫_{F_1} P_{t_1}(x, dx_1) P_{t_2−t_1}(x_1, F_2) ,   (18)

  ⋮

P{x_{t_1} ∈ F_1, …, x_{t_k} ∈ F_k} = ∫_{F_1} ··· ∫_{F_{k−1}} P_{t_1}(x, dx_1) ··· P_{t_k−t_{k−1}}(x_{k−1}, F_k) .

By the Kolmogorov Consistency Theorem this defines a stochastic process x_t for which P{x_0 = x} = 1. We denote by P_x and E_x the corresponding probability distribution and expectation.
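The composition formula (18) can be tested against direct simulation. A sketch for a finite chain (the matrix P, start state, and events are invented): the exact two-time probability from (18) is compared with a Monte Carlo estimate over sampled paths.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state chain; P, the start state, and the events are invented.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
x0, t1, t2 = 0, 2, 5
F1, F2 = {0, 1}, {2}

# Exact value from (18): sum over x1 in F1 of P_{t1}(x0, x1) P_{t2-t1}(x1, F2).
Pt1 = np.linalg.matrix_power(P, t1)
Pdt = np.linalg.matrix_power(P, t2 - t1)
exact = sum(Pt1[x0, i] * Pdt[i, j] for i in F1 for j in F2)

def sample_path(T):
    """Simulate x_0, ..., x_T started at x0."""
    x, path = x0, [x0]
    for _ in range(T):
        x = int(rng.choice(3, p=P[x]))
        path.append(x)
    return path

N = 20_000
hits = sum(1 for _ in range(N)
           if (p := sample_path(t2))[t1] in F1 and p[t2] in F2)
print("exact:", exact, "monte carlo:", hits / N)
```

The two printed numbers should agree to within Monte Carlo error, about 10^{-2} for this sample size.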
One can also give an initial distribution π, where π is a probability measure on R^n which describes the initial state of the system at t = 0. In this case the finite-dimensional probability distributions have the form

∫_{R^n} ∫_{F_1} ··· ∫_{F_{k−1}} π(dx) P_{t_1}(x, dx_1) P_{t_2−t_1}(x_1, dx_2) ··· P_{t_k−t_{k−1}}(x_{k−1}, F_k) ,   (19)

and we denote by P_π and E_π the corresponding probability distribution and expectation.

Remark 3.2. We have considered here only time-homogeneous processes, i.e., processes for which P_x{x_t(ω) ∈ F | x_s(ω)} depends only on t − s. This can be generalized by considering transition functions P(t, s, x, A).
The following property is an immediate consequence of the fact that the future depends only on the present and not on the past.



Lemma 3.3. (Chapman-Kolmogorov equation) For 0 ≤ s ≤ t we have

P_t(x, A) = ∫_{R^n} P_s(x, dy) P_{t−s}(y, A) .   (20)

Proof. We have

P_t(x, A) = P_x{x_t ∈ A} = P_x{x_s ∈ R^n, x_t ∈ A} = ∫_{R^n} P_s(x, dy) P_{t−s}(y, A) .   (21)
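In discrete time, with the t-step kernel given by the t-th matrix power, the Chapman-Kolmogorov equation is exactly the identity P^t = P^s P^{t−s}. A minimal numerical check (the stochastic matrix is made up):

```python
import numpy as np

# Any stochastic matrix will do; this one is invented for the check.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
s, t = 3, 7
Pt = np.linalg.matrix_power(P, t)
# Chapman-Kolmogorov (20): integrate P_s(x, dy) P_{t-s}(y, A) over y,
# which for matrices is just the product P^s P^{t-s}.
composed = np.linalg.matrix_power(P, s) @ np.linalg.matrix_power(P, t - s)
assert np.allclose(Pt, composed)
print("Chapman-Kolmogorov holds")
```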

For a measurable function f(x), x ∈ R^n, we have

E_x[f(x_t)] = ∫_{R^n} P_t(x, dy) f(y) ,   (22)

and we can associate to a transition probability a linear operator acting on measurable functions by

T_t f(x) = ∫_{R^n} P_t(x, dy) f(y) = E_x[f(x_t)] .   (23)

From the Chapman-Kolmogorov equation it follows immediately that T_t is a semigroup: for all s, t ≥ 0 we have

T_{t+s} = T_t T_s .   (24)

We also have a dual semigroup acting on σ-finite measures on R^n:

S_t µ(A) = ∫_{R^n} µ(dx) P_t(x, A) .   (25)
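For a finite chain, T_t acts on functions (column vectors) as the matrix power P^t and S_t acts on measures (row vectors) from the left, so the duality ∫ (T_t f) dµ = ∫ f d(S_t µ) and the semigroup law (24) can be checked directly. P, f, and µ below are invented for illustration.

```python
import numpy as np

# Discrete sketch of (23)-(25): functions as column vectors, measures as rows.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
f = np.array([1.0, -2.0])
mu = np.array([0.25, 0.75])
t = 5
Pt = np.linalg.matrix_power(P, t)
Ttf = Pt @ f        # (23): T_t f(x) = E_x[f(x_t)]
Stmu = mu @ Pt      # (25): S_t mu(A) = integral of P_t(x, A) against mu
# Duality: integrating T_t f against mu equals integrating f against S_t mu.
assert np.isclose(mu @ Ttf, Stmu @ f)
# Semigroup property (24): T_{t+s} = T_t T_s.
assert np.allclose(np.linalg.matrix_power(P, 8),
                   Pt @ np.linalg.matrix_power(P, 3))
print("duality and semigroup checks passed")
```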

The semigroup T_t has the following properties, which are easy to verify.

1. T_t preserves constants: if 1(x) denotes the constant function equal to 1, then

   T_t 1(x) = 1(x) .   (26)

2. T_t is positive in the sense that

   T_t f(x) ≥ 0   if   f(x) ≥ 0 .   (27)

3. T_t is a contraction semigroup on L^∞(dx), the set of bounded measurable functions equipped with the sup-norm ‖·‖_∞:

   ‖T_t f‖_∞ = sup_x | ∫_{R^n} P_t(x, dy) f(y) | ≤ sup_y |f(y)| sup_x ∫_{R^n} P_t(x, dy) = ‖f‖_∞ .   (28)




The spectral properties of the semigroup T_t are important in analyzing the long-time (ergodic) properties of the Markov process x_t. In order to use methods from functional analysis one needs to define these semigroups on function spaces which are more amenable to analysis than the space of measurable functions.

We say that the semigroup T_t is weak-Feller if it maps the set of bounded continuous functions C_b(R^n) into itself. If the transition probabilities P_t(x, A) are stochastically continuous, i.e., if lim_{t→0} P_t(x, B_ε(x)) = 1 for any ε > 0 (B_ε(x) is the ε-neighborhood of x), then it is not difficult to show that lim_{t→0} T_t f(x) = f(x) for any f ∈ C_b(R^n) (details are left to the reader), and then T_t is a contraction semigroup on C_b(R^n).

We say that the semigroup T_t is strong-Feller if it maps bounded measurable functions into continuous functions. This reflects the fact that T_t has a "smoothing effect". A way to show the strong-Feller property is to establish that the transition probabilities P_t(x, A) have a density

P_t(x, dy) = p_t(x, y) dy ,   (29)

where p_t(x, y) is a sufficiently regular (e.g. continuous or differentiable) function of x, y and maybe also of t. We will discuss some tools to prove such properties in Section 7.

If T_t is weak-Feller we define the generator L of T_t by

L f(x) = lim_{t→0} (T_t f(x) − f(x)) / t .   (30)

The domain of definition of L is the set of all f for which the limit (30) exists for all x.
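As an illustration outside the text's continuum setting: for a continuous-time chain on finitely many states with generator matrix Q (nonnegative off-diagonal rates, rows summing to zero), T_t = e^{tQ} and the limit (30) recovers L = Q. A sketch with an invented Q and a truncated power series for the matrix exponential:

```python
import numpy as np

# Invented rate matrix Q for a 3-state chain; rows sum to 0.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 2.0, -2.0]])
f = np.array([3.0, -1.0, 4.0])

def expm(A, terms=30):
    """Truncated power series for the matrix exponential."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 1e-6
Ttf = expm(t * Q) @ f
Lf = (Ttf - f) / t        # finite-t difference quotient from (30)
assert np.allclose(Lf, Q @ f, atol=1e-3)
print("generator estimate:", Lf, "exact Qf:", Q @ f)
```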

3.2 Stationary Markov processes and Ergodic Theory

We say that a stochastic process is stationary if the finite-dimensional distributions

P{x_{t_1+h} ∈ F_1, …, x_{t_k+h} ∈ F_k}   (31)

are independent of h, for all t_1 < ··· < t_k and all measurable F_i. If the process is Markovian with initial distribution π(dx) then (take k = 1)

∫_{R^n} π(dx) P_t(x, F) = S_t π(F)   (32)

must be independent of t for any measurable F, i.e., we must have

S_t π = π ,   (33)

for all t ≥ 0. The condition (33) alone implies stationarity, since it implies that

P_π{x_{t_1+h} ∈ F_1, …, x_{t_k+h} ∈ F_k}
  = ∫_{R^n} ∫_{F_1} ··· ∫_{F_{k−1}} π(dx) P_{t_1+h}(x, dx_1) ··· P_{t_k−t_{k−1}}(x_{k−1}, F_k)
  = ∫_{R^n} ∫_{F_1} ··· ∫_{F_{k−1}} π(dx) P_{t_1}(x, dx_1) ··· P_{t_k−t_{k−1}}(x_{k−1}, F_k) ,   (34)



which is independent of h.
Intuitively, stationary distributions describe the long-time behavior of x_t. Indeed, let us suppose that the distribution of x_t with initial distribution µ converges in some sense to a distribution γ = γ_µ (a priori γ may depend on the initial distribution µ), i.e.,

lim_{t→∞} P_µ{x_t ∈ F} = γ_µ(F) ,   (35)

for all measurable F. Then we have, formally,

γ_µ(F) = lim_{t→∞} ∫_{R^n} µ(dx) P_t(x, F)
  = lim_{t→∞} ∫_{R^n} µ(dx) ∫_{R^n} P_{t−s}(x, dy) P_s(y, F)
  = ∫_{R^n} γ_µ(dy) P_s(y, F) = S_s γ_µ(F) ,   (36)

i.e., γ_µ is a stationary distribution.
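For a finite chain, condition (33) says that π is a left eigenvector of the transition matrix with eigenvalue 1, and the convergence (35) toward it can be observed directly. The chain below is invented for illustration.

```python
import numpy as np

# Invented ergodic 3-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
# Stationary law: left eigenvector of P for eigenvalue 1, normalized.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
assert np.allclose(pi @ P, pi)      # S_t pi = pi for every t, cf. (33)
# Starting from any law mu, mu P^t converges to pi, cf. (35)-(36).
mu = np.array([1.0, 0.0, 0.0])
assert np.allclose(mu @ np.linalg.matrix_power(P, 200), pi)
print("stationary law:", pi)
```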
In order to make this more precise we recall some concepts and results from ergodic theory. Let (X, F, µ) be a probability space and φ_t, t ∈ R, a group of measurable transformations of X. We say that φ_t is measure preserving if µ(φ_{−t}(A)) = µ(A) for all t ∈ R and all A ∈ F. We also say that µ is an invariant measure for φ_t. A basic result in ergodic theory is the pointwise Birkhoff ergodic theorem.

Theorem 3.4. (Birkhoff Ergodic Theorem) Let φ_t be a group of measure preserving transformations of (X, F, µ). Then for any f ∈ L^1(µ) the limit

lim_{t→∞} (1/t) ∫_0^t f(φ_s(x)) ds = f*(x)   (37)

exists µ-a.s. The limit f*(x) is φ_t-invariant, f*(φ_t(x)) = f*(x) for all t ∈ R, and ∫_X f dµ = ∫_X f* dµ.

The group of transformations φ_t is said to be ergodic if f*(x) is constant µ-a.s., and in that case f*(x) = ∫ f dµ, µ-a.s. Ergodicity can also be expressed in terms of the σ-field of invariant subsets. Let G ⊂ F be the σ-field given by G = {A ∈ F : φ_{−t}(A) = A for all t}. Then in Theorem 3.4, f*(x) is given by the conditional expectation

f*(x) = E[f | G] .   (38)

The ergodicity of φ_t is equivalent to the statement that G is the trivial σ-field, i.e., if A ∈ G then µ(A) = 0 or 1.
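A discrete-time illustration of the ergodic theorem (the map and observable are chosen for illustration, not taken from the text): the rotation x ↦ x + α (mod 1) with α irrational preserves Lebesgue measure on [0, 1) and is ergodic, so time averages as in (37) converge to the space average.

```python
import numpy as np

alpha = np.sqrt(2) - 1                    # irrational rotation number
f = lambda x: np.cos(2 * np.pi * x)       # observable; its integral over [0,1) is 0

x0, N = 0.1, 200_000
orbit = (x0 + alpha * np.arange(N)) % 1.0  # phi^n(x0) for n = 0, ..., N-1
time_avg = f(orbit).mean()                 # (1/N) sum of f along the orbit
space_avg = 0.0                            # integral of f against Lebesgue measure
assert abs(time_avg - space_avg) < 1e-3
print("time average:", time_avg)
```

The agreement is in fact much better than 10^{-3} here, since exponential sums for an irrational rotation stay bounded.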
Given a measurable group of transformations φ_t of a measurable space, let us denote by M the set of invariant probability measures. It is easy to see that M is a convex set, and we have

Proposition 3.5. The probability measure µ is an extreme point of M if and only if µ is ergodic.




Proof. Let us suppose that µ is not extremal. Then there exist µ_1, µ_2 ∈ M with µ_1 ≠ µ_2 and 0 < a < 1 such that µ = aµ_1 + (1 − a)µ_2. We claim that µ is not ergodic. If µ were ergodic then µ(A) = 0 or 1 for all A ∈ G. If µ(A) = 0 then µ_1(A) = µ_2(A) = 0, and if µ(A) = 1 then µ_1(A) = µ_2(A) = 1. Therefore µ_1 and µ_2 agree on the σ-field G. Let now f be a bounded measurable function and let us consider the function

f*(x) = lim_{t→∞} (1/t) ∫_0^t f(φ_s x) ds ,   (39)

which is defined on the set E where the limit exists. By the ergodic theorem µ_1(E) = µ_2(E) = 1 and f* is measurable with respect to G. We have

∫_E f dµ_i = ∫_E f* dµ_i ,   i = 1, 2 .   (40)

Since µ_1 = µ_2 on G, f* is G-measurable, and µ_i(E) = 1 for i = 1, 2, we see that

∫_X f dµ_1 = ∫_X f dµ_2 .   (41)

Since f is arbitrary this implies that µ_1 = µ_2, and this is a contradiction.

Conversely, if µ is not ergodic, then there exists A ∈ G with 0 < µ(A) < 1. Let us define

µ_1(B) = µ(A ∩ B)/µ(A) ,   µ_2(B) = µ(A^c ∩ B)/µ(A^c) .   (42)

Since A ∈ G, it follows that the µ_i are invariant and that µ = µ(A)µ_1 + µ(A^c)µ_2. Thus µ is not an extreme point.
A stronger property than ergodicity is the property of mixing. In order to formulate it we first note that we have

Lemma 3.6. µ is ergodic if and only if

lim_{t→∞} (1/t) ∫_0^t µ(φ_{−s}(A) ∩ B) ds = µ(A)µ(B) ,   (43)

for all A, B ∈ F.

Proof. If µ is ergodic, let f = χ_A be the characteristic function of A in the ergodic theorem, multiply by the characteristic function of B, and use the bounded convergence theorem to show that Eq. (43) holds. Conversely, let E ∈ G and set A = B = E in Eq. (43). This shows that µ(E) = µ(E)^2 and therefore µ(E) = 0 or 1.
We say that an invariant measure µ is mixing if we have

lim_{t→∞} µ(φ_{−t}(A) ∩ B) = µ(A)µ(B)   (44)

for all A, B ∈ F, i.e., we have convergence in Eq. (44) instead of convergence in the sense of Cesàro in Eq. (43).

Mixing can also be expressed in terms of the triviality of a suitable σ-algebra. We define the remote future σ-field, denoted F_∞, by

F_∞ = ∩_{t≥0} φ_{−t}(F) .   (45)

Notice that a set A ∈ F_∞ if and only if for every t there exists a set A_t ∈ F such that A = φ_{−t}(A_t). Therefore the σ-field of invariant subsets G is a sub-σ-field of F_∞. We have
We have
Lemma 3.7. µ is mixing if and only if the σ-field F_∞ is trivial.

Proof. Let us assume first that F_∞ is not trivial. Then there exists a set A ∈ F_∞ with 0 < µ(A) < 1, i.e., µ(A)^2 ≠ µ(A), and for any t there exists a set A_t such that A = φ_{−t}(A_t). If µ were mixing we would have lim_{t→∞} µ(φ_{−t}(A) ∩ A) = µ(A)^2. On the other hand

µ(φ_{−t}(A) ∩ A) = µ(φ_{−t}(A) ∩ φ_{−t}(A_t)) = µ(A ∩ A_t)   (46)

and this converges to µ(A) as t → ∞. This is a contradiction.

Let us assume that F_∞ is trivial. We have

µ(φ_{−t}(A) ∩ B) − µ(A)µ(B) = µ(B | φ_{−t}(A)) µ(φ_{−t}(A)) − µ(A)µ(B)
  = (µ(B | φ_{−t}(A)) − µ(B)) µ(A) .   (47)

The triviality of F_∞ implies that lim_{t→∞} µ(B | φ_{−t}(A)) = µ(B).
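For an ergodic finite chain started in its stationary law, the correlation P_π(x_0 ∈ A, x_t ∈ B) converges to π(A)π(B) as t grows, a discrete analogue of (44). A numerical sketch with an invented chain and events:

```python
import numpy as np

# Invented ergodic 3-state chain and its stationary law pi.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
A, B = [0], [2]

def corr_gap(t):
    """|P_pi(x_0 in A, x_t in B) - pi(A) pi(B)|."""
    Pt = np.linalg.matrix_power(P, t)
    joint = sum(pi[i] * Pt[i, j] for i in A for j in B)
    return abs(joint - pi[A].sum() * pi[B].sum())

gaps = [corr_gap(t) for t in (1, 5, 20, 50)]
assert gaps[0] > gaps[-1] and gaps[-1] < 1e-10
print("correlation gaps:", gaps)
```

The gaps decay geometrically at the rate of the second-largest eigenvalue modulus of P.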
Given transition probabilities P_t(x, dy) and a stationary distribution π, one constructs a stationary Markov process with probability measure P_π. We can extend this process in a natural way to −∞ < t < ∞. The marginal of P_π at any time t is π. Let Θ_s denote the shift transformation on Ω given by Θ_s(x_t(ω)) = x_{t+s}(ω). The stationarity of the Markov process means that Θ_s is a measure preserving transformation of (Ω, F, P_π).

In general, for given transition probabilities P_t(x, dy) we can have several stationary distributions π and several corresponding stationary Markov processes. Let M̃ denote the set of stationary distributions for P_t(x, dy), i.e.,

M̃ = {π : S_t π = π} .   (48)

Clearly M̃ is a convex set of probability measures. We have

Theorem 3.8. A stationary distribution π for the Markov process with transition probabilities P_t(x, dy) is an extremal point of M̃ if and only if P_π is ergodic, i.e., an extremal point in the set of all invariant measures for the shift Θ_t.



Proof. If P_π is ergodic then, by the linearity of the map π → P_π, π must be an extreme point of M̃.

To prove the converse, let E be a nontrivial set in the σ-field of invariant subsets. Let F_∞ denote the far remote future σ-field and F_{−∞} the far remote past σ-field, which is defined similarly. Let also F_0^0 be the σ-field generated by x_0 (this is the present). An invariant set is both in the remote future F_∞ and in the remote past F_{−∞}. By Lemma 3.1 the past and the future are conditionally independent given the present. Therefore

P_π[E | F_0^0] = P_π[E ∩ E | F_0^0] = P_π[E | F_0^0] P_π[E | F_0^0] ,   (49)

and therefore P_π[E | F_0^0] must be equal either to 0 or 1. This implies that for any invariant set E there exists a measurable set A ⊂ R^n such that E = {ω : x_t(ω) ∈ A for all t ∈ R} up to a set of P_π-measure 0. If the Markov process starts in A or in A^c it never leaves it. This means that 0 < π(A) < 1 and P_t(x, A^c) = 0 for π-a.e. x ∈ A and P_t(x, A) = 0 for π-a.e. x ∈ A^c. This implies that π is not extremal.
Remark 3.9. Theorem 3.8 describes completely the structure of the σ-field of invariant subsets for a stationary Markov process with transition probabilities P_t(x, dy) and stationary distribution π. Suppose that the state space can be partitioned nontrivially, i.e., there exists a set A with 0 < π(A) < 1 such that P_t(x, A) = 1 for π-almost every x ∈ A and for any t > 0, and P_t(x, A^c) = 1 for π-almost every x ∈ A^c and for any t > 0. Then the event

E = {ω : x_t(ω) ∈ A for all t ∈ R}   (50)

is a nontrivial set in the invariant σ-field. What we have proved is just the converse of this statement.
We can therefore look at the extremal points of the set of all stationary distributions, S_t π = π. Since they correspond to ergodic stationary processes, it is natural to call them ergodic stationary distributions. If π is ergodic then, by the ergodic theorem, we have

lim_{t→∞} (1/t) ∫_0^t F(Θ_s(x_·(ω))) ds = E_π[F(x_·(ω))] ,   (51)

for P_π-almost all ω. If F(x_·) = f(x_0) depends only on the state at time 0 and is bounded and measurable, then we have

lim_{t→∞} (1/t) ∫_0^t f(x_s(ω)) ds = ∫ f(x) dπ(x) ,   (52)

for π-almost all x and almost all ω. Integrating over ω gives that

lim_{t→∞} (1/t) ∫_0^t T_s f(x) ds = ∫ f(x) dπ(x) ,   (53)

for π-almost all x.
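The averages in (52) can be watched along a single simulated trajectory. A sketch for a finite chain started in its stationary law (the chain and the observable f are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented ergodic 3-state chain, its stationary law pi, and an observable f.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
f = np.array([1.0, 5.0, -2.0])

# Time average of f along one long trajectory, cf. (52).
x, total, N = 0, 0.0, 100_000
for _ in range(N):
    total += f[x]
    x = int(rng.choice(3, p=P[x]))
time_avg = total / N
assert abs(time_avg - pi @ f) < 0.1
print("time average:", time_avg, "pi-average:", pi @ f)
```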




The property of mixing is implied by the convergence of the probability measures P_t(x, dy) to π(dy). In which sense we have convergence depends on the problem under consideration, and various topologies can be used. We consider here the total variation norm (and variants of it later): for a signed measure µ on R^n, the total variation norm ‖µ‖ is defined as

‖µ‖ = sup_{|f|≤1} |µ(f)| = sup_A µ(A) − inf_A µ(A) .   (54)

Clearly convergence in total variation norm implies weak convergence.

Let us assume that there exists a stationary distribution π for the Markov process with transition probabilities P_t(x, dy) and that

lim_{t→∞} ‖P_t(x, ·) − π‖ = 0 ,   (55)

for all x. The condition (55) implies mixing. By a simple density argument it is enough to show mixing for E ∈ F_{−∞}^s and F ∈ F_t^∞. Since Θ_{−u}(F_{−∞}^s) = F_{−∞}^{s−u}, we simply have to show that, writing µ = P_π, µ(E ∩ F) converges to µ(E)µ(F) as k = t − s goes to ∞. We have

µ(E)µ(F) = ∫_E ( ∫_{R^n} P_x(Θ_{−t}F) π(dx) ) dP_π(ω) ,

µ(E ∩ F) = ∫_E ( ∫_{R^n} P_x(Θ_{−t}F) P_k(x_s(ω), dx) ) dP_π(ω) ,   (56)

and therefore

µ(E ∩ F) − µ(E)µ(F) = ∫_E ( ∫_{R^n} P_x(Θ_{−t}F) (P_k(x_s(ω), dx) − π(dx)) ) dP_π(ω) ,   (57)

from which we conclude mixing, since the inner integral is bounded by ‖P_k(x_s(ω), ·) − π‖, which tends to 0 by (55).
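For a finite chain the total variation distance in (55) can be computed exactly from matrix powers, and its decay in t observed directly. The chain below is invented; in the discrete setting the norm (54) is the sum of absolute differences of the point masses.

```python
import numpy as np

# Invented ergodic 3-state chain and its stationary law pi.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

def tv(t, x):
    """Total variation norm (54) of P_t(x, .) - pi: sum of |differences|."""
    row = np.linalg.matrix_power(P, t)[x]
    return np.abs(row - pi).sum()

dists = [max(tv(t, x) for x in range(3)) for t in (1, 5, 10, 20)]
assert all(a >= b for a, b in zip(dists, dists[1:])) and dists[-1] < 1e-9
print("worst-case TV distances:", dists)
```

The worst-case distance over starting points is non-increasing in t and decays geometrically, which is exactly condition (55) for this chain.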

4 Brownian Motion

An important example of a Markov process is Brownian motion. We will take as initial distribution the delta mass at x, i.e., the process starts at x. The transition probability function of the process has the density p_t(x, y) given by

p_t(x, y) = (2πt)^{−n/2} exp( −|x − y|^2 / 2t ) .   (58)

Then for 0 ≤ t_1 < t_2 < ··· < t_k and for Borel sets F_i we define the finite-dimensional distributions by

ν_{t_1,…,t_k}(F_1 × ··· × F_k) = ∫_{F_1 × ··· × F_k} p_{t_1}(x, x_1) p_{t_2−t_1}(x_1, x_2) ··· p_{t_k−t_{k−1}}(x_{k−1}, x_k) dx_1 ··· dx_k ,   (59)

with the convention

p_0(x, x_1) = δ_x(x_1) .   (60)

By the Kolmogorov Consistency Theorem this defines a stochastic process which we denote by B_t, with probability distribution P_x and expectation E_x. This process is the Brownian motion starting at x.
We list now some properties of the Brownian motion. Most proofs are left as exercises (use your knowledge of Gaussian random variables).

(a) The Brownian motion is a Gaussian process, i.e., for any k ≥ 1 the random variable Z ≡ (B_{t_1}, …, B_{t_k}) is an R^{nk}-valued normal random variable. This is clear since the density of the finite-dimensional distribution (59) is a product of Gaussians (the initial distribution is a degenerate Gaussian). To compute the mean and covariance consider the characteristic function, which is given for α ∈ R^{nk} by

E_x[exp(iα^T Z)] = exp( −(1/2) α^T C α + i α^T M ) ,   (61)

where

M = E_x[Z] = (x, …, x)   (62)

is the mean of Z and the covariance matrix C_{ij} = E_x[(Z_i − M_i)(Z_j − M_j)] is given by

        ⎛ t_1 I_n   t_1 I_n   ···   t_1 I_n ⎞
    C = ⎜ t_1 I_n   t_2 I_n   ···   t_2 I_n ⎟ ,   (63)
        ⎜    ⋮         ⋮       ···     ⋮    ⎟
        ⎝ t_1 I_n   t_2 I_n   ···   t_k I_n ⎠

where I_n is the n × n identity matrix. We thus find
E_x[B_t] = x ,   (64)
E_x[(B_t − x)·(B_s − x)] = n min(t, s) ,   (65)
E_x[|B_t − B_s|^2] = n|t − s| .   (66)

(b) If B_t = (B_t^{(1)}, …, B_t^{(n)}) is an n-dimensional Brownian motion, the B_t^{(j)} are independent one-dimensional Brownian motions.

(c) The Brownian motion B_t has independent increments, i.e., for 0 ≤ t_1 < t_2 < ··· < t_k the random variables B_{t_1}, B_{t_2} − B_{t_1}, …, B_{t_k} − B_{t_{k−1}} are independent. This is easy to verify since for Gaussian random variables it is enough to show that the correlations E_x[(B_{t_i} − B_{t_{i−1}})·(B_{t_j} − B_{t_{j−1}})] vanish for i ≠ j.

(d) The Brownian motion has stationary increments, i.e., B_{t+h} − B_t has a distribution which is independent of t. Since it is Gaussian it suffices to check that E_x[B_{t+h} − B_t] = 0 and that E_x[|B_{t+h} − B_t|^2] is independent of t.
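The moments (64)-(66) (here in the one-dimensional case, n = 1) can be checked by Monte Carlo, building (B_s, B_t) from independent Gaussian increments as the density (58) prescribes; the parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-d Brownian motion started at x: B_s and B_t built from independent
# Gaussian increments, so that (B_s, B_t) has the correct joint law.
x, s, t, N = 1.5, 0.7, 2.0, 200_000
Bs = x + rng.normal(0.0, np.sqrt(s), N)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), N)

assert abs(Bt.mean() - x) < 0.02                                # (64)
assert abs(((Bt - x) * (Bs - x)).mean() - min(t, s)) < 0.02     # (65), n = 1
assert abs(((Bt - Bs) ** 2).mean() - (t - s)) < 0.03            # (66), n = 1
print("Brownian moment checks passed")
```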
(e) A stochastic process x̃_t is called a modification of x_t if P{x_t = x̃_t} = 1 holds for all t. Usually one does not distinguish between a stochastic process and its modification.