
Giulia Di Nunno · Bernt Øksendal · Frank Proske

Malliavin Calculus for Lévy Processes
with Applications to Finance
Giulia Di Nunno
Bernt Øksendal
Frank Proske
Department of Mathematics
University of Oslo
0316 Oslo
Blindern
Norway



ISBN 978-3-540-78571-2 e-ISBN 978-3-540-78572-9
Library of Congress Control Number: 2008933368
Mathematics Subject Classification (2000): 60H05, 60H07, 60H40, 91B28, 93E20, 60G51, 60G57
© 2009 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Cover design: WMXDesign GmbH, Heidelberg
The cover design is based on a graph provided by F.E. Benth, M. Groth, and O. Wallin.
Printed on acid-free paper
9 8 7 6 5 4 3 2 1
springer.com
To Christian. To my parents.
G.D.N.
To Eva.
B.Ø.
To Simone, Paulina and Siegfried.
F.P.
Preface
There are already several excellent books on Malliavin calculus. However, most of them deal only with the theory of Malliavin calculus for Brownian motion, with [35] as an honorable exception. Moreover, most of them discuss only the application to regularity results for solutions of SDEs, as this was the original motivation when Paul Malliavin introduced the infinite-dimensional calculus in 1978 in [157]. In recent years, Malliavin calculus has found many applications in stochastic control and within finance. At the same time, Lévy processes have become important in financial modeling. In view of this, we have seen the need for a book that deals with Malliavin calculus for Lévy processes in general, not just Brownian motion, and that presents some of the most important and recent applications to finance.
It is the purpose of this book to try to fill this need. In this monograph we present a general Malliavin calculus for Lévy processes, covering both the Brownian motion case and the pure jump martingale case via Poisson random measures, as well as combinations of the two. We also present many of the recent applications to finance, including the following:
• The Clark–Ocone theorem and hedging formulae
• Minimal variance hedging in incomplete markets
• Sensitivity analysis results and efficient computation of the “greeks”
• Optimal portfolio with partial information
• Optimal portfolio in an anticipating environment
• Optimal consumption in a general information setting
• Insider trading
To be able to handle these applications, we develop a general theory of anticipative stochastic calculus for Lévy processes involving the Malliavin derivative, the Skorohod integral, and the forward integral, which were originally introduced in the Brownian setting only. We dedicate some chapters to the generalization of our results to the white noise framework, which often turns out to be a suitable setting for the theory. Moreover, this enables us to prove results that are general enough for the financial applications, for example, the generalized Clark–Ocone theorem.
This book is based on a series of courses that we have given in different years and to different audiences. The first one was given at the Norwegian School of Economics and Business Administration (NHH) in Bergen in 1996, at that time about Brownian motion only. Other courses were held later, each time including more updated material. In particular, we mention the courses given at the Department of Mathematics and at the Center of Mathematics for Applications (CMA) at the University of Oslo and also the intensive or compact courses presented at the University of Ulm in July 2006, at the University of Cape Town in December 2006, at the Indian Institute of Science (IIS) in Bangalore in January 2007, and at the Nanyang Technological University in Singapore in January 2008.
At all these occasions we met engaged students and attentive readers. We thank all of them for their active participation in the classes and their feedback. Our work has benefited from the collaboration and useful comments of many people, including Fred Espen Benth, Delphine David, Inga Baardshaug Eide, Xavier Gabaix, Martin Groth, Yaozhong Hu, Asma Khedher, Paul Kettler, An Ta Thi Kieu, Jørgen Sjaastad, Thilo Meyer-Brandis, Farai Julius Mhlanga, Yeliz Yolcu Okur, Olivier Menoukeu Pamen, Ulrich Rieder, Goncalo Reiss, Alexander Sokol, Agnès Sulem, Olli Wallin, Diane Wilcox, Frank Wittemann, Mihail Zervos, Tusheng Zhang, and Xunyu Zhou. We thank them all for their help. Our special thanks go to Paul Malliavin for the inspiration and continuous encouragement he has given us throughout the time we have worked on this book. We also gratefully acknowledge the technical support with computers from Drift-IT at the Department of Mathematics at the University of Oslo.
Oslo, Giulia Di Nunno
June 2008. Bernt Øksendal
Frank Proske
Contents

Introduction 1

Part I The Continuous Case: Brownian Motion

1 The Wiener–Itô Chaos Expansion 7
  1.1 Iterated Itô Integrals 7
  1.2 The Wiener–Itô Chaos Expansion 11
  1.3 Exercises 16

2 The Skorohod Integral 19
  2.1 The Skorohod Integral 19
  2.2 Some Basic Properties of the Skorohod Integral 22
  2.3 The Skorohod Integral as an Extension of the Itô Integral 23
  2.4 Exercises 25

3 Malliavin Derivative via Chaos Expansion 27
  3.1 The Malliavin Derivative 27
  3.2 Computation and Properties of the Malliavin Derivative 29
    3.2.1 Chain Rules for Malliavin Derivative 29
    3.2.2 Malliavin Derivative and Conditional Expectation 30
  3.3 Malliavin Derivative and Skorohod Integral 34
    3.3.1 Skorohod Integral as Adjoint Operator to the Malliavin Derivative 34
    3.3.2 An Integration by Parts Formula and Closability of the Skorohod Integral 36
    3.3.3 A Fundamental Theorem of Calculus 37
  3.4 Exercises 40

4 Integral Representations and the Clark–Ocone Formula 43
  4.1 The Clark–Ocone Formula 43
  4.2 The Clark–Ocone Formula under Change of Measure 45
  4.3 Application to Finance: Portfolio Selection 48
  4.4 Application to Sensitivity Analysis and Computation of the "Greeks" in Finance 54
  4.5 Exercises 59

5 White Noise, the Wick Product, and Stochastic Integration 63
  5.1 White Noise Probability Space 63
  5.2 The Wiener–Itô Chaos Expansion Revisited 65
  5.3 The Wick Product and the Hermite Transform 70
    5.3.1 Some Basic Properties of the Wick Product 72
    5.3.2 Hermite Transform and Characterization Theorem for (S)* 73
    5.3.3 The Spaces G and G* 76
    5.3.4 The Wick Product in Terms of Iterated Itô Integrals 78
    5.3.5 Wick Products and Skorohod Integration 79
  5.4 Exercises 83

6 The Hida–Malliavin Derivative on the Space Ω = S′(R) 85
  6.1 A New Definition of the Stochastic Gradient and a Generalized Chain Rule 85
  6.2 Calculus of the Hida–Malliavin Derivative and Skorohod Integral 89
    6.2.1 Wick Product vs. Ordinary Product 89
    6.2.2 Closability of the Hida–Malliavin Derivative 90
    6.2.3 Wick Chain Rule 91
    6.2.4 Integration by Parts, Duality Formula, and Skorohod Isometry 93
  6.3 Conditional Expectation on (S)* 95
  6.4 Conditional Expectation on G* 98
  6.5 A Generalized Clark–Ocone Theorem 99
  6.6 Exercises 107

7 The Donsker Delta Function and Applications 109
  7.1 Motivation: An Application of the Donsker Delta Function to Hedging 109
  7.2 The Donsker Delta Function 112
  7.3 The Multidimensional Case 120
  7.4 Exercises 127

8 The Forward Integral and Applications 129
  8.1 A Motivating Example 129
  8.2 The Forward Integral 132
  8.3 Itô Formula for Forward Integrals 135
  8.4 Relation Between the Forward Integral and the Skorohod Integral 138
  8.5 Itô Formula for Skorohod Integrals 140
  8.6 Application to Insider Trading Modeling 142
    8.6.1 Markets with No Friction 142
    8.6.2 Markets with Friction 147
  8.7 Exercises 154

Part II The Discontinuous Case: Pure Jump Lévy Processes

9 A Short Introduction to Lévy Processes 159
  9.1 Basics on Lévy Processes 159
  9.2 The Itô Formula 163
  9.3 The Itô Representation Theorem for Pure Jump Lévy Processes 166
  9.4 Application to Finance: Replicability 169
  9.5 Exercises 171

10 The Wiener–Itô Chaos Expansion 175
  10.1 Iterated Itô Integrals 175
  10.2 The Wiener–Itô Chaos Expansion 176
  10.3 Exercises 180

11 Skorohod Integrals 181
  11.1 The Skorohod Integral 181
  11.2 The Skorohod Integral as an Extension of the Itô Integral 182
  11.3 Exercises 184

12 The Malliavin Derivative 185
  12.1 Definition and Basic Properties 185
  12.2 Chain Rules for Malliavin Derivative 188
  12.3 Malliavin Derivative and Skorohod Integral 190
    12.3.1 Skorohod Integral as Adjoint Operator to the Malliavin Derivative 190
    12.3.2 Integration by Parts and Closability of the Skorohod Integral 190
    12.3.3 Fundamental Theorem of Calculus 192
  12.4 The Clark–Ocone Formula 194
  12.5 A Combination of Gaussian and Pure Jump Lévy Noises 195
  12.6 Application to Minimal Variance Hedging with Partial Information 198
  12.7 Computation of "Greeks" in the Case of Jump Diffusions 204
    12.7.1 The Barndorff–Nielsen and Shephard Model 205
    12.7.2 Malliavin Weights for "Greeks" 207
  12.8 Exercises 211

13 Lévy White Noise and Stochastic Distributions 213
  13.1 The White Noise Probability Space 213
  13.2 An Alternative Chaos Expansion and the White Noise 214
  13.3 The Wick Product 219
    13.3.1 Definition and Properties 219
    13.3.2 Wick Product and Skorohod Integral 222
    13.3.3 Wick Product vs. Ordinary Product 225
    13.3.4 Lévy–Hermite Transform 226
  13.4 Spaces of Smooth and Generalized Random Variables: G and G* 227
  13.5 The Malliavin Derivative on G* 228
  13.6 A Generalization of the Clark–Ocone Theorem 230
  13.7 A Combination of Gaussian and Pure Jump Lévy Noises in the White Noise Setting 235
  13.8 Generalized Chain Rules for the Malliavin Derivative 237
  13.9 Exercises 240

14 The Donsker Delta Function of a Lévy Process and Applications 241
  14.1 The Donsker Delta Function of a Pure Jump Lévy Process 242
  14.2 An Explicit Formula for the Donsker Delta Function 242
  14.3 Chaos Expansion of Local Time for Lévy Processes 247
  14.4 Application to Hedging in Incomplete Markets 253
  14.5 A Sensitivity Result for Jump Diffusions 256
    14.5.1 A Representation Theorem for Functions of a Class of Jump Diffusions 256
    14.5.2 Application: Computation of the "Greeks" 261
  14.6 Exercises 263

15 The Forward Integral 265
  15.1 Definition of Forward Integral and its Relation with the Skorohod Integral 265
  15.2 Itô Formula for Forward and Skorohod Integrals 268
  15.3 Exercises 272

16 Applications to Stochastic Control: Partial and Inside Information 273
  16.1 The Importance of Information in Portfolio Optimization 273
  16.2 Optimal Portfolio Problem under Partial Information 274
    16.2.1 Formalization of the Optimization Problem: General Utility Function 275
    16.2.2 Characterization of an Optimal Portfolio under Partial Information 276
    16.2.3 Examples 283
  16.3 Optimal Portfolio under Partial Information in an Anticipating Environment 286
    16.3.1 The Continuous Case: Logarithmic Utility 289
    16.3.2 The Pure Jump Case: Logarithmic Utility 293
  16.4 A Universal Optimal Consumption Rate for an Insider 298
    16.4.1 Formalization of a General Optimal Consumption Problem 300
    16.4.2 Characterization of an Optimal Consumption Rate 301
    16.4.3 Optimal Consumption and Portfolio 305
  16.5 Optimal Portfolio Problem under Inside Information 307
    16.5.1 Formalization of the Optimization Problem: General Utility Function 307
    16.5.2 Characterization of an Optimal Portfolio under Inside Information 312
    16.5.3 Examples: General Utility and Enlargement of Filtration 315
  16.6 Optimal Portfolio Problem under Inside Information: Logarithmic Utility 319
    16.6.1 The Pure Jump Case 319
    16.6.2 A Mixed Market Case 322
    16.6.3 Examples: Enlargement of Filtration 324
  16.7 Exercises 331

17 Regularity of Solutions of SDEs Driven by Lévy Processes 333
  17.1 The Pure Jump Case 333
  17.2 The General Case 337
  17.3 Exercises 339

18 Absolute Continuity of Probability Laws 341
  18.1 Existence of Densities 341
  18.2 Smooth Densities of Solutions to SDEs Driven by Lévy Processes 345
  18.3 Exercises 347

Appendix A: Malliavin Calculus on the Wiener Space 349
  A.1 Preliminary Basic Concepts 349
  A.2 Wiener Space, Cameron–Martin Space, and Stochastic Derivative 353
  A.3 Malliavin Derivative via Chaos Expansions 359

Solutions 363
References 395
Notation and Symbols 407
Index 411
Introduction
The mathematical theory now known as Malliavin calculus was first introduced by Paul Malliavin in [157] as an infinite-dimensional integration by parts technique. The purpose of this calculus was to prove results about the smoothness of densities of solutions of stochastic differential equations driven by Brownian motion. For several years this was the only known application. Therefore, since the theory was considered quite complicated by many, Malliavin calculus remained relatively unknown, also among mathematicians, for some time. Many mathematicians simply considered the theory too difficult when compared with the results it produced. Moreover, to a large extent, these results could also be obtained by using Hörmander's earlier theory of hypoelliptic operators. See also, for example, [20, 113, 224, 229].
This was the situation until 1984, when Ocone in [172] obtained an explicit interpretation of the Clark representation formula [46, 47] in terms of the Malliavin derivative. This remarkable result later became known as the Clark–Ocone formula; it is sometimes also called the Clark–Haussmann–Ocone formula in view of the contribution of Haussmann in 1979, see [97]. In 1991, Ocone and Karatzas [173] applied this result to finance. They proved that the Clark–Ocone formula can be used to obtain explicit formulae for replicating portfolios of contingent claims in complete markets.
Since then, new literature helped to distribute these results to a wider
audience, both among mathematicians and researchers in finance. See, for
example, the monographs [53, 160, 168, 211] and the introductory lecture
notes [177]; see also [205].
The next breakthrough came in 1999, when Fournié et al. [80] obtained numerically tractable formulae for the computation of the so-called greeks in finance, also known as sensitivity parameters. In recent years, many new applications of the Malliavin calculus have been found, including partial information optimal control, insider trading and, more generally, anticipative stochastic calculus.
At the same time Malliavin calculus was extended from the original setting of Brownian motion to more general Lévy processes. These extensions were at
first motivated by and tailored to the original application within the study of smoothness of densities (see, e.g., [12, 35, 37, 38, 44, 140, 141, 142, 162, 188, 189, 217, 218]) and then developed largely targeting the applications to finance, where models based on Lévy processes are now widely used (see, e.g., [25, 29, 64, 69, 147, 170, 180]). Within this last direction, some extension to random fields of Lévy type has also been developed, see, for example, [61, 62].

Other extensions of Malliavin calculus, within quantum probability, have also appeared, see, for example, [83, 84].
One way of interpreting the Malliavin derivative of a given random variable $F = F(\omega)$, $\omega \in \Omega$, on the given probability space $(\Omega, \mathcal{F}, P)$ is to regard it as a derivative with respect to the random parameter $\omega$. For this to make sense, one needs some mathematical structure on the space $\Omega$. In the original approach used by Malliavin, for the Brownian motion case, $\Omega$ is represented as the Wiener space $C_0([0,T])$ of continuous functions $\omega : [0,T] \to \mathbb{R}$ with $\omega(0) = 0$, equipped with the uniform topology. In this book we prefer to use the representation of Hida [98], namely to represent $\Omega$ as the space $\mathcal{S}'$ of tempered distributions $\omega : \mathcal{S} \to \mathbb{R}$, where $\mathcal{S}$ is the Schwartz space of rapidly decreasing smooth functions on $\mathbb{R}$ (see Chap. 5). The corresponding probability measure $P$ is constructed by means of the Bochner–Minlos theorem. This is a classical setting of white noise theory. This approach has the advantage that the Malliavin derivative $D_t F$ of a random variable $F : \mathcal{S}' \to \mathbb{R}$ can simply be regarded as a stochastic gradient.
In fact, if $\gamma$ is deterministic and in $L^2(\mathbb{R})$ (note that $L^2(\mathbb{R}) \subset \mathcal{S}'$), we define the directional derivative of $F$ in the direction $\gamma$, $D_\gamma F$, as follows:
$$D_\gamma F(\omega) = \lim_{\varepsilon \to 0} \frac{F(\omega + \varepsilon\gamma) - F(\omega)}{\varepsilon}, \qquad \omega \in \mathcal{S}',$$
if the limit exists in $L^2(P)$. If there exists a process $\Psi(\omega, t) : \Omega \times \mathbb{R} \to \mathbb{R}$ such that
$$D_\gamma F(\omega) = \int_{\mathbb{R}} \Psi(\omega, t)\gamma(t)\,dt, \qquad \omega \in \mathcal{S}',$$
for all $\gamma \in L^2(\mathbb{R})$, then we say that $F$ is Malliavin–Hida differentiable and we define
$$D_t F(\omega) := \Psi(\omega, t), \qquad \omega \in \mathcal{S}',$$
as the Malliavin–(Hida) derivative (or stochastic gradient) of $F$ at $t$.
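A simple worked example (illustrative, not from the text) shows how the definition operates. Take $F(\omega) = \langle\omega, f\rangle^2$ for a fixed deterministic $f \in L^2(\mathbb{R})$, where $\langle\omega, f\rangle$ denotes the action of the tempered distribution $\omega$ on $f$; since $\gamma \in L^2(\mathbb{R})$ acts by integration, $\langle\gamma, f\rangle = \int_{\mathbb{R}} f(t)\gamma(t)\,dt$.

```latex
% Expanding F(\omega + \varepsilon\gamma) and letting \varepsilon \to 0:
\begin{align*}
D_\gamma F(\omega)
  &= \lim_{\varepsilon \to 0}
     \frac{\langle\omega + \varepsilon\gamma, f\rangle^2 - \langle\omega, f\rangle^2}{\varepsilon}
   = 2\,\langle\omega, f\rangle\,\langle\gamma, f\rangle
   = 2\,\langle\omega, f\rangle \int_{\mathbb{R}} f(t)\gamma(t)\,dt.
\end{align*}
% Hence \Psi(\omega, t) = 2\langle\omega, f\rangle f(t), and the stochastic gradient is
\begin{align*}
D_t F(\omega) = 2\,\langle\omega, f\rangle\, f(t).
\end{align*}
```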
This gives a simple and intuitive interpretation of the Malliavin derivative in the Brownian motion case. Moreover, some of the basic properties of calculus, such as the chain rule, follow easily from this definition. See Chap. 6.
Alternatively, the Malliavin derivative can also be introduced by means of the Wiener–Itô chaos expansion [119]:
$$F = \sum_{n=0}^{\infty} I_n(f_n)$$
of the random variable $F$ as a series of iterated Itô integrals of symmetric functions $f_n \in L^2(\mathbb{R}^n)$ with respect to Brownian motion. In this setting, the Malliavin derivative gets the form
$$D_t F = \sum_{n=1}^{\infty} n\,I_{n-1}(f_n(\cdot, t)),$$
see Chap. 3, cf. [168]. This form is appealing because it has some resemblance to the derivative of a monomial:
$$\frac{d}{dx}\,x^n = n\,x^{n-1}.$$
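A standard sanity check of this formula (illustrative, using notation introduced in Chap. 1, where $I_n$ denotes the $n$-fold iterated Itô integral):

```latex
% The square of Brownian motion at time T has the two-term chaos expansion
F = W^2(T) = T + I_2(f_2), \qquad f_2(t_1, t_2) \equiv 1,
% so the chaos-expansion formula for the Malliavin derivative gives, for t \le T,
D_t F = 2\, I_1\big(f_2(\cdot, t)\big) = 2\int_0^T 1\, dW(s) = 2\,W(T),
% in agreement with the naive chain-rule guess
% D_t\big(W^2(T)\big) = 2\,W(T)\,D_t W(T) with D_t W(T) = 1 for t \le T.
```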
Moreover, the chaos expansion approach is convenient because it gives easy proofs of the Clark–Ocone formula and of several basic properties of the Malliavin derivative.
The chaos expansion approach also has the advantage that it carries over in a natural way to the Lévy process setting (see Chap. 12). This provides us with a relatively unified approach, valid for both the continuous and the discontinuous case, that is, for both Brownian motion and Lévy processes/Poisson random measures. See, for example, the proof of the Clark–Ocone theorem in the two cases. At the same time it is important to be aware of the differences between these two cases. For example, in the continuous case the Malliavin derivative is interpreted as a stochastic gradient, while in the discontinuous case it is actually a difference operator.
How to use this book

It is the purpose of this book to give an introductory presentation of the theory
of Malliavin calculus and its applications, mainly to finance. For pedagogical
reasons, and also to make the reading easier and the use more flexible, the
book is divided into two parts:
Part I. The Continuous Case: Brownian Motion
Part II. The Discontinuous Case: Pure Jump L´evy Processes
In both parts the emphasis is on the topics that are most central for the applications to finance. The results are illustrated throughout with examples. In addition, each chapter ends with exercises. Solutions to a selection of exercises, with varying levels of detail, can be found at the back of the book.
We hope the book will be useful as a graduate textbook and as a source for students and researchers in mathematics and finance. There are several possible ways of selecting topics when using this book, for example, in a graduate course:
Alternative 1. If there is enough time, all eighteen chapters could be included in the program.
Alternative 2. If the interest is only in the continuous case, then the whole
Part I gives a progressive overview of the theory, including the white noise
approach, and gives a good taste of the applications.
Alternative 3. Similarly, if the readers are already familiar with the continuous case, then Part II is self-contained and provides a good text choice to cover both theory and applications.
Alternative 4. If the interest is in an introductory overview of both the continuous and the discontinuous case, then a good selection could be to read Chaps. 1 to 4 and then Chaps. 9 to 12. This can possibly be supplemented by the chapters specifically devoted to applications, so according to interest one could choose among Chaps. 8, 15, and 16, and also Chaps. 17 and 18.
1 The Wiener–Itô Chaos Expansion
The celebrated Wiener–Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as presented in the sequel. This result, which concerns the representation of square integrable random variables in terms of an infinite orthogonal sum, was proved in its first version by Wiener in 1938 [226]. Later, in 1951, Itô [119] showed that the expansion could be expressed in terms of iterated Itô integrals in the Wiener space setting.
Before we state the theorem we introduce some useful notation and give some auxiliary results.
1.1 Iterated Itô Integrals
Let $W = W(t) = W(t, \omega)$, $t \in [0,T]$, $\omega \in \Omega$ ($T > 0$), be a one-dimensional Wiener process, or equivalently Brownian motion, on the complete probability space $(\Omega, \mathcal{F}, P)$ such that $W(0) = 0$ $P$-a.s.
For any $t$, let $\mathcal{F}_t$ be the $\sigma$-algebra generated by $W(s)$, $0 \le s \le t$, augmented by all the $P$-zero measure events. We denote the corresponding filtration by
$$\mathbb{F} = \{\mathcal{F}_t,\ t \in [0,T]\}. \tag{1.1}$$
Note that this filtration is both left- and right-continuous, that is,
$$\mathcal{F}_t = \lim_{s \uparrow t} \mathcal{F}_s := \sigma\Big(\bigcup_{s < t} \mathcal{F}_s\Big),$$
respectively,
$$\mathcal{F}_t = \lim_{u \downarrow t} \mathcal{F}_u := \bigcap_{u > t} \mathcal{F}_u.$$
See, for example, [128] or [206].
See, for example, [128] or [206].
Definition 1.1. A real function $g : [0,T]^n \to \mathbb{R}$ is called symmetric if
$$g(t_{\sigma_1}, \dots, t_{\sigma_n}) = g(t_1, \dots, t_n) \tag{1.2}$$
for all permutations $\sigma = (\sigma_1, \dots, \sigma_n)$ of $(1, 2, \dots, n)$.
Let $L^2([0,T]^n)$ be the standard space of square integrable Borel real functions on $[0,T]^n$ such that
$$\|g\|^2_{L^2([0,T]^n)} := \int_{[0,T]^n} g^2(t_1, \dots, t_n)\,dt_1 \cdots dt_n < \infty. \tag{1.3}$$
Let $\tilde{L}^2([0,T]^n) \subset L^2([0,T]^n)$ be the space of symmetric square integrable Borel real functions on $[0,T]^n$. Let us consider the set
$$S_n = \{(t_1, \dots, t_n) \in [0,T]^n : 0 \le t_1 \le t_2 \le \dots \le t_n \le T\}.$$
Note that this set $S_n$ occupies the fraction $\frac{1}{n!}$ of the whole $n$-dimensional box $[0,T]^n$. Therefore, if $g \in \tilde{L}^2([0,T]^n)$ then $g|_{S_n} \in L^2(S_n)$ and
$$\|g\|^2_{L^2([0,T]^n)} = n! \int_{S_n} g^2(t_1, \dots, t_n)\,dt_1 \cdots dt_n = n!\,\|g\|^2_{L^2(S_n)}, \tag{1.4}$$
where $\|\cdot\|_{L^2(S_n)}$ denotes the norm induced by $L^2([0,T]^n)$ on $L^2(S_n)$, the space of the square integrable functions on $S_n$.
If $f$ is a real function on $[0,T]^n$, then its symmetrization $\tilde{f}$ is defined by
$$\tilde{f}(t_1, \dots, t_n) = \frac{1}{n!} \sum_{\sigma} f(t_{\sigma_1}, \dots, t_{\sigma_n}), \tag{1.5}$$
where the sum is taken over all permutations $\sigma$ of $(1, \dots, n)$. Note that $\tilde{f} = f$ if and only if $f$ is symmetric.
Example 1.2. The symmetrization of the function
$$f(t_1, t_2) = t_1^2 + t_2 \sin t_1, \qquad (t_1, t_2) \in [0,T]^2,$$
is
$$\tilde{f}(t_1, t_2) = \frac{1}{2}\big(t_1^2 + t_2^2 + t_2 \sin t_1 + t_1 \sin t_2\big), \qquad (t_1, t_2) \in [0,T]^2.$$
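The symmetrization (1.5) is straightforward to compute numerically. The sketch below (illustrative, not from the text; `symmetrize` is a hypothetical helper) averages $f$ over all permutations of its arguments and reproduces Example 1.2 pointwise.

```python
import math
from itertools import permutations

def symmetrize(f, n):
    """Return the symmetrization (1.5) of an n-argument function f."""
    def f_tilde(*t):
        return sum(
            f(*(t[i] for i in sigma)) for sigma in permutations(range(n))
        ) / math.factorial(n)
    return f_tilde

# Example 1.2: f(t1, t2) = t1^2 + t2*sin(t1)
f = lambda t1, t2: t1**2 + t2 * math.sin(t1)
f_tilde = symmetrize(f, 2)

t1, t2 = 0.3, 1.1
closed_form = 0.5 * (t1**2 + t2**2 + t2 * math.sin(t1) + t1 * math.sin(t2))
print(abs(f_tilde(t1, t2) - closed_form) < 1e-12)   # the two expressions agree
```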
Definition 1.3. Let $f$ be a deterministic function defined on $S_n$ ($n \ge 1$) such that
$$\|f\|^2_{L^2(S_n)} := \int_{S_n} f^2(t_1, \dots, t_n)\,dt_1 \cdots dt_n < \infty.$$
Then we can define the $n$-fold iterated Itô integral as
$$J_n(f) := \int_0^T \int_0^{t_n} \cdots \int_0^{t_3} \int_0^{t_2} f(t_1, \dots, t_n)\,dW(t_1)\,dW(t_2) \cdots dW(t_{n-1})\,dW(t_n). \tag{1.6}$$
Note that at each iteration $i = 1, \dots, n$ the corresponding Itô integral with respect to $dW(t_i)$ is well-defined, since the integrand
$$\int_0^{t_i} \cdots \int_0^{t_2} f(t_1, \dots, t_n)\,dW(t_1) \cdots dW(t_{i-1}), \qquad t_i \in [0, t_{i+1}],$$
is a stochastic process that is $\mathbb{F}$-adapted and square integrable with respect to $dP \times dt_i$. Thus, (1.6) is well-defined.
Thanks to the construction of the Itô integral we have that $J_n(f)$ belongs to $L^2(P)$, that is, the space of square integrable random variables. We denote the norm of $X \in L^2(P)$ by
$$\|X\|_{L^2(P)} := E\big[X^2\big]^{1/2} = \Big(\int_{\Omega} X^2(\omega)\,P(d\omega)\Big)^{1/2}.$$
Applying the Itô isometry iteratively, if $g \in L^2(S_m)$ and $h \in L^2(S_n)$, with $m < n$, we can see that
$$
\begin{aligned}
E\big[J_m(g)J_n(h)\big]
&= E\Big[\int_0^T \int_0^{s_m} \cdots \int_0^{s_2} g(s_1, \dots, s_m)\,dW(s_1) \cdots dW(s_m) \\
&\qquad \cdot \int_0^T \int_0^{s_m} \cdots \int_0^{t_2} h(t_1, \dots, t_{n-m}, s_1, \dots, s_m)\,dW(t_1) \cdots dW(t_{n-m})\,dW(s_1) \cdots dW(s_m)\Big] \\
&= \int_0^T E\Big[\int_0^{s_m} \cdots \int_0^{s_2} g(s_1, \dots, s_{m-1}, s_m)\,dW(s_1) \cdots dW(s_{m-1}) \\
&\qquad \cdot \int_0^{s_m} \cdots \int_0^{t_2} h(t_1, \dots, s_{m-1}, s_m)\,dW(t_1) \cdots dW(s_{m-1})\Big]\,ds_m = \cdots \\
&= \int_0^T \int_0^{s_m} \cdots \int_0^{s_2} g(s_1, s_2, \dots, s_m)\,E\Big[\int_0^{s_1} \cdots \int_0^{t_2} h(t_1, \dots, t_{n-m}, s_1, \dots, s_m)\,dW(t_1) \cdots dW(t_{n-m})\Big]\,ds_1 \cdots ds_m \\
&= 0
\end{aligned}
\tag{1.7}
$$
because the expected value of an Itô integral is zero. On the contrary, if both $g$ and $h$ belong to $L^2(S_n)$, then
$$
\begin{aligned}
E\big[J_n(g)J_n(h)\big]
&= \int_0^T E\Big[\int_0^{s_n} \cdots \int_0^{s_2} g(s_1, \dots, s_n)\,dW(s_1) \cdots dW(s_{n-1}) \\
&\qquad \cdot \int_0^{s_n} \cdots \int_0^{s_2} h(s_1, \dots, s_n)\,dW(s_1) \cdots dW(s_{n-1})\Big]\,ds_n = \cdots \\
&= \int_0^T \cdots \int_0^{s_2} g(s_1, \dots, s_n)\,h(s_1, \dots, s_n)\,ds_1 \cdots ds_n = (g, h)_{L^2(S_n)}.
\end{aligned}
\tag{1.8}
$$
We summarize these results as follows.

Proposition 1.4. The following relations hold true:
$$E[J_m(g)J_n(h)] = \begin{cases} 0, & n \ne m \\ (g, h)_{L^2(S_n)}, & n = m \end{cases} \qquad (m, n = 1, 2, \dots), \tag{1.9}$$
where
$$(g, h)_{L^2(S_n)} := \int_{S_n} g(t_1, \dots, t_n)\,h(t_1, \dots, t_n)\,dt_1 \cdots dt_n$$
is the inner product of $L^2(S_n)$. In particular, we have
$$\|J_n(h)\|_{L^2(P)} = \|h\|_{L^2(S_n)}. \tag{1.10}$$

Remark 1.5. Note that (1.9) also holds for $n = 0$ or $m = 0$ if we define $J_0(g) = g$, when $g$ is a constant, and $(g, h)_{L^2(S_0)} = gh$, when $g, h$ are constants.

Remark 1.6. It is straightforward to see that the $n$-fold iterated Itô integral
$$L^2(S_n) \ni f \longmapsto J_n(f) \in L^2(P)$$
is a linear operator, that is, $J_n(af + bg) = aJ_n(f) + bJ_n(g)$, for $f, g \in L^2(S_n)$ and $a, b \in \mathbb{R}$.
Definition 1.7. If $g \in \tilde{L}^2([0,T]^n)$ we define
$$I_n(g) := \int_{[0,T]^n} g(t_1, \dots, t_n)\,dW(t_1) \cdots dW(t_n) := n!\,J_n(g). \tag{1.11}$$
We also call the $I_n(g)$ here above iterated $n$-fold Itô integrals.
Note that from (1.9) and (1.11) we have
$$\|I_n(g)\|^2_{L^2(P)} = E[I_n^2(g)] = E[(n!)^2 J_n^2(g)] = (n!)^2\,\|g\|^2_{L^2(S_n)} = n!\,\|g\|^2_{L^2([0,T]^n)} \tag{1.12}$$
for all $g \in \tilde{L}^2([0,T]^n)$. Moreover, if $g \in \tilde{L}^2([0,T]^m)$ and $h \in \tilde{L}^2([0,T]^n)$, we have
$$E[I_m(g)I_n(h)] = \begin{cases} 0, & n \ne m \\ (g, h)_{L^2([0,T]^n)}, & n = m \end{cases} \qquad (m, n = 1, 2, \dots),$$
with $(g, h)_{L^2([0,T]^n)} = n!\,(g, h)_{L^2(S_n)}$.
There is a useful formula due to Itˆo [119] for the computation of the
iterated Itˆo integral. This formula relies on the relationship between Hermite
polynomials and the Gaussian distribution density. Recall that the Hermite

polynomials h
n
(x), x ∈ R, n =0, 1, 2, are defined by
h
n
(x)=(−1)
n
e
1
2
x
2
d
n
dx
n
(e

1
2
x
2
),n=0, 1, 2, , (1.13)
1.2 The Wiener–Itˆo Chaos Expansion 11
Thus, the first Hermite polynomials are
$$h_0(x) = 1, \quad h_1(x) = x, \quad h_2(x) = x^2 - 1, \quad h_3(x) = x^3 - 3x,$$
$$h_4(x) = x^4 - 6x^2 + 3, \quad h_5(x) = x^5 - 10x^3 + 15x, \quad \dots$$
We also recall that the family of Hermite polynomials constitutes an orthogonal basis for $L^2(\mathbb{R},\mu(dx))$ if $\mu(dx) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}}\,dx$ (see, e.g., [214]).
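The polynomials above can equivalently be generated by the three-term recurrence $h_{k+1}(x) = x\,h_k(x) - k\,h_{k-1}(x)$, which follows from definition (1.13). A minimal sketch (our own illustrative helper, representing a polynomial by its coefficient list, lowest degree first):

```python
def hermite_coeffs(n):
    """Coefficients (lowest degree first) of the probabilists' Hermite
    polynomial h_n, built from h_{k+1}(x) = x h_k(x) - k h_{k-1}(x)."""
    h_prev, h_curr = [1], [0, 1]              # h_0 = 1, h_1 = x
    if n == 0:
        return h_prev
    for k in range(1, n):
        x_h = [0] + h_curr                    # multiply h_k by x
        padded = h_prev + [0] * (len(x_h) - len(h_prev))
        h_prev, h_curr = h_curr, [a - k * b for a, b in zip(x_h, padded)]
    return h_curr

print(hermite_coeffs(4))  # [3, 0, -6, 0, 1], i.e. x^4 - 6x^2 + 3
print(hermite_coeffs(5))  # [0, 15, 0, -10, 0, 1], i.e. x^5 - 10x^3 + 15x
```

The output reproduces the list of polynomials above.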
Proposition 1.8. If $\xi_1, \xi_2, \dots$ are orthonormal functions in $L^2([0,T])$, we have that
$$I_n\big(\xi_1^{\otimes \alpha_1}\,\hat{\otimes}\,\cdots\,\hat{\otimes}\,\xi_m^{\otimes \alpha_m}\big) = \prod_{k=1}^{m} h_{\alpha_k}\Big(\int_0^T \xi_k(t)\,dW(t)\Big), \qquad (1.14)$$
with $\alpha_1 + \cdots + \alpha_m = n$. Here $\otimes$ denotes the tensor power and $\alpha_k \in \{0,1,2,\dots\}$ for all $k$.
See [119]. In general, the tensor product $f \otimes g$ of two functions $f,g$ is defined by
$$(f \otimes g)(x_1,x_2) = f(x_1)g(x_2)$$
and the symmetrized tensor product $f\,\hat{\otimes}\,g$ is the symmetrization of $f \otimes g$. In particular, from (1.14), we have
$$n!\int_0^T\!\!\int_0^{t_n}\!\!\cdots\int_0^{t_2} g(t_1)g(t_2)\cdots g(t_n)\,dW(t_1)\cdots dW(t_n) = \|g\|^n\, h_n\Big(\frac{\theta}{\|g\|}\Big) \qquad (1.15)$$
for the tensor power of $g \in L^2([0,T])$. Here we have used $\|g\| = \|g\|_{L^2([0,T])}$ and $\theta = \int_0^T g(t)\,dW(t)$.
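For $n = 2$, formula (1.15) says that $2\int_0^T\int_0^{t_2} g(t_1)g(t_2)\,dW(t_1)\,dW(t_2) = \|g\|^2 h_2(\theta/\|g\|) = \theta^2 - \|g\|^2$. In a discretization with $a_i = g(t_i)\Delta W_i$ this reduces to the algebraic identity $2\sum_{j<i} a_j a_i = (\sum_i a_i)^2 - \sum_i a_i^2$, with $\sum_i a_i^2 \approx \|g\|^2$. A pathwise sketch (the grid and the choice of $g$ are our own illustrative assumptions):

```python
import numpy as np

# Pathwise check of (1.15) for n = 2: with a_i = g(t_i) dW_i,
#   2 * sum_{j<i} a_j a_i = (sum_i a_i)^2 - sum_i a_i^2,
# and sum_i a_i^2 -> ||g||^2, so the iterated integral approximates
#   theta^2 - ||g||^2 = ||g||^2 h_2(theta / ||g||).
rng = np.random.default_rng(42)
T, n_steps = 1.0, 100_000
dt = T / n_steps
t = np.arange(n_steps) * dt

g = np.cos(t)                                 # some g in L^2([0,T])
a = g * rng.normal(0.0, np.sqrt(dt), size=n_steps)

theta = a.sum()                               # ~ int_0^T g dW
iterated = ((np.cumsum(a) - a) * a).sum()     # sum_{j<i} a_j a_i
g_norm_sq = (g**2).sum() * dt                 # ~ ||g||^2_{L^2([0,T])}

print(2.0 * iterated, theta**2 - g_norm_sq)   # the two values agree closely
```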
Example 1.9. Let $g \equiv 1$ and $n = 3$; then we get
$$6\int_0^T\!\!\int_0^{t_3}\!\!\int_0^{t_2} 1\,dW(t_1)\,dW(t_2)\,dW(t_3) = T^{3/2}\,h_3\Big(\frac{W(T)}{T^{1/2}}\Big) = W^3(T) - 3T\,W(T).$$
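Since $W(T) \sim \mathcal{N}(0,T)$, the identity of Example 1.9 can be checked in distribution: $\xi = W^3(T) - 3TW(T) = I_3(1)$ should have mean zero and, by (1.12), second moment $3!\,\|1\|^2_{L^2([0,T]^3)} = 6T^3$. A Monte Carlo sketch (the sample size and the value of $T$ are arbitrary illustrative choices):

```python
import numpy as np

# Monte Carlo sketch: xi = W^3(T) - 3 T W(T) equals I_3(1) by Example 1.9,
# so E[xi] = 0 and, by (1.12), E[xi^2] = 3! * T^3. Sample size and T are
# arbitrary illustrative choices.
rng = np.random.default_rng(7)
T = 2.0
W_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)
xi = W_T**3 - 3.0 * T * W_T

print(xi.mean())        # ~ 0
print((xi**2).mean())   # ~ 6 * T**3 = 48
```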
1.2 The Wiener–Itô Chaos Expansion

Theorem 1.10. The Wiener–Itô chaos expansion. Let $\xi$ be an $\mathcal{F}_T$-measurable random variable in $L^2(P)$. Then there exists a unique sequence $\{f_n\}_{n=0}^{\infty}$ of functions $f_n \in \tilde{L}^2([0,T]^n)$ such that
$$\xi = \sum_{n=0}^{\infty} I_n(f_n), \qquad (1.16)$$
where the convergence is in $L^2(P)$. Moreover, we have the isometry
$$\|\xi\|^2_{L^2(P)} = \sum_{n=0}^{\infty} n!\,\|f_n\|^2_{L^2([0,T]^n)}. \qquad (1.17)$$
Proof. By the Itô representation theorem there exists an $\mathbb{F}$-adapted process $\varphi_1(s_1)$, $0 \le s_1 \le T$, such that
$$E\Big[\int_0^T \varphi_1^2(s_1)\,ds_1\Big] \le E\big[\xi^2\big] \qquad (1.18)$$
and
$$\xi = E[\xi] + \int_0^T \varphi_1(s_1)\,dW(s_1). \qquad (1.19)$$
Define
$$g_0 = E[\xi].$$
For almost all $s_1 \le T$ we can apply the Itô representation theorem to $\varphi_1(s_1)$ to conclude that there exists an $\mathbb{F}$-adapted process $\varphi_2(s_2,s_1)$, $0 \le s_2 \le s_1$, such that
$$E\Big[\int_0^{s_1} \varphi_2^2(s_2,s_1)\,ds_2\Big] \le E[\varphi_1^2(s_1)] < \infty \qquad (1.20)$$
and
$$\varphi_1(s_1) = E[\varphi_1(s_1)] + \int_0^{s_1} \varphi_2(s_2,s_1)\,dW(s_2). \qquad (1.21)$$
Substituting (1.21) in (1.19) we get
$$\xi = g_0 + \int_0^T g_1(s_1)\,dW(s_1) + \int_0^T\!\!\int_0^{s_1} \varphi_2(s_2,s_1)\,dW(s_2)\,dW(s_1), \qquad (1.22)$$
where
$$g_1(s_1) = E[\varphi_1(s_1)].$$
Note that by (1.18), (1.20), and the Itô isometry we have
$$E\Big[\Big(\int_0^T\!\!\int_0^{s_1} \varphi_2(s_2,s_1)\,dW(s_2)\,dW(s_1)\Big)^2\Big] = \int_0^T\!\!\int_0^{s_1} E[\varphi_2^2(s_2,s_1)]\,ds_2\,ds_1 \le E[\xi^2].$$
Similarly, for almost all $s_2 \le s_1 \le T$, we apply the Itô representation theorem to $\varphi_2(s_2,s_1)$ and we get an $\mathbb{F}$-adapted process $\varphi_3(s_3,s_2,s_1)$, $0 \le s_3 \le s_2$, such that
$$E\Big[\int_0^{s_2} \varphi_3^2(s_3,s_2,s_1)\,ds_3\Big] \le E[\varphi_2^2(s_2,s_1)] < \infty \qquad (1.23)$$
and
$$\varphi_2(s_2,s_1) = E[\varphi_2(s_2,s_1)] + \int_0^{s_2} \varphi_3(s_3,s_2,s_1)\,dW(s_3). \qquad (1.24)$$
Substituting (1.24) in (1.22) we get
$$\xi = g_0 + \int_0^T g_1(s_1)\,dW(s_1) + \int_0^T\!\!\int_0^{s_1} g_2(s_2,s_1)\,dW(s_2)\,dW(s_1)$$
$$\qquad + \int_0^T\!\!\int_0^{s_1}\!\!\int_0^{s_2} \varphi_3(s_3,s_2,s_1)\,dW(s_3)\,dW(s_2)\,dW(s_1),$$
where
$$g_2(s_2,s_1) = E[\varphi_2(s_2,s_1)], \qquad 0 \le s_2 \le s_1 \le T.$$
By (1.18), (1.20), (1.23), and the Itô isometry we have
$$E\Big[\Big(\int_0^T\!\!\int_0^{s_1}\!\!\int_0^{s_2} \varphi_3(s_3,s_2,s_1)\,dW(s_3)\,dW(s_2)\,dW(s_1)\Big)^2\Big] \le E\big[\xi^2\big].$$
By iterating this procedure we obtain after $n$ steps a process $\varphi_{n+1}(t_1,t_2,\dots,t_{n+1})$, $0 \le t_1 \le t_2 \le \cdots \le t_{n+1} \le T$, and $n+1$ deterministic functions $g_0, g_1, \dots, g_n$, with $g_0$ constant and $g_k$ defined on $S_k$ for $1 \le k \le n$, such that
$$\xi = \sum_{k=0}^{n} J_k(g_k) + \int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)},$$
where
$$\int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)} := \int_0^T\!\!\int_0^{t_{n+1}}\!\!\cdots\int_0^{t_2} \varphi_{n+1}(t_1,\dots,t_{n+1})\,dW(t_1)\cdots dW(t_{n+1})$$
is the $(n+1)$-fold iterated integral of $\varphi_{n+1}$. Moreover,
$$E\Big[\Big(\int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)}\Big)^2\Big] \le E\big[\xi^2\big].$$
In particular, the family
$$\psi_{n+1} := \int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)}, \qquad n = 1,2,\dots,$$
is bounded in $L^2(P)$ and, from the Itô isometry,
$$(\psi_{n+1}, J_k(f_k))_{L^2(P)} = 0 \qquad (1.25)$$
for $k \le n$, $f_k \in L^2([0,T]^k)$. Hence we have
$$\|\xi\|^2_{L^2(P)} = \sum_{k=0}^{n} \|J_k(g_k)\|^2_{L^2(P)} + \|\psi_{n+1}\|^2_{L^2(P)}.$$
In particular,
$$\sum_{k=0}^{n} \|J_k(g_k)\|^2_{L^2(P)} < \infty, \qquad n = 1,2,\dots,$$
and therefore $\sum_{k=0}^{\infty} J_k(g_k)$ is convergent in $L^2(P)$. Hence
$$\lim_{n\to\infty} \psi_{n+1} =: \psi$$
exists in $L^2(P)$. But by (1.25) we have
$$(J_k(f_k), \psi)_{L^2(P)} = 0$$
for all $k$ and for all $f_k \in L^2([0,T]^k)$. In particular, by (1.15) this implies that
$$E\Big[h_k\Big(\frac{\theta}{\|g\|}\Big)\cdot\psi\Big] = 0$$
for all $g \in L^2([0,T])$ and for all $k \ge 0$, where $\theta = \int_0^T g(t)\,dW(t)$. But then, from the definition of the Hermite polynomials,
$$E[\theta^k \cdot \psi] = 0$$
for all $k \ge 0$, which again implies that
$$E[\exp\theta \cdot \psi] = \sum_{k=0}^{\infty} \frac{1}{k!}\,E[\theta^k \cdot \psi] = 0.$$
Since the family
$$\{\exp\theta : g \in L^2([0,T])\}$$
is total in $L^2(P)$ (see [178, Lemma 4.3.2]), we conclude that $\psi = 0$. Hence
$$\xi = \sum_{k=0}^{\infty} J_k(g_k) \qquad (1.26)$$
and
$$\|\xi\|^2_{L^2(P)} = \sum_{k=0}^{\infty} \|J_k(g_k)\|^2_{L^2(P)}. \qquad (1.27)$$
Finally, to obtain (1.16)–(1.17) we proceed as follows. The function $g_n$ is defined only on $S_n$, but we can extend $g_n$ to $[0,T]^n$ by putting
$$g_n(t_1,\dots,t_n) = 0, \qquad (t_1,\dots,t_n) \in [0,T]^n \setminus S_n.$$
Now define $f_n := \tilde{g}_n$ to be the symmetrization of $g_n$, cf. (1.5). Then
$$I_n(f_n) = n!\,J_n(f_n) = n!\,J_n(\tilde{g}_n) = J_n(g_n)$$
and (1.16) and (1.17) follow from (1.26) and (1.27), respectively. □
Example 1.11. What is the Wiener–Itô expansion of $\xi = W^2(T)$? From (1.15) we get
$$2\int_0^T\!\!\int_0^{t_2} 1\,dW(t_1)\,dW(t_2) = T\,h_2\Big(\frac{W(T)}{T^{1/2}}\Big) = W^2(T) - T,$$
and therefore
$$\xi = W^2(T) = T + I_2(1).$$
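The expansion $\xi = W^2(T) = T + I_2(1)$ has $f_0 = T$ and $f_2 = 1$, so the isometry (1.17) predicts $\|\xi\|^2_{L^2(P)} = T^2 + 2!\,T^2 = 3T^2$, which is the Gaussian moment $E[W^4(T)] = 3T^2$. A Monte Carlo sketch (the sample size and the value of $T$ are illustrative choices):

```python
import numpy as np

# Monte Carlo sketch of the isometry (1.17) for xi = W^2(T) = T + I_2(1):
# f_0 = T and f_2 = 1 give E[xi^2] = T^2 + 2! * T^2 = 3 T^2.
rng = np.random.default_rng(3)
T = 1.5
W_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)
xi = W_T**2

print((xi**2).mean())   # ~ 3 * T**2 = 6.75
```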
Example 1.12. Note that for a fixed $t \in (0,T)$ we have
$$\int_0^T\!\!\int_0^{t_2} \chi_{\{t_1 < t < t_2\}}(t_1,t_2)\,dW(t_1)\,dW(t_2) = \int_t^T W(t)\,dW(t_2) = W(t)\big(W(T) - W(t)\big).$$
Hence, if we put
$$\xi = W(t)\big(W(T) - W(t)\big), \qquad g(t_1,t_2) = \chi_{\{t_1 < t < t_2\}},$$
we can see that
$$\xi = J_2(g) = 2J_2(\tilde{g}) = I_2(f_2),$$
where
$$f_2(t_1,t_2) = \tilde{g}(t_1,t_2) = \frac{1}{2}\Big(\chi_{\{t_1 < t < t_2\}} + \chi_{\{t_2 < t < t_1\}}\Big).$$
Here and in the sequel we denote the indicator function by
$$\chi = \chi_A(x) = \chi_{\{x \in A\}} := \begin{cases} 1, & x \in A, \\ 0, & x \notin A. \end{cases}$$
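Since $\chi_{\{t_1<t<t_2\}}$ and $\chi_{\{t_2<t<t_1\}}$ are supported on disjoint sets of measure $t(T-t)$ each, $\|f_2\|^2_{L^2([0,T]^2)} = \frac{1}{2}\,t(T-t)$, so (1.12) gives $E[\xi^2] = 2!\,\|f_2\|^2 = t(T-t)$, in agreement with the independence of $W(t)$ and $W(T) - W(t)$. A Monte Carlo sketch (the values of $T$, $t$, and the sample size are illustrative choices):

```python
import numpy as np

# Monte Carlo sketch for Example 1.12: xi = W(t)(W(T) - W(t)) = I_2(f_2)
# with ||f_2||^2 = t(T - t)/2, so (1.12) predicts E[xi^2] = t (T - t).
rng = np.random.default_rng(11)
T, t = 1.0, 0.3
W_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)
incr = rng.normal(0.0, np.sqrt(T - t), size=1_000_000)  # W(T) - W(t)
xi = W_t * incr

print(xi.mean())        # ~ 0
print((xi**2).mean())   # ~ t * (T - t) = 0.21
```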