
Giulia Di Nunno · Bernt Øksendal · Frank Proske

Malliavin Calculus for Lévy Processes with Applications to Finance



Giulia Di Nunno
Bernt Øksendal
Frank Proske
Department of Mathematics
University of Oslo
0316 Oslo
Blindern
Norway





ISBN 978-3-540-78571-2

e-ISBN 978-3-540-78572-9

Library of Congress Control Number: 2008933368
Mathematics Subject Classification (2000): 60H05, 60H07, 60H40, 91B28, 93E20, 60G51, 60G57
© 2009 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of
this publication or parts thereof is permitted only under the provisions of the German Copyright Law
of September 9, 1965, in its current version, and permission for use must always be obtained from
Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Cover design: WMXDesign GmbH, Heidelberg
The cover design is based on a graph provided by F.E. Benth, M. Groth, and O. Wallin.
Printed on acid-free paper
springer.com



To Christian. To my parents.
G.D.N.
To Eva.
B.Ø.
To Simone, Paulina and Siegfried.
F.P.




Preface

There are already several excellent books on Malliavin calculus. However, most of them deal only with the theory of Malliavin calculus for Brownian motion, with [35] as an honorable exception. Moreover, most of them discuss only the application to regularity results for solutions of SDEs, as this was the original motivation when Paul Malliavin introduced the infinite-dimensional calculus in 1978 in [157]. In recent years, Malliavin calculus has found many applications in stochastic control and in finance. At the same time, Lévy processes have become important in financial modeling. In view of this, we have seen the need for a book that deals with Malliavin calculus for Lévy processes in general, not just Brownian motion, and that presents some of the most important and recent applications to finance.
It is the purpose of this book to try to fill this need. In this monograph we present a general Malliavin calculus for Lévy processes, covering both the Brownian motion case and the pure jump martingale case via Poisson random measures, and also some combination of the two. We also present many of the recent applications to finance, including the following:

• The Clark–Ocone theorem and hedging formulae
• Minimal variance hedging in incomplete markets
• Sensitivity analysis results and efficient computation of the “greeks”
• Optimal portfolio with partial information
• Optimal portfolio in an anticipating environment
• Optimal consumption in a general information setting
• Insider trading

To be able to handle these applications, we develop a general theory of anticipative stochastic calculus for Lévy processes involving the Malliavin derivative, the Skorohod integral, and the forward integral, which were originally introduced in the Brownian setting only. We dedicate some chapters to the generalization of our results to the white noise framework, which often turns out to be a suitable setting for the theory. Moreover, this enables us to prove results that are general enough for the financial applications, for example, the generalized Clark–Ocone theorem.
This book is based on a series of courses that we have given in different years and to different audiences. The first one was given at the Norwegian School of Economics and Business Administration (NHH) in Bergen in 1996, at that time about Brownian motion only. Other courses were held later, each time including more updated material. In particular, we mention the courses given at the Department of Mathematics and at the Center of Mathematics for Applications (CMA) at the University of Oslo and also the intensive or compact courses presented at the University of Ulm in July 2006, at the University of Cape Town in December 2006, at the Indian Institute of Science (IIS) in Bangalore in January 2007, and at the Nanyang Technological University in Singapore in January 2008.
At all these occasions we met engaged students and attentive readers. We thank all of them for their active participation in the classes and their feedback. Our work has benefitted from the collaboration and useful comments of many people, including Fred Espen Benth, Delphine David, Inga Baardshaug Eide, Xavier Gabaix, Martin Groth, Yaozhong Hu, Asma Khedher, Paul Kettler, An Ta Thi Kieu, Jørgen Sjaastad, Thilo Meyer-Brandis, Farai Julius Mhlanga, Yeliz Yolcu Okur, Olivier Menoukeu Pamen, Ulrich Rieder, Goncalo Reiss, Alexander Sokol, Agnès Sulem, Olli Wallin, Diane Wilcox, Frank Wittemann, Mihail Zervos, Tusheng Zhang, and Xunyu Zhou. We thank them all for their help. Our special thanks go to Paul Malliavin for the inspiration and continuous encouragement he has given us throughout the time we have worked on this book. We also acknowledge with gratitude the technical support with computers from Drift-IT at the Department of Mathematics at the University of Oslo.

Oslo,
June 2008.

Giulia Di Nunno
Bernt Øksendal
Frank Proske



Contents

Introduction

Part I  The Continuous Case: Brownian Motion

1 The Wiener–Itô Chaos Expansion
  1.1 Iterated Itô Integrals
  1.2 The Wiener–Itô Chaos Expansion
  1.3 Exercises

2 The Skorohod Integral
  2.1 The Skorohod Integral
  2.2 Some Basic Properties of the Skorohod Integral
  2.3 The Skorohod Integral as an Extension of the Itô Integral
  2.4 Exercises

3 Malliavin Derivative via Chaos Expansion
  3.1 The Malliavin Derivative
  3.2 Computation and Properties of the Malliavin Derivative
    3.2.1 Chain Rules for Malliavin Derivative
    3.2.2 Malliavin Derivative and Conditional Expectation
  3.3 Malliavin Derivative and Skorohod Integral
    3.3.1 Skorohod Integral as Adjoint Operator to the Malliavin Derivative
    3.3.2 An Integration by Parts Formula and Closability of the Skorohod Integral
    3.3.3 A Fundamental Theorem of Calculus
  3.4 Exercises

4 Integral Representations and the Clark–Ocone Formula
  4.1 The Clark–Ocone Formula
  4.2 The Clark–Ocone Formula under Change of Measure
  4.3 Application to Finance: Portfolio Selection
  4.4 Application to Sensitivity Analysis and Computation of the “Greeks” in Finance
  4.5 Exercises

5 White Noise, the Wick Product, and Stochastic Integration
  5.1 White Noise Probability Space
  5.2 The Wiener–Itô Chaos Expansion Revisited
  5.3 The Wick Product and the Hermite Transform
    5.3.1 Some Basic Properties of the Wick Product
    5.3.2 Hermite Transform and Characterization Theorem for (S)∗
    5.3.3 The Spaces G and G∗
    5.3.4 The Wick Product in Terms of Iterated Itô Integrals
    5.3.5 Wick Products and Skorohod Integration
  5.4 Exercises

6 The Hida–Malliavin Derivative on the Space Ω = S′(R)
  6.1 A New Definition of the Stochastic Gradient and a Generalized Chain Rule
  6.2 Calculus of the Hida–Malliavin Derivative and Skorohod Integral
    6.2.1 Wick Product vs. Ordinary Product
    6.2.2 Closability of the Hida–Malliavin Derivative
    6.2.3 Wick Chain Rule
    6.2.4 Integration by Parts, Duality Formula, and Skorohod Isometry
  6.3 Conditional Expectation on (S)∗
  6.4 Conditional Expectation on G∗
  6.5 A Generalized Clark–Ocone Theorem
  6.6 Exercises

7 The Donsker Delta Function and Applications
  7.1 Motivation: An Application of the Donsker Delta Function to Hedging
  7.2 The Donsker Delta Function
  7.3 The Multidimensional Case
  7.4 Exercises

8 The Forward Integral and Applications
  8.1 A Motivating Example
  8.2 The Forward Integral
  8.3 Itô Formula for Forward Integrals
  8.4 Relation Between the Forward Integral and the Skorohod Integral
  8.5 Itô Formula for Skorohod Integrals
  8.6 Application to Insider Trading Modeling
    8.6.1 Markets with No Friction
    8.6.2 Markets with Friction
  8.7 Exercises

Part II  The Discontinuous Case: Pure Jump Lévy Processes

9 A Short Introduction to Lévy Processes
  9.1 Basics on Lévy Processes
  9.2 The Itô Formula
  9.3 The Itô Representation Theorem for Pure Jump Lévy Processes
  9.4 Application to Finance: Replicability
  9.5 Exercises

10 The Wiener–Itô Chaos Expansion
  10.1 Iterated Itô Integrals
  10.2 The Wiener–Itô Chaos Expansion
  10.3 Exercises

11 Skorohod Integrals
  11.1 The Skorohod Integral
  11.2 The Skorohod Integral as an Extension of the Itô Integral
  11.3 Exercises

12 The Malliavin Derivative
  12.1 Definition and Basic Properties
  12.2 Chain Rules for Malliavin Derivative
  12.3 Malliavin Derivative and Skorohod Integral
    12.3.1 Skorohod Integral as Adjoint Operator to the Malliavin Derivative
    12.3.2 Integration by Parts and Closability of the Skorohod Integral
    12.3.3 Fundamental Theorem of Calculus
  12.4 The Clark–Ocone Formula
  12.5 A Combination of Gaussian and Pure Jump Lévy Noises
  12.6 Application to Minimal Variance Hedging with Partial Information
  12.7 Computation of “Greeks” in the Case of Jump Diffusions
    12.7.1 The Barndorff–Nielsen and Shephard Model
    12.7.2 Malliavin Weights for “Greeks”
  12.8 Exercises

13 Lévy White Noise and Stochastic Distributions
  13.1 The White Noise Probability Space
  13.2 An Alternative Chaos Expansion and the White Noise
  13.3 The Wick Product
    13.3.1 Definition and Properties
    13.3.2 Wick Product and Skorohod Integral
    13.3.3 Wick Product vs. Ordinary Product
    13.3.4 Lévy–Hermite Transform
  13.4 Spaces of Smooth and Generalized Random Variables: G and G∗
  13.5 The Malliavin Derivative on G∗
  13.6 A Generalization of the Clark–Ocone Theorem
  13.7 A Combination of Gaussian and Pure Jump Lévy Noises in the White Noise Setting
  13.8 Generalized Chain Rules for the Malliavin Derivative
  13.9 Exercises

14 The Donsker Delta Function of a Lévy Process and Applications
  14.1 The Donsker Delta Function of a Pure Jump Lévy Process
  14.2 An Explicit Formula for the Donsker Delta Function
  14.3 Chaos Expansion of Local Time for Lévy Processes
  14.4 Application to Hedging in Incomplete Markets
  14.5 A Sensitivity Result for Jump Diffusions
    14.5.1 A Representation Theorem for Functions of a Class of Jump Diffusions
    14.5.2 Application: Computation of the “Greeks”
  14.6 Exercises

15 The Forward Integral
  15.1 Definition of Forward Integral and its Relation with the Skorohod Integral
  15.2 Itô Formula for Forward and Skorohod Integrals
  15.3 Exercises

16 Applications to Stochastic Control: Partial and Inside Information
  16.1 The Importance of Information in Portfolio Optimization
  16.2 Optimal Portfolio Problem under Partial Information
    16.2.1 Formalization of the Optimization Problem: General Utility Function
    16.2.2 Characterization of an Optimal Portfolio under Partial Information
    16.2.3 Examples
  16.3 Optimal Portfolio under Partial Information in an Anticipating Environment
    16.3.1 The Continuous Case: Logarithmic Utility
    16.3.2 The Pure Jump Case: Logarithmic Utility
  16.4 A Universal Optimal Consumption Rate for an Insider
    16.4.1 Formalization of a General Optimal Consumption Problem
    16.4.2 Characterization of an Optimal Consumption Rate
    16.4.3 Optimal Consumption and Portfolio
  16.5 Optimal Portfolio Problem under Inside Information
    16.5.1 Formalization of the Optimization Problem: General Utility Function
    16.5.2 Characterization of an Optimal Portfolio under Inside Information
    16.5.3 Examples: General Utility and Enlargement of Filtration
  16.6 Optimal Portfolio Problem under Inside Information: Logarithmic Utility
    16.6.1 The Pure Jump Case
    16.6.2 A Mixed Market Case
    16.6.3 Examples: Enlargement of Filtration
  16.7 Exercises

17 Regularity of Solutions of SDEs Driven by Lévy Processes
  17.1 The Pure Jump Case
  17.2 The General Case
  17.3 Exercises

18 Absolute Continuity of Probability Laws
  18.1 Existence of Densities
  18.2 Smooth Densities of Solutions to SDEs Driven by Lévy Processes
  18.3 Exercises

Appendix A: Malliavin Calculus on the Wiener Space
  A.1 Preliminary Basic Concepts
  A.2 Wiener Space, Cameron–Martin Space, and Stochastic Derivative
  A.3 Malliavin Derivative via Chaos Expansions

Solutions
References
Notation and Symbols
Index




Introduction

The mathematical theory now known as Malliavin calculus was first introduced by Paul Malliavin in [157] as an infinite-dimensional integration by parts technique. The purpose of this calculus was to prove results about the smoothness of densities of solutions of stochastic differential equations driven by Brownian motion. For several years this was the only known application, and since the theory was considered quite complicated by many, Malliavin calculus remained a relatively unknown theory, also among mathematicians, for some time. Many mathematicians simply considered the theory as too difficult when compared with the results it produced. Moreover, to a large extent, these results could also be obtained by using Hörmander's earlier theory on hypoelliptic operators. See also, for example, [20, 113, 224, 229].
This was the situation until 1984, when Ocone in [172] obtained an explicit interpretation of the Clark representation formula [46, 47] in terms of the Malliavin derivative. This remarkable result later became known as the Clark–Ocone formula; it is sometimes also called the Clark–Haussmann–Ocone formula in view of the contribution of Haussmann in 1979, see [97]. In 1991, Ocone and Karatzas [173] applied this result to finance. They proved that the Clark–Ocone formula can be used to obtain explicit formulae for replicating portfolios of contingent claims in complete markets.
Since then, new literature has helped to bring these results to a wider audience, both among mathematicians and researchers in finance. See, for example, the monographs [53, 160, 168, 211] and the introductory lecture notes [177]; see also [205].
The next breakthrough came in 1999, when Fournié et al. [80] obtained numerically tractable formulae for the computation of the so-called greeks in finance, also known as sensitivity parameters. In recent years, many new applications of the Malliavin calculus have been found, including partial information optimal control, insider trading and, more generally, anticipative stochastic calculus.
At the same time, Malliavin calculus was extended from the original setting of Brownian motion to more general Lévy processes. These extensions were at first motivated by and tailored to the original application within the study of smoothness of densities (see, e.g., [12, 35, 37, 38, 44, 140, 141, 142, 162, 188, 189, 217, 218]) and then developed largely targeting the applications to finance, where models based on Lévy processes are now widely used (see, e.g., [25, 29, 64, 69, 147, 170, 180]). Within this last direction, some extensions to random fields of Lévy type have also been developed, see, for example, [61, 62]. Other extensions of Malliavin calculus, within quantum probability, have also appeared, see, for example, [83, 84].
One way of interpreting the Malliavin derivative of a given random variable F = F(ω), ω ∈ Ω, on the given probability space (Ω, F, P) is to regard it as a derivative with respect to the random parameter ω. For this to make sense, one needs some mathematical structure on the space Ω. In the original approach used by Malliavin, for the Brownian motion case, Ω is represented as the Wiener space C_0([0,T]) of continuous functions ω : [0,T] → R with ω(0) = 0, equipped with the uniform topology. In this book we prefer to use the representation of Hida [98], namely to represent Ω as the space S′ of tempered distributions ω : S → R, where S is the Schwartz space of rapidly decreasing smooth functions on R (see Chap. 5). The corresponding probability measure P is constructed by means of the Bochner–Minlos theorem. This is a classical setting of white noise theory. This approach has the advantage that the Malliavin derivative D_t F of a random variable F : S′ → R can simply be regarded as a stochastic gradient.
In fact, if γ is deterministic and in L²(R) (note that L²(R) ⊂ S′), we define the directional derivative of F in the direction γ, D_γ F, as follows:
$$D_\gamma F(\omega) = \lim_{\varepsilon \to 0} \frac{F(\omega + \varepsilon\gamma) - F(\omega)}{\varepsilon}, \qquad \omega \in S',$$
if the limit exists in L²(P). If there exists a process Ψ(ω, t) : Ω × R → R such that
$$D_\gamma F(\omega) = \int_{\mathbb{R}} \Psi(\omega, t)\,\gamma(t)\,dt, \qquad \omega \in S',$$
for all γ ∈ L²(R), then we say that F is Malliavin–Hida differentiable and we define
$$D_t F(\omega) := \Psi(\omega, t), \qquad \omega \in S',$$
as the Malliavin–(Hida) derivative (or stochastic gradient) of F at t.
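As a quick numerical illustration of this definition (ours, not part of the original text), take the functional F(ω) = (∫_0^T g(t) dω(t))² on a discretized Brownian path. Its stochastic gradient should be D_t F = 2(∫_0^T g dW) g(t), so the directional derivative in a direction γ is 2(∫_0^T g dW)(∫_0^T g(t)γ(t) dt). The following minimal Python/NumPy sketch (all names and the choices of g and γ are ours) compares the finite-difference quotient with this closed form on one simulated path.

import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)   # left endpoints of the time grid

dW = rng.normal(0.0, np.sqrt(dt), n)          # increments of one Brownian path omega

g = np.sin(t)        # fixed integrand defining F
gamma = np.cos(t)    # deterministic direction (in L^2, extended by 0 outside [0,T])

def F(increments):
    # F(omega) = ( int_0^T g(t) d omega(t) )^2 on the discretized path
    return np.sum(g * increments) ** 2

# Finite-difference directional derivative: perturb the path by eps*gamma,
# i.e. add eps*gamma(t)*dt to every increment.
eps = 1e-6
D_gamma_fd = (F(dW + eps * gamma * dt) - F(dW)) / eps

# Closed form: D_t F = 2*(int g dW)*g(t), hence D_gamma F = int_0^T D_t F gamma(t) dt
theta = np.sum(g * dW)
D_gamma_exact = 2.0 * theta * np.sum(g * gamma) * dt

print(D_gamma_fd, D_gamma_exact)   # the two values agree up to terms of order eps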
This gives a simple and intuitive interpretation of the Malliavin derivative in the Brownian motion case. Moreover, some of the basic properties of calculus, such as the chain rule, follow easily from this definition. See Chap. 6.
Alternatively, the Malliavin derivative can also be introduced by means of the Wiener–Itô chaos expansion [119]:
$$F = \sum_{n=0}^{\infty} I_n(f_n)$$
of the random variable F as a series of iterated Itô integrals of symmetric functions f_n ∈ L²(Rⁿ) with respect to Brownian motion. In this setting, the Malliavin derivative gets the form
$$D_t F = \sum_{n=1}^{\infty} n\, I_{n-1}\big(f_n(\cdot, t)\big),$$
see Chap. 3, cf. [168]. This form is appealing because it has some resemblance to the derivative of a monomial:
$$\frac{d}{dx}\, x^n = n x^{n-1}.$$
Moreover, the chaos expansion approach is convenient because it gives easy proofs of the Clark–Ocone formula and several basic properties of the Malliavin derivative.
The chaos expansion approach also has the advantage that it carries over in a natural way to the Lévy process setting (see Chap. 12). This provides us with a relatively unified approach, valid for both the continuous and the discontinuous case, that is, for both Brownian motion and Lévy processes/Poisson random measures. See, for example, the proof of the Clark–Ocone theorem in the two cases. At the same time it is important to be aware of the differences between these two cases. For example, in the continuous case the interpretation of the Malliavin derivative is based on the stochastic gradient, while in the discontinuous case the Malliavin derivative is actually a difference operator.
How to use this book
It is the purpose of this book to give an introductory presentation of the theory of Malliavin calculus and its applications, mainly to finance. For pedagogical reasons, and also to make the reading easier and the use more flexible, the book is divided into two parts:
Part I. The Continuous Case: Brownian Motion
Part II. The Discontinuous Case: Pure Jump Lévy Processes
In both parts the emphasis is on the topics that are most central for the applications to finance. The results are illustrated throughout with examples. In addition, each chapter ends with exercises. Solutions to a selection of exercises, with varying level of detail, can be found at the back of the book.
We hope the book will be useful as a graduate textbook and as a source for students and researchers in mathematics and finance. There are several possible ways of selecting topics when using this book, for example, in a graduate course:
Alternative 1. If there is enough time, all eighteen chapters could be included in the program.
Alternative 2. If the interest is only in the continuous case, then the whole Part I gives a progressive overview of the theory, including the white noise approach, and gives a good taste of the applications.
Alternative 3. Similarly, if the readers are already familiar with the continuous case, then Part II is self-contained and provides a good text choice to cover both theory and applications.
Alternative 4. If the interest is in an introductory overview of both the continuous and the discontinuous case, then a good selection could be to read Chaps. 1 to 4 and then Chaps. 9 to 12. This can possibly be supplemented by reading the chapters specifically devoted to applications, so according to interest one could choose among Chaps. 8, 15, 16, and also Chaps. 17 and 18.


1 The Wiener–Itô Chaos Expansion

The celebrated Wiener–Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in the sequel. This result, which concerns the representation of square integrable random variables in terms of an infinite orthogonal sum, was proved in its first version by Wiener in 1938 [226]. Later, in 1951, Itô [119] showed that the expansion could be expressed in terms of iterated Itô integrals in the Wiener space setting.
Before we state the theorem we introduce some useful notation and give some auxiliary results.

1.1 Iterated Itô Integrals

Let W = W(t) = W(t, ω), t ∈ [0, T], ω ∈ Ω (T > 0), be a one-dimensional Wiener process, or equivalently Brownian motion, on the complete probability space (Ω, F, P) such that W(0) = 0 P-a.s.
For any t, let F_t be the σ-algebra generated by W(s), 0 ≤ s ≤ t, augmented by all the P-zero measure events. We denote the corresponding filtration by
$$\mathbb{F} = \{\mathcal{F}_t,\ t \in [0,T]\}. \tag{1.1}$$
Note that this filtration is both left- and right-continuous, that is,
$$\mathcal{F}_t = \lim_{s \uparrow t} \mathcal{F}_s := \sigma\Big(\bigcup_{s < t} \mathcal{F}_s\Big),$$
respectively,
$$\mathcal{F}_t = \lim_{u \downarrow t} \mathcal{F}_u := \bigcap_{u > t} \mathcal{F}_u.$$
See, for example, [128] or [206].

Definition 1.1. A real function g : [0,T]ⁿ → R is called symmetric if
$$g(t_{\sigma_1}, \ldots, t_{\sigma_n}) = g(t_1, \ldots, t_n) \tag{1.2}$$
for all permutations σ = (σ_1, ..., σ_n) of (1, 2, ..., n).
Let L²([0,T]ⁿ) be the standard space of square integrable Borel real functions on [0,T]ⁿ such that
$$\|g\|^2_{L^2([0,T]^n)} := \int_{[0,T]^n} g^2(t_1,\ldots,t_n)\,dt_1\cdots dt_n < \infty. \tag{1.3}$$
Let $\tilde L^2([0,T]^n) \subset L^2([0,T]^n)$ be the space of symmetric square integrable Borel real functions on [0,T]ⁿ. Let us consider the set
$$S_n = \{(t_1,\ldots,t_n) \in [0,T]^n : 0 \le t_1 \le t_2 \le \cdots \le t_n \le T\}.$$
Note that this set S_n occupies the fraction 1/n! of the whole n-dimensional box [0,T]ⁿ. Therefore, if $g \in \tilde L^2([0,T]^n)$, then $g|_{S_n} \in L^2(S_n)$ and
$$\|g\|^2_{L^2([0,T]^n)} = n! \int_{S_n} g^2(t_1,\ldots,t_n)\,dt_1\cdots dt_n = n!\,\|g\|^2_{L^2(S_n)}, \tag{1.4}$$
where $\|\cdot\|_{L^2(S_n)}$ denotes the norm induced by L²([0,T]ⁿ) on L²(S_n), the space of the square integrable functions on S_n.
If f is a real function on [0,T]ⁿ, then its symmetrization $\tilde f$ is defined by
$$\tilde f(t_1,\ldots,t_n) = \frac{1}{n!} \sum_{\sigma} f(t_{\sigma_1},\ldots,t_{\sigma_n}), \tag{1.5}$$
where the sum is taken over all permutations σ of (1, ..., n). Note that $f = \tilde f$ if and only if f is symmetric.

Example 1.2. The symmetrization of the function
$$f(t_1,t_2) = t_1^2 + t_2 \sin t_1, \qquad (t_1,t_2) \in [0,T]^2,$$
is
$$\tilde f(t_1,t_2) = \frac{1}{2}\big(t_1^2 + t_2^2 + t_2 \sin t_1 + t_1 \sin t_2\big), \qquad (t_1,t_2) \in [0,T]^2.$$

Definition 1.3. Let f be a deterministic function defined on S_n (n ≥ 1) such that
$$\|f\|^2_{L^2(S_n)} := \int_{S_n} f^2(t_1,\ldots,t_n)\,dt_1\cdots dt_n < \infty.$$
Then we can define the n-fold iterated Itô integral as
$$J_n(f) := \int_0^T\!\!\int_0^{t_n}\!\!\cdots\int_0^{t_3}\!\!\int_0^{t_2} f(t_1,\ldots,t_n)\,dW(t_1)\,dW(t_2)\cdots dW(t_{n-1})\,dW(t_n). \tag{1.6}$$
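The symmetrization (1.5) is straightforward to compute numerically. The following sketch (ours; the function names are arbitrary) averages a function of n variables over all permutations of its arguments and checks it, at a few points, against the closed-form symmetrization of Example 1.2.

import itertools
import numpy as np

def symmetrize(f, n):
    # Symmetrization (1.5): average of f over all permutations of its n arguments
    perms = list(itertools.permutations(range(n)))
    def f_sym(*args):
        return sum(f(*(args[i] for i in p)) for p in perms) / len(perms)
    return f_sym

# Example 1.2: f(t1, t2) = t1^2 + t2*sin(t1)
f = lambda t1, t2: t1**2 + t2 * np.sin(t1)
f_sym = symmetrize(f, 2)

# Closed form of the symmetrization given in Example 1.2
f_tilde = lambda t1, t2: 0.5 * (t1**2 + t2**2 + t2 * np.sin(t1) + t1 * np.sin(t2))

for t1, t2 in [(0.3, 0.7), (1.0, 0.2)]:
    assert np.isclose(f_sym(t1, t2), f_tilde(t1, t2))
    assert np.isclose(f_sym(t1, t2), f_sym(t2, t1))   # the result is symmetric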




Note that at each iteration i = 1, ..., n the corresponding Itô integral with respect to dW(t_i) is well defined, since the integrand
$$\int_0^{t_i}\!\!\cdots\int_0^{t_2} f(t_1,\ldots,t_n)\,dW(t_1)\cdots dW(t_{i-1}), \qquad t_i \in [0, t_{i+1}],$$
is an F-adapted stochastic process, square integrable with respect to dP × dt_i. Thus, (1.6) is well defined.
Thanks to the construction of the Itô integral, J_n(f) belongs to L²(P), that is, the space of square integrable random variables. We denote the norm of X ∈ L²(P) by
$$\|X\|_{L^2(P)} := E\big[X^2\big]^{1/2} = \Big(\int_{\Omega} X^2(\omega)\,P(d\omega)\Big)^{1/2}.$$



Applying the Itô isometry iteratively, if g ∈ L²(S_m) and h ∈ L²(S_n), with m < n, we can see that
$$\begin{aligned}
E\big[J_m(g)J_n(h)\big] &= E\Big[\int_0^T\Big(\int_0^{s_m}\!\!\cdots\int_0^{s_2} g(s_1,\ldots,s_{m-1},s_m)\,dW(s_1)\cdots dW(s_{m-1})\Big)\\
&\qquad\cdot\Big(\int_0^{s_m}\!\!\cdots\int_0^{t_2} h(t_1,\ldots,t_{n-1},s_m)\,dW(t_1)\cdots dW(t_{n-1})\Big)\,ds_m\Big] = \ldots\\
&= \int_0^T\!\!\int_0^{s_m}\!\!\cdots\int_0^{s_2} g(s_1,s_2,\ldots,s_m)\,E\Big[\int_0^{s_1}\!\!\cdots\int_0^{t_2} h(t_1,\ldots,t_{n-m},s_1,\ldots,s_m)\\
&\qquad\cdot dW(t_1)\cdots dW(t_{n-m})\Big]\,ds_1\cdots ds_m = 0
\end{aligned} \tag{1.7}$$
because the expected value of an Itô integral is zero. On the contrary, if both g and h belong to L²(S_n), then
$$\begin{aligned}
E\big[J_n(g)J_n(h)\big] &= \int_0^T E\Big[\Big(\int_0^{s_n}\!\!\cdots\int_0^{s_2} g(s_1,\ldots,s_n)\,dW(s_1)\cdots dW(s_{n-1})\Big)\\
&\qquad\cdot\Big(\int_0^{s_n}\!\!\cdots\int_0^{s_2} h(s_1,\ldots,s_n)\,dW(s_1)\cdots dW(s_{n-1})\Big)\Big]\,ds_n = \ldots\\
&= \int_0^T\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2} g(s_1,\ldots,s_n)\,h(s_1,\ldots,s_n)\,ds_1\cdots ds_n = (g,h)_{L^2(S_n)}.
\end{aligned} \tag{1.8}$$

We summarize these results as follows.



Proposition 1.4. The following relations hold true:
$$E\big[J_m(g)J_n(h)\big] = \begin{cases} 0, & n \ne m \\ (g,h)_{L^2(S_n)}, & n = m \end{cases} \qquad (m, n = 1, 2, \ldots), \tag{1.9}$$
where
$$(g,h)_{L^2(S_n)} := \int_{S_n} g(t_1,\ldots,t_n)\,h(t_1,\ldots,t_n)\,dt_1\cdots dt_n$$
is the inner product of L²(S_n). In particular, we have
$$\|J_n(h)\|_{L^2(P)} = \|h\|_{L^2(S_n)}. \tag{1.10}$$


Remark 1.5. Note that (1.9) also holds for n = 0 or m = 0 if we define J_0(g) = g, when g is a constant, and $(g,h)_{L^2(S_0)} = gh$, when g, h are constants.
Remark 1.6. It is straightforward to see that the n-fold iterated Itô integral
$$L^2(S_n) \ni f \;\Longrightarrow\; J_n(f) \in L^2(P)$$
is a linear operator, that is, J_n(af + bg) = aJ_n(f) + bJ_n(g), for f, g ∈ L²(S_n) and a, b ∈ R.
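Relations (1.9)–(1.10) can also be checked by simulation for small m and n. The sketch below is ours (the choices g(t) = sin t and h(s_1, s_2) = s_1 s_2 are arbitrary); it discretizes J_1(g) and J_2(h) by non-anticipating Euler sums on a time grid and estimates the expectations in Proposition 1.4 by Monte Carlo.

import numpy as np

rng = np.random.default_rng(1)
T, n, n_paths = 1.0, 400, 20000
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)

g = np.sin(t)     # g in L^2(S_1) = L^2([0,T])
# h(s1, s2) = s1*s2 on S_2; its inner dW(s1)-integral is s2 * int_0^{s2} s1 dW(s1)

J1 = np.empty(n_paths)
J2 = np.empty(n_paths)
for k in range(n_paths):
    dW = rng.normal(0.0, np.sqrt(dt), n)
    J1[k] = np.sum(g * dW)
    inner = t * np.concatenate(([0.0], np.cumsum(t * dW)))[:-1]   # non-anticipating sum
    J2[k] = np.sum(inner * dW)

print(np.mean(J1 * J2))                     # E[J_1(g) J_2(h)] is close to 0
print(np.mean(J1**2), np.sum(g**2) * dt)    # E[J_1(g)^2] is close to ||g||^2_{L^2(S_1)}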
Definition 1.7. If $g \in \tilde L^2([0,T]^n)$ we define
$$I_n(g) := \int_{[0,T]^n} g(t_1,\ldots,t_n)\,dW(t_1)\cdots dW(t_n) := n!\,J_n(g). \tag{1.11}$$
We also call the I_n(g) above n-fold iterated Itô integrals.
Note that from (1.9) and (1.11) we have
$$\|I_n(g)\|^2_{L^2(P)} = E\big[I_n^2(g)\big] = E\big[(n!)^2 J_n^2(g)\big] = (n!)^2\,\|g\|^2_{L^2(S_n)} = n!\,\|g\|^2_{L^2([0,T]^n)} \tag{1.12}$$
for all $g \in \tilde L^2([0,T]^n)$. Moreover, if $g \in \tilde L^2([0,T]^m)$ and $h \in \tilde L^2([0,T]^n)$, we have
$$E\big[I_m(g)I_n(h)\big] = \begin{cases} 0, & n \ne m \\ (g,h)_{L^2([0,T]^n)}, & n = m \end{cases} \qquad (m, n = 1, 2, \ldots),$$
with $(g,h)_{L^2([0,T]^n)} = n!\,(g,h)_{L^2(S_n)}$.
There is a useful formula, due to Itô [119], for the computation of iterated Itô integrals. This formula relies on the relationship between Hermite polynomials and the Gaussian distribution density. Recall that the Hermite polynomials h_n(x), x ∈ R, n = 0, 1, 2, ..., are defined by
$$h_n(x) = (-1)^n e^{\frac{x^2}{2}}\,\frac{d^n}{dx^n}\big(e^{-\frac{x^2}{2}}\big), \qquad n = 0, 1, 2, \ldots. \tag{1.13}$$
Thus, the first Hermite polynomials are
$$h_0(x) = 1, \quad h_1(x) = x, \quad h_2(x) = x^2 - 1, \quad h_3(x) = x^3 - 3x,$$
$$h_4(x) = x^4 - 6x^2 + 3, \quad h_5(x) = x^5 - 10x^3 + 15x, \ldots.$$
We also recall that the family of Hermite polynomials constitutes an orthogonal basis for $L^2(\mathbb{R}, \mu(dx))$ if $\mu(dx) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}\,dx$ (see, e.g., [214]).
basis for L2 (R, µ(dx)) if µ(dx) = √12π e 2 dx (see, e.g., [214]).

Proposition 1.8. If ξ_1, ξ_2, ... are orthonormal functions in L²([0,T]), we have that
$$I_n\big(\xi_1^{\hat\otimes \alpha_1} \hat\otimes \cdots \hat\otimes\, \xi_m^{\hat\otimes \alpha_m}\big) = \prod_{k=1}^m h_{\alpha_k}\Big(\int_0^T \xi_k(t)\,dW(t)\Big), \tag{1.14}$$
with α_1 + ⋯ + α_m = n. Here ⊗ denotes the tensor power and α_k ∈ {0, 1, 2, ...} for all k.
See [119]. In general, the tensor product f ⊗ g of two functions f, g is defined by
$$(f \otimes g)(x_1, x_2) = f(x_1)\,g(x_2)$$
and the symmetrized tensor product $f \hat\otimes g$ is the symmetrization of f ⊗ g. In particular, from (1.14), we have
$$n! \int_0^T\!\!\int_0^{t_n}\!\!\cdots\int_0^{t_2} g(t_1)g(t_2)\cdots g(t_n)\,dW(t_1)\cdots dW(t_n) = \|g\|^n\, h_n\Big(\frac{\theta}{\|g\|}\Big), \tag{1.15}$$
for the tensor power of g ∈ L²([0,T]). Here above we have used
$$\|g\| = \|g\|_{L^2([0,T])} \quad\text{and}\quad \theta = \int_0^T g(t)\,dW(t).$$

Example 1.9. Let g ≡ 1 and n = 3; then we get
$$6 \int_0^T\!\!\int_0^{t_3}\!\!\int_0^{t_2} 1\,dW(t_1)\,dW(t_2)\,dW(t_3) = T^{3/2}\, h_3\Big(\frac{W(T)}{T^{1/2}}\Big) = W^3(T) - 3T\,W(T).$$
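Identity (1.15), and Example 1.9 in particular, can be checked numerically on a single simulated path: build the iterated integrals by nested non-anticipating Euler sums and compare 6 J_3(1) with W³(T) − 3T W(T). The sketch below is ours; the agreement is only up to the discretization error of the Euler sums.

import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 200000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

# Nested non-anticipating sums: J1(t) = int_0^t dW, J2(t) = int_0^t J1 dW, J3 = int_0^T J2 dW
J1 = np.concatenate(([0.0], np.cumsum(dW)))[:-1]        # = W(t_i) at the left endpoints
J2 = np.concatenate(([0.0], np.cumsum(J1 * dW)))[:-1]
J3 = np.sum(J2 * dW)

W_T = np.sum(dW)
print(6.0 * J3)                    # left-hand side of Example 1.9
print(W_T**3 - 3.0 * T * W_T)      # right-hand side; equal up to discretization error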

1.2 The Wiener–Itô Chaos Expansion

Theorem 1.10. The Wiener–Itô chaos expansion. Let ξ be an F_T-measurable random variable in L²(P). Then there exists a unique sequence $\{f_n\}_{n=0}^{\infty}$ of functions $f_n \in \tilde L^2([0,T]^n)$ such that
$$\xi = \sum_{n=0}^{\infty} I_n(f_n), \tag{1.16}$$
where the convergence is in L²(P). Moreover, we have the isometry
$$\|\xi\|^2_{L^2(P)} = \sum_{n=0}^{\infty} n!\,\|f_n\|^2_{L^2([0,T]^n)}. \tag{1.17}$$

Proof. By the Itô representation theorem there exists an F-adapted process φ_1(s_1), 0 ≤ s_1 ≤ T, such that
$$E\Big[\int_0^T \varphi_1^2(s_1)\,ds_1\Big] \le E\big[\xi^2\big] \tag{1.18}$$
and
$$\xi = E[\xi] + \int_0^T \varphi_1(s_1)\,dW(s_1). \tag{1.19}$$
Define
$$g_0 = E[\xi].$$
For almost all s_1 ≤ T we can apply the Itô representation theorem to φ_1(s_1) to conclude that there exists an F-adapted process φ_2(s_2, s_1), 0 ≤ s_2 ≤ s_1, such that
$$E\Big[\int_0^{s_1} \varphi_2^2(s_2,s_1)\,ds_2\Big] \le E\big[\varphi_1^2(s_1)\big] < \infty \tag{1.20}$$
and
$$\varphi_1(s_1) = E[\varphi_1(s_1)] + \int_0^{s_1} \varphi_2(s_2,s_1)\,dW(s_2). \tag{1.21}$$
Substituting (1.21) in (1.19) we get
$$\xi = g_0 + \int_0^T g_1(s_1)\,dW(s_1) + \int_0^T\!\!\int_0^{s_1} \varphi_2(s_2,s_1)\,dW(s_2)\,dW(s_1), \tag{1.22}$$
where
$$g_1(s_1) = E[\varphi_1(s_1)].$$
Note that by (1.18), (1.20), and the Itô isometry we have
$$E\Big[\Big(\int_0^T\!\!\int_0^{s_1} \varphi_2(s_2,s_1)\,dW(s_2)\,dW(s_1)\Big)^2\Big] = \int_0^T\!\!\int_0^{s_1} E\big[\varphi_2^2(s_2,s_1)\big]\,ds_2\,ds_1 \le E\big[\xi^2\big].$$
Similarly, for almost all s_2 ≤ s_1 ≤ T, we apply the Itô representation theorem to φ_2(s_2, s_1) and we get an F-adapted process φ_3(s_3, s_2, s_1), 0 ≤ s_3 ≤ s_2, such that



$$E\Big[\int_0^{s_2} \varphi_3^2(s_3,s_2,s_1)\,ds_3\Big] \le E\big[\varphi_2^2(s_2,s_1)\big] < \infty \tag{1.23}$$
and
$$\varphi_2(s_2,s_1) = E[\varphi_2(s_2,s_1)] + \int_0^{s_2} \varphi_3(s_3,s_2,s_1)\,dW(s_3). \tag{1.24}$$
Substituting (1.24) in (1.22) we get
$$\xi = g_0 + \int_0^T g_1(s_1)\,dW(s_1) + \int_0^T\!\!\int_0^{s_1} g_2(s_2,s_1)\,dW(s_2)\,dW(s_1) + \int_0^T\!\!\int_0^{s_1}\!\!\int_0^{s_2} \varphi_3(s_3,s_2,s_1)\,dW(s_3)\,dW(s_2)\,dW(s_1),$$
where
$$g_2(s_2,s_1) = E[\varphi_2(s_2,s_1)], \qquad 0 \le s_2 \le s_1 \le T.$$
By (1.18), (1.20), (1.23), and the Itô isometry we have
$$E\Big[\Big(\int_0^T\!\!\int_0^{s_1}\!\!\int_0^{s_2} \varphi_3(s_3,s_2,s_1)\,dW(s_3)\,dW(s_2)\,dW(s_1)\Big)^2\Big] \le E\big[\xi^2\big].$$
By iterating this procedure we obtain after n steps a process φ_{n+1}(t_1, t_2, ..., t_{n+1}), 0 ≤ t_1 ≤ t_2 ≤ ⋯ ≤ t_{n+1} ≤ T, and n + 1 deterministic functions g_0, g_1, ..., g_n, with g_0 constant and g_k defined on S_k for 1 ≤ k ≤ n, such that
$$\xi = \sum_{k=0}^n J_k(g_k) + \int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)},$$
where
$$\int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)} := \int_0^T\!\!\int_0^{t_{n+1}}\!\!\cdots\int_0^{t_2} \varphi_{n+1}(t_1,\ldots,t_{n+1})\,dW(t_1)\cdots dW(t_{n+1})$$
is the (n+1)-fold iterated integral of φ_{n+1}. Moreover,
$$E\Big[\Big(\int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)}\Big)^2\Big] \le E\big[\xi^2\big].$$




In particular, the family
$$\psi_{n+1} := \int_{S_{n+1}} \varphi_{n+1}\,dW^{\otimes(n+1)}, \qquad n = 1, 2, \ldots$$
is bounded in L²(P) and, from the Itô isometry,
$$(\psi_{n+1},\, J_k(f_k))_{L^2(P)} = 0 \tag{1.25}$$
for k ≤ n, f_k ∈ L²([0,T]^k). Hence we have
$$\|\xi\|^2_{L^2(P)} = \sum_{k=0}^n \|J_k(g_k)\|^2_{L^2(P)} + \|\psi_{n+1}\|^2_{L^2(P)}.$$
In particular,
$$\sum_{k=0}^n \|J_k(g_k)\|^2_{L^2(P)} < \infty, \qquad n = 1, 2, \ldots$$
and therefore $\sum_{k=0}^{\infty} J_k(g_k)$ is convergent in L²(P). Hence
$$\lim_{n\to\infty} \psi_{n+1} =: \psi$$
exists in L²(P). But by (1.25) we have
$$(J_k(f_k),\, \psi)_{L^2(P)} = 0$$
for all k and for all f_k ∈ L²([0,T]^k). In particular, by (1.15) this implies that
$$E\Big[h_k\Big(\frac{\theta}{\|g\|}\Big) \cdot \psi\Big] = 0$$
for all g ∈ L²([0,T]) and for all k ≥ 0, where $\theta = \int_0^T g(t)\,dW(t)$. But then, from the definition of the Hermite polynomials,
$$E\big[\theta^k \cdot \psi\big] = 0$$
for all k ≥ 0, which again implies that
$$E\big[\exp(\theta) \cdot \psi\big] = \sum_{k=0}^{\infty} \frac{1}{k!}\,E\big[\theta^k \cdot \psi\big] = 0.$$
Since the family
$$\{\exp(\theta) :\ g \in L^2([0,T])\}$$
is total in L²(P) (see [178, Lemma 4.3.2]), we conclude that ψ = 0. Hence, we conclude





$$\xi = \sum_{k=0}^{\infty} J_k(g_k) \tag{1.26}$$
and
$$\|\xi\|^2_{L^2(P)} = \sum_{k=0}^{\infty} \|J_k(g_k)\|^2_{L^2(P)}. \tag{1.27}$$
Finally, to obtain (1.16)–(1.17) we proceed as follows. The function g_n is defined only on S_n, but we can extend g_n to [0,T]ⁿ by putting
$$g_n(t_1,\ldots,t_n) = 0, \qquad (t_1,\ldots,t_n) \in [0,T]^n \setminus S_n.$$
Now define $f_n := \tilde g_n$ to be the symmetrization of g_n, cf. (1.5). Then
$$I_n(f_n) = n!\,J_n(f_n) = n!\,J_n(\tilde g_n) = J_n(g_n)$$
and (1.16) and (1.17) follow from (1.26) and (1.27), respectively.
Example 1.11. What is the Wiener–Itô expansion of ξ = W²(T)? From (1.15) we get
$$2 \int_0^T\!\!\int_0^{t_2} 1\,dW(t_1)\,dW(t_2) = T\, h_2\Big(\frac{W(T)}{T^{1/2}}\Big) = W^2(T) - T,$$
and therefore
$$\xi = W^2(T) = T + I_2(1).$$
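For this particular ξ the isometry (1.17) can also be checked by hand: with f_0 = T and f_2 ≡ 1 it gives ‖ξ‖²_{L²(P)} = T² + 2!·T² = 3T², which is indeed E[W⁴(T)]. The Monte Carlo sketch below (ours) confirms both the expansion of Example 1.11 and this value numerically; the pathwise error of the discretized I_2(1) is of order dt.

import numpy as np

rng = np.random.default_rng(3)
T, n, n_paths = 1.0, 1000, 20000
dt = T / n

err = np.empty(n_paths)    # pathwise error of the expansion W(T)^2 = T + I_2(1)
W_T = np.empty(n_paths)
for k in range(n_paths):
    dW = rng.normal(0.0, np.sqrt(dt), n)
    W = np.concatenate(([0.0], np.cumsum(dW)))[:-1]    # W(t_i) at the left endpoints
    I2 = 2.0 * np.sum(W * dW)                          # I_2(1) = 2 * int_0^T W(t) dW(t)
    W_T[k] = np.sum(dW)
    err[k] = W_T[k]**2 - (T + I2)

print(np.mean(err**2))                # small, of order dt: the expansion holds
print(np.mean(W_T**4), 3.0 * T**2)    # isometry (1.17): ||xi||^2_{L^2(P)} = 3 T^2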
Example 1.12. Note that for a fixed t ∈ (0, T) we have
$$\int_0^T\!\!\int_0^{t_2} \chi_{\{t_1 < t < t_2\}}\,dW(t_1)\,dW(t_2) = \int_t^T W(t)\,dW(t_2) = W(t)\big(W(T) - W(t)\big).$$
Hence, if we put
$$\xi = W(t)\big(W(T) - W(t)\big), \qquad g(t_1,t_2) = \chi_{\{t_1 < t < t_2\}},$$
we can see that
$$\xi = J_2(g) = 2 J_2(\tilde g) = I_2(f_2),$$
where
$$f_2(t_1,t_2) = \tilde g(t_1,t_2) = \frac{1}{2}\big(\chi_{\{t_1 < t < t_2\}} + \chi_{\{t_2 < t < t_1\}}\big).$$
Here and in the sequel we denote the indicator function by
$$\chi = \chi_A(x) = \chi_{\{x \in A\}} := \begin{cases} 1, & x \in A, \\ 0, & x \notin A. \end{cases}$$
