
VIETNAM NATIONAL UNIVERSITY
UNIVERSITY OF SCIENCE
FACULTY OF MATHEMATICS, MECHANICS AND INFORMATICS
Tran Duc Anh
INTRODUCTION TO COPULA AND APPLICATIONS
Undergraduate Thesis
Undergraduate Advanced Program in Mathematics
Advisor: Dr. Tran Manh Cuong
Ha Noi - 2012

Introduction
In modern financial modelling, estimating the joint behaviour of the returns on
the assets in a portfolio is one of the most important issues. When studying a
portfolio, one traditionally measures its performance by mean-variance
optimization, which quantifies the uncertainty of the problem.
The traditional method commonly assumes that the asset returns in the portfolio
follow the same, easily identified, distribution. In practice, however, the
returns on the various assets usually follow different marginal distributions,
so it is difficult to determine their joint distribution. This difficulty can
be handled by means of copulas: a copula is the function that links
one-dimensional marginal distributions together into a multivariate
distribution function.
Abe Sklar introduced the concept of a copula into probability and statistics
in 1959, but only in roughly the last two decades has copula theory developed
strongly, driven by the demand for applications in financial risk management.
The term did not appear in dictionaries of statistics until 1997. In English,
"copula" means "a link, a connection"; in probability it is used in the sense
of a "connecting function" joining the probability distributions of several
random variables.
An n-dimensional copula is a function of n variables on the unit n-cube that
captures the dependence among a set of n random variables. Copulas are special
functions with many interesting properties and, as we shall see, dependence
measures such as covariance and correlation can be computed from the copula.
In the theory of investment and risk, using only the covariance and
correlation (of indices, prices, etc.) is not enough; one must examine the
copula itself.
In the financial sector, events such as bankruptcies, economic crises and
changes in financial markets are of great interest to many investors. These
are issues of financial risk measurement. The essence of financial risk
measurement is the study of the variability of outcomes, so probability
distributions are the natural tool. Currently, copulas are used mainly in
research on financial risk measurement. For the above reasons I chose the
topic "Introduction to copula and applications" for this thesis.
Outline of the thesis
Chapter 1: In this chapter we prepare the definitions and tools needed to work
with copulas. We review the notation and background on random variables and
random vectors and their distributions, i.e. definitions, theorems, and
notation.
Chapter 2: Introduction to the copula. This chapter presents the definition,
the properties and the main theorems of copulas and of dependence, and extends
the technical tools to a multivariate setting. Here the reader can find
details about some specific copula functions. The chapter is also devoted to
statistical inference for copula models.
Chapter 3: Presents applications and gives examples in risk management and
finance.
Because the author's knowledge is limited and time was short, the thesis may
contain mistakes. Comments and suggestions that help improve it are most
welcome.
Thank you very much!
Acknowledgements
Before presenting the content of this thesis, I would like to express my deep
gratitude to Dr. Tran Manh Cuong, University of Natural Sciences, Hanoi
National University, who has instructed me during my study and research, for
his patience, motivation, enthusiasm, and immense knowledge. He has taught me
a great deal about conducting academic research and about writing and career
planning. His advice, support and friendship have been invaluable on both an
academic and a personal level, for which I am extremely grateful.
I would also like to express my sincere gratitude to all the teachers of the
Faculty of Mathematics-Mechanics-Informatics, University of Natural Sciences,
Hanoi National University, who dedicatedly taught me during my studies. I take
this opportunity to sincerely thank my family and friends, who were always
cheering me on, encouraging me, and enabling me to complete this thesis.
Tran Duc Anh
Ha Noi, December 2012
Contents
Contents v
1 Preliminaries 1
1.1 Some notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Random variables, Random vectors and Their Distributions . . 1
1.2.1 Joint and marginal distributions . . . . . . . . . . . . . . 2
1.2.2 Conditional distributions and independence . . . . . . . . 3
1.2.3 Moments and characteristic function . . . . . . . . . . . . 4
1.2.4 Empirical Distribution . . . . . . . . . . . . . . . . . . . . 4
1.3 The notion of an n-increasing function . . . . . . . . . . . . . . . 5
2 Introduction to Copula 8
2.1 Definitions and properties . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Sklar's theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Copulas and random variables . . . . . . . . . . . . . . . . . . . . 14
2.4 Some copula functions . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1 Elliptical copulas . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2 Archimedean copulas . . . . . . . . . . . . . . . . . . . . . 19
2.5 Statistical inference for copulas . . . . . . . . . . . . . . . . . . . 20
2.5.1 Exact Maximum Likelihood Method . . . . . . . . . . . . 20
2.5.2 CML Method . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.3 Non-parametric Estimation . . . . . . . . . . . . . . . . . 22
3 Applications of Copula 24
3.1 Pricing multi-asset options using Lévy copulas . . . . . . . . . . 24
3.2 Computation in risk management . . . . . . . . . . . . . . . . . . 26
3.2.1 Discrete case . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.2 Continuous case . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3 Dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Bibliography 33
CHAPTER 1
Preliminaries
1.1 Some notations
We will let $\mathbb{R}$ denote the ordinary real line $(-\infty, +\infty)$, $\overline{\mathbb{R}}$ denote the extended real line $[-\infty, +\infty]$, and $\overline{\mathbb{R}}^n$ denote the extended $n$-space $\overline{\mathbb{R}} \times \overline{\mathbb{R}} \times \cdots \times \overline{\mathbb{R}}$. We will use vector notation for points in $\overline{\mathbb{R}}^n$, e.g. $a = (a_1, a_2, \ldots, a_n)$, and we will write $a \le b$ when $a_k \le b_k$ for all $k$, and $a < b$ when $a_k < b_k$ for all $k$.
An $n$-box in $\overline{\mathbb{R}}^n$ is the Cartesian product of $n$ closed intervals: $B = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$. The vertices of an $n$-box $B$ are the points $c = (c_1, c_2, \ldots, c_n)$ where each $c_k$ is equal to either $a_k$ or $b_k$.
An $n$-place real function $H$ is a function whose domain $\operatorname{Dom} H$ is a subset of $\overline{\mathbb{R}}^n$ and whose range $\operatorname{Ran} H$ is a subset of $\mathbb{R}$. The unit $n$-cube $I^n$ is the product $I \times I \times \cdots \times I$, where $I = [0, 1]$.
1.2 Random variables, Random vectors and Their Distributions
A random variable is a quantity whose values are described by a probability distribution function. We will use capital letters, such as X or Y, to represent random variables, and lower-case letters x, y to represent their values.
Example 1.2.1 Suppose that our experiment consists of tossing 3 fair coins. If we let Y denote the number of heads appearing, then Y is a random variable taking on one of the values 0, 1, 2, 3 with respective probabilities
$$P\{Y = 0\} = P\{(T,T,T)\} = \tfrac{1}{8},$$
$$P\{Y = 1\} = P\{(H,T,T), (T,H,T), (T,T,H)\} = \tfrac{3}{8},$$
$$P\{Y = 2\} = P\{(T,H,H), (H,H,T), (H,T,H)\} = \tfrac{3}{8},$$
$$P\{Y = 3\} = P\{(H,H,H)\} = \tfrac{1}{8}.$$
Since Y must take on one of the values 0 through 3, we must have
$$1 = P\Big(\bigcup_{i=0}^{3} \{Y = i\}\Big) = \sum_{i=0}^{3} P\{Y = i\}.$$
We say that F is the distribution function of the random variable X when, for all x in $\mathbb{R}$,
$$F(x) = P[X \le x]. \qquad (1.1)$$
Below we consider the joint and marginal distributions, which are closely related to copulas.
1.2.1 Joint and marginal distributions
Thus far, we have only concerned ourselves with probability distributions for single random variables. However, we are often interested in probability statements concerning two or more random variables. In order to deal with such statements, we define the joint cumulative distribution function of a random vector X. Let $X = (X_1, X_2, \ldots, X_n)$, where the $X_i$, $i = 1, 2, \ldots, n$, are one-dimensional random variables; then X is called an $n$-dimensional random vector. The distribution of X is completely described by the joint distribution function
$$F_X(x) = P\{X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n\}, \quad x_i \in \mathbb{R},\ i = 1, \ldots, n. \qquad (1.2)$$
Where no ambiguity arises we simply write F, omitting the subscript.
In particular, if $X_1, \ldots, X_n$ are independent, we have
$$F_X(x) = P\{X_1 \le x_1\} P\{X_2 \le x_2\} \cdots P\{X_n \le x_n\} = F_{X_1}(x_1) F_{X_2}(x_2) \cdots F_{X_n}(x_n). \qquad (1.3)$$
In addition, when $X_1, \ldots, X_n$ are discrete random variables, it is convenient to define the joint probability mass function
$$p_X(x) = P\{X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n\}. \qquad (1.4)$$
Example 1.2.2 Suppose that 3 balls are randomly selected from an urn containing 3 red, 4 white and 5 blue balls. If we let X and Y denote, respectively, the number of red and white balls chosen, then the joint probability mass function of X and Y, $p(i, j) = P\{X = i, Y = j\}$, is given by
$$p(0,0) = \binom{5}{3}\Big/\binom{12}{3} = \tfrac{10}{220}, \qquad p(0,1) = \binom{4}{1}\binom{5}{2}\Big/\binom{12}{3} = \tfrac{40}{220},$$
$$p(0,2) = \binom{4}{2}\binom{5}{1}\Big/\binom{12}{3} = \tfrac{30}{220}, \qquad p(0,3) = \binom{4}{3}\Big/\binom{12}{3} = \tfrac{4}{220},$$
$$p(1,0) = \binom{3}{1}\binom{5}{2}\Big/\binom{12}{3} = \tfrac{30}{220}, \qquad p(1,1) = \binom{3}{1}\binom{4}{1}\binom{5}{1}\Big/\binom{12}{3} = \tfrac{60}{220},$$
$$p(1,2) = \binom{3}{1}\binom{4}{2}\Big/\binom{12}{3} = \tfrac{18}{220}, \qquad p(2,0) = \binom{3}{2}\binom{5}{1}\Big/\binom{12}{3} = \tfrac{15}{220},$$
$$p(2,1) = \binom{3}{2}\binom{4}{1}\Big/\binom{12}{3} = \tfrac{12}{220}, \qquad p(3,0) = \binom{3}{3}\Big/\binom{12}{3} = \tfrac{1}{220}.$$
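The joint probability mass function above is easy to verify by brute-force enumeration. The following Python sketch (our illustration; the function and variable names are not from the thesis) counts all $\binom{12}{3} = 220$ equally likely draws and reproduces the table, along with the marginal distribution of X:

```python
from itertools import combinations
from collections import Counter

# Urn with 3 red (R), 4 white (W), 5 blue (B) balls; draw 3 without replacement.
balls = ["R"] * 3 + ["W"] * 4 + ["B"] * 5
counts = Counter()
for draw in combinations(range(12), 3):        # all C(12,3) = 220 equally likely draws
    colors = [balls[i] for i in draw]
    counts[(colors.count("R"), colors.count("W"))] += 1

for (i, j), c in sorted(counts.items()):
    print(f"p({i},{j}) = {c}/220")              # matches the table, e.g. p(1,1) = 60/220

# Marginal pmf of X (number of red balls): sum the joint pmf over j.
for i in range(4):
    print(f"P(X={i}) = {sum(c for (x, j), c in counts.items() if x == i)}/220")
```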
The marginal distribution function of $X_i$, written $F_{X_i}$ or often simply $F_i$, is the distribution function of that risk factor considered individually and is easily calculated from the joint distribution function. For all i we have
$$F_{X_i}(x_i) = P(X_i \le x_i) = F(\infty, \ldots, \infty, x_i, \infty, \ldots, \infty). \qquad (1.5)$$
1.2.2 Conditional distributions and independence
If we have a multivariate model for risks in the form of a joint distribution function, survival function or density, then we have implicitly described their dependence structure. We can make conditional probability statements about the probability that certain components take certain values, given that other components take other values. For example, partition a d-dimensional random vector X into $\mathbf{X}_1 = (X_1, \ldots, X_k)$ and $\mathbf{X}_2 = (X_{k+1}, \ldots, X_d)$, and assume absolute continuity of the distribution function of X. Let $f_{\mathbf{X}_1}$ denote the joint density of the k-dimensional marginal distribution $F_{\mathbf{X}_1}$. Then the conditional distribution of $\mathbf{X}_2$ given $\mathbf{X}_1 = \mathbf{x}_1$ has density
$$f_{\mathbf{X}_2 \mid \mathbf{X}_1}(\mathbf{x}_2 \mid \mathbf{x}_1) = \frac{f(\mathbf{x}_1, \mathbf{x}_2)}{f_{\mathbf{X}_1}(\mathbf{x}_1)}, \qquad (1.6)$$
and the corresponding distribution function is
$$F_{\mathbf{X}_2 \mid \mathbf{X}_1}(\mathbf{x}_2 \mid \mathbf{x}_1) = \int_{-\infty}^{x_{k+1}} \cdots \int_{-\infty}^{x_d} \frac{f(\mathbf{x}_1, u_{k+1}, \ldots, u_d)}{f_{\mathbf{X}_1}(\mathbf{x}_1)} \, du_{k+1} \cdots du_d. \qquad (1.7)$$
If the joint density of X factorizes into $f(\mathbf{x}) = f_{\mathbf{X}_1}(\mathbf{x}_1) f_{\mathbf{X}_2}(\mathbf{x}_2)$, then the conditional distribution and density of $\mathbf{X}_2$ given $\mathbf{X}_1 = \mathbf{x}_1$ are identical to the marginal distribution and density of $\mathbf{X}_2$; in other words, $\mathbf{X}_1$ and $\mathbf{X}_2$ are independent. We recall that $\mathbf{X}_1$ and $\mathbf{X}_2$ are independent if and only if
$$F(\mathbf{x}) = F_{\mathbf{X}_1}(\mathbf{x}_1) F_{\mathbf{X}_2}(\mathbf{x}_2) \quad \text{for all } \mathbf{x},$$
or, in the case where X possesses a joint density, $f(\mathbf{x}) = f_{\mathbf{X}_1}(\mathbf{x}_1) f_{\mathbf{X}_2}(\mathbf{x}_2)$.
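As a sanity check on (1.6), one can compare the ratio of densities with a case where the conditional law is known in closed form. The sketch below is our illustration, not part of the thesis; it uses a standard bivariate normal with correlation $\rho$, for which $X_2 \mid X_1 = x_1$ is known to be $N(\rho x_1, 1 - \rho^2)$:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rho, x1, x2 = 0.6, 0.5, 1.0
# Joint density f(x1, x2) of a standard bivariate normal with correlation rho.
f_joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).pdf
f_X1 = norm(0, 1).pdf                        # marginal density of X1

# Conditional density via (1.6): f(x2 | x1) = f(x1, x2) / f_X1(x1).
cond_via_ratio = f_joint([x1, x2]) / f_X1(x1)

# Known closed form: X2 | X1 = x1 ~ N(rho * x1, 1 - rho^2).
cond_closed_form = norm(rho * x1, np.sqrt(1 - rho**2)).pdf(x2)

print(cond_via_ratio, cond_closed_form)      # the two values agree
```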
1.2.3 Moments and characteristic function
The mean vector of X, when it exists, is given by
$$E(X) := (E(X_1), \ldots, E(X_d)).$$
The covariance matrix, when it exists, is the matrix $\operatorname{cov}(X)$ defined by
$$\operatorname{cov}(X) = E\big((X - E(X))(X - E(X))^\top\big),$$
where the expectation operator acts componentwise on matrices. If we write $\Sigma$ for $\operatorname{cov}(X)$, then the (i, j)th element of this matrix is
$$\sigma_{ij} = \operatorname{cov}(X_i, X_j) = E(X_i X_j) - E(X_i) E(X_j),$$
the ordinary pairwise covariance between $X_i$ and $X_j$. The diagonal elements $\sigma_{11}, \ldots, \sigma_{dd}$ are the variances of the components of X.
The correlation matrix of X, denoted by $\rho(X)$, can be defined by introducing a standardized vector Y such that $Y_i = X_i / \sqrt{\operatorname{var}(X_i)}$ for all i and taking $\rho(X) = \operatorname{cov}(Y)$. If we write P for $\rho(X)$, then the (i, j)th element of this matrix is
$$\rho_{ij} = \rho(X_i, X_j) = \frac{\operatorname{cov}(X_i, X_j)}{\sqrt{\operatorname{var}(X_i)\operatorname{var}(X_j)}}, \qquad (1.8)$$
the ordinary pairwise linear correlation of $X_i$ and $X_j$. To express the relationship between correlation and covariance matrices in matrix form, it is useful to introduce the following operators on a covariance matrix $\Sigma$:
$$\Delta(\Sigma) = \operatorname{diag}(\sqrt{\sigma_{11}}, \ldots, \sqrt{\sigma_{dd}}), \qquad (1.9)$$
$$\wp(\Sigma) = (\Delta(\Sigma))^{-1} \Sigma (\Delta(\Sigma))^{-1}. \qquad (1.10)$$
Thus $\Delta(\Sigma)$ extracts from $\Sigma$ a diagonal matrix of standard deviations, and $\wp(\Sigma)$ extracts a correlation matrix. The covariance and correlation matrices $\Sigma$ and P of X are related by
$$P = \wp(\Sigma). \qquad (1.11)$$
Mean vectors and covariance matrices are manipulated extremely easily under linear operations on the vector X. For any matrix $B \in \mathbb{R}^{k \times d}$ and vector $b \in \mathbb{R}^k$ we have
$$E(BX + b) = B E(X) + b, \qquad (1.12)$$
$$\operatorname{cov}(BX + b) = B \operatorname{cov}(X) B^\top. \qquad (1.13)$$
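The operators in (1.9)-(1.11) and the rules (1.12)-(1.13) translate directly into matrix code. A minimal numpy sketch (ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([1.0, -2.0, 0.5],
                            [[2.0, 0.6, 0.3],
                             [0.6, 1.0, 0.2],
                             [0.3, 0.2, 1.5]], size=100_000)

Sigma = np.cov(X, rowvar=False)                     # sample covariance matrix
Delta = np.diag(np.sqrt(np.diag(Sigma)))            # (1.9): diagonal matrix of std devs
P = np.linalg.inv(Delta) @ Sigma @ np.linalg.inv(Delta)   # (1.10)-(1.11)
print(np.allclose(P, np.corrcoef(X, rowvar=False)))        # True

# (1.12)-(1.13): E(BX + b) = B E(X) + b and cov(BX + b) = B cov(X) B^T.
B = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, -1.0]])
b = np.array([0.5, -0.5])
Y = X @ B.T + b
print(np.allclose(Y.mean(axis=0), B @ X.mean(axis=0) + b))       # True
print(np.allclose(np.cov(Y, rowvar=False), B @ Sigma @ B.T))     # True
```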
1.2.4 Empirical Distribution
Assume that we have a sample $(x_1, x_2, \ldots, x_n)$ of the random variable X. From the n values $x_i$ of X we construct the function
$$F_n(x) = \frac{\#\{x_i < x\}}{n}, \qquad (1.14)$$
where $\#\{x_i < x\}$ is the number of sample values $x_i$ that are less than x. As x varies, we obtain a function $F_n(x)$ of the real variable x, called the empirical distribution function.
Different samples give rise to different empirical distribution functions. Their graphs are step functions, and they all share a common property: they converge to the true distribution function as the sample size increases to infinity (this is the content of the Glivenko-Cantelli theorem).
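A minimal sketch of (1.14) (our code, with hypothetical names): the empirical distribution function is a step function whose maximal deviation from the true distribution function shrinks as the sample grows:

```python
import numpy as np
from scipy.stats import norm

def empirical_cdf(sample):
    """Return F_n as in (1.14): F_n(x) = #{x_i < x} / n."""
    sample = np.sort(np.asarray(sample))
    def F_n(x):
        # searchsorted with side="left" counts sample values strictly less than x
        return np.searchsorted(sample, x, side="left") / len(sample)
    return F_n

rng = np.random.default_rng(1)
xs = np.linspace(-3, 3, 601)
for n in (100, 10_000):
    F_n = empirical_cdf(rng.standard_normal(n))
    # Maximal deviation from the true standard normal CDF shrinks as n grows.
    print(n, np.max(np.abs(F_n(xs) - norm.cdf(xs))))
```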
1.3 The notion of an n-increasing function
To accomplish what we have outlined about copulas, we need to generalize the notion of "nondecreasing" for univariate functions to a concept applicable to multivariate functions.
Definition 1.3.1 Let $S_1, S_2, \ldots, S_n$ be non-empty subsets of $\overline{\mathbb{R}}$, and let H be an n-place real function such that $\operatorname{Dom} H = S_1 \times S_2 \times \cdots \times S_n$. Let $B = [a, b]$ be an n-box all of whose vertices are in $\operatorname{Dom} H$. Then the H-volume of B is given by
$$V_H(B) = \sum \operatorname{sgn}(c) H(c), \qquad (1.15)$$
where the sum is taken over all vertices c of B, and $\operatorname{sgn}(c)$ is given by
$$\operatorname{sgn}(c) = \begin{cases} 1, & \text{if } c_k = a_k \text{ for an even number of } k\text{'s}, \\ -1, & \text{if } c_k = a_k \text{ for an odd number of } k\text{'s}. \end{cases} \qquad (1.16)$$
Equivalently, the H-volume of an n-box $B = [a, b]$ is the nth order difference of H on B,
$$V_H(B) = \Delta_a^b H(t) = \Delta_{a_n}^{b_n} \Delta_{a_{n-1}}^{b_{n-1}} \cdots \Delta_{a_2}^{b_2} \Delta_{a_1}^{b_1} H(t), \qquad (1.17)$$
where we define the first order differences of an n-place function as
$$\Delta_{a_k}^{b_k} H(t) = H(t_1, \ldots, t_{k-1}, b_k, t_{k+1}, \ldots, t_n) - H(t_1, \ldots, t_{k-1}, a_k, t_{k+1}, \ldots, t_n). \qquad (1.18)$$
For example, if $H(x_1, \ldots, x_n) = P(X_1 \le x_1, \ldots, X_n \le x_n)$ is the joint distribution function of n random variables $X_1, \ldots, X_n$, then we have
$$V_H(B) = P(a_1 < X_1 \le b_1, \ldots, a_n < X_n \le b_n). \qquad (1.19)$$
Definition 1.3.2 An n-place real function H is n-increasing if $V_H(B) \ge 0$ for all n-boxes B whose vertices lie in $\operatorname{Dom} H$.
Suppose that the domain of an n-place real function H is given by $\operatorname{Dom} H = S_1 \times S_2 \times \cdots \times S_n$, where each $S_k$ has a least element $a_k$. We say that H is grounded if $H(t) = 0$ for all t in $\operatorname{Dom} H$ such that $t_k = a_k$ for at least one k. If each $S_k$ is non-empty and has a greatest element $b_k$, then we say that H has margins, and the one-dimensional margins of H are the functions $H_k$ given by $\operatorname{Dom} H_k = S_k$ and $H_k(x) = H(b_1, \ldots, b_{k-1}, x, b_{k+1}, \ldots, b_n)$ for all x in $S_k$. Higher-dimensional margins are defined by fixing fewer places in H.
In the bivariate case the boxes are rectangles. For a rectangle $B = [x_1, x_2] \times [y_1, y_2]$, the volume of B (the second order difference of H on B) is
$$V_H(B) = \Delta_{y_1}^{y_2} \Delta_{x_1}^{x_2} H(x, y) = H(x_2, y_2) - H(x_2, y_1) - H(x_1, y_2) + H(x_1, y_1),$$
and the real function H is called 2-increasing when $V_H(B) \ge 0$ for all rectangles B whose vertices lie in $\operatorname{Dom} H$.
Example 1.3.3 Let H be the function defined on $I^2$ by $H(x, y) = \max(x, y)$. Then H is a nondecreasing function of x and of y. However, $V_H(I^2) = -1$, so that H is not 2-increasing.
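Definition 1.3.1 and Example 1.3.3 are easy to probe numerically. The helper below (our sketch; the name H_volume is ours) evaluates (1.15) by summing over the $2^n$ vertices of a box:

```python
from itertools import product

def H_volume(H, a, b):
    """V_H(B) from (1.15): sum of sgn(c) * H(c) over the 2^n vertices of B = [a, b]."""
    n = len(a)
    vol = 0.0
    for choice in product((0, 1), repeat=n):        # each vertex picks a_k or b_k
        vertex = [a[k] if choice[k] == 0 else b[k] for k in range(n)]
        n_a = choice.count(0)                        # coordinates equal to a_k
        vol += (-1) ** n_a * H(*vertex)              # sgn(c): +1 for even count, -1 for odd
    return vol

# Example 1.3.3: H(x, y) = max(x, y) is nondecreasing in each argument,
# yet V_H(I^2) = -1, so H is not 2-increasing.
print(H_volume(max, (0, 0), (1, 1)))                 # -1.0
```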
We will close this section with two important lemmas concerning grounded 2-increasing functions with margins.
Lemma 1.3.4 Let $S_1$, $S_2$ be non-empty subsets of $\overline{\mathbb{R}}$, and let H be a grounded 2-increasing function with domain $S_1 \times S_2$. Then H is nondecreasing in each argument.
PROOF Let $a_1$, $a_2$ denote the least elements of $S_1$, $S_2$, respectively, and let $x_1 \le x_2$ be in $S_1$ and $y_1 \le y_2$ be in $S_2$. Because H is 2-increasing, the function $t \mapsto H(t, y_2) - H(t, y_1)$ is nondecreasing on $S_1$, and the function $t \mapsto H(x_2, t) - H(x_1, t)$ is nondecreasing on $S_2$. Because H is grounded, choosing $y_1 = a_2$ in the first case and $x_1 = a_1$ in the second shows that $t \mapsto H(t, y_2)$ and $t \mapsto H(x_2, t)$ are nondecreasing, i.e. H is nondecreasing in each argument.
Lemma 1.3.5 Let $S_1$, $S_2$ be non-empty subsets of $\overline{\mathbb{R}}$, and let H be a grounded 2-increasing function with margins whose domain is $S_1 \times S_2$. Let $(x_1, y_1)$ and $(x_2, y_2)$ be any points in $S_1 \times S_2$. Then
$$|H(x_2, y_2) - H(x_1, y_1)| \le |F(x_2) - F(x_1)| + |G(y_2) - G(y_1)|, \qquad (1.20)$$
where the functions F and G are the margins of the grounded 2-increasing function H.
PROOF From the triangle inequality,
$$|H(x_2, y_2) - H(x_1, y_1)| \le |H(x_2, y_2) - H(x_1, y_2)| + |H(x_1, y_2) - H(x_1, y_1)|. \qquad (1.21)$$
Now assume $x_1 \le x_2$. Because H is grounded, 2-increasing, and has margins, the above lemma yields $0 \le H(x_2, y_2) - H(x_1, y_2) \le F(x_2) - F(x_1)$. An analogous inequality holds when $x_2 \le x_1$; hence it follows that for any $x_1, x_2$ in $S_1$, $|H(x_2, y_2) - H(x_1, y_2)| \le |F(x_2) - F(x_1)|$. Similarly, for any $y_1, y_2$ in $S_2$, $|H(x_1, y_2) - H(x_1, y_1)| \le |G(y_2) - G(y_1)|$, which completes the proof.
Example 1.3.6 Let H be the function with domain $[0, 1] \times [2, 4]$ given by
$$H(x, y) = \frac{(e^x - 1)(y - 2)}{e^x + y}. \qquad (1.22)$$
Then H is grounded because $H(x, 2) = 0 = H(0, y)$, and H has margins F(x) and G(y) given by
$$F(x) = H(x, 4) = \frac{2(e^x - 1)}{e^x + 4} \qquad (1.23)$$
and
$$G(y) = H(1, y) = \frac{(y - 2)(e - 1)}{e + y}. \qquad (1.24)$$
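One can also check numerically (a sketch of ours, not part of the thesis) that this H is grounded, has the margins (1.23)-(1.24), and is 2-increasing on its domain:

```python
import math

def H(x, y):
    return (math.exp(x) - 1) * (y - 2) / (math.exp(x) + y)

def vol2(x1, x2, y1, y2):
    """Second order difference of H on [x1,x2] x [y1,y2], as in Section 1.3."""
    return H(x2, y2) - H(x2, y1) - H(x1, y2) + H(x1, y1)

# Grounded: H(x, 2) = 0 = H(0, y) on the domain [0, 1] x [2, 4].
print(H(0.7, 2.0), H(0.0, 3.5))                  # both 0.0

# Margins: F(x) = H(x, 4) and G(y) = H(1, y) match (1.23) and (1.24).
x, y = 0.3, 2.9
print(abs(H(x, 4) - 2 * (math.exp(x) - 1) / (math.exp(x) + 4)) < 1e-15)
print(abs(H(1, y) - (y - 2) * (math.e - 1) / (math.e + y)) < 1e-15)

# 2-increasing: V_H >= 0 on a grid of sub-rectangles of the domain.
xs = [i / 10 for i in range(11)]
ys = [2 + i / 5 for i in range(11)]
print(all(vol2(a, b, c, d) >= -1e-12
          for a, b in zip(xs, xs[1:]) for c, d in zip(ys, ys[1:])))   # True
```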
CHAPTER 2
Introduction to Copula
2.1 Definitions and properties
In this section we discuss the definition and basic properties of copulas. To define copulas we first introduce subcopulas, a certain class of grounded 2-increasing functions with margins; we then define copulas as subcopulas with domain $I^2$, and discuss their continuity and bounds.
Definition 2.1.1 A 2-dimensional subcopula (2-subcopula) is a function C′ with the following properties:
1. $\operatorname{Dom} C' = S_1 \times S_2$, where $S_1$ and $S_2$ are subsets of I containing 0 and 1;
2. C′ is grounded and 2-increasing;
3. for every u in $S_1$ and every v in $S_2$, $C'(u, 1) = u$ and $C'(1, v) = v$.
Note that for every (u, v) in $\operatorname{Dom} C'$ we have $0 \le C'(u, v) \le 1$, so that $\operatorname{Ran} C'$ is also a subset of I. We can now state the definition of a copula.
Definition 2.1.2 A 2-dimensional copula (2-copula or, briefly, a copula) is a 2-subcopula C whose domain is $I^2$.
Equivalently, a copula is a function C from $I^2$ to I with the following properties:
1. for every u, v in I,
$$C(u, 0) = 0 = C(0, v) \qquad (2.1)$$
and
$$C(u, 1) = u, \qquad C(1, v) = v; \qquad (2.2)$$
2. for every $u_1, u_2, v_1, v_2$ in I such that $u_1 \le u_2$ and $v_1 \le v_2$,
$$C(u_2, v_2) - C(u_2, v_1) - C(u_1, v_2) + C(u_1, v_1) \ge 0. \qquad (2.3)$$
The distinction between a subcopula and a copula (the domain) may appear to be a minor one, but it will be rather important in the next section when
we discuss Sklar’s theorem. In addition, many of the important properties of
copulas are actually properties of subcopulas.
Theorem 2.1.3 Let C′ be a subcopula. Then for every (u, v) in $\operatorname{Dom} C'$,
$$\max(u + v - 1, 0) \le C'(u, v) \le \min(u, v). \qquad (2.4)$$
PROOF Let (u, v) be an arbitrary point in $\operatorname{Dom} C'$. Now $C'(u, v) \le C'(u, 1) = u$ and $C'(u, v) \le C'(1, v) = v$ yield $C'(u, v) \le \min(u, v)$. Furthermore, $V_{C'}([u, 1] \times [v, 1]) \ge 0$ implies $C'(u, v) \ge u + v - 1$, which, when combined with $C'(u, v) \ge 0$, yields $C'(u, v) \ge \max(u + v - 1, 0)$.
Because every copula is a subcopula, the inequality in the above theorem holds for copulas. More generally, the Fréchet-Hoeffding theorem (after Maurice René Fréchet and Wassily Hoeffding) states that for any copula $C: I^N \to I$ and any $(u_1, u_2, \ldots, u_N)$ in $I^N$ the following bounds hold:
$$W(u_1, u_2, \ldots, u_N) \le C(u_1, u_2, \ldots, u_N) \le M(u_1, u_2, \ldots, u_N). \qquad (2.5)$$
The function M is called the upper Fréchet-Hoeffding bound and is defined as
$$M(u_1, u_2, \ldots, u_N) = \min(u_1, u_2, \ldots, u_N). \qquad (2.6)$$
The upper bound is sharp: M is always a copula, and random variables with copula M are often called comonotonic.
The function W is called the lower Fréchet-Hoeffding bound and is defined as
$$W(u_1, u_2, \ldots, u_N) = \max\Big(\sum_{i=1}^{N} u_i - N + 1,\ 0\Big). \qquad (2.7)$$
The function W is not a copula for any $N \ge 3$; in the bivariate case, random variables with copula W are often called countermonotonic.
In the two-variable case we easily see that $M(u, v) = \min(u, v)$ and $W(u, v) = \max(u + v - 1, 0)$.
Example 2.1.4 Consider the n-dimensional box $B = [\tfrac{1}{2}, 1]^n \subset [0, 1]^n$; it is easy to see that
$$V_W(B) = 1 - \frac{n}{2} + 0 + \cdots + 0. \qquad (2.8)$$
Hence, if $n \ge 3$, W is not n-increasing and therefore is not a copula.
Theorem 2.1.5 For every $n \ge 3$ and every u in $[0, 1]^n$, there exists an n-copula C (depending on u) such that
$$C(u) = W(u). \qquad (2.9)$$
Thus, although W is not a copula for $n \ge 3$, the lower bound in (2.5) cannot be improved.
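The bounds (2.5) and the computation in Example 2.1.4 can be checked numerically. A short sketch (ours, with hypothetical names):

```python
import numpy as np
from itertools import product

def M(u): return np.min(u)                           # upper Fréchet-Hoeffding bound (2.6)
def W(u): return max(np.sum(u) - len(u) + 1, 0.0)    # lower Fréchet-Hoeffding bound (2.7)
def Pi(u): return np.prod(u)                         # independence copula

# W <= Pi <= M at random points of [0,1]^n, as (2.5) requires for any copula.
rng = np.random.default_rng(2)
for u in rng.random((5, 4)):
    assert W(u) <= Pi(u) <= M(u)

# Example 2.1.4: V_W([1/2, 1]^n) = 1 - n/2, negative for n >= 3,
# so W fails to be n-increasing in dimension 3 and higher.
for n in (2, 3, 4):
    vol = sum((-1) ** c.count(0) * W([0.5 if b == 0 else 1.0 for b in c])
              for c in product((0, 1), repeat=n))
    print(n, vol)                                    # 0.0, -0.5, -1.0
```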

The following theorem, which follows directly from Lemma 1.3.5, establishes the continuity of subcopulas (and hence of copulas) via a Lipschitz condition on $I^2$.
Theorem 2.1.6 Let C′ be a subcopula. Then for every $(u_1, v_1)$, $(u_2, v_2)$ in $\operatorname{Dom} C'$,
$$|C'(u_2, v_2) - C'(u_1, v_1)| \le |u_2 - u_1| + |v_2 - v_1|. \qquad (2.10)$$
Hence C′ is uniformly continuous on its domain.
2.2 Sklar’s theorem
The theorem in the title of this section is central to the theory of copulas and is
the foundation of many, if not most, of the applications of that theory to statis-
tics. Sklar’s theorem elucidates the role that copulas play in the relationship
between multivariate distribution functions and their univariate margins.
Theorem 2.2.1 (Sklar's theorem in n dimensions) Let H be an n-dimensional distribution function with margins $F_1, \ldots, F_n$. Then there exists an n-copula C such that for all x in $\overline{\mathbb{R}}^n$,
$$H(x_1, \ldots, x_n) = C(F_1(x_1), \ldots, F_n(x_n)). \qquad (2.11)$$
If $F_1, \ldots, F_n$ are all continuous, then C is unique; otherwise, C is uniquely determined on $\operatorname{Ran} F_1 \times \operatorname{Ran} F_2 \times \cdots \times \operatorname{Ran} F_n$. Conversely, if C is an n-copula and $F_1, \ldots, F_n$ are distribution functions, then the function H defined by (2.11) is an n-dimensional distribution function with margins $F_1, \ldots, F_n$.
Here we prove the theorem in the bivariate case, using two lemmas. The proof of the n-dimensional extension lemma, in which one shows that every n-subcopula can be extended to an n-copula, proceeds via a "multilinear interpolation" of the subcopula to a copula, similar to the two-dimensional version in the second lemma below. The proof in the n-dimensional case, however, is somewhat more involved (Moore and Spruill 1975; Deheuvels 1978; Sklar 1996).
We restate the theorem in the bivariate case.
Theorem 2.2.2 Let H be a joint distribution function with margins F and G.
Then there exists a copula C such that for all x,y in R,
H(x, y) = C(F(x), G(y)). (2.12)
If F and G are continuous, then C is unique; otherwise, C is uniquely deter-
mined on RanF × RanG. Conversely, if C is a copula and F and G are distribu-
tion functions, then the function H defined by (2.12) is a joint distribution
function with margins F and G.
Now consider the first lemma.
Lemma 2.2.3 Let H be a joint distribution function with margins F and G. Then there exists a unique subcopula C′ such that
1. $\operatorname{Dom} C' = \operatorname{Ran} F \times \operatorname{Ran} G$;
2. for all x, y in $\overline{\mathbb{R}}$, $H(x, y) = C'(F(x), G(y))$.
PROOF The joint distribution function H satisfies the hypotheses of Lemma 1.3.5 with $S_1 = S_2 = \overline{\mathbb{R}}$. Hence for any points $(x_1, y_1)$ and $(x_2, y_2)$ in $\overline{\mathbb{R}}^2$,
$$|H(x_2, y_2) - H(x_1, y_1)| \le |F(x_2) - F(x_1)| + |G(y_2) - G(y_1)|. \qquad (2.13)$$
It follows that if $F(x_1) = F(x_2)$ and $G(y_1) = G(y_2)$, then $H(x_1, y_1) = H(x_2, y_2)$. Thus the set of ordered pairs
$$\{((F(x), G(y)),\ H(x, y)) \mid x, y \in \overline{\mathbb{R}}\}$$
defines a 2-place real function C′ whose domain is $\operatorname{Ran} F \times \operatorname{Ran} G$. That this function is a subcopula follows directly from the properties of H. For instance, to verify property 3 of Definition 2.1.1, we first note that for each u in $\operatorname{Ran} F$ there is an x in $\overline{\mathbb{R}}$ such that $F(x) = u$. Thus $C'(u, 1) = C'(F(x), G(\infty)) = H(x, \infty) = F(x) = u$. Verification of the other conditions in the definition is similar.
The second lemma shows that any subcopula can be extended to a copula.
Lemma 2.2.4 Let C′ be a subcopula. Then there exists a copula C such that $C(u, v) = C'(u, v)$ for all (u, v) in $\operatorname{Dom} C'$; i.e., any subcopula can be extended to a copula. The extension is generally non-unique.
PROOF Let $\operatorname{Dom} C' = S_1 \times S_2$. By Theorem 2.1.6, C′ is uniformly continuous on its domain, and C′ is nondecreasing in each place; hence we can extend C′ by continuity to a function C″ with domain $\overline{S}_1 \times \overline{S}_2$, where $\overline{S}_1$ is the closure of $S_1$ and $\overline{S}_2$ is the closure of $S_2$. Clearly C″ is also a subcopula. We next extend C″ to a function C with domain $I^2$. To this end, let (a, b) be any point in $I^2$, let $a_1$ and $a_2$ be, respectively, the greatest and least elements of $\overline{S}_1$ that satisfy $a_1 \le a \le a_2$, and let $b_1$ and $b_2$ be, respectively, the greatest and least elements of $\overline{S}_2$ that satisfy $b_1 \le b \le b_2$. Note that if a is in $\overline{S}_1$, then $a_1 = a = a_2$; and if b is in $\overline{S}_2$, then $b_1 = b = b_2$. Now let
$$\lambda_1 = \begin{cases} (a - a_1)/(a_2 - a_1), & \text{if } a_1 < a_2, \\ 1, & \text{if } a_1 = a_2; \end{cases} \qquad \mu_1 = \begin{cases} (b - b_1)/(b_2 - b_1), & \text{if } b_1 < b_2, \\ 1, & \text{if } b_1 = b_2; \end{cases}$$
and define
$$\begin{aligned} C(a, b) ={} & (1 - \lambda_1)(1 - \mu_1) C''(a_1, b_1) + (1 - \lambda_1)\mu_1 C''(a_1, b_2) \\ & + \lambda_1 (1 - \mu_1) C''(a_2, b_1) + \lambda_1 \mu_1 C''(a_2, b_2). \end{aligned}$$
Because $\lambda_1$ and $\mu_1$ are linear in a and b, respectively, the interpolation defining C(a, b) is linear in each place. Clearly $\operatorname{Dom} C = I^2$, and $C(a, b) = C''(a, b)$ for any (a, b) in $\operatorname{Dom} C''$. To show that C is a copula, it remains to verify the 2-increasing property (2.3), i.e. that the C-volume of any rectangle is nonnegative. To accomplish this, let (c, d) be another point in $I^2$ such that $c \ge a$ and $d \ge b$, and let $c_1, d_1, c_2, d_2, \lambda_2, \mu_2$ be related to c and d as $a_1, b_1, a_2, b_2, \lambda_1, \mu_1$ are related to a and b. In evaluating $V_C(B)$ for the rectangle $B = [a, c] \times [b, d]$, there will be several cases to consider, depending upon whether or not there is a point in $\overline{S}_1$ strictly between a and c, and whether or not there is a point in $\overline{S}_2$ strictly between b and d. In the simplest of these cases, there is no point in $\overline{S}_1$ strictly between a and c, and no point in $\overline{S}_2$ strictly between b and d, so that $c_1 = a_1$, $c_2 = a_2$, $d_1 = b_1$, and $d_2 = b_2$. Substituting the expression just defined for C(a, b), and the corresponding terms for C(a, d), C(c, b) and C(c, d), into the expression given by (1.17) for $V_C(B)$ and simplifying yields
$$V_C(B) = V_C([a, c] \times [b, d]) = (\lambda_2 - \lambda_1)(\mu_2 - \mu_1)\, V_{C''}([a_1, a_2] \times [b_1, b_2]),$$
from which it follows that $V_C(B) \ge 0$ in this case, as $c \ge a$ and $d \ge b$ imply $\lambda_2 \ge \lambda_1$ and $\mu_2 \ge \mu_1$.
Figure 2.1: The least simple case in the proof of Lemma 2.2.4.
At the other extreme, the least simple case occurs when there is at least one point in $\overline{S}_1$ strictly between a and c, and at least one point in $\overline{S}_2$ strictly between b and d, so that $a < a_2 \le c_1 < c$ and $b < b_2 \le d_1 < d$. In this case (illustrated in Fig. 2.1), substituting the above expression for C(a, b) and the corresponding terms for C(a, d), C(c, b) and C(c, d) into the expression given by (1.17) for $V_C(B)$ and rearranging the terms yields
$$\begin{aligned} V_C(B) ={} & (1 - \lambda_1)\mu_2\, V_{C''}([a_1, a_2] \times [d_1, d_2]) + \mu_2\, V_{C''}([a_2, c_1] \times [d_1, d_2]) \\ & + \lambda_2 \mu_2\, V_{C''}([c_1, c_2] \times [d_1, d_2]) + (1 - \lambda_1)\, V_{C''}([a_1, a_2] \times [b_2, d_1]) \\ & + V_{C''}([a_2, c_1] \times [b_2, d_1]) + \lambda_2\, V_{C''}([c_1, c_2] \times [b_2, d_1]) \\ & + (1 - \lambda_1)(1 - \mu_1)\, V_{C''}([a_1, a_2] \times [b_1, b_2]) + (1 - \mu_1)\, V_{C''}([a_2, c_1] \times [b_1, b_2]) \\ & + \lambda_2 (1 - \mu_1)\, V_{C''}([c_1, c_2] \times [b_1, b_2]). \end{aligned}$$
The right-hand side of the above expression is a combination of nine nonnegative quantities (the C″-volumes of the nine rectangles determined by the dashed lines in Fig. 2.1) with nonnegative coefficients, and hence is nonnegative. The remaining cases are similar, which completes the proof.
We are now ready to prove Sklar's theorem.
PROOF The existence of a copula C such that (2.12) holds for all x, y in $\overline{\mathbb{R}}$ follows from Lemmas 2.2.3 and 2.2.4. If F and G are continuous, then $\operatorname{Ran} F = \operatorname{Ran} G = I$, so that the unique subcopula C′ is a copula. The converse is a matter of straightforward verification.
Corollary 2.2.6 Let H, C, $F_1, \ldots, F_n$ be as in Sklar's theorem, and let $F_1^{(-1)}, \ldots, F_n^{(-1)}$ be quasi-inverses of $F_1, \ldots, F_n$, respectively. Then for any $u = (u_1, \ldots, u_n)$ in $[0, 1]^n$,
$$C(u_1, u_2, \ldots, u_n) = H(F_1^{(-1)}(u_1), F_2^{(-1)}(u_2), \ldots, F_n^{(-1)}(u_n)). \qquad (2.14)$$
Remark 2.2.7 Sklar's theorem elucidates the role that copulas play in the relationship between multivariate distribution functions and their univariate margins.
Example 2.2.8 Let (a, b) be any point in $\overline{\mathbb{R}}^2$, and consider the following distribution function H:
$$H(x, y) = \begin{cases} 0, & x < a \text{ or } y < b, \\ 1, & x \ge a \text{ and } y \ge b. \end{cases}$$
The margins of H are the unit step functions $F_a$ and $G_b$ (with jumps at a and b). Applying Lemma 2.2.3 yields the subcopula C′ with domain $\{0, 1\} \times \{0, 1\}$ such that $C'(0, 0) = C'(0, 1) = C'(1, 0) = 0$ and $C'(1, 1) = 1$. The extension of C′ to a copula C via Lemma 2.2.4 is the copula $C = \Pi$, i.e. $C(u, v) = uv$. Notice, however, that every copula agrees with C′ on its domain, and thus every copula is an extension of this C′.
Example 2.2.9 Let H be the function with domain $\overline{\mathbb{R}}^2$ given by
$$H(x, y) = \begin{cases} \dfrac{(x + 1)(e^{y} - 1)}{x + 2e^{y} - 1}, & (x, y) \in [-1, 1] \times [0, \infty], \\[4pt] 1 - e^{-y}, & (x, y) \in (1, \infty] \times [0, \infty], \\ 0, & \text{elsewhere}, \end{cases}$$
with margins F and G given by
$$F(x) = \begin{cases} 0, & x < -1, \\ \dfrac{x + 1}{2}, & x \in [-1, 1], \\ 1, & x > 1, \end{cases} \qquad G(y) = \begin{cases} 0, & y < 0, \\ 1 - e^{-y}, & y \ge 0. \end{cases}$$
Quasi-inverses of F and G are given by $F^{(-1)}(u) = 2u - 1$ and $G^{(-1)}(v) = -\ln(1 - v)$ for u, v in I. Because $\operatorname{Ran} F = \operatorname{Ran} G = I$, Corollary 2.2.6 yields the copula C given by
$$C(u, v) = \frac{uv}{u + v - uv}.$$
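Corollary 2.2.6 and this example can be verified numerically: composing H with the quasi-inverses of its margins recovers the copula. A sketch (our code, not part of the thesis):

```python
import math

def H(x, y):                                   # joint distribution of Example 2.2.9
    if -1 <= x <= 1 and y >= 0:
        return (x + 1) * (math.exp(y) - 1) / (x + 2 * math.exp(y) - 1)
    if x > 1 and y >= 0:
        return 1 - math.exp(-y)
    return 0.0

F_inv = lambda u: 2 * u - 1                    # quasi-inverse of F(x) = (x+1)/2
G_inv = lambda v: -math.log(1 - v)             # quasi-inverse of G(y) = 1 - exp(-y)

def C(u, v):                                   # copula via Corollary 2.2.6
    return H(F_inv(u), G_inv(v))

for u, v in [(0.3, 0.7), (0.5, 0.5), (0.9, 0.2)]:
    print(abs(C(u, v) - u * v / (u + v - u * v)) < 1e-12)   # True: C(u,v) = uv/(u+v-uv)
```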
2.3 Copulas and random variables
We are now in a position to restate Sklar’s theorem in terms of random vari-
ables and their distribution functions:
Theorem 2.3.1 Let X and Y be random variables with distribution functions F and G, respectively, and joint distribution function H. Then there exists a copula C such that (2.12) holds. If F and G are continuous, C is unique. Otherwise, C is uniquely determined on $\operatorname{Ran} F \times \operatorname{Ran} G$.
The copula C in the above theorem will be called the copula of X and Y, and denoted $C_{XY}$ when its identification with the random variables X and Y is advantageous.
The following theorem shows that the product copula $\Pi(u, v) = uv$ characterizes independent random variables when the distribution functions are continuous. Its proof follows from Theorem 2.3.1 and the observation that X and Y are independent if and only if $H(x, y) = F(x)G(y)$ for all x, y in $\overline{\mathbb{R}}$.
Theorem 2.3.2 Let X and Y be continuous random variables. Then X and Y are independent if and only if $C_{XY} = \Pi$.
Hence, in the case of continuous margins, it is natural to define the notion of the copula of a distribution.
Definition 2.3.3 If the random vector (X, Y) has joint distribution function H with continuous marginal distributions F and G, then the copula of H (or of (X, Y)) is the distribution function C of (F(X), G(Y)).
Discrete distributions. The copula concept is slightly less natural for multi-
variate discrete distributions. This is because there is more than one copula
that can be used to join the margins to form the joint distribution function, as
the following example shows.
Example 2.3.4 Let $(X_1, X_2)$ have a bivariate Bernoulli distribution satisfying
$$P(X_1 = 0, X_2 = 0) = \tfrac{1}{8}, \qquad P(X_1 = 1, X_2 = 1) = \tfrac{3}{8},$$
$$P(X_1 = 0, X_2 = 1) = \tfrac{2}{8}, \qquad P(X_1 = 1, X_2 = 0) = \tfrac{2}{8}.$$
Clearly $P(X_1 = 0) = P(X_2 = 0) = \tfrac{3}{8}$, and the marginal distributions $F_1$ and $F_2$ of $X_1$ and $X_2$ are the same. From Sklar's theorem we know that
$$P(X_1 \le x_1, X_2 \le x_2) = C(P(X_1 \le x_1), P(X_2 \le x_2))$$
for all $x_1, x_2$ and some copula C. Since $\operatorname{Ran} F_1 = \operatorname{Ran} F_2 = \{0, \tfrac{3}{8}, 1\}$, clearly the only constraint on C is that $C(\tfrac{3}{8}, \tfrac{3}{8}) = \tfrac{1}{8}$. Any copula fulfilling this constraint is a copula of $(X_1, X_2)$, and there are infinitely many such copulas.
A useful property of the copula of a distribution is its invariance under strictly increasing transformations of the marginals. In view of Sklar's theorem and this invariance property, we interpret the copula of a distribution as a very natural way of representing the dependence structure of that distribution, certainly in the case of continuous margins.
Theorem 2.3.5 Let X and Y be continuous random variables with copula $C_{XY}$. If $\alpha$ and $\beta$ are strictly increasing functions on $\operatorname{Ran} X$ and $\operatorname{Ran} Y$, respectively, then $C_{\alpha(X)\beta(Y)} = C_{XY}$. Thus $C_{XY}$ is invariant under strictly increasing transformations of X and Y.

PROOF Let $F_1$, $G_1$, $F_2$, and $G_2$ denote the distribution functions of X, Y, $\alpha(X)$, and $\beta(Y)$, respectively. Because $\alpha$ and $\beta$ are strictly increasing, $F_2(x) = P[\alpha(X) \le x] = P[X \le \alpha^{-1}(x)] = F_1(\alpha^{-1}(x))$, and likewise $G_2(y) = G_1(\beta^{-1}(y))$. Thus, for any x, y in $\overline{\mathbb{R}}$,
$$\begin{aligned} C_{\alpha(X)\beta(Y)}(F_2(x), G_2(y)) &= P[\alpha(X) \le x, \beta(Y) \le y] \\ &= P[X \le \alpha^{-1}(x), Y \le \beta^{-1}(y)] \\ &= C_{XY}(F_1(\alpha^{-1}(x)), G_1(\beta^{-1}(y))) \\ &= C_{XY}(F_2(x), G_2(y)). \end{aligned}$$
Because X and Y are continuous, $\operatorname{Ran} F_2 = \operatorname{Ran} G_2 = I$, whence it follows that $C_{\alpha(X)\beta(Y)} = C_{XY}$ on $I^2$.
When at least one of $\alpha$ and $\beta$ is strictly decreasing, we obtain results in which the copula of the random variables $\alpha(X)$ and $\beta(Y)$ is a simple transformation of $C_{XY}$. Specifically, we have:
Theorem 2.3.6 Let X and Y be continuous random variables with copula $C_{XY}$. Let $\alpha$ and $\beta$ be strictly monotone on $\operatorname{Ran} X$ and $\operatorname{Ran} Y$, respectively.
1. If $\alpha$ is strictly increasing and $\beta$ is strictly decreasing, then $C_{\alpha(X)\beta(Y)}(u, v) = u - C_{XY}(u, 1 - v)$.
2. If $\alpha$ is strictly decreasing and $\beta$ is strictly increasing, then $C_{\alpha(X)\beta(Y)}(u, v) = v - C_{XY}(1 - u, v)$.
3. If $\alpha$ and $\beta$ are both strictly decreasing, then $C_{\alpha(X)\beta(Y)}(u, v) = u + v - 1 + C_{XY}(1 - u, 1 - v)$.
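Theorem 2.3.5 can be illustrated by simulation: Kendall's tau depends only on the copula, hence only on the ranks, so it is unchanged by strictly increasing transformations, while a strictly decreasing transformation flips its sign (cf. Theorem 2.3.6). A sketch (ours, not from the thesis):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
# Correlated normal pair (X, Y); its copula is a Gaussian copula with rho = 0.7.
xy = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=20_000)
x, y = xy[:, 0], xy[:, 1]

tau_xy, _ = kendalltau(x, y)
# Strictly increasing transforms alpha(x) = e^x and beta(y) = y^3 preserve ranks,
# hence the copula and Kendall's tau are unchanged (Theorem 2.3.5).
tau_ab, _ = kendalltau(np.exp(x), y ** 3)
print(np.isclose(tau_xy, tau_ab))            # True

# A strictly decreasing alpha flips concordance: cf. case 2 of Theorem 2.3.6.
tau_dec, _ = kendalltau(-x, y)
print(np.isclose(tau_dec, -tau_xy))          # True
```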
Although we have chosen to avoid measure theory in our definition of random variables, each joint distribution function H induces a probability measure on $\overline{\mathbb{R}}^2$ via $V_H((-\infty, x] \times (-\infty, y]) = H(x, y)$ and a standard extension to Borel subsets of $\overline{\mathbb{R}}^2$ using measure-theoretic techniques. Because copulas are joint distribution functions (with uniform (0,1) margins), each copula C induces a probability measure on $I^2$ via $V_C([0, u] \times [0, v]) = C(u, v)$ in a similar fashion; that is, the C-measure of a set is its C-volume $V_C$. Hence, at an intuitive level, the C-measure of a subset of $I^2$ is the probability that two uniform (0,1) random variables U and V with joint distribution function C assume values in that subset. C-measures are often called doubly stochastic measures, as for any measurable subset S of I, $V_C(S \times I) = V_C(I \times S) = \lambda(S)$, where $\lambda$ denotes ordinary Lebesgue measure on I. The term "doubly stochastic" is taken from matrix theory, where doubly stochastic matrices have nonnegative entries and all row sums and column sums equal 1.
For any copula C, let
$$C(u, v) = A_C(u, v) + S_C(u, v),$$
where
$$A_C(u, v) = \int_0^u \int_0^v \frac{\partial^2}{\partial s\, \partial t} C(s, t) \, dt \, ds$$
and
$$S_C(u, v) = C(u, v) - A_C(u, v).$$
Unlike bivariate distributions in general, the margins of a copula are continuous; hence a copula has no "atoms" (individual points in $I^2$ whose C-measure is positive).
If $C \equiv A_C$ on $I^2$ (that is, if, considered as a joint distribution function, C has a joint density given by $\partial^2 C(u, v)/\partial u\, \partial v$), then C is absolutely continuous, whereas if $C \equiv S_C$ on $I^2$ (that is, if $\partial^2 C(u, v)/\partial u\, \partial v = 0$ almost everywhere in $I^2$), then C is singular. Otherwise, C has an absolutely continuous component $A_C$ and a singular component $S_C$. In this case neither $A_C$ nor $S_C$ is a copula, because neither has uniform (0,1) margins. In addition, the C-measure of the absolutely continuous component is $A_C(1, 1)$, and the C-measure of the singular component is $S_C(1, 1)$.
Just as the support of a joint distribution function H is the complement of the union of all open subsets of $\overline{\mathbb{R}}^2$ with H-measure zero, the support of a copula is the complement of the union of all open subsets of $I^2$ with C-measure zero. When the support of C is $I^2$, we say C has "full support". When C is singular, its support has Lebesgue measure zero (and conversely). However, many copulas that have full support have both an absolutely continuous and a singular component.
Example 2.3.7 The support of the Fréchet-Hoeffding upper bound M is the main diagonal of $I^2$, i.e. the graph of $v = u$ for u in I, so that M is singular. This follows from the fact that the M-measure of any open rectangle that lies entirely above or below the main diagonal is zero. Also note that $\partial^2 M/\partial u\, \partial v = 0$ everywhere in $I^2$ except on the main diagonal. Similarly, the support of the Fréchet-Hoeffding lower bound W is the secondary diagonal of $I^2$, i.e. the graph of $v = 1 - u$ for u in I, and thus W is singular as well.
Example 2.3.8 The product copula $\Pi(u, v) = uv$ is absolutely continuous, because for all (u, v) in $I^2$,
$$A_\Pi(u, v) = \int_0^u \int_0^v \frac{\partial^2}{\partial s\, \partial t} \Pi(s, t) \, dt \, ds = \int_0^u \int_0^v 1 \, dt \, ds = uv = \Pi(u, v).$$
2.4 Some copula functions
There are many different copulas; here we consider two typical families: the elliptical copulas (the Gaussian copula and the Student-t copula) and several members of the Archimedean family (the Clayton, Gumbel and Frank copulas).
2.4.1 Elliptical copulas
2.4.1.1 Elliptical distributions
Elliptical distributions extend the concept of the n-dimensional normal distribution. An n-dimensional random vector X has an elliptical distribution if and only if it can be written as
$$X = \mu + R A U, \qquad (2.15)$$
where U is a k-dimensional random vector ($k \le n$) taking values on the unit sphere in $\mathbb{R}^k$ and uniformly distributed on it; A is an $n \times k$ constant matrix; R is a non-negative random variable independent of U; and $\mu$ is a constant vector (equal to the expected value of X). We can choose R and A such that $\Sigma = A A^\top$ is the covariance matrix of X.
2.4.1.2 The copulas associated to elliptical distributions
The copulas associated to elliptical distributions are widely used in finance. The most commonly used elliptical distributions are the multivariate normal and Student-t distributions.
The key advantage of elliptical copulas is that one can specify different levels of correlation between the marginals; the key disadvantages are that elliptical copulas do not have closed-form expressions and are restricted to radial symmetry. For elliptical copulas the relationship between the linear correlation coefficient $\rho$ and Kendall's tau $\tau$ is given by
$$\rho(X, Y) = \sin\Big(\frac{\pi}{2}\tau\Big).$$
Normal copula
For every (u, v) in $\operatorname{Dom} C$, the normal copula is an elliptical copula given by
$$C_\rho(u, v) = \int_{-\infty}^{\Theta^{-1}(u)} \int_{-\infty}^{\Theta^{-1}(v)} \frac{1}{2\pi (1 - \rho^2)^{1/2}} \exp\Big\{ -\frac{x^2 - 2\rho x y + y^2}{2(1 - \rho^2)} \Big\} \, dx \, dy, \qquad (2.16)$$
where $\Theta^{-1}$ is the inverse of the univariate standard normal distribution function and $\rho$, the linear correlation coefficient, is the copula parameter.
The relationship between Kendall's tau $\tau$ and the normal copula parameter $\rho$ is given by
$$\rho(X, Y) = \sin\Big(\frac{\pi}{2}\tau\Big).$$
In the multivariate case we have
$$C_\rho(u_1, \ldots, u_n, \ldots, u_N) = \Theta_\rho\big(\Theta^{-1}(u_1), \ldots, \Theta^{-1}(u_n), \ldots, \Theta^{-1}(u_N)\big), \qquad (2.17)$$
where $\rho$ is a symmetric, positive definite matrix with $\operatorname{diag}(\rho) = 1$ and $\Theta_\rho$ is the standardized multivariate normal distribution function with correlation matrix $\rho$.
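Sampling from the normal copula (2.17) is straightforward: draw a multivariate normal vector and apply the standard normal distribution function componentwise. The sketch below is our illustration (names are ours) and also checks the stated relation between $\rho$ and Kendall's tau:

```python
import numpy as np
from scipy.stats import norm, kendalltau

def sample_normal_copula(corr, size, seed=0):
    """Draw U with copula (2.17): U_i = Theta(Z_i), where Z ~ N(0, corr)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=size)
    return norm.cdf(z)               # componentwise Theta gives uniform(0,1) margins

u = sample_normal_copula(np.array([[1.0, 0.8], [0.8, 1.0]]), size=100_000)
print(u.mean(axis=0))                # margins uniform: both means close to 0.5

# rho = sin(pi*tau/2) is equivalent to tau = (2/pi) * arcsin(rho).
tau, _ = kendalltau(u[:, 0], u[:, 1])
print(tau, 2 / np.pi * np.arcsin(0.8))        # both approximately 0.59
```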
The t-copula
For every (u, v) in $\operatorname{Dom} C$, the Student-t copula is an elliptical copula defined as
$$C_{\rho,\nu}(u, v) = \int_{-\infty}^{t_\nu^{-1}(u)} \int_{-\infty}^{t_\nu^{-1}(v)} \frac{1}{2\pi (1 - \rho^2)^{1/2}} \Big\{ 1 + \frac{x^2 - 2\rho x y + y^2}{\nu (1 - \rho^2)} \Big\}^{-\frac{\nu + 2}{2}} \, dx \, dy, \qquad (2.18)$$
where $\nu$ (the number of degrees of freedom) and $\rho$ (the linear correlation coefficient) are the parameters of the copula.
When the number of degrees of freedom $\nu$ is large (around 30 or so), the t-copula converges to the normal copula, just as the Student distribution converges to the normal. But for a limited number of degrees of freedom the behaviour of the copulas is different: the t-copula places more mass in the tails than the Gaussian one and has a star-like shape. A Student-t copula with $\nu = 1$ is sometimes called a Cauchy copula.
As in the normal case (and also for all other elliptical copulas), the relationship between Kendall's tau $\tau$ and the t-copula parameter $\rho$ is given by
$$\rho(X, Y) = \sin\Big(\frac{\pi}{2}\tau\Big).$$
In the multivariate case we have
$$C_{\rho,\nu}(u_1, \ldots, u_n, \ldots, u_N) = T_{\rho,\nu}\big(t_\nu^{-1}(u_1), \ldots, t_\nu^{-1}(u_n), \ldots, t_\nu^{-1}(u_N)\big), \qquad (2.19)$$
where $\rho$ is a symmetric, positive definite matrix with unit diagonal and $T_{\rho,\nu}$ is the standardized multivariate Student-t distribution function with $\nu$ degrees of freedom and correlation matrix $\rho$.
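The t-copula can be sampled analogously, using the standard construction of a multivariate t vector as a normal vector divided by an independent $\sqrt{\chi^2_\nu / \nu}$. The sketch below (ours, with hypothetical names) also illustrates the heavier joint tails mentioned above by estimating a joint tail probability:

```python
import numpy as np
from scipy.stats import t as student_t

def sample_t_copula(corr, nu, size, seed=4):
    """U_i = t_nu(T_i), where T = Z / sqrt(S / nu), Z ~ N(0, corr), S ~ chi2(nu)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=size)
    s = rng.chisquare(nu, size=size)
    return student_t.cdf(z / np.sqrt(s / nu)[:, None], df=nu)

corr = np.array([[1.0, 0.5], [0.5, 1.0]])
u = sample_t_copula(corr, nu=3, size=200_000)

# Empirical joint tail probability P(U1 > 0.99, U2 > 0.99): visibly larger than
# the value 0.01 * 0.01 = 1e-4 obtained under the independence copula.
print(np.mean((u[:, 0] > 0.99) & (u[:, 1] > 0.99)))
```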