VIETNAM NATIONAL UNIVERSITY
UNIVERSITY OF SCIENCE
FACULTY OF MATHEMATICS, MECHANICS AND INFORMATICS
Tran Anh Chinh
DOMAINS OF PARTIAL
ATTRACTION OF INFINITELY
DIVISIBLE PROBABILITY MEASURES
Undergraduate Thesis
Advanced Undergraduate Program in Mathematics
Thesis advisor: Assoc. Prof. Dr. Ho Dang Phuc
Hanoi - 2012
Acknowledgements
I want to express my sincere gratitude to my thesis advisor, Assoc. Prof. Dr. Ho Dang Phuc, who introduced me to the field of Probability. I am especially grateful for your continual availability for discussions, your patience, and your ability to make abstract mathematics easy to grasp. You are a wonderful thesis advisor.

My great appreciation also goes to the Faculty of Mathematics, Mechanics and Informatics for my five years of education there, which provided me with my knowledge of mathematics.

Last but not least, my thanks go to my family, especially my parents, for their constant support and encouragement throughout all my years of study at the university.

Tran Anh Chinh
Ha Noi, December 2012
Introduction
In this thesis we present results on limit theorems in probability theory related to infinitely divisible distributions, stable distributions and semi-stable distributions.

The first chapter introduces basic concepts of distribution functions, characteristic functions, and the relationship between them. Besides, the chapter presents some basic knowledge on convergence of probability measures, the main tool used in the following chapters.

In the second chapter, we review some important facts on infinitely divisible distribution functions and their two special cases, stable distributions and semi-stable distributions. In addition, the canonical representations of those three kinds of probability distributions are given.

In the third chapter, we study properties of domains of partial attraction, domains of semi-attraction, and domains of attraction for probability measures. Some relations between these domains and the stability, semi-stability and infinite divisibility of probability measures are discussed.
Contents

Acknowledgements
Introduction
1 Basic concepts
   1.1 Convergence of distribution function
   1.2 Characteristic function
2 Infinitely divisible distributions, stable distributions and semi-stable distributions
   2.1 Infinitely divisible distribution
   2.2 Stable and semistable distribution
   2.3 The canonical representation
      2.3.1 Canonical representation of infinitely divisible characteristic functions
      2.3.2 Canonical representation of stable characteristic functions
      2.3.3 Canonical representation of semi-stable characteristic functions
3 Domains of partial attraction, domains of semi-attraction, and domains of attraction
   3.1 Domains of partial attraction and universal distribution
   3.2 Domains of attraction and domains of semi-attraction
Conclusion
Chapter 1
Basic concepts
1.1 Convergence of distribution function
Let $\mathcal{DF}$ be the class of distribution functions (d.f.) on $\mathbb{R}$; that is, $F \in \mathcal{DF}$ if

a) $0 \le F(x) \le 1$ for all $x \in \mathbb{R}$;

b) $F(x)$ is nondecreasing on $\mathbb{R}$;

c) $F(x)$ is left continuous at every $x \in \mathbb{R}$;

d) $\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to +\infty} F(x) = 1$.
Example 1.1.1. Let $x_0 \in \mathbb{R}$ and define
$$F_1(x) = \begin{cases} 0, & x \le x_0, \\ 1, & x > x_0. \end{cases}$$
Then $F_1(x)$ is a distribution function ($F_1 \in \mathcal{DF}$).

Example 1.1.2. Let $F_2(x)$ satisfy
$$F_2(x) = \begin{cases} 0, & x \le -1, \\ 1/2, & -1 < x \le 1, \\ 1, & x > 1. \end{cases}$$

Example 1.1.3. Let $F_3(x)$ satisfy
$$F_3(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp\left(-\frac{t^2}{2}\right) dt.$$
Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\xi$ be a random variable (r.v.) on $\mathbb{R}$, that is, $\xi : \Omega \to \mathbb{R}$ measurable. Then it is clear that the function
$$F_{\xi}(x) = P\{\omega : \xi(\omega) < x\}$$
is a distribution function; $F_{\xi}$ is called the d.f. of the random variable $\xi$.

We can see that $F_1$ is the distribution function of the degenerate random variable taking the value $x_0$ with probability 1; $F_2$ is the d.f. of the Bernoulli-type variable taking the values $-1$ and $+1$, each with probability $1/2$; and $F_3$ is the distribution function of a standard normal random variable.
Definition 1.1.1 (Weak convergence). Let $\{F_n\}$ be a sequence of d.f.'s, $F_n, F \in \mathcal{DF}$. $\{F_n\}$ is said to converge weakly to the distribution function $F$ if
$$F_n(x) \to F(x), \quad n \to \infty, \quad \forall x \in C(F),$$
where $C(F)$ is the set of continuity points of $F$; we denote this by $F_n \Rightarrow F$.

The next theorem shows that weak convergence of distribution functions is equivalent to convergence of the integrals $\int_{-\infty}^{\infty} g(x)\,dF_n(x)$ for every bounded continuous function $g$ on $\mathbb{R}$.

Theorem 1.1.1. If $F_n, F \in \mathcal{DF}$, then
$$F_n \Rightarrow F \iff \int_{-\infty}^{\infty} g\,dF_n \to \int_{-\infty}^{\infty} g\,dF$$
for every continuous and bounded function $g$ on $\mathbb{R}$.

Proof. See [2], pages 197-202.
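As a numerical illustration of Theorem 1.1.1, the following sketch (the choice of $F_n$ and of the test function $g$ is ours, not from the text) compares $\int g\,dF_n$ for the uniform law on $\{1/n, \ldots, n/n\}$ with $\int g\,dF$ for its weak limit, the uniform distribution on $(0,1)$:

```python
import numpy as np

def integral_wrt_discrete(g, atoms, weights):
    # ∫ g dF for a discrete d.f. F with the given atoms and weights
    return float(np.sum(weights * g(atoms)))

# F_n puts mass 1/n at each of 1/n, 2/n, ..., n/n; weak limit F is Uniform(0,1).
def int_Fn(g, n):
    atoms = np.arange(1, n + 1) / n
    return integral_wrt_discrete(g, atoms, np.full(n, 1.0 / n))

def int_F(g, m=200000):
    # ∫_0^1 g(x) dx by the midpoint rule, since F has density 1 on (0,1)
    x = (np.arange(m) + 0.5) / m
    return float(np.mean(g(x)))

g = lambda x: np.cos(3 * x) / (1 + x ** 2)   # bounded and continuous on R
assert abs(int_Fn(g, 5000) - int_F(g)) < 1e-3
```

The difference shrinks like $O(1/n)$ for this smooth $g$, consistent with the theorem.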
Moreover, let $(P_n)$ and $P$ be probability measures on $(\mathbb{R}, \mathcal{B}_{\mathbb{R}})$, where $\mathcal{B}_{\mathbb{R}}$ is the Borel $\sigma$-field of $\mathbb{R}$; the elements of $\mathcal{B}_{\mathbb{R}}$ are called Borel sets. Of course, $\mathcal{B}_{\mathbb{R}}$ satisfies the following conditions:

(1) $\mathbb{R} \in \mathcal{B}_{\mathbb{R}}$, $\emptyset \in \mathcal{B}_{\mathbb{R}}$;

(2) $A \in \mathcal{B}_{\mathbb{R}}$ implies that $A^c \in \mathcal{B}_{\mathbb{R}}$, where $A^c$ stands for the complement of $A$;

(3) $A_1, A_2, \ldots \in \mathcal{B}_{\mathbb{R}}$ implies that $\bigcup_{i=1}^{\infty} A_i \in \mathcal{B}_{\mathbb{R}}$ and $\bigcap_{i=1}^{\infty} A_i \in \mathcal{B}_{\mathbb{R}}$.
Proposition 1.1.1. Let $P_n$, $n \ge 1$, and $P$ be probability measures. Then the following statements are equivalent:

(a) $P_n \Rightarrow P$;

(b) $\limsup_n P_n(A) \le P(A)$ for every closed set $A$;

(c) $\liminf_n P_n(A) \ge P(A)$ for every open set $A$;

(d) $\lim_n P_n(A) = P(A)$ for every Borel set $A$ whose boundary has $P$-measure 0.

Proof. See [11], pages 172-174.
Definition 1.1.2. Let $\mathcal{P}$ denote the family of all probability measures on $\mathbb{R}$. A family of probability measures $M \subset \mathcal{P}$ is called relatively compact if every sequence $(P_n) \subset M$ contains a weakly convergent subsequence $(P_{n_k})$.

Definition 1.1.3. A family of probability measures $\{P_\alpha, \alpha \in M\} \subset \mathcal{P}$ is called tight if for every $\varepsilon > 0$ there exists a compact set $K \subset \mathbb{R}$ such that
$$\sup_{\alpha} P_\alpha(\mathbb{R} \setminus K) < \varepsilon.$$
A family of distribution functions $\{F_\alpha, \alpha \in M\}$ on $\mathbb{R}$ is tight if the family of corresponding probability measures is tight.
Proposition 1.1.2 (Prokhorov theorem). A family of probability measures on $\mathbb{R}$ is relatively compact if and only if it is tight.

Proof. See [11], pages 179-182.

Proposition 1.1.3. Suppose (i) that $\mathcal{A}$ is a subclass of $\mathcal{B}_{\mathbb{R}}$ closed under the formation of finite intersections (a $\pi$-system), and (ii) that for every $x \in \mathbb{R}$ and positive $\varepsilon$ there is in $\mathcal{A}$ a set $A$ for which $x \in A^{\circ} \subset A \subset B(x, \varepsilon)$. If $P_n(A) \to P(A)$ for every $A$ in $\mathcal{A}$, then $P_n \Rightarrow P$.

Proof. See [6], page 17.

Proposition 1.1.4. Let $S$ be a dense subset of $\mathbb{R}$. Then the set of all measures whose supports are finite subsets of $S$ is dense in $\mathcal{P}$.

Proof. See [5], pages 44-45.
1.2 Characteristic function
Definition 1.2.1. Assume that $F$ is a distribution function ($F \in \mathcal{DF}$). Then the function $f(t)$, $t \in \mathbb{R}$, determined by
$$f(t) = \int_{-\infty}^{\infty} e^{itx}\,dF(x) = \int_{-\infty}^{\infty} \cos tx\,dF(x) + i \int_{-\infty}^{\infty} \sin tx\,dF(x),$$
is called the characteristic function (c.f.) of $F$.

If $F$ is the distribution function of a random variable $\xi$, then the corresponding characteristic function is called the characteristic function of $\xi$. In that case we see that
$$f(t) = E e^{it\xi}.$$
Proposition 1.2.1.

a) $f(0) = 1$ and $|f(t)| \le 1$ for all $t \in \mathbb{R}$;

b) $f(-t) = \overline{f(t)}$;

c) $f(t)$ is uniformly continuous on $\mathbb{R}$;

d) for all real numbers $a$ and $b$,
$$f_{a\xi+b}(t) = e^{ibt} f_{\xi}(at);$$

e) if $\xi$ and $\eta$ are independent random variables, then
$$f_{\xi+\eta}(t) = f_{\xi}(t) \cdot f_{\eta}(t).$$
Proof. a)+b)+c): Parts a) and b) follow immediately from the definition of a characteristic function ($f(t) = E e^{it\xi}$). It remains to prove the uniform continuity of the function. For this purpose we introduce an inequality which will also be useful in what follows, namely: if
$$F(A) - F(-A) \ge 1 - \varepsilon$$
(where $A$ is a finite number), then
$$|f(t') - f(t'')| \le A|t' - t''| + 2\varepsilon. \qquad (1.1)$$
To prove this we note that for real $z'$ and $z''$ the following inequalities hold:
$$|e^{iz'} - e^{iz''}| \le |z' - z''|, \quad \text{since } \left|\frac{d}{dz} e^{iz}\right| = 1,$$
$$|e^{iz'} - e^{iz''}| \le 2.$$
Therefore
$$|f(t') - f(t'')| \le \left( \int_{|x| \le A} + \int_{|x| > A} \right) |e^{it'x} - e^{it''x}|\,dF(x)$$
$$\le \int_{|x| \le A} |t'x - t''x|\,dF(x) + 2\varepsilon[F(-A) + 1 - F(A)] \le A|t' - t''| + 2\varepsilon.$$
d) In fact,
$$f_{a\xi+b}(t) = E e^{it(a\xi+b)} = e^{ibt} E e^{ita\xi} = e^{ibt} f_{\xi}(at).$$
Furthermore, for $a > 0$ we have
$$F_{a\xi+b}(x) = P(a\xi + b < x) = P\left(\xi < \frac{x-b}{a}\right) = F_{\xi}\left(\frac{x-b}{a}\right).$$

e) Obviously, together with $\xi$ and $\eta$, the random variables $e^{it\xi}$ and $e^{it\eta}$ are also independent. Therefore
$$E e^{it(\xi+\eta)} = E(e^{it\xi} \cdot e^{it\eta}) = E e^{it\xi} \cdot E e^{it\eta},$$
that is,
$$f_{\xi+\eta}(t) = f_{\xi}(t) \cdot f_{\eta}(t).$$
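The elementary properties in Proposition 1.2.1 can be checked numerically for discrete laws, for which the characteristic function is a finite sum; the distributions and the point $t$ below are arbitrary illustrative choices:

```python
import numpy as np

def cf_discrete(values, probs):
    # Exact characteristic function t ↦ E e^{itX} of a discrete r.v.
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return lambda t: np.sum(probs * np.exp(1j * t * values))

# ξ takes the values -1, 0, 2 with probabilities 0.2, 0.5, 0.3
f = cf_discrete([-1, 0, 2], [0.2, 0.5, 0.3])
t = 0.7
assert abs(f(0) - 1) < 1e-12                    # a) f(0) = 1
assert abs(f(t)) <= 1 + 1e-12                   # a) |f(t)| <= 1
assert abs(f(-t) - np.conj(f(t))) < 1e-12       # b) f(-t) is the conjugate

# d) f_{aξ+b}(t) = e^{ibt} f_ξ(at), here with a = 2, b = 3
g = cf_discrete([2 * v + 3 for v in [-1, 0, 2]], [0.2, 0.5, 0.3])
assert abs(g(t) - np.exp(1j * 3 * t) * f(2 * t)) < 1e-12

# e) for independent ξ, η the c.f. of ξ + η is the product f_ξ f_η
h = cf_discrete([0, 1], [0.4, 0.6])
vals = [v + w for v in [-1, 0, 2] for w in [0, 1]]
ps = [p * q for p in [0.2, 0.5, 0.3] for q in [0.4, 0.6]]
s = cf_discrete(vals, ps)                        # law of the independent sum
assert abs(s(t) - f(t) * h(t)) < 1e-12
```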
Example 1.2.1. Let the random variable $\xi$ be distributed according to the normal distribution with expectation $a$ and variance $\sigma^2$. The characteristic function of $\xi$ is
$$\varphi(t) = \int e^{itx} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-a)^2}{2\sigma^2}}\,dx.$$
Substituting
$$z = \frac{x-a}{\sigma} - it\sigma,$$
we reduce $\varphi(t)$ to the form
$$\varphi(t) = e^{-\frac{t^2\sigma^2}{2} + iat} \frac{1}{\sqrt{2\pi}} \int_{-\infty - it\sigma}^{+\infty - it\sigma} e^{-\frac{z^2}{2}}\,dz.$$
Now, it is known that for every real $\alpha$
$$\int_{-\infty - i\alpha}^{+\infty - i\alpha} e^{-\frac{z^2}{2}}\,dz = \sqrt{2\pi};$$
consequently
$$\varphi(t) = e^{ita - \frac{1}{2} t^2 \sigma^2}.$$
Example 1.2.2. Let $X$ have the Poisson distribution with parameter $\lambda > 0$, and let $f_X$ be the characteristic function of $X$. Here
$$P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \quad k = 0, 1, \ldots,$$
so we have
$$f_X(t) = \sum_{k=0}^{\infty} e^{itk} \frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda} \sum_{k=0}^{\infty} \frac{(\lambda e^{it})^k}{k!} = e^{-\lambda} \cdot e^{\lambda e^{it}} = e^{\lambda(e^{it}-1)}.$$
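Both closed forms can be confirmed numerically; the parameter values and grid sizes below are illustrative choices, not taken from the text:

```python
import numpy as np

# Normal(a, σ²): integrate e^{itx} against the density on a wide grid.
a, sigma, t = 1.5, 2.0, 0.8
x = np.linspace(a - 12 * sigma, a + 12 * sigma, 400001)
dx = x[1] - x[0]
dens = np.exp(-(x - a) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
numeric = np.sum(np.exp(1j * t * x) * dens) * dx
closed = np.exp(1j * t * a - t ** 2 * sigma ** 2 / 2)
assert abs(numeric - closed) < 1e-6

# Poisson(λ): sum the series Σ_k e^{itk} λ^k e^{-λ}/k! directly.
lam, K = 3.0, 60
pmf = np.empty(K)
pmf[0] = np.exp(-lam)
for k in range(1, K):
    pmf[k] = pmf[k - 1] * lam / k            # λ^k e^{-λ}/k!, computed recursively
series = np.sum(pmf * np.exp(1j * t * np.arange(K)))
assert abs(series - np.exp(lam * (np.exp(1j * t) - 1))) < 1e-10
```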
In the next theorems we establish two main properties of the relationship between characteristic functions and distribution functions: a d.f. is uniquely determined by its c.f., and the correspondence between c.f.'s and d.f.'s is continuous.

Theorem 1.2.1. Let $F$ be a distribution function and $f$ its characteristic function. If $x_1, x_2$ are two continuity points of $F(x)$, then
$$F(x_2) - F(x_1) = \frac{1}{2\pi} \lim_{c\to\infty} \int_{-c}^{c} \frac{e^{-itx_1} - e^{-itx_2}}{it}\,f(t)\,dt.$$
Proof. For the sake of definiteness let $x_1 < x_2$. Set
$$I_c = \frac{1}{2\pi} \int_{-c}^{c} \frac{e^{-itx_1} - e^{-itx_2}}{it}\,f(t)\,dt.$$
Substituting here for $f(t)$ its expression in terms of $F(x)$ and changing the order of integration, we easily find
$$I_c = \int \left[ \frac{1}{2\pi} \int_{-c}^{c} \frac{e^{it(z-x_1)} - e^{it(z-x_2)}}{it}\,dt \right] dF(z) = \int \frac{1}{\pi} \int_{0}^{c} \left[ \frac{\sin t(z-x_1)}{t} - \frac{\sin t(z-x_2)}{t} \right] dt\,dF(z).$$
Now, for every $\alpha$ and $c$,
$$\left| \frac{1}{\pi} \int_{0}^{c} \frac{\sin \alpha t}{t}\,dt \right| = \left| \frac{1}{\pi} \int_{0}^{\alpha c} \frac{\sin s}{s}\,ds \right| < 1, \qquad (1.2)$$
and for $c \to \infty$
$$\frac{1}{\pi} \int_{0}^{c} \frac{\sin \alpha t}{t}\,dt \to \begin{cases} \ \ \frac{1}{2}, & \text{if } \alpha > 0, \\ -\frac{1}{2}, & \text{if } \alpha < 0. \end{cases} \qquad (1.3)$$
Also, this approach to the limit is uniform with respect to $\alpha$ in every domain $\alpha > \delta > 0$ (respectively $\alpha < -\delta < 0$).

Now choose $\delta$ so small that $x_1 + \delta < x_2 - \delta$ and write $I_c$ as a sum of five integrals:
$$I_c = \int_{-\infty}^{x_1-\delta} \psi(c,z,x_1,x_2)\,dF(z) + \int_{x_1-\delta}^{x_1+\delta} \psi(c,z,x_1,x_2)\,dF(z) + \int_{x_1+\delta}^{x_2-\delta} \psi(c,z,x_1,x_2)\,dF(z)$$
$$+ \int_{x_2-\delta}^{x_2+\delta} \psi(c,z,x_1,x_2)\,dF(z) + \int_{x_2+\delta}^{\infty} \psi(c,z,x_1,x_2)\,dF(z),$$
where
$$\psi(c,z,x_1,x_2) = \frac{1}{\pi} \int_{0}^{c} \left[ \frac{\sin t(z-x_1)}{t} - \frac{\sin t(z-x_2)}{t} \right] dt.$$
From (1.3) it follows that, as $c \to \infty$,
$$\psi(c,z,x_1,x_2) \to 0 \quad \text{for } z < x_1 - \delta \text{ and } z > x_2 + \delta$$
and
$$\psi(c,z,x_1,x_2) \to 1 \quad \text{for } x_1 + \delta < z < x_2 - \delta,$$
both limits being uniform with respect to $z$. In the intervals $(x_1-\delta, x_1+\delta)$ and $(x_2-\delta, x_2+\delta)$ we know that
$$|\psi(c,z,x_1,x_2)| \le 2.$$
From the relations obtained above we conclude that for every $\delta > 0$
$$\lim_{c\to\infty} I_c = F(x_2-\delta) - F(x_1+\delta) + R(\delta, x_1, x_2), \qquad (1.4)$$
where
$$|R(\delta, x_1, x_2)| \le 2\left[ F(x_1+\delta) - F(x_1-\delta) + F(x_2+\delta) - F(x_2-\delta) \right].$$
The left side of (1.4) does not depend on $\delta$, and the limit of the right side as $\delta$ tends to zero is $F(x_2) - F(x_1)$, by the choice of the points $x_1$ and $x_2$.
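A quick numerical check of the inversion formula in Theorem 1.2.1, using the standard normal law, whose c.f. $e^{-t^2/2}$ and d.f. $\Phi$ are known in closed form; the cutoff $c$, the grid, and the helper `Phi` are our choices:

```python
import numpy as np
from math import erf, sqrt

# f(t) = e^{-t²/2} is the c.f. of the standard normal law; evaluate the
# inversion integral with a large cutoff c and compare with Φ(x2) - Φ(x1).
x1, x2 = -1.0, 0.5
dt = 0.0005
t = (np.arange(200001) - 100000) * dt       # uniform grid on [-50, 50], contains 0
denom = np.where(t == 0, 1.0, t)
val = (np.exp(-1j * t * x1) - np.exp(-1j * t * x2)) / (1j * denom)
val[t == 0] = x2 - x1                       # removable singularity at t = 0
approx = np.real(np.sum(val * np.exp(-t ** 2 / 2)) * dt / (2 * np.pi))

Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal d.f.
assert abs(approx - (Phi(x2) - Phi(x1))) < 1e-6
```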
Theorem 1.2.2. A distribution function is uniquely determined by its characteristic function.

Proof. From the above theorem it follows immediately that at every continuity point $x$ of the function $F(x)$ the following formula applies:
$$F(x) = \frac{1}{2\pi} \lim_{y\to-\infty} \lim_{c\to\infty} \int_{-c}^{c} \frac{e^{-ity} - e^{-itx}}{it}\,f(t)\,dt,$$
where the limit in $y$ is taken over the set of points $y$ which are continuity points of $F(y)$.
Let us now study the continuity of the correspondence between d.f.'s and c.f.'s.

Theorem 1.2.3. Let $F_n, F \in \mathcal{DF}$, and let $f_n(t), f(t)$ be their characteristic functions. If $F_n \Rightarrow F$, then
$$f_n(t) \to f(t)$$
as $n \to \infty$, uniformly in every bounded interval $|t| \le T$.

Theorem 1.2.4. If $f_n(t)$ is the characteristic function of the distribution $F_n$ and $f_n(t)$ converges, as $n \to \infty$ and for all $t$, to a continuous function $f(t)$, then the distributions $F_n$ converge weakly to a distribution function $F$ with characteristic function $f(t)$.

To prove the two theorems above, we need the following proposition.

Proposition 1.2.2. In order that the set $S$ of distributions be conditionally compact (which means that every sequence from the family $S$ contains a weakly convergent subsequence), it is necessary and sufficient that the conditions
$$F(x) \to 0 \quad \text{for } x \to -\infty, \qquad F(x) \to 1 \quad \text{for } x \to +\infty$$
be satisfied uniformly in $S$.
Proof (of Theorem 1.2.3). Let $F_n \Rightarrow F$. From the definition of weak convergence it follows that $f_n(t) \to f(t)$ for every $t$. Since $F_n \Rightarrow F$, the $F_n$ form a conditionally compact set in the sense of weak convergence. From the above proposition and (1.1) we conclude that the corresponding characteristic functions $f_n(t)$ are equicontinuous. Moreover, the $f_n(t)$ are uniformly bounded. Therefore the convergence $f_n(t) \to f(t)$ as $n \to \infty$ must be uniform in every finite interval.
Before proceeding to the proof of Theorem 1.2.4, let us introduce an inequality. Let $\tau > 0$, $X > 0$, and $\frac{1}{\tau X} < 1$. Then
$$P\{-X; +X\} \ge \left( \left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f(t)\,dt \right| - \frac{1}{\tau X} \right) \Big/ \left( 1 - \frac{1}{\tau X} \right). \qquad (1.5)$$
In particular,
$$P\left\{ -\frac{2}{\tau}; \frac{2}{\tau} \right\} \ge 2 \left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f(t)\,dt \right| - 1. \qquad (1.6)$$
Proof (of Theorem 1.2.4). We have
$$\left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f(t)\,dt \right| = \left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} \left( \int e^{itx}\,dP \right) dt \right| = \left| \int \left( \frac{1}{2\tau} \int_{-\tau}^{+\tau} e^{itx}\,dt \right) dP \right|$$
$$\le \left( \int_{|x| \le X} + \int_{|x| > X} \right) \left| \frac{\sin \tau x}{\tau x} \right| dP \le P\{-X; +X\} + \frac{1}{\tau X}\big(1 - P\{-X; +X\}\big) \qquad (1.7)$$
(in the last estimate we use the inequalities
$$\left| \frac{\sin \tau x}{\tau x} \right| \le 1, \qquad \left| \frac{\sin \tau x}{\tau x} \right| \le \frac{1}{\tau |x|}).$$
It is easy to see that (1.7) is equivalent to (1.5).
Now let the conditions of the theorem be satisfied. Then for every $\varepsilon > 0$ we can find a $\tau > 0$ such that
$$\left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f(t)\,dt - 1 \right| < \frac{\varepsilon}{2}.$$
Consequently,
$$\left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f_n(t)\,dt - 1 \right| \le \left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} (f_n(t) - f(t))\,dt \right| + \left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f(t)\,dt - 1 \right| \le \frac{\varepsilon}{2} + \frac{1}{2\tau} \int_{-\tau}^{+\tau} |f_n(t) - f(t)|\,dt.$$
But $f_n(t) \to f(t)$ and $|f_n(t) - f(t)| \le 2$. Hence for fixed $\varepsilon > 0$ and $\tau > 0$ and for $n \ge n(\varepsilon, \tau)$,
$$\left| \frac{1}{2\tau} \int_{-\tau}^{+\tau} f_n(t)\,dt - 1 \right| \le \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$$
Noting that $P_n(-X, +X)$ does not decrease with increasing $X$, we conclude from (1.6) and the above proposition that the set of distributions $\{P_n\}$ is conditionally compact. For every convergent subsequence $\{P_{n_k}\}$ the functions $f_{n_k}$ converge to some continuous function $g(t)$, by Theorem 1.2.3. Then $g(t)$ must coincide with $f(t)$. Consequently, the conditionally compact set $\{P_n\}$ has a unique limit point, which means that $P_n \Rightarrow P$ as $n \to \infty$, or in other words,
$$F_n \Rightarrow F \quad (n \to \infty).$$
Next we study the relationship between the convolution of distribution functions and the product of characteristic functions.

Definition 1.2.2. The function $F(x)$, $x \in \mathbb{R}$, is called the convolution of the distribution functions $F_1$ and $F_2$ ($\in \mathcal{DF}$), denoted $F = F_1 * F_2$, if
$$F(x) = \int_{-\infty}^{\infty} F_1(x-y)\,dF_2(y), \quad \forall x \in \mathbb{R}.$$
It is easy to see that if $F_1, F_2 \in \mathcal{DF}$, then $F \in \mathcal{DF}$ too.

Proposition 1.2.3. $F = F_1 * F_2 \iff \varphi = \varphi_1 \cdot \varphi_2$.

Proof. See [2], pages 228-229.
The above proposition yields the following consequences.

Corollary 1.2.1. The convolution has the following properties:
$$F_1 * F_2 = F_2 * F_1,$$
$$(F_1 * F_2) * F_3 = F_1 * (F_2 * F_3).$$

Corollary 1.2.2. Assume that $\xi$ and $\eta$ are two independent random variables with distribution functions $F_\xi$ and $F_\eta$. Then the distribution of $\xi + \eta$ is
$$F_{\xi+\eta} = F_\xi * F_\eta.$$

Corollary 1.2.3. If the distribution function $F_1$ has density function $p_1 = F_1'$, then the convolution $F = F_1 * F_2$ has density function
$$\frac{d}{dx} F(x) = \int_{-\infty}^{\infty} p_1(x-y)\,dF_2(y).$$

Corollary 1.2.4. Assume that $\xi$ and $\eta$ are independent random variables with distribution functions $F_\xi$ and $F_\eta$, and in addition $F_\xi$ has density function $p_\xi = F_\xi'$. Then the distribution $F_{\xi+\eta}$ of the sum $\xi + \eta$ has density function
$$\frac{d}{dx} F_{\xi+\eta}(x) = \int_{-\infty}^{\infty} p_\xi(x-y)\,dF_\eta(y).$$
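Corollary 1.2.3 can be illustrated numerically with two Uniform(0,1) laws, whose convolution has the triangle density on $(0,2)$; the grid and test points are our choices:

```python
import numpy as np

# p1 = 1 on (0,1) is the Uniform(0,1) density; F2 is the Uniform(0,1) d.f.,
# so (d/dx)(F1*F2)(x) = ∫_0^1 p1(x - y) dy, the triangle density on (0,2).
p1 = lambda x: ((0 < x) & (x < 1)).astype(float)
y = np.linspace(0, 1, 100001)
dy = y[1] - y[0]

def conv_density(x):
    # Riemann-sum approximation of ∫_0^1 p1(x - y) dy
    return float(np.sum(p1(x - y)) * dy)

# Exact triangle density of the sum of two independent Uniform(0,1) variables
triangle = lambda x: x if 0 <= x <= 1 else (2 - x if 1 < x <= 2 else 0.0)

for x in (0.25, 0.8, 1.5):
    assert abs(conv_density(x) - triangle(x)) < 1e-3
```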
Chapter 2
Infinitely divisible distributions,
stable distributions and semi-stable
distributions
Let us fix notation: if $\xi$ and $\eta$ are random variables with distribution functions $F_\xi$ and $F_\eta$, then $T_a F_\xi$ denotes the distribution function of the random variable $a\xi$, and $F_\xi * F_\eta$ is the distribution function of the random variable $\xi + \eta$. Furthermore, $\delta(b)$ denotes the distribution concentrated at the point $b$.

Moreover, we need two definitions. Assume that for every $n = 1, 2, \ldots$, the $\xi_{nk}$, $k = 1, 2, \ldots, k_n$, are random variables, where $k_n \to \infty$ as $n \to \infty$. The variables $\xi_{nk}$ are called infinitesimal if
$$\sup_{1 \le k \le k_n} P\{|\xi_{nk}| \ge \varepsilon\} \to 0$$
as $n \to \infty$ for every $\varepsilon > 0$.

The variables $\xi_{nk}$ are called asymptotically constant if it is possible to find constants $b_{nk}$ such that for every $\varepsilon > 0$
$$\sup_{1 \le k \le k_n} P\{|\xi_{nk} - b_{nk}| \ge \varepsilon\} \to 0$$
as $n \to \infty$.
2.1 Infinitely divisible distribution
A distribution function $F$ and its characteristic function $f$ are called infinitely divisible if for each $n \in \mathbb{N}$, $n > 0$, there exists a characteristic function $f_n(t)$ such that
$$f(t) = (f_n(t))^n. \qquad (2.1)$$
Equivalently, for each $n \in \mathbb{N}$, $n > 0$, there exists a distribution function $F_n(x)$ such that
$$F(x) = F_n^{*n}(x), \qquad (2.2)$$
where $F_n^{*n}$ is the convolution power determined recursively by $F_n^{*n} = F_n * F_n * \cdots * F_n$ ($n$ times).
Example 2.1.1. The normal distribution $N(a, \sigma^2)$ is infinitely divisible. Indeed, for the c.f. of the normal d.f. we have
$$\exp\left[ ita - \frac{\sigma^2 t^2}{2} \right] = \left[ \exp\left( \frac{ita}{n} - \frac{\sigma^2 t^2}{2n} \right) \right]^n.$$
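Infinite divisibility can be checked numerically through (2.1): on a grid of $t$ values, the $N(a, \sigma^2)$ c.f. is the $n$-th power of the $N(a/n, \sigma^2/n)$ c.f., and likewise the Poisson($\lambda$) c.f. is the $n$-th power of the Poisson($\lambda/n$) c.f. (the parameters below are arbitrary choices):

```python
import numpy as np

t = np.linspace(-5, 5, 101)
a, sigma2, lam, n = 1.0, 2.0, 3.0, 7

normal_cf = lambda t, a, s2: np.exp(1j * a * t - s2 * t ** 2 / 2)
poisson_cf = lambda t, lam: np.exp(lam * (np.exp(1j * t) - 1))

# f(t) = (f_n(t))^n with f_n the c.f. of the "n-th fraction" of the law
assert np.allclose(normal_cf(t, a, sigma2), normal_cf(t, a / n, sigma2 / n) ** n)
assert np.allclose(poisson_cf(t, lam), poisson_cf(t, lam / n) ** n)
```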
2.2 Stable and semistable distribution
Stable d.f.'s are defined as follows.

Definition 2.2.1 (Stable distribution). A distribution function $F$ and its characteristic function $f$ are called stable if for all $a_1 > 0$, $a_2 > 0$ there exist $a, b \in \mathbb{R}$, $a > 0$, such that
$$f(a_1 t) f(a_2 t) = e^{ibt} f(at), \qquad (2.3)$$
or equivalently, for all $a_1 > 0$, $a_2 > 0$ and all $b_1, b_2 \in \mathbb{R}$ there exist $a, b \in \mathbb{R}$, $a > 0$, such that
$$F(a_1 x + b_1) * F(a_2 x + b_2) = F(ax + b). \qquad (2.4)$$
Theorem 2.2.1. Let $\{\xi_n\}$ be a sequence of independent random variables having the same distribution function $F(x)$; the distribution function of the sum $S_n = \sum_{k=1}^{n} \xi_k$ is $F^{*n}(x)$, and the distribution function of the normed sum $S_n/a_n - b_n$, where $a_n > 0$ and $b_n$ are real numbers, is
$$T_{a_n} F^{*n} * \delta(b_n). \qquad (2.5)$$
The distribution function $\Phi(x)$ is stable if and only if it is the limit distribution function of a sequence of the form (2.5).

Proof. See the next chapter.
Example 2.2.1. Let $S_n = X_1 + X_2 + \cdots + X_n$, where $\{X_n\}$ is a sequence of independent random variables with the same distribution function, mean value $EX_i = b$ and variance $DX_i = \sigma^2 > 0$, and put $b_n = nb$, $a_n = \sigma\sqrt{n}$. Then the distribution of the sum $\frac{S_n - b_n}{a_n}$ converges weakly to the standard normal distribution. So we can see that the normal distributions belong to the class of stable distributions.
Example 2.2.2. Let $S_n$ be as in the previous example, and let $X_1$ have the Cauchy density
$$g(x) = \frac{a}{\pi(x^2 + a^2)}, \quad a > 0,$$
and $a_n = n$, $b_n = 0$. Then $S_n/a_n = S_n/n$ has characteristic function
$$f_{S_n/n}(t) = \left[ f_{X_1}(t/n) \right]^n = \left[ \exp\left\{ -\frac{a}{n}|t| \right\} \right]^n = e^{-a|t|}.$$
This means that $S_n/n$ also has the Cauchy distribution; moreover, it is stable.
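The stability of the Cauchy law can also be seen at the level of densities: the convolution $g * g$ is the Cauchy density with parameter $2a$, so $(X_1 + X_2)/2$ is again Cauchy($a$). A numerical sketch (the grid and test points are our choices):

```python
import numpy as np

# Numerically convolve the Cauchy(a) density with itself and compare with
# the Cauchy(2a) density 2a / (π (s² + 4a²)).
a = 1.0
g = lambda x: a / (np.pi * (x ** 2 + a ** 2))
x = np.arange(-200.0, 200.0, 0.01)
for s in (0.0, 0.5, 3.0):
    conv = np.sum(g(x) * g(s - x)) * 0.01      # density of X1 + X2 at s
    exact = 2 * a / (np.pi * (s ** 2 + 4 * a ** 2))
    assert abs(conv - exact) < 1e-4
```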
In the following we give the concept of semi-stability, which yields a broader class than the class of stable distributions.

Definition 2.2.2 (Semi-stable distribution). Assume that there exists $\{F(x), a_k, b_k, n_k\}$ with numerical sequences $\{a_k\}$ and $\{b_k\}$, and an increasing sequence of natural numbers $\{n_k\}$, such that
$$T_{a_k} F^{*n_k} * \delta(b_k) \Rightarrow \Phi(x). \qquad (2.6)$$
We assume in addition that
$$\lim_{k\to\infty} (n_k/n_{k+1}) = r > 0; \qquad (2.7)$$
then the distribution function $\Phi(x)$ will be called a semi-stable distribution function, or more precisely an $r$-semistable distribution function.
2.3 The canonical representation
2.3.1 Canonical representation of infinitely divisible characteristic functions
The next two lemmas will be used in the study of the canonical representation of infinitely divisible characteristic functions.
Lemma 2.3.1. Let $F, G$ be distribution functions on $\mathbb{R}$ and $h_1 : \mathbb{R} \to \mathbb{R}$, $h_1 \ge 0$. Suppose that
$$\int_{-\infty}^{+\infty} h_1(x)\,dF(x) < \infty$$
and
$$G(x) = \int_{-\infty}^{x} h_1(t)\,dF(t);$$
then
$$\int_{A} h_2(x)\,dG(x) = \int_{A} h_1(x) h_2(x)\,dF(x)$$
for every Borel subset $A \subset \mathbb{R}$ and measurable function $h_2 : \mathbb{R} \to \mathbb{R}$.

Lemma 2.3.2. Under the same conditions as in the above lemma, suppose that $h_1(x) > 0$ for all $x \in \mathbb{R}$. Then
$$\int_{A} h_2(x)\,dF(x) = \int_{A} \frac{h_2(x)}{h_1(x)}\,dG(x);$$
both integrals exist or neither exists.
Moreover, recall that a random variable $X$ with parameter $\lambda > 0$ and probabilities
$$P[X = k] = \frac{\lambda^k e^{-\lambda}}{k!}, \quad k = 0, 1, \ldots,$$
is said to have the Poisson distribution.
Before studying the canonical representation of infinitely divisible functions, we need the following proposition.

Proposition 2.3.1. The totality of infinitely divisible distribution laws coincides with the totality of laws which are composed of a finite number of Poisson laws, together with the limits of such laws in the sense of weak convergence.
Proof. Necessity. The composition of a finite number of Poisson distributions, as well as any limit of such distributions, is infinitely divisible, because:

(i) the d.f. of the sum of a finite number of independent infinitely divisible random variables is infinitely divisible. Indeed, let $\xi, \nu$ have c.f.'s $f(t)$ and $g(t)$, and let $X = \xi + \nu$ have c.f. $h(t)$. There exist c.f.'s $f_n, g_n$ such that
$$h(t) = f(t) g(t) = \{f_n\}^n \cdot \{g_n\}^n = \{f_n g_n\}^n$$
for every $n$;

(ii) a distribution function which is the limit, in the sense of weak convergence, of infinitely divisible distribution functions is infinitely divisible. Indeed, let $F^{(k)}(x)$ be infinitely divisible d.f.'s and $F^{(k)}(x) \Rightarrow F(x)$ as $k \to \infty$. If $f^{(k)}(t)$ is the c.f. of $F^{(k)}(x)$, then $f^{(k)}(t) \to f(t)$. Moreover, put $f^{(k)}_n = \{f^{(k)}(t)\}^{1/n}$; for every $n$, as $k \to \infty$, $f^{(k)}_n \to f_n$. By the previous item it follows that $f_n$ is a characteristic function. So for every natural number $n$ the relation $f(t) = \{f_n\}^n$ holds.
Sufficiency. Let $f(t)$ be an infinitely divisible c.f. By hypothesis
$$f_n(t) = f^{1/n}(t)$$
is a characteristic function; thus
$$f_n(t) = \int e^{itx}\,dF_n(x),$$
where $F_n(x)$ is a d.f. We have
$$n(f_n(t) - 1) \to \log f(t), \quad n \to \infty,$$
and therefore
$$e^{n(f_n(t) - 1)} \to f(t), \quad n \to \infty. \qquad (2.8)$$
We represent the above integral as the limit, as $m \to \infty$, of the Stieltjes sums
$$\sum_{k=1}^{m} e^{itc_k}\left[ F_n(c_k) - F_n(c_{k-1}) \right] \to f_n(t). \qquad (2.9)$$
Put
$$a_k = n\left[ F_n(c_k) - F_n(c_{k-1}) \right].$$
Comparing (2.8) and (2.9), it is easy to see that
$$e^{\sum_{k=1}^{m} a_k (e^{itc_k} - 1)} \to f(t),$$
and each factor $e^{a_k(e^{itc_k} - 1)}$ is the characteristic function of a Poisson-type law.
The following theorem gives the canonical representation of infinitely divisible characteristic functions.

Theorem 2.3.1. In order that the function $f(t)$ be the characteristic function of an infinitely divisible distribution, it is necessary and sufficient that its logarithm be representable in the form
$$\log f(t) = i\gamma t + \int \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) \frac{1+u^2}{u^2}\,dG(u), \qquad (2.10)$$
where $\gamma$ is a real constant, $G(u)$ is a nondecreasing function of bounded variation, and the integrand at $u = 0$ is defined by the equation
$$\left. \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) \frac{1+u^2}{u^2} \right|_{u=0} = -\frac{t^2}{2}.$$
The representation of $\log f(t)$ by the formula (2.10) is unique.
Proof. Necessity. Suppose that $F(x)$ is an infinitely divisible distribution and $f(t)$ its characteristic function. Then for every $n > 0$
$$f(t) = [f_n(t)]^n,$$
where $f_n(t)$ is a characteristic function. Since $f(t) \ne 0$, it is easy to see that
$$n[f_n(t) - 1] = n \int (e^{itx} - 1)\,dF_n(x) \to \log f(t)$$
(the relation can be proved in the following way:
$$n(a^{1/n} - 1) = n(e^{\frac{1}{n}\log a} - 1) = n\left[ 1 + \frac{1}{n}\log a + o\left(\frac{1}{n}\right) - 1 \right] \to \log a),$$
where $F_n(x)$ is the distribution function corresponding to the characteristic function $f_n(t)$.
Put
$$G_n(u) = n \int_{-\infty}^{u} \frac{x^2}{1+x^2}\,dF_n(x)$$
and
$$I_n(t) = \int (e^{itu} - 1) \frac{1+u^2}{u^2}\,dG_n(u). \qquad (2.11)$$
Then by Lemma 2.3.1 the preceding relation may be written as
$$I_n(t) \to \log f(t) \qquad (2.12)$$
and we conclude that
$$\operatorname{Re} I_n(t) = \int (\cos ut - 1) \frac{1+u^2}{u^2}\,dG_n(u) \to \log |f(t)|.$$
We shall prove that $G_n(+\infty)$ is bounded. For this purpose consider the expressions
$$A_n = \int_{|u| \le 1} dG_n(u), \qquad B_n = \int_{|u| > 1} dG_n(u), \qquad C_n = A_n + B_n = \int dG_n(u),$$
and let $0 \le t \le 2$. It is evident that for every $\varepsilon > 0$ and for sufficiently large $n$
$$-\log |f(t)| + \varepsilon \ge \int_{|u| \le 1} (1 - \cos tu) \frac{1+u^2}{u^2}\,dG_n(u)$$
and
$$-\log |f(t)| + \varepsilon \ge \int_{|u| > 1} (1 - \cos tu) \frac{1+u^2}{u^2}\,dG_n(u).$$
For $|u| \le 1$
$$\frac{1 - \cos u}{u^2} > \frac{1}{3},$$
hence the first inequality above (with $t = 1$) gives
$$-\log |f(1)| + \varepsilon > \frac{1}{3} A_n. \qquad (2.13)$$
Taking, in the interval $0 \le t \le 2$, the mean of the functions on both sides of the second inequality above, we obtain
$$-\frac{1}{2} \int_{0}^{2} \log |f(t)|\,dt + \varepsilon \ge \int_{|u| > 1} \left( 1 - \frac{\sin 2u}{2u} \right) dG_n(u) \ge \frac{1}{2} B_n. \qquad (2.14)$$
Since the quantities $\log |f(1)|$ and $\frac{1}{2}\int_{0}^{2} \log |f(t)|\,dt$ are finite, it follows from (2.13) and (2.14) that $G_n(+\infty)$ is bounded. We shall now prove that
$$\int_{|u| > T} dG_n(u) \to 0, \quad T \to \infty,$$
uniformly with respect to $n$. In fact, for every $\varepsilon > 0$ and for sufficiently large $n$
$$-\log |f(t)| + \varepsilon \ge \int_{|u| \ge T} (1 - \cos tu)\,dG_n(u).$$
Taking the means over the interval $0 \le t \le \frac{2}{T}$ ($T \ge 1$) of both sides of the inequality, we obtain
$$-\frac{T}{2} \int_{0}^{2/T} \log |f(t)|\,dt + \varepsilon \ge \int_{|u| \ge T} \left( 1 - \frac{T \sin \frac{2u}{T}}{2u} \right) dG_n(u).$$
But for $|u| \ge T$
$$1 - \frac{T \sin \frac{2u}{T}}{2u} \ge \frac{1}{2},$$
and for $T \ge T_0$
$$\left| \frac{T}{2} \int_{0}^{2/T} \log |f(t)|\,dt \right| \le \max_{0 \le t \le \frac{2}{T}} \big| \log |f(t)| \big| < \varepsilon.$$
Therefore for $T \ge T_0$
$$\int_{|u| \ge T} dG_n(u) \le 4\varepsilon.$$
Now on the basis of Proposition 1.2.2 we can choose a subsequence of $G_n(u)$ such that
$$G_{n_k}(u) \Rightarrow G(u),$$
where $G(u)$ is a nondecreasing function of bounded variation.
Put
$$\gamma_{n_k} = \int \frac{dG_{n_k}(u)}{u} = n_k \int \frac{x}{1+x^2}\,dF_{n_k}(x);$$
then it is evident that
$$I_{n_k}(t) = \int \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) \frac{1+u^2}{u^2}\,dG_{n_k}(u) + it\gamma_{n_k}.$$
The integral on the right side of this equation, as $k \to \infty$, converges to
$$\int \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) \frac{1+u^2}{u^2}\,dG(u).$$
From (2.12) we conclude that $\gamma_{n_k}$ must converge to some number $\gamma$ as $k \to \infty$. Thus the first part of the theorem is proved.
Sufficiency. Suppose that (2.10) holds. The integral on the right-hand side of (2.10) is the limit of the Stieltjes sums (all $c_k$ are taken to be different from zero):
$$\sum_{k=1}^{m} \left( e^{itc_k} - 1 - \frac{itc_k}{1+c_k^2} \right) \frac{1+c_k^2}{c_k^2} \left[ G(c_k) - G(c_{k-1}) \right] \to \int \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) \frac{1+u^2}{u^2}\,dG(u).$$
Each term on the left is the logarithm of the characteristic function of a Poisson law (see Proposition 2.3.1).
It remains to prove the uniqueness of the representation of the logarithm of an infinitely divisible characteristic function by (2.10). From (2.10) it is easily deduced that
$$-v(t) = \int_{t-1}^{t+1} \log f(z)\,dz - 2\log f(t) = -2 \int e^{itu} \left( 1 - \frac{\sin u}{u} \right) \frac{1+u^2}{u^2}\,dG(u).$$
Putting
$$V(u) = 2 \int_{-\infty}^{u} \left( 1 - \frac{\sin v}{v} \right) \frac{1+v^2}{v^2}\,dG(v),$$
we find that
$$v(t) = \int e^{itu}\,dV(u).$$
The function $V(u)$ is nondecreasing; hence it is uniquely determined by its characteristic function $v(t)$. Since for all $v$
$$\left( 1 - \frac{\sin v}{v} \right) \frac{1+v^2}{v^2} > 0,$$
according to Lemma 2.3.2 the function $G(v)$ is uniquely determined by the function $V(u)$. The theorem is proved.
Define the functions $M(u)$ and $N(u)$ and the constant $\sigma^2$ by setting
$$M(u) = \int_{-\infty}^{u} \frac{1+z^2}{z^2}\,dG(z) \quad \text{for } u < 0,$$
$$N(u) = -\int_{u}^{\infty} \frac{1+z^2}{z^2}\,dG(z) \quad \text{for } u > 0,$$
$$\sigma^2 = G(+0) - G(-0). \qquad (2.15)$$
The functions $M(u)$ and $N(u)$:

(i) are respectively nondecreasing in the intervals $(-\infty, 0)$ and $(0, +\infty)$;

(ii) are continuous at those and only those points at which $G(u)$ is continuous;

(iii) satisfy the relations
$$M(-\infty) = N(+\infty) = 0$$
and
$$\int_{-\varepsilon}^{0} u^2\,dM(u) + \int_{0}^{\varepsilon} u^2\,dN(u) < +\infty$$
for every finite $\varepsilon > 0$.
Conversely, any two functions $M(u)$ and $N(u)$ satisfying conditions (i)-(iii) and any constant $\sigma \ge 0$ determine by (2.15) the characteristic function of some infinitely divisible law. In terms of $M(u)$ and $N(u)$ we can write (2.10) in the following form:
$$\log f(t) = i\gamma t - \frac{\sigma^2}{2} t^2 + \int_{-\infty}^{0} \left( e^{iut} - 1 - \frac{iut}{1+u^2} \right) dM(u) + \int_{0}^{\infty} \left( e^{iut} - 1 - \frac{iut}{1+u^2} \right) dN(u). \qquad (2.16)$$
We shall call (2.16) Lévy's formula.
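As an illustration of Lévy's formula (2.16): for the Poisson($\lambda$) law one may take $\sigma = 0$, $M \equiv 0$, $N$ with a single jump of size $\lambda$ at $u = 1$, and $\gamma = \lambda/2$ (this particular choice of parameters is ours). The following check, with an arbitrary $\lambda$, confirms that the formula then reproduces $e^{\lambda(e^{it}-1)}$:

```python
import numpy as np

lam = 2.5
t = np.linspace(-4, 4, 81)
gamma = lam / 2
# N jumps by λ at u = 1, so the dN-integral evaluates to λ(e^{it} - 1 - it/2);
# γ = λ/2 exactly cancels the compensating linear term.
levy = 1j * gamma * t + lam * (np.exp(1j * t) - 1 - 1j * t / (1 + 1 ** 2))
assert np.allclose(np.exp(levy), np.exp(lam * (np.exp(1j * t) - 1)))
```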
We say that $F(x)$ belongs to the class $L$ if it is possible to find a sequence of independent random variables $\xi_1, \xi_2, \ldots, \xi_n, \ldots$ such that for suitably chosen constants $a_n > 0$ and $b_n$ the distribution functions of the sums
$$\frac{1}{a_n} \sum_{k=1}^{n} \xi_k - b_n$$
converge to $F(x)$, and the variables $\xi_{nk} = \xi_k / a_n$ $(1 \le k \le n)$ are asymptotically constant. The distributions of class $L$ are infinitely divisible, hence they can be represented by Lévy's formula. In addition, we have the following proposition.
Proposition 2.3.2. In order that the distribution function $F(x)$ belong to the class $L$, it is necessary and sufficient that the functions $M(u)$ and $N(u)$ in the formula (2.16) have right and left derivatives for every value of $u$ and that the functions
$$u M'(u) \quad (u < 0), \qquad u N'(u) \quad (u > 0)$$
be nonincreasing [here $M'(u)$ and $N'(u)$ denote either the right or the left derivative, possibly different ones at different points].
2.3.2 Canonical representation of stable characteristic functions
Theorem 2.3.2. In order that the distribution function $F(x)$ be stable, it is necessary and sufficient that the logarithm of its characteristic function be represented by the formula
$$\log f(t) = i\gamma t - c|t|^{\alpha} \left\{ 1 + i\beta \frac{t}{|t|} \omega(t, \alpha) \right\}, \qquad (2.17)$$
where $\alpha, \beta, \gamma, c$ are constants ($\gamma$ is a real number, $-1 \le \beta \le 1$, $0 < \alpha \le 2$, $c \ge 0$) and
$$\omega(t, \alpha) = \begin{cases} \tan \frac{\pi \alpha}{2}, & \text{if } \alpha \ne 1, \\ \frac{2}{\pi} \log |t|, & \text{if } \alpha = 1. \end{cases}$$
The functions $M(u)$ and $N(u)$ and the constant $\sigma$ in Lévy's formula are, correspondingly:

(i) if $0 < \alpha < 2$, then $M(u) = \dfrac{c_1}{|u|^{\alpha}}$, $N(u) = -\dfrac{c_2}{u^{\alpha}}$, $\sigma = 0$;

(ii) if $\alpha = 2$, then $M(u) = N(u) = 0$ and $\sigma \ge 0$,

where $c_1 \ge 0$, $c_2 \ge 0$, $c_1 + c_2 > 0$.
Proof. In terms of characteristic functions, equation (2.4) can be written as
$$\log f\left( \frac{t}{a} \right) = \log f\left( \frac{t}{a_1} \right) + \log f\left( \frac{t}{a_2} \right) + i\beta t, \qquad (2.18)$$
where $\beta = b - b_1 - b_2$. We recall that $F(x)$ is an infinitely divisible d.f. and, consequently,
$$\log f(t) = i\gamma t - \frac{\sigma^2 t^2}{2} + \int_{-\infty}^{0} \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) dM(u) + \int_{0}^{\infty} \left( e^{itu} - 1 - \frac{itu}{1+u^2} \right) dN(u).$$
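For a symmetric stable law ($\beta = 0$, $\gamma = 0$) formula (2.17) reduces to $f(t) = e^{-c|t|^{\alpha}}$, and the stability equation (2.3) then holds with $a = (a_1^{\alpha} + a_2^{\alpha})^{1/\alpha}$ and $b = 0$. A numerical sketch with arbitrary parameters:

```python
import numpy as np

# Symmetric stable c.f. f(t) = exp(-c|t|^α); check f(a1 t) f(a2 t) = f(a t)
# with a = (a1^α + a2^α)^{1/α}, i.e. equation (2.3) with b = 0.
c, alpha = 1.3, 1.5
f = lambda t: np.exp(-c * np.abs(t) ** alpha)
t = np.linspace(-6, 6, 121)
a1, a2 = 0.7, 2.0
a = (a1 ** alpha + a2 ** alpha) ** (1 / alpha)
assert np.allclose(f(a1 * t) * f(a2 * t), f(a * t))
```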