
Lithuanian Mathematical Journal, Vol. 54, No. 1, January, 2014, pp. 48–60

L1 bounds for some martingale central limit theorems
Le Van Dung a,1, Ta Cong Son b,2, and Nguyen Duy Tien b,1

a Faculty of Mathematics, Da Nang University of Education, 459 Ton Duc Thang, Da Nang, Viet Nam
b Faculty of Mathematics, Hanoi University of Science, 334 Nguyen Trai, Hanoi, Viet Nam

Received September 24, 2013; revised January 7, 2014

Abstract. The aim of this paper is to extend the results in [E. Bolthausen, Exact convergence rates in some martingale central limit theorems, Ann. Probab., 10(3):672–688, 1982] and [J.C. Mourrat, On the rate of convergence in the martingale central limit theorem, Bernoulli, 19(2):633–645, 2013] to the $L^1$-distance between the distributions of normalized partial sums of martingale-difference sequences and the standard normal distribution.
MSC: 60F05, 60G42
Keywords: mean central limit theorems, rates of convergence, martingale

1 Introduction and statements of results

Let $X_1, X_2, \ldots, X_n$ be a sequence of real-valued random variables with mean zero and finite variance $\sigma^2$. Put $S := X_1 + X_2 + \cdots + X_n$. Denote by $F_n$ the distribution function of $S/(\sigma\sqrt{n})$, and let $\Phi$ be the standard normal distribution function. The classical central limit theorem confirms that if $X_1, X_2, \ldots, X_n$ are independent and identically distributed, then $F_n(x)$ converges to $\Phi(x)$ as $n \to \infty$ for all $x \in \mathbb{R}$. In 1954, Agnew [1] showed that the convergence also holds in $L^p$ for $p > 1/2$. The convergence in the case $p = 1$ is called the mean central limit theorem. The rate of convergence in the mean central limit theorem was also studied by Esseen [7], who showed that
$$\|F_n - \Phi\|_1 = O\bigl(n^{-1/2}\bigr) \quad \text{as } n \to \infty.$$

Recently, Sunklodas [12, 13] has extended this result to independent nonidentically distributed random
variables and ϕ-mixing random variables by using the Bentkus approach [2].
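
As a quick numerical illustration of Esseen's $O(n^{-1/2})$ rate in the i.i.d. case, $\|F_n - \Phi\|_1$ can be approximated by simulation. The following sketch is only an illustration and plays no role in the arguments of this paper; the Rademacher summands, the sample size, and the grid approximation of the integral are choices made here purely for demonstration.

    # Monte Carlo sketch (illustration only): estimate ||F_n - Phi||_1 for sums of
    # i.i.d. Rademacher variables; the distance should shrink roughly like n**(-1/2).
    import numpy as np
    from scipy.stats import norm

    def l1_distance_to_normal(n, n_samples=100_000, seed=0):
        rng = np.random.default_rng(seed)
        sums = np.zeros(n_samples)
        for _ in range(n):                      # accumulate X_1 + ... + X_n
            sums += rng.choice([-1.0, 1.0], size=n_samples)
        normalized = sums / np.sqrt(n)          # sigma = 1 for Rademacher summands
        grid = np.linspace(-8.0, 8.0, 2001)
        # empirical distribution function F_n evaluated on the grid
        f_n = np.searchsorted(np.sort(normalized), grid, side="right") / n_samples
        # trapezoidal approximation of the integral of |F_n - Phi|
        return np.trapz(np.abs(f_n - norm.cdf(grid)), grid)

    for n in (10, 40, 160, 640):
        print(n, l1_distance_to_normal(n))

Quadrupling $n$ should roughly halve the reported distance, in agreement with the $O(n^{-1/2})$ rate.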
Let $X = (X_1, \ldots, X_n)$ be a square-integrable martingale-difference sequence of real-valued random variables with respect to the $\sigma$-fields $\mathcal{F}_j = \sigma(X_1, \ldots, X_{j-1})$, $j = 2, 3, \ldots, n+1$; $\mathcal{F}_1 = \{\emptyset, \Omega\}$. Let $\mathcal{M}_n$ denote the class of all such sequences of length $n$. If $X \in \mathcal{M}_n$, we write $\sigma_j^2 = E(X_j^2 \mid \mathcal{F}_{j-1})$, $\bar\sigma_j^2 = E(X_j^2)$, $S = S(X) = \sum_{j=1}^{n} X_j$, $s^2 = s^2(X) = \sum_{j=1}^{n} \bar\sigma_j^2$, $V^2 = V^2(X) = \sum_{j=1}^{n} \sigma_j^2 / s^2(X)$, and $\|X\|_p = \max_{1 \le j \le n} \|X_j\|_p$ for $1 \le p \le \infty$. We denote by $N$ a standard normal random variable; the distribution function and the density function of $N$ are denoted by $\Phi(x)$ and $\varphi(x)$, respectively.

1 The research of the author has been partially supported by the Viet Nam National Foundation for Science and Technology Development (NAFOSTED), grant No. 101.03-2012.17.
2 The research of the author has been partially supported by project TN-13-01.
If $X \in \mathcal{M}_n$, $V(X) \to 1$ in probability, and some Lindeberg-type condition is satisfied, then
$$\lim_{n\to\infty} P\biggl(\frac{S(X)}{s(X)} \le x\biggr) = \Phi(x) \quad \text{for all } x \in \mathbb{R}.$$

For bounds of the convergence rate in this central limit theorem, the following results were shown by Bolthausen [4].

Theorem 1. (See [4].) Let $0 < \alpha \le \beta < \infty$, $0 < \gamma < \infty$. There exists a constant $0 < C_{\alpha,\beta,\gamma} < \infty$ such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\sigma_j^2 = \bar\sigma_j^2$ a.s., $\alpha \le \bar\sigma_j^2 \le \beta$ for $1 \le j \le n$, and $\|X\|_3 \le \gamma$,
$$\sup_{x\in\mathbb{R}} \bigl| P(S/s \le x) - \Phi(x) \bigr| \le C_{\alpha,\beta,\gamma}\, n^{-1/4}.$$


Theorem 2. (See [4].) Let $\gamma \in (0; +\infty)$. There exists a constant $0 < C_\gamma < \infty$, depending only on $\gamma$, such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\|X\|_\infty \le \gamma$ and $V(X) = 1$ a.s.,
$$\sup_{x\in\mathbb{R}} \bigl| P(S/s \le x) - \Phi(x) \bigr| \le C_\gamma\, \frac{n \log n}{s^3}.$$

Relaxing the condition that $V^2 = 1$ a.s., Bolthausen [4] also showed the following result.

Corollary 1. (See [4].) Let $\gamma \in (0; +\infty)$. There exists a constant $C_\gamma > 0$ such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\|X\|_\infty \le \gamma$,
$$\sup_{x\in\mathbb{R}} \bigl| P(S/s \le x) - \Phi(x) \bigr| \le C_\gamma \biggl( \frac{n \log n}{s^3} + \min\Bigl\{ \bigl\|V^2 - 1\bigr\|_1^{1/3},\ \bigl\|V^2 - 1\bigr\|_\infty^{1/2} \Bigr\} \biggr).$$

Mourrat [11] generalized Corollary 1 and obtained the optimality of the result for any $p \in [1; +\infty)$.

Theorem 3. (See [11].) Let $p \in [1; +\infty)$ and $\gamma \in (0; +\infty)$. There exists a constant $C_{p,\gamma} > 0$ such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\|X\|_\infty \le \gamma$,
$$\sup_{x\in\mathbb{R}} \bigl| P(S/s \le x) - \Phi(x) \bigr| \le C_{p,\gamma} \biggl( \frac{n \log n}{s^3} + \bigl( \|V^2 - 1\|_p^p + s^{-2p} \bigr)^{1/(2p+1)} \biggr).$$

The aim of this article is to extend these results to $L^1$-bounds in the mean central limit theorem for martingale-difference sequences.
Theorem 4. Let $0 < \alpha \le \beta < \infty$, $0 < \gamma < \infty$. If $\|X\|_3 \le \gamma$, $\sigma_j^2 = \bar\sigma_j^2$ a.s., and $\alpha \le \bar\sigma_j^2 \le \beta$ for $1 \le j \le n$, then there exists a constant $C = C(\alpha, \beta, \gamma) \in (0; \infty)$ such that
$$\|F_{S/s} - \Phi\|_1 \le C n^{-1/4}.$$

Theorem 5. Let $0 < \gamma < \infty$. If $\|X\|_\infty \le \gamma$ and $V^2(X) = 1$ a.s., then there exists a constant $0 < C < \infty$ such that
$$\|F_{S/s} - \Phi\|_1 \le C\, \frac{\gamma^3 n \log n}{s^3}.$$

We have the following corollary, similar to Corollary 1.


Corollary 2. Let $0 < \gamma < \infty$ and $p > 1/2$. If $\|X\|_\infty \le \gamma$, then there exists a positive constant $C = C(p)$, depending only on $p$, such that
$$\|F_{S/s} - \Phi\|_1 \le C \biggl( \frac{\gamma^3 n \log n}{s^3} + \min\Bigl\{ \bigl\|V^2 - 1\bigr\|_\infty^{1/2},\ \bigl( E\bigl|V^2 - 1\bigr|^p \bigr)^{1/2p} \Bigr\} \biggr).$$

The following corollary is an $L^1$-version of Theorem 3.

Corollary 3. Let $0 < \gamma < \infty$ and $p > 1/2$. If $\|X\|_\infty \le \gamma$, then there exists a positive constant $C = C(p)$, depending only on $p$, such that
$$\|F_{S/s} - \Phi\|_1 \le C \biggl( \frac{\gamma^3 n \log n}{s^3} + \bigl( E\bigl|V^2 - 1\bigr|^p + s^{-2p} \bigr)^{1/2p} \biggr).$$

Note that the term $\|V^2 - 1\|_1^{1/3}$ appearing in Corollary 1 is replaced by the smaller term $(E|V^2 - 1|^p)^{1/2p}$ in Corollary 2, and the term $(\|V^2 - 1\|_p^p + s^{-2p})^{1/(2p+1)}$ appearing in Theorem 3 is replaced by the smaller term $(E|V^2 - 1|^p + s^{-2p})^{1/2p}$ in Corollary 3.

2 Auxiliary lemmas

For two random variables $X$ and $Y$ with distribution functions $F_X$ and $G_Y$, respectively, applying the Kantorovich–Rubinstein theorem (see, e.g., [6, Thm. 11.8.2]), we have that
$$\|F_X - G_Y\|_1 = \int_{-\infty}^{\infty} \bigl| F_X(x) - G_Y(x) \bigr|\, dx = \sup_{f \in \Lambda_1} \bigl| E f(X) - E f(Y) \bigr|,$$
where $\Lambda_1$ is the set of 1-Lipschitzian functions from $\mathbb{R}$ to $\mathbb{R}$. For more details, we refer the reader to [8] and [5].
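
As a simple illustration of this identity (an example added here, not needed in what follows), consider a deterministic shift $a > 0$. Then
$$\|F_X - F_{X+a}\|_1 = \int_{-\infty}^{\infty} \bigl| F_X(x) - F_X(x - a) \bigr|\, dx = \int_{-\infty}^{\infty} P(x - a < X \le x)\, dx = a,$$
and, on the dual side, for an integrable $X$ the 1-Lipschitzian function $f(t) = t$ gives $|E f(X) - E f(X + a)| = a$, while $|f(X) - f(X + a)| \le a$ for every $f \in \Lambda_1$, so the supremum over $\Lambda_1$ also equals $a$.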
For functions $f, g : \mathbb{R} \to \mathbb{R}$, their convolution $f * g$ is defined by
$$f * g(x) = \int_{-\infty}^{\infty} f(x - y)\, g(y)\, dy.$$

We have the following lemmas.
Lemma 1. (See [3, p. 205].) If $1 \le p, q, r \le \infty$, $1/p + 1/q = 1 + 1/r$, $f \in L^p(\mathbb{R})$, and $g \in L^q(\mathbb{R})$, then we have
$$\|f * g\|_r \le \|f\|_p\, \|g\|_q.$$
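
One standard way in which Lemma 1 enters smoothing arguments of the type used below is the following; the particular exponents are chosen here only for illustration. If $f$ is 1-Lipschitzian (so $\|f'\|_\infty \le 1$) and $\varphi_\lambda$ denotes the centered normal density with standard deviation $\lambda$, then $g = f * \varphi_\lambda$ satisfies, by Lemma 1 with $r = \infty$, $p = \infty$, and $q = 1$,
$$\|g''\|_\infty = \|f' * \varphi_\lambda'\|_\infty \le \|f'\|_\infty \|\varphi_\lambda'\|_1 = \frac{2\varphi(0)}{\lambda}\, \|f'\|_\infty = \sqrt{\frac{2}{\pi}}\, \frac{\|f'\|_\infty}{\lambda},$$
so every additional derivative taken of the smoothed test function costs at most a factor of order $1/\lambda$.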

Lemma 2. Let $X$ and $\eta$ be real random variables. Then, for any $p > 1/2$, we have
$$\|F_X - \Phi\|_1 \le \|F_{X+\eta} - \Phi\|_1 + 2(2p+1) \bigl\| E\bigl(\eta^{2p} \mid X\bigr) \bigr\|_\infty^{1/2p}. \quad (2.1)$$

Proof. The conclusion is trivial in the case $\|E(\eta^{2p} \mid X)\|_\infty = \infty$. So, we assume that $\|E(\eta^{2p} \mid X)\|_\infty \le \gamma < \infty$. For any $a > 0$, we have that
$$\|F_X - \Phi\|_1 = \int_{-\infty}^{\infty} \bigl| P(X \le t - a) - \Phi(t - a) \bigr|\, dt \le \int_{-\infty}^{\infty} \bigl| P(X \le t - a) - P(X + \eta \le t) \bigr|\, dt + \int_{-\infty}^{\infty} \bigl| \Phi(t) - \Phi(t - a) \bigr|\, dt + \|F_{X+\eta} - \Phi\|_1. \quad (2.2)$$

First, we consider the first term on the right-hand side of (2.2). We have that
$$P(X + \eta \le t) = E\bigl[P(\eta \le t - X \mid X)\bigr] \ge E\bigl[I(X \le t - a)\, P(\eta \le t - X \mid X)\bigr] = P(X \le t - a) - E\bigl[I(X \le t - a)\, P(\eta > t - X \mid X)\bigr],$$
where
$$E\bigl[I(X \le t - a)\, P(\eta > t - X \mid X)\bigr] \le \gamma E\bigl[(t - X)^{-2p} I(X \le t - a)\bigr].$$
Therefore,
$$\bigl(P(X \le t - a) - P(X + \eta \le t)\bigr)^{+} \le \gamma E\bigl[(t - X)^{-2p} I(X \le t - a)\bigr],$$
which implies
$$\int_{-\infty}^{\infty} \bigl(P(X \le t - a) - P(X + \eta \le t)\bigr)^{+}\, dt \le \int_{-\infty}^{\infty} \gamma E\bigl[(t - X)^{-2p} I(X \le t - a)\bigr]\, dt = \gamma E\biggl[ \int_{-\infty}^{\infty} (t - X)^{-2p} I(X \le t - a)\, dt \biggr] = \gamma E\biggl[ \int_{X+a}^{\infty} (t - X)^{-2p}\, dt \biggr] = \frac{\gamma}{(2p-1)\, a^{2p-1}}. \quad (2.3)$$

On the other hand,
$$P(X + \eta \le t) = E\bigl[I(X \le t - a)\, P(\eta \le t - X \mid X)\bigr] + E\bigl[I(t - a < X \le t + a)\, P(\eta \le t - X \mid X)\bigr] + E\bigl[I(X > t + a)\, P(\eta \le t - X \mid X)\bigr]$$
$$\le P(X \le t - a) + P(t - a < X \le t + a) + E\bigl[I(X > t + a)\, P(\eta \le t - X \mid X)\bigr] \le P(X \le t - a) + P(t - a < X \le t + a) + \gamma E\bigl[(X - t)^{-2p} I(X > t + a)\bigr],$$
which implies that
$$\bigl(P(X + \eta \le t) - P(X \le t - a)\bigr)^{+} \le P(t - a < X \le t + a) + \gamma E\bigl[(X - t)^{-2p} I(X > t + a)\bigr].$$
Hence,
$$\int_{-\infty}^{\infty} \bigl(P(X \le t - a) - P(X + \eta \le t)\bigr)^{-}\, dt \le \int_{-\infty}^{\infty} P(t - a < X \le t + a)\, dt + \int_{-\infty}^{\infty} \gamma E\bigl[(X - t)^{-2p} I(X > t + a)\bigr]\, dt$$
$$= 2a + \gamma E\biggl[ \int_{-\infty}^{X-a} (X - t)^{-2p}\, dt \biggr] = 2a + \frac{\gamma}{(2p-1)\, a^{2p-1}}. \quad (2.4)$$

Combining (2.3) and (2.4) yields
$$\int_{-\infty}^{\infty} \bigl| P(X \le t - a) - P(X + \eta \le t) \bigr|\, dt = \int_{-\infty}^{\infty} \bigl(P(X \le t - a) - P(X + \eta \le t)\bigr)^{+}\, dt + \int_{-\infty}^{\infty} \bigl(P(X \le t - a) - P(X + \eta \le t)\bigr)^{-}\, dt \le 2a + \frac{2\gamma}{(2p-1)\, a^{2p-1}}. \quad (2.5)$$

Next, we consider the second term on the right-hand side of (2.2). We have that
$$\int_{-\infty}^{\infty} \bigl| \Phi(t - a) - \Phi(t) \bigr|\, dt = \int_{-\infty}^{0} \bigl| \Phi(t - a) - \Phi(t) \bigr|\, dt + \int_{0}^{\infty} \bigl| \Phi(t - a) - \Phi(t) \bigr|\, dt \le \int_{-\infty}^{0} a\varphi(t)\, dt + \int_{0}^{\infty} a\varphi(t - a)\, dt \le 2a. \quad (2.6)$$

Combining (2.2), (2.5), and (2.6) yields
$$\|F_X - \Phi\|_1 \le \|F_{X+\eta} - \Phi\|_1 + 2\biggl( \frac{\gamma}{(2p-1)\, a^{2p-1}} + 2a \biggr).$$
Taking $a = \gamma^{1/2p}$ gives conclusion (2.1) of Lemma 2.
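
A special case worth keeping in mind (recorded here only as an illustration of how Lemma 2 is applied later): if $\eta$ is a centered normal variable with variance $\kappa^2$ that is independent of $X$, then $E(\eta^{2p} \mid X) = E\eta^{2p}$, and for $p = 1$ inequality (2.1) reads
$$\|F_X - \Phi\|_1 \le \|F_{X+\eta} - \Phi\|_1 + 6\kappa.$$
Thus smoothing $X$ by an independent Gaussian perturbation changes the $L^1$-distance by at most a constant multiple of its standard deviation; after rescaling by $s$, this is how the terms of order $(E\eta^2)^{1/2}/s$ and $\kappa/s$ arise in (3.1) and (4.1) below.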
Lemma 3. Let $\psi$ be a function $\mathbb{R} \to \mathbb{R}$ with $\|\psi\|_\infty < \infty$ and $\|\psi'\|_\infty < \infty$. If $X$ is a random variable, then
$$E\,\psi(X) \le \|\psi'\|_\infty\, \|F_X - \Phi\|_1 + \|\psi\|_\infty.$$

Proof. It is clear that
$$\bigl| E\,\psi(X) - E\,\psi(N) \bigr| = \biggl| \int_{-\infty}^{\infty} \psi(x)\, dF_X(x) - \int_{-\infty}^{\infty} \psi(x)\, d\Phi(x) \biggr| = \biggl| \int_{-\infty}^{\infty} \bigl(F_X(x) - \Phi(x)\bigr)\, \psi'(x)\, dx \biggr| \le \|\psi'\|_\infty \int_{-\infty}^{\infty} \bigl| F_X(x) - \Phi(x) \bigr|\, dx = \|\psi'\|_\infty\, \|F_X - \Phi\|_1$$
and
$$E\,\psi(N) \le \|\psi\|_\infty.$$

3 Proof of Theorem 4

Let $Z_1, Z_2, \ldots, Z_n, \eta$ be independent normally distributed random variables with mean 0, $E(Z_j^2) = \bar\sigma_j^2$, and $E(\eta^2) = n^{1/2}$. Let $U_m = \sum_{j=1}^{m-1} X_j / s$ and $Z = \sum_{j=1}^{n} Z_j$. According to Lemma 2 with $p = 1$, we have
$$\|F_{S/s} - \Phi\|_1 \le \bigl\| F_{(S+\eta)/s} - \Phi \bigr\|_1 + C\, \frac{(E\eta^2)^{1/2}}{s} \le \bigl\| F_{(S+\eta)/s} - F_{(Z+\eta)/s} \bigr\|_1 + C n^{-1/4} + C\, \frac{(E\eta^2)^{1/2}}{s}. \quad (3.1)$$

On the other hand, by a proof similar to that of Theorem 1 in [4] we get that
$$\biggl| P\Bigl(\frac{S+\eta}{s} \le t\Bigr) - P\Bigl(\frac{Z+\eta}{s} \le t\Bigr) \biggr| \le \sum_{m=1}^{n} E\biggl[ \frac{|X_m|^3}{\lambda_m^3 s^3} \Bigl| \varphi''\Bigl( \frac{t - U_m}{\lambda_m} - \theta_m \frac{X_m}{\lambda_m s} \Bigr) \Bigr| \biggr] + \sum_{m=1}^{n} E\biggl[ \frac{|Z_m|^3}{\lambda_m^3 s^3} \Bigl| \varphi''\Bigl( \frac{t - U_m}{\lambda_m} - \theta_m' \frac{Z_m}{\lambda_m s} \Bigr) \Bigr| \biggr],$$
where $0 \le \theta_m, \theta_m' \le 1$ and $\lambda_m^2 = \bigl( \sum_{j=m+1}^{n} \bar\sigma_j^2 + n^{1/2} \bigr) / s^2$.

Applying the Fubini theorem and noting that $\int_{-\infty}^{\infty} |\varphi''(t)|\, dt \le 2$, we have
$$\int_{-\infty}^{\infty} \biggl| P\Bigl(\frac{S+\eta}{s} \le t\Bigr) - P\Bigl(\frac{Z+\eta}{s} \le t\Bigr) \biggr|\, dt \le \sum_{m=1}^{n} E \int_{-\infty}^{\infty} \frac{|X_m|^3}{\lambda_m^3 s^3} \Bigl| \varphi''\Bigl( \frac{t - U_m}{\lambda_m} - \theta_m \frac{X_m}{\lambda_m s} \Bigr) \Bigr|\, dt + \sum_{m=1}^{n} E \int_{-\infty}^{\infty} \frac{|Z_m|^3}{\lambda_m^3 s^3} \Bigl| \varphi''\Bigl( \frac{t - U_m}{\lambda_m} - \theta_m' \frac{Z_m}{\lambda_m s} \Bigr) \Bigr|\, dt$$
$$\le \sum_{m=1}^{n} 2E\, \frac{|X_m|^3}{\lambda_m^3 s^3} + \sum_{m=1}^{n} 2E\, \frac{|Z_m|^3}{\lambda_m^3 s^3} \le C n^{-1/4}. \quad (3.2)$$
Combining (3.1) and (3.2) yields
$$\|F_{S/s} - \Phi\|_1 \le C n^{-1/4}.$$
The theorem is proved.



4 Proof of Theorem 5

For $n \in \mathbb{N}$, $s > 0$, and $\gamma > 0$, let
$$G(s, \gamma) = \bigl\{ X \in \mathcal{M}_n : s(X) = s,\ \|X\|_\infty \le \gamma,\ V^2(X) = 1 \text{ a.s.} \bigr\}$$
and
$$\Delta(n, s, \gamma) = \sup_{f \in \Lambda_1} \sup\biggl\{ \Bigl| E f\Bigl(\frac{S(X)}{s}\Bigr) - E f(N) \Bigr| : X \in G(s, \gamma) \biggr\}.$$
It is clear that $\Delta(n, s, \gamma) \le \Delta(n-1, s, 2\gamma)$.
For a fixed element $X \in G(s, \gamma)$, where we assume that $\gamma \ge 1$, let $Z_1, Z_2, \ldots, Z_n$ be i.i.d. standard normal variables, and let $\eta$ be a centered normal random variable with variance $\kappa^2$ such that $\eta$ is independent of everything else. The variance $\kappa^2$ will be specified later, but in any case, $\kappa^2 > 2\gamma^2$.
Let
$$U_m = \frac{\sum_{j=1}^{m-1} X_j}{s}, \qquad W_m = \frac{\sum_{j=m+1}^{n} \sigma_j Z_j + \eta}{s}, \qquad \lambda_m^2 = \frac{\sum_{j=m+1}^{n} \sigma_j^2 + \kappa^2}{s^2}.$$
Conditioned on $\sigma(\mathcal{F}_{n+1}, Z_m)$, $W_m$ is normally distributed with mean 0 and variance $\lambda_m^2$, and $Z = \sum_{j=1}^{n} \sigma_j Z_j / s$ is a standard normal variable. Hence, by Lemma 2 we have
$$\|F_{S/s} - \Phi\|_1 \le \bigl\| F_{(S+\eta)/s} - F_{(\sum_{j=1}^{n} \sigma_j Z_j + \eta)/s} \bigr\|_1 + C\, \frac{\kappa}{s}. \quad (4.1)$$

Now we consider the first term on the right-hand side of (4.1). Let $\varphi_{\lambda_m}(x)$ be the density function of $W_m$. For any 1-Lipschitzian $f$, according to an idea that goes back to Lindeberg [10], we write
$$E f\Bigl(\frac{S+\eta}{s}\Bigr) - E f\Bigl(\frac{\sum_{j=1}^{n} \sigma_j Z_j + \eta}{s}\Bigr) = \sum_{m=1}^{n} \Bigl[ E f\Bigl(W_m + U_m + \frac{X_m}{s}\Bigr) - E f\Bigl(W_m + U_m + \frac{\sigma_m Z_m}{s}\Bigr) \Bigr]$$
$$= \sum_{m=1}^{n} \Bigl[ E\, f * \varphi_{\lambda_m}\Bigl(U_m + \frac{X_m}{s}\Bigr) - E\, f * \varphi_{\lambda_m}\Bigl(U_m + \frac{\sigma_m Z_m}{s}\Bigr) \Bigr] = \sum_{m=1}^{n} \Bigl[ E\, g_m\Bigl(U_m + \frac{X_m}{s}\Bigr) - E\, g_m\Bigl(U_m + \frac{\sigma_m Z_m}{s}\Bigr) \Bigr] \quad (\text{where } g_m = f * \varphi_{\lambda_m})$$
$$= \sum_{m=1}^{n} E\biggl[ \frac{X_m - \sigma_m Z_m}{s}\, g_m'(U_m) + \frac{X_m^2 - \sigma_m^2 Z_m^2}{2s^2}\, g_m''(U_m) + \frac{X_m^3}{6s^3}\, g_m'''\Bigl(U_m + \theta_m \frac{X_m}{s}\Bigr) - \frac{(\sigma_m Z_m)^3}{6s^3}\, g_m'''\Bigl(U_m + \theta_m' \frac{\sigma_m Z_m}{s}\Bigr) \biggr]$$
with $\theta_m, \theta_m' \in [0, 1]$. Since $U_m$ and $\lambda_m$ are $\bar{\mathcal{F}}_{m-1}$-measurable, where $\bar{\mathcal{F}}_{m-1}$ is the completion of $\mathcal{F}_{m-1}$, from $E(X_m \mid \bar{\mathcal{F}}_{m-1}) = E(\sigma_m Z_m \mid \bar{\mathcal{F}}_{m-1}) = 0$ a.s. and $E(\sigma_m^2 Z_m^2 \mid \bar{\mathcal{F}}_{m-1}) = E(X_m^2 \mid \bar{\mathcal{F}}_{m-1}) = \sigma_m^2$ a.s. it follows that the first two sums in the above expression must vanish. Moreover, since $g_m(x) = f * \varphi_{\lambda_m}(x)$, we get that
$$\biggl| E f\Bigl(\frac{S+\eta}{s}\Bigr) - E f\Bigl(\frac{\sum_{j=1}^{n} \sigma_j Z_j + \eta}{s}\Bigr) \biggr| \le \frac{1}{6s^3} \sum_{m=1}^{n} E\Bigl( |X_m|^3 \bigl| g_m'''\bigl(U_m + \theta_m X_m/s\bigr) \bigr| \Bigr) + \frac{1}{6s^3} \sum_{m=1}^{n} E\Bigl( |\sigma_m Z_m|^3 \bigl| g_m'''\bigl(U_m + \theta_m' \sigma_m Z_m/s\bigr) \bigr| \Bigr) =: \frac{1}{6s^3}(I + II). \quad (4.2)$$

We define the sequence of stopping times $\tau_j$ ($1 \le j \le n$) by
$$\tau_0 = 0, \qquad \tau_j = \inf\Bigl\{ k : \sum_{i=1}^{k} \sigma_i^2 \ge \frac{j s^2}{n} \Bigr\} \quad \text{for } 1 \le j \le n - 1, \qquad \tau_n = n.$$

Then
$$I = \sum_{m=1}^{n} E\Bigl( |X_m|^3 \bigl| (f * \varphi_{\lambda_m})'''\bigl(U_m + \theta_m X_m/s\bigr) \bigr| \Bigr) = \sum_{j=1}^{n} E \sum_{m=\tau_{j-1}+1}^{\tau_j} |X_m|^3 \bigl| (f * \varphi_{\lambda_m})'''\bigl(U_m + \theta_m X_m/s\bigr) \bigr|. \quad (4.3)$$
If $\tau_{j-1} < m \le \tau_j$, then
$$\lambda_m^2 = \frac{\sum_{i=m+1}^{n} \sigma_i^2 + \kappa^2}{s^2} \ge \frac{s^2 - j s^2/n - \gamma^2 + \kappa^2}{s^2} =: \underline{\lambda}_j^2, \qquad \lambda_m^2 \le \frac{s^2 - (j-1) s^2/n + \kappa^2}{s^2} =: \bar\lambda_j^2.$$

We denote $R_m = \sum_{i=\tau_{j-1}+1}^{m-1} X_i / s$ and $A_{mt} = \{ |R_m| \le |U_{\tau_{j-1}+1} - t| / 2 \}$ for $t \in \mathbb{R}$. Let the function $\psi : \mathbb{R} \to \mathbb{R}$ be defined by
$$\psi(x) = \sup\Bigl\{ \bigl| \varphi''(y) \bigr| : |y| \ge \frac{|x|}{2} - 1 \Bigr\}.$$
We conclude that, for every $t$,
$$\Bigl| \varphi''\Bigl( \frac{U_m - t}{\lambda_m} + \theta_m \frac{X_m}{\lambda_m s} \Bigr) \Bigr| \le \psi\Bigl( \frac{U_{\tau_{j-1}+1} - t}{\bar\lambda_j} \Bigr)$$
holds on $A_{mt} \cap \{ \tau_{j-1} < m \le \tau_j \}$. Then, since $|X_m| \le \gamma$,
$$\sum_{m=\tau_{j-1}+1}^{\tau_j} E\Bigl( |X_m|^3 \bigl| (f * \varphi_{\lambda_m})'''\bigl(U_m + \theta_m X_m/s\bigr) \bigr| \Bigr) \le \gamma E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} \lambda_m^{-3} \Bigl| \varphi''\Bigl( \frac{U_m - t}{\lambda_m} + \theta_m \frac{X_m}{\lambda_m s} \Bigr) \Bigr| \bigl| f'(t) \bigr|\, dt$$
$$\le \gamma \underline{\lambda}_j^{-3} E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} \psi\Bigl( \frac{U_{\tau_{j-1}+1} - t}{\bar\lambda_j} \Bigr) \bigl| f'(t) \bigr|\, dt + \gamma \underline{\lambda}_j^{-3} E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} I_{A_{mt}^c}\, dt =: \gamma \underline{\lambda}_j^{-3} (M_j + N_j), \quad (4.4)$$
where in the second step we used the bound above on $A_{mt}$ and $|\varphi''| \le 1$, $|f'| \le 1$ on $A_{mt}^c$.

We first consider $M_j$. Since $U_{\tau_{j-1}+1}$ is $\mathcal{F}_{\tau_{j-1}}$-measurable, we obtain
$$M_j = \sum_{m=\tau_{j-1}+1}^{\tau_j} E \int_{-\infty}^{\infty} \psi\Bigl( \frac{U_{\tau_{j-1}+1} - t}{\bar\lambda_j} \Bigr) \bigl| f'(t) \bigr|\, E\bigl( X_m^2 \mid \mathcal{F}_{\tau_{j-1}} \bigr)\, dt = E \int_{-\infty}^{\infty} \psi\Bigl( \frac{U_{\tau_{j-1}+1} - t}{\bar\lambda_j} \Bigr) \bigl| f'(t) \bigr| \sum_{m=\tau_{j-1}+1}^{\tau_j} E\bigl( X_m^2 \mid \mathcal{F}_{\tau_{j-1}} \bigr)\, dt$$
$$\le 2\gamma^2 E \int_{-\infty}^{\infty} \psi\Bigl( \frac{U_{\tau_{j-1}+1} - t}{\bar\lambda_j} \Bigr) \bigl| f'(t) \bigr|\, dt = 2\gamma^2 E\bigl[ g_j(U_{\tau_{j-1}+1}) \bigr],$$
where
$$g_j(x) = \int_{-\infty}^{\infty} \psi\Bigl( \frac{x - t}{\bar\lambda_j} \Bigr) \bigl| f'(t) \bigr|\, dt.$$

Now, since
$$E\biggl( \sum_{i=\tau_{j-1}+1}^{n} X_i^2 \,\Big|\, \mathcal{F}_{\tau_{j-1}} \biggr) = E\biggl( \sum_{i=\tau_{j-1}+1}^{n} \sigma_i^2 \,\Big|\, \mathcal{F}_{\tau_{j-1}} \biggr) \le s^2\Bigl( 1 - \frac{j-1}{n} \Bigr) \quad \text{a.s.},$$
from Lemma 3 we obtain
$$E\bigl[ g_j(U_{\tau_{j-1}+1}) \bigr] \le C\biggl( \|F_{S/s} - \Phi\|_1 + \sqrt{1 - \frac{j-1}{n}} + \bar\lambda_j \biggr).$$
Hence,
$$M_j \le C\gamma^2 \biggl( \|F_{S/s} - \Phi\|_1 + \sqrt{1 - \frac{j-1}{n}} + \bar\lambda_j \biggr). \quad (4.5)$$

Next, we consider $N_j$. For $t \in \mathbb{R}$, let
$$B_{jt} = \biggl\{ \max_{\tau_{j-1} < k \le \tau_j} \Bigl| \sum_{i=\tau_{j-1}+1}^{k} X_i / s \Bigr| > \frac{|U_{\tau_{j-1}+1} - t|}{2} \biggr\}.$$
Since $A_{mt}$ is $(\mathcal{F}_{m-1} \vee \mathcal{F}_{\tau_{j-1}})$-measurable and $A_{mt}^c \subset B_{jt}$ for $\tau_{j-1} < m \le \tau_j$, we have
$$N_j = \sum_{m=\tau_{j-1}+1}^{\tau_j} E \int_{-\infty}^{\infty} \sigma_m^2 I_{A_{mt}^c}\, dt \le 2\gamma^2 E \int_{-\infty}^{\infty} P\bigl( B_{jt} \mid \mathcal{F}_{\tau_{j-1}} \bigr)\, dt$$
$$\le 2\gamma^2 E \int_{-\infty}^{\infty} \min\biggl\{ 1,\ \Bigl( \frac{|U_{\tau_{j-1}+1} - t|}{2\bar\lambda_j} \Bigr)^{-2} \frac{1}{s^2 \bar\lambda_j^2}\, E\Bigl( \sum_{i=\tau_{j-1}+1}^{\tau_j} X_i^2 \,\Big|\, \mathcal{F}_{\tau_{j-1}} \Bigr) \biggr\}\, dt \le C\gamma^2 E \int_{-\infty}^{\infty} \min\biggl\{ 1,\ \Bigl( \frac{|U_{\tau_{j-1}+1} - t|}{2\bar\lambda_j} \Bigr)^{-2} \biggr\}\, dt = C\gamma^2 E\bigl[ h_j(U_{\tau_{j-1}+1}) \bigr],$$
where
$$h_j(x) = \int_{-\infty}^{\infty} \min\biggl\{ 1,\ \Bigl( \frac{x - t}{2\bar\lambda_j} \Bigr)^{-2} \biggr\}\, dt.$$
By Lemma 3 we obtain
$$N_j \le C\gamma^2 \biggl( \|F_{S/s} - \Phi\|_1 + \sqrt{1 - \frac{j-1}{n}} + \bar\lambda_j \biggr). \quad (4.6)$$

Combining (4.3), (4.4), and (4.5) with (4.6) yields
$$I \le C\Bigl( \gamma^3 n \bigl( \kappa^2 - 2\gamma^2 \bigr)^{-1/2} \Delta(n, s, \gamma) + \gamma^3 n \log n \Bigr). \quad (4.7)$$

Next, we need to derive a bound for $II$ on the right-hand side of (4.2). For $t \in \mathbb{R}$, put
$$\tilde A_{mt} = \Bigl\{ |R_m| \le \frac{|U_{\tau_{j-1}+1} - t|}{4} \Bigr\}, \qquad B_{mt} = \Bigl\{ \frac{\sigma_m |Z_m|}{s} \le \frac{|U_{\tau_{j-1}+1} - t|}{8} \Bigr\}.$$

Then
$$\sum_{m=\tau_{j-1}+1}^{\tau_j} E\Bigl( \sigma_m^3 |Z_m|^3 \bigl| (f * \varphi_{\lambda_m})'''\bigl(U_m + \theta_m' \sigma_m Z_m/s\bigr) \bigr| \Bigr) \le E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} \Bigl| \varphi''\Bigl( \frac{U_m - t}{\lambda_m} + \theta_m' \frac{\sigma_m Z_m}{\lambda_m s} \Bigr) \Bigr| \bigl| f'(t) \bigr|\, I_{\tilde A_{mt} \cap B_{mt}}\, dt$$
$$\quad + E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} I_{\tilde A_{mt}^c}\, dt + E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} I_{B_{mt}^c}\, dt.$$

Making use of the independence of the random variables $\{Z_m\}$, the first and second sums can be estimated as above. As for the third sum, note that
$$E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} I_{B_{mt}^c}\, dt \le \gamma \underline{\lambda}_j^{-3} E \sum_{m=\tau_{j-1}+1}^{\tau_j} \sigma_m^2 \int_{-\infty}^{\infty} |Z_m|^3\, I_{\{ 8|Z_m| > \bar\lambda_j^{-1} |U_{\tau_{j-1}+1} - t| \}}\, dt \le c\gamma^3 \underline{\lambda}_j^{-3} E\bigl[ \tilde\psi(U_{\tau_{j-1}+1}) \bigr],$$
where
$$\tilde\psi(x) = \int_{-\infty}^{\infty} \tilde g\Bigl( \frac{x - t}{\bar\lambda_j} \Bigr)\, dt$$
and
$$\tilde g(x) = E\bigl( |Z_m|^3 I_{\{ |Z_m| \ge |x| \}} \bigr) = 3 \int_{|x|}^{\infty} t^2 P\bigl( |Z_m| > t \bigr)\, dt + |x|^3 P\bigl( |Z_m| > |x| \bigr).$$

Using this expression as above, we obtain
$$II \le C\Bigl( \gamma^3 n \bigl( \kappa^2 - 2\gamma^2 \bigr)^{-1/2} \Delta(n, s, \gamma) + \gamma^3 n \log n \Bigr).$$

Combining this with (4.1), (4.2), and (4.7), we obtain
$$\Delta(n, s, \gamma) \le C s^{-3} \Bigl( \gamma^3 n \bigl( \kappa^2 - 2\gamma^2 \bigr)^{-1/2} \Delta(n-1, s, 2\gamma) + \gamma^3 n \log n \Bigr) + C\, \frac{\kappa}{s}. \quad (4.8)$$
Taking $\kappa^2 = 2\gamma^2 + 2^2 C^2 \gamma^6 n^2 / s^6$ and putting $K_n = \sup_{\gamma \ge 1,\, s > 0} \Delta(n, s, \gamma) / \bigl( \gamma^3 s^{-3} n \log n \bigr)$, from (4.8) we get
$$K_n \le \frac{1}{2} K_{n-1} + C \quad \text{for } n \ge 4$$
and, hence,
$$\limsup_{n \to \infty} K_n \le C.$$

Therefore, the theorem is proved.

5 Proof of corollaries

Let $X \in \mathcal{M}_n$ with $\|X\|_\infty \le \gamma$, and let $a = \|s^2 V^2(X) - s^2\|_\infty$. We define $X_{n+1}, \ldots, X_{n+[2a/\gamma^2]+1}$ as in the proof of Corollary 1 in [4].
By using the triangle inequality and Lemma 3 we have the following inequalities, where $\hat S$, $\hat s$, and $\hat n = n + [2a/\gamma^2] + 1$ refer to the extended sequence:
$$\|F_{S/s} - \Phi\|_1 \le \frac{\hat s}{s}\, \bigl\| F_{S/\hat s} - \Phi \bigr\|_1 + \bigl\| F_{\hat s N/s} - \Phi \bigr\|_1 \le \frac{\hat s}{s}\, \bigl\| F_{\hat S/\hat s} - \Phi \bigr\|_1 + C\, \frac{a}{s} + C\Bigl( \frac{\hat s}{s} - 1 \Bigr) \quad \text{(by Lemma 3)}$$
$$\le C\, \frac{\gamma^3 \hat n \log \hat n}{\hat s^3} + C\, \frac{a}{s} + C\Bigl( \frac{\hat s}{s} - 1 \Bigr) \quad \text{(by Theorem 5)}$$
$$\le C\Bigl( \frac{\gamma^3 n \log n}{s^3} + \frac{a}{s} \Bigr) \quad \text{if } \frac{a}{s} \le 1 \text{ and } n \text{ is sufficiently large}.$$
Therefore, in this case, we obtain
$$\|F_{S/s} - \Phi\|_1 \le C\Bigl( \frac{\gamma^3 n \log n}{s^3} + \frac{a}{s} \Bigr). \quad (5.1)$$


Estimate (5.1) is also true for all n and a/s > 1 if C is suitably chosen.
For the estimate in terms of $V^2 - 1$, we again let $X = (X_1, \ldots, X_n) \in \mathcal{M}_n$ with $\|X\|_\infty \le \gamma$ and define $\hat X = (\hat X_1, \ldots, \hat X_{2n})$ as in the proof of Corollary 1 in [4]. Then, by Theorem 5, $\|F_{\hat S/s} - \Phi\|_1 \le C\gamma^3 n \log n / s^3$, and by Burkholder's inequality (see, e.g., [9, Thm. 2.11]) it is easy to see that
$$E(\hat S - S)^{2p} \le C E\bigl| s^2 V^2 - s^2 \bigr|^p. \quad (5.2)$$

For any $x > 0$, we have
$$\|F_{S/s} - \Phi\|_1 \le \bigl\| F_{\hat S/s} - \Phi \bigr\|_1 + \int_{-\infty}^{\infty} \Bigl| P\Bigl( \frac{\hat S}{s} \le t + x \Bigr) - P\Bigl( \frac{S}{s} \le t \Bigr) \Bigr|\, dt + \int_{-\infty}^{\infty} \bigl| \Phi(t + x) - \Phi(t) \bigr|\, dt$$
$$\le \bigl\| F_{\hat S/s} - \Phi \bigr\|_1 + \frac{C}{x^{2p-1} s^{2p}}\, E(\hat S - S)^{2p} + 2x \quad \text{(by (2.5) and (2.6))}$$
$$\le \bigl\| F_{\hat S/s} - \Phi \bigr\|_1 + C\, \frac{E|V^2 - 1|^p}{x^{2p-1}} + 2x, \quad (5.3)$$
and now putting $x = (E|V^2 - 1|^p)^{1/2p}$, we have
$$\|F_{S/s} - \Phi\|_1 \le C\, \frac{\gamma^3 n \log n}{s^3} + C\bigl( E|V^2 - 1|^p \bigr)^{1/2p}.$$

Combining this with (5.1) yields the conclusion of Corollary 2.
Substituting, instead of (5.2), the inequality $E(\hat S - S)^{2p} \le C\bigl( E|s^2 V^2 - s^2|^p + 2\gamma^{2p} \bigr)$ into (5.3), we immediately get the conclusion of Corollary 3.

Acknowledgment. We would like to express our gratitude to the referee for his/her detailed comments and
valuable suggestions, which helped us to improve the manuscript.

References
1. R.P. Agnew, Global versions of the central limit theorem, Proc. Natl. Acad. Sci. USA, 40(9):800–804, 1954.
2. V. Bentkus, A new method for approximation in probability and operator theories, Lith. Math. J., 43(4):367–388,
2003.
3. V.I. Bogachev, Measure Theory, Vol. 1, Springer-Verlag, Berlin, 2007.
4. E. Bolthausen, Exact convergence rates in some martingale central limit theorems, Ann. Probab., 10(3):672–688,
1982.
5. J. Dedecker and E. Rio, On mean central limit theorems for stationary sequences, Ann. Inst. Henri Poincaré, Probab.
Stat., 44(4):693–726, 2008.
6. R.M. Dudley, Real Analysis and Probability, Cambridge Univ. Press, Cambridge, 2004.
7. C.G. Esseen, On mean central limit theorems, Kungl. Tekn. Högsk. Handl., Stockholm, 121(3):1–30, 1958.
8. L. Goldstein, Bounds on the constant in the mean central limit theorem, Ann. Probab., 38(4):1672–1689, 2010.
9. P. Hall and C.C. Heyde, Martingale Limit Theory and Its Application, Academic Press, New York, 1980.
10. J.W. Lindeberg, Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung, Math. Z.,
15(1):211–225, 1922.
11. J.C. Mourrat, On the rate of convergence in the martingale central limit theorem, Bernoulli, 19(2):633–645, 2013.
12. J.K. Sunklodas, Some estimates of the normal approximation for independent non-identically distributed random
variables, Lith. Math. J., 51(1):66–74, 2011.
13. J.K. Sunklodas, Some estimates of the normal approximation for ϕ-mixing random variables, Lith. Math. J.,
51(2):260–273, 2011.