
Negative association and negative dependence for random upper semicontinuous functions, with applications

Nguyen Tran Thuan and Nguyen Van Quang
Department of Mathematics, Vinh University, Nghe An Province, Viet Nam

Abstract. The aim of this paper is to discuss the notions of negative association and negative dependence for random upper semicontinuous functions. Besides giving some properties of these notions, we obtain inequalities of maximal and Hájek-Rényi type. Also, some laws of large numbers are established under various settings, and they extend corresponding results in the literature.
Mathematics Subject Classifications (2010): 60B99, 60F15, 28B20.
Key words: Random upper semicontinuous functions, fuzzy random sets, level-wise negatively associated, level-wise negatively dependent, law of large numbers.

1 Introduction

Let (Ω, F, P) be a complete probability space and {Xn, n ≥ 1}... wait, let {Yn, n ≥ 1} be a collection of random variables defined on it. The consideration of independence or dependence relations for {Yn, n ≥ 1} plays an important role in probability and statistics. Independence of random variables is a strong property, and in practice many phenomena depend on each other. Therefore, besides studying independent random variables, many kinds of dependence have been considered, such as martingales, m-dependence, ρ-mixing, ϕ-mixing, negative dependence, positive dependence, etc.
One of the dependence structures for collections of real-valued random variables that has attracted the interest of many authors is negative association. The definition of negative association was first introduced by Alam and Saxena [1] and carefully studied by Joag-Dev and Proschan [10]. A finite family {Yi, 1 ≤ i ≤ n} of real-valued random variables is said to be negatively associated (NA) if for any disjoint subsets A, B of {1, 2, ..., n} and any real coordinatewise nondecreasing functions f on R^|A|, g on R^|B|,

Cov( f(Yi, i ∈ A), g(Yj, j ∈ B) ) ≤ 0

whenever the covariance exists, where |A| denotes the cardinality of A. An infinite family of random variables is NA if every finite subfamily is NA.
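As a quick sanity check (ours, not from the paper), a uniformly random permutation of the fixed values 0 and 1 is a classical example of an NA pair, and the defining covariance inequality can be verified exhaustively for nondecreasing functions taking values on a small grid; all names below are illustrative.

```python
import numpy as np

# (Y1, Y2) uniform on the permutations of (0, 1): a classical NA pair.
outcomes = [(0, 1), (1, 0)]                 # each with probability 1/2

def cov_fg(f, g):
    # Exact Cov(f(Y1), g(Y2)) over the two equally likely outcomes;
    # f and g are encoded by their value tuples (f(0), f(1)), (g(0), g(1)).
    fv = np.array([f[y1] for y1, _ in outcomes], dtype=float)
    gv = np.array([g[y2] for _, y2 in outcomes], dtype=float)
    return (fv * gv).mean() - fv.mean() * gv.mean()

# Sweep all nondecreasing f, g on {0, 1} with values on a small grid.
grid = np.linspace(-1.0, 1.0, 5)
monotone = [(a, b) for a in grid for b in grid if a <= b]
worst = max(cov_fg(f, g) for f in monotone for g in monotone)
assert worst <= 1e-12    # Cov(f(Y1), g(Y2)) <= 0 for all nondecreasing f, g
```

The maximum covariance over the sweep is attained (at zero) by constant functions, matching the "whenever the covariance exists" boundary case.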
The next dependence notion is extended negative dependence, which was introduced by Liu [16] as follows. A finite family of real-valued random variables {Yi, 1 ≤ i ≤ n} is said to be extended negatively dependent (END) if there is some M > 0 such that the two following inequalities hold:

P(Y1 > x1, ..., Yn > xn) ≤ M ∏_{i=1}^n P(Yi > xi),   (1.1)

P(Y1 ≤ x1, ..., Yn ≤ xn) ≤ M ∏_{i=1}^n P(Yi ≤ xi),   (1.2)


for all xi, i = 1, ..., n. An infinite family of random variables is END if every finite subfamily is END.
If the two inequalities (1.1) and (1.2) hold with M = 1, then the sequence {Yi, 1 ≤ i ≤ n} is called negatively dependent (ND), a notion introduced by Lehmann [15]. Thus, END is an extension of ND. A family {Yn, n ≥ 1} is pairwise negatively dependent (PND) if P(Yi > x, Yj > y) ≤ P(Yi > x)P(Yj > y) (or equivalently, P(Yi ≤ x, Yj ≤ y) ≤ P(Yi ≤ x)P(Yj ≤ y)) for all i ≠ j and all x, y ∈ R. It follows that PND is also an extension of ND. Note that NA implies ND, but ND does not imply NA. For the details, the reader can refer to [10, 23]. These notions were extended to more abstract spaces, such as the notion of NA in R^d [2] and, recently, NA in Hilbert space [14]. In this paper, we consider the above dependence notions in the space of upper semicontinuous functions.
Upper semicontinuous (u.s.c.) functions are very useful in many contexts, such as optimization theory, image processing and spatial statistics. In various settings, u.s.c. functions appear under different names; for instance, they are also called fuzzy sets [5, 6, 9, 11, 13, 22] or grey-scale images [19, 25]. Random u.s.c. functions were introduced to model random elements whose values are u.s.c. functions. Up to now, many authors have been concerned with limit theorems for the class of random u.s.c. functions; in particular, many laws of large numbers have been established in various settings (for example, see [5, 6, 7, 11, 13, 20, 21, 22, 26]).
However, to the best of our knowledge, most laws of large numbers were obtained for independent random u.s.c. functions (or independent fuzzy random sets). Only a few are known for the dependent case: Terán [26] gave a strong law of large numbers for random u.s.c. functions under exchangeability conditions; Fu [7] and Quang and Giap [21] obtained some strong laws of large numbers for ϕ (ϕ*)-mixing fuzzy random sets; recently, Quang and Thuan [22] obtained some strong limit theorems for adapted arrays of fuzzy random sets. The aim of this paper is to propose some new kinds of dependence for random u.s.c. functions which rely on the NA and ND notions mentioned above, and then to establish several laws of large numbers. We also show that our results generalize corresponding ones in the literature. The layout of this paper is as follows: in Section 2, we summarize some basic definitions and related properties; Section 3 discusses the notion of NA and Section 4 presents the notions of END, PND and ND in the space of u.s.c. functions. In addition, we give some inequalities of Hájek-Rényi type for these notions and establish some laws of large numbers.

2 Preliminaries

Let K be the set of nonempty convex and compact subsets of R. If a is an element of K then it is an interval of R, which we denote by a = [a^(1); a^(2)] (where a^(1), a^(2) are its two end points). The Hausdorff distance dH on K is defined by

dH(a, b) = max{ |a^(1) − b^(1)|, |a^(2) − b^(2)| },  a, b ∈ K.

It is known that (K, dH) is a separable and complete metric space. A linear structure on K is defined as follows: for a, b ∈ K and λ ∈ R,

a + b = {y + z : y ∈ a, z ∈ b} = [a^(1) + b^(1); a^(2) + b^(2)],

λa = {λx : x ∈ a} = [λa^(1); λa^(2)] if λ ≥ 0, and λa = [λa^(2); λa^(1)] if λ < 0.
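The interval operations above are straightforward to implement; the following sketch (ours, for illustration only) encodes elements of K as endpoint pairs and checks dH, addition and scalar multiplication on a concrete example.

```python
# Elements of K are compact intervals [a1, a2], stored as pairs (a1, a2).
def dH(a, b):
    # Hausdorff distance on K: max(|a1 - b1|, |a2 - b2|)
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def add(a, b):
    # Minkowski sum of intervals: endpoints add
    return (a[0] + b[0], a[1] + b[1])

def scale(lam, a):
    # lam * a; the endpoints swap roles when lam < 0
    return (lam * a[0], lam * a[1]) if lam >= 0 else (lam * a[1], lam * a[0])

a, b = (1.0, 3.0), (0.0, 5.0)
assert dH(a, b) == 2.0              # max(|1 - 0|, |3 - 5|)
assert add(a, b) == (1.0, 8.0)
assert scale(-2.0, a) == (-6.0, -2.0)
```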

For a function u : R → [0; 1] and each α ∈ (0; 1], the α-level set of u is defined by [u]α = {x ∈ R : u(x) ≥ α}. It is easy to see that [u]α = ∩_{β<α} [u]β. For each α ∈ [0; 1), [u]α+ denotes the closure of {x ∈ R : u(x) > α}, or equivalently [u]α+ = cl{∪_{β>α} [u]β}. In particular, [u]0+ is called the support of u and denoted by supp u. For convenience, we also use [u]0 to indicate supp u, that is, [u]0 = [u]0+ = supp u. Note that all level sets [u]α, α ∈ (0; 1], of u are closed if and only if u is u.s.c. Recall that a u.s.c. function u : R → [0; 1] is called quasiconcave if u(λx + (1 − λ)y) ≥ min{u(x), u(y)} for all x, y ∈ R and λ ∈ [0; 1]; an equivalent condition is that [u]α is a convex subset of R for every α ∈ (0; 1]. Let U denote the family of all u.s.c. functions u : R → [0; 1] satisfying the following conditions:


(i) supp u is compact;
(ii) [u]1 ≠ ∅;
(iii) u is quasiconcave.

Therefore, if u ∈ U then for each α ∈ (0; 1], [u]α is an interval of R, denoted by [u]α = [[u]α^(1); [u]α^(2)], where [u]α^(1) and [u]α^(2) are the two end points of the interval. Moreover, for each α ∈ [0; 1), [u]α+ is also an interval of R. We denote [u]α+ = [[u]α+^(1); [u]α+^(2)], where [u]α+^(1) = lim_{β↓α} [u]β^(1) and [u]α+^(2) = lim_{β↓α} [u]β^(2).
Note that in other settings the range of u need not equal [0; 1] and the above condition (ii) need not hold. The following proposition summarizes the properties of an element u ∈ U.
Proposition 2.1. (Goetschel and Voxman [9]). For u ∈ U, denote u^(1)(α) = [u]α^(1) and u^(2)(α) = [u]α^(2), considered as functions of α ∈ [0; 1]. Then the following hold:
(1) u^(1) is a bounded nondecreasing function on [0; 1].
(2) u^(2) is a bounded nonincreasing function on [0; 1].
(3) u^(1)(1) ≤ u^(2)(1).
(4) u^(1)(α) and u^(2)(α) are left continuous on (0; 1] and right continuous at 0.
(5) If v^(1) and v^(2) satisfy the above (1)-(4), then there exists a unique v ∈ U such that [v]α = [v^(1)(α); v^(2)(α)] for all α ∈ [0; 1].

The above proposition shows that an element u ∈ U is completely determined by its whole family of α-level sets.
The addition and scalar multiplication on U are defined by

(u + v)(x) = sup_{y+z=x} min{u(y), v(z)},   (λu)(x) = u(λ^{-1}x) if λ ≠ 0, and λu = 0 if λ = 0,

where u, v ∈ U, λ ∈ R and 0 = I_{0} is the indicator function of {0}. Then for u, v ∈ U and λ ∈ R we have [u + v]α = [u]α + [v]α and [λu]α = λ[u]α for each α ∈ (0; 1].
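For triangular fuzzy numbers (a standard subclass of U with linear branches, not singled out in the paper), the sup-min sum is again triangular with added vertices, so the level-set identity [u + v]α = [u]α + [v]α can be checked directly; the sketch below (ours) does this for a few levels.

```python
# Triangular fuzzy number u = (l, p, r): u(p) = 1, linear on [l, p] and [p, r].
def level(tri, alpha):
    l, p, r = tri
    # alpha-level set [l + alpha*(p - l); r - alpha*(r - p)], 0 < alpha <= 1
    return (l + alpha * (p - l), r - alpha * (r - p))

u, v = (0.0, 1.0, 3.0), (-1.0, 0.0, 2.0)
w = tuple(s + t for s, t in zip(u, v))   # sup-min sum: the vertices add

for alpha in (0.1, 0.5, 0.9, 1.0):
    lu, ru = level(u, alpha)
    lv, rv = level(v, alpha)
    lw, rw = level(w, alpha)
    # level-wise identity: [u + v]_alpha = [u]_alpha + [v]_alpha
    assert abs(lw - (lu + lv)) < 1e-12 and abs(rw - (ru + rv)) < 1e-12
```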
The following metrics on U are often used: for u, v ∈ U,

D∞(u, v) = sup_{α∈(0;1]} dH([u]α, [v]α) = sup_{α∈[0;1]} dH([u]α, [v]α),

Dp(u, v) = ( ∫₀¹ dH^p([u]α, [v]α) dα )^{1/p},  1 ≤ p < ∞.

It is clear that

Dp(u, v) ≤ Dq(u, v) ≤ D∞(u, v)

for all u, v ∈ U and 1 ≤ p ≤ q < ∞. It is known that the metric space (U, D∞) is complete but not separable, while (U, Dp) is separable but not complete. Moreover, when considering u.s.c. functions on R, the following metric is also used:
D̄(u, v) = dH( ∫₀¹ [u]α dα, ∫₀¹ [v]α dα ),

where ∫₀¹ [u]α dα = [ ∫₀¹ [u]α^(1) dα; ∫₀¹ [u]α^(2) dα ] is an element of K. By simple estimation, we can see that D̄(u, v) ≤ Dp(u, v) for all u, v ∈ U and all p ≥ 1. For u ∈ U, denote ‖u‖∞ = D∞(u, 0), ‖u‖p = Dp(u, 0) and ‖u‖ = D̄(u, 0).
We define the mapping ⟨·, ·⟩ : K × K → R by the equation

⟨a, b⟩ = (1/2)( a^(1) b^(1) + a^(2) b^(2) ),  where a = [a^(1); a^(2)], b = [b^(1); b^(2)].

For a, b ∈ K, we define

d*(a, b) = ( ⟨a, a⟩ − 2⟨a, b⟩ + ⟨b, b⟩ )^{1/2} = ( (1/2)[ (a^(1) − b^(1))² + (a^(2) − b^(2))² ] )^{1/2},

and this implies that d* is a metric on K. On the other hand, we have the following estimate:

dH²(a, b) ≤ 2d*²(a, b) ≤ 2dH²(a, b)  for all a, b ∈ K,

and hence the two metrics dH and d* are equivalent. The metric space (K, d*) is complete and separable, which follows from the completeness and separability of (K, dH).
Define ⟨·, ·⟩ : U × U → R by the equation

⟨u, v⟩ = ∫₀¹ ⟨[u]α, [v]α⟩ dα.

It is easy to see that the mapping ⟨·, ·⟩ has the following properties: (i) ⟨u, u⟩ ≥ 0 and ⟨u, u⟩ = 0 ⇔ u = 0; (ii) ⟨u, v⟩ = ⟨v, u⟩; (iii) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩; (iv) ⟨λu, v⟩ = λ⟨u, v⟩; (v) |⟨u, v⟩| ≤ (⟨u, u⟩⟨v, v⟩)^{1/2}, where u, v, w ∈ U and λ ≥ 0.
For u, v ∈ U, define

D*(u, v) = ( ∫₀¹ d*²([u]α, [v]α) dα )^{1/2} = ( ⟨u, u⟩ − 2⟨u, v⟩ + ⟨v, v⟩ )^{1/2}.

It is clear that D* is a metric on U; moreover, it follows from the relation between dH and d* that

D₂²(u, v) ≤ 2D*²(u, v) ≤ 2D₂²(u, v).

So D₂ and D* are two equivalent metrics. We also deduce that (U, D*) is a separable metric space but not complete. For u ∈ U, denote ‖u‖* = D*(u, 0).
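The relation D₂² ≤ 2D*² ≤ 2D₂² can be checked numerically by discretizing the α-integrals; the following sketch (ours, with a simple Riemann-mean approximation of the integrals and triangular fuzzy numbers as test inputs) does exactly that.

```python
import numpy as np

def level(tri, alpha):
    # alpha-level of a triangular fuzzy number (l, p, r)
    l, p, r = tri
    return (l + alpha * (p - l), r - alpha * (r - p))

def d_H(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def d_star(a, b):
    return (0.5 * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)) ** 0.5

u, v = (0.0, 1.0, 3.0), (-1.0, 0.5, 2.0)
alphas = np.linspace(0.0, 1.0, 10001)

# D_2^2 and D_*^2 as alpha-integrals, approximated by a Riemann mean
D2_sq = np.mean([d_H(level(u, a), level(v, a)) ** 2 for a in alphas])
Dstar_sq = np.mean([d_star(level(u, a), level(v, a)) ** 2 for a in alphas])

# The pointwise bounds d_H^2 <= 2 d_*^2 <= 2 d_H^2 survive integration:
assert D2_sq <= 2 * Dstar_sq + 1e-12 and 2 * Dstar_sq <= 2 * D2_sq + 1e-12
```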
A mapping X : Ω → K is called a K-valued random variable if X^{-1}(B) ∈ F for all B ∈ B(K), where B(K) is the Borel σ-algebra on (K, dH). A mapping X : Ω → U is called a U-valued random variable (or random u.s.c. function) if [X]α is a K-valued random variable for all α ∈ (0; 1]. Note that this condition is equivalent to X being a (U, Dp)-valued random variable for any p ≥ 1, i.e. X^{-1}(B) ∈ F for all B ∈ B(U, Dp), where B(U, Dp) is the Borel σ-algebra on (U, Dp). For q > 0, denote by L^q_•(U) the class of U-valued random variables X satisfying E‖X‖_•^q < ∞ (where the symbol • represents ∞, p, *, D̄). It is easy to see that L^q_∞(U) ⊂ L^q_p(U) ⊂ L^q_2(U) = L^q_*(U) ⊂ L^q_r(U) ⊂ L^q_D̄(U) for all q > 0 and 1 ≤ r ≤ 2 ≤ p < ∞. If X ∈ L¹_∞(U) then X is called D∞-integrable; this implies that [X]α^(1) and [X]α^(2) are integrable real-valued random variables for all α ∈ (0; 1].
Let X, Y be two U-valued random variables. Then X and Y are called level-wise independent if for each α ∈ (0; 1] the σ-algebras σ([X]α) and σ([Y]α) are independent; X and Y are called independent if the σ-algebras σ([X]α : 0 < α ≤ 1) and σ([Y]α : 0 < α ≤ 1) are independent; independence and level-wise independence for an arbitrary collection of U-valued random variables are defined as usual. X and Y are called level-wise identically distributed if for each α ∈ (0; 1], [X]α and [Y]α are identically distributed, which is equivalent to ([X]α^(1); [X]α^(2)) and ([Y]α^(1); [Y]α^(2)) being identically distributed random vectors. It is clear that independence implies level-wise independence; however, the example below shows that the converse is not true.
Example 2.2. Let X, Y be two independent random variables such that X has the uniform distribution on [−1; 0] and Y the uniform distribution on [0; 1] (i.e. X ∼ U[−1;0], Y ∼ U[0;1]). We construct two U-valued random variables F1, F2 as follows:

F1(ω)(x) =
  (1/2) · (x + 1)/(X(ω) + 1)              if −1 ≤ x < X(ω),
  1/2                                      if X(ω) ≤ x < Y(ω),
  1/2 + (1/2) · (x − Y(ω))/(1 − Y(ω))      if Y(ω) ≤ x ≤ 1,
  0                                        if x < −1 or x > 1,

and

F2(ω)(x) =
  (1/2) · (x + 1)/(1 − Y(ω))               if −1 ≤ x < −Y(ω),
  1/2                                      if −Y(ω) ≤ x < −X(ω),
  1/2 + (1/2) · (x + X(ω))/(1 + X(ω))      if −X(ω) ≤ x ≤ 1,
  0                                        if x < −1 or x > 1.

By simple calculations, we have

[F1]α(ω) = [F1(ω)]α = [2α(X(ω) + 1) − 1; 1]           if 0 < α ≤ 1/2,
                      [(2α − 1)(1 − Y(ω)) + Y(ω); 1]   if 1/2 < α ≤ 1,

and

[F2]α(ω) = [F2(ω)]α = [2α(1 − Y(ω)) − 1; 1]           if 0 < α ≤ 1/2,
                      [(2α − 1)(1 + X(ω)) − X(ω); 1]   if 1/2 < α ≤ 1.


It is easy to see that [F1 ]α and [F2 ]α are independent for each α ∈ (0; 1] (by independence of X, Y ).
Thus, F1 , F2 are level-wise independent. However, σ{[F1 ]α , 0 < α
1} = σ{[F2 ]α , 0 < α
1} =
σ{X, Y }, so F1 and F2 are not independent. Moreover, it follows from X ∼ U[−1;0] , Y ∼ U[0;1] that
−X and Y are identically distributed. Thus, [F1 ]α and [F2 ]α are identically distributed for each
α ∈ (0; 1].
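The level-set formula for [F1]α can be verified numerically at a fixed outcome ω; the sketch below (ours, with illustrative values X(ω) = −0.4 and Y(ω) = 0.7) recovers the left endpoint of {x : F1(ω)(x) ≥ α} on a grid and compares it with the closed form displayed above.

```python
import numpy as np

Xw, Yw = -0.4, 0.7    # one fixed outcome: X(w) in [-1, 0], Y(w) in [0, 1]

def F1(x):
    # piecewise definition of F1(w)(x) from Example 2.2
    if x < -1 or x > 1:
        return 0.0
    if x < Xw:
        return 0.5 * (x + 1) / (Xw + 1)
    if x < Yw:
        return 0.5
    return 0.5 + 0.5 * (x - Yw) / (1 - Yw)

grid = np.linspace(-1.0, 1.0, 20001)
for alpha in (0.2, 0.5, 0.8, 1.0):
    # numerical left endpoint of the alpha-level set {x : F1(x) >= alpha}
    left = min(x for x in grid if F1(x) >= alpha)
    # closed form from the displayed level sets
    exact = 2 * alpha * (Xw + 1) - 1 if alpha <= 0.5 else (2 * alpha - 1) * (1 - Yw) + Yw
    assert abs(left - exact) < 1e-3
```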
Let X be a D∞-integrable U-valued random variable. Then the expectation of X, denoted by EX, is defined as the u.s.c. function whose α-level sets are given by

[EX]α = [ [EX]α^(1); [EX]α^(2) ] = [ E[X]α^(1); E[X]α^(2) ]  for each α ∈ (0; 1].

For X, Y ∈ L¹_∞(U) ∩ L²_*(U), the notions of variance of X and covariance of X, Y were introduced in [6] (see also [5]) as follows:

Cov(X, Y) = E⟨X, Y⟩ − ⟨EX, EY⟩  and  VarX = Cov(X, X) = E⟨X, X⟩ − ⟨EX, EX⟩.
Now we recall some properties of the notions of variance and covariance.

Proposition 2.3. ([6]) Let X, Y be random u.s.c. functions in L¹_∞(U) ∩ L²_*(U). Then:
(1) Cov(X, Y) = (1/2) ∫₀¹ [ Cov([X]α^(1), [Y]α^(1)) + Cov([X]α^(2), [Y]α^(2)) ] dα, and consequently VarX = (1/2) ∫₀¹ [ Var([X]α^(1)) + Var([X]α^(2)) ] dα = ED*²(X, EX);
(2) Var(λX + u) = λ² VarX for λ ≥ 0 and u a non-random element of U;
(3) Cov(λX + u, µY + v) = λµ Cov(X, Y) for λ, µ ≥ 0 and non-random elements u, v of U;
(4) Var(X + Y) = VarX + VarY + 2Cov(X, Y);
(5) Cov(X, Y) = 0 if X and Y are level-wise independent;
(6) Var(ξX) = (Varξ) E⟨X, X⟩ + (Eξ)² VarX if ξ is a nonnegative real-valued random variable and ξ, X are independent.
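Identities (1) and (4) of Proposition 2.3 can be verified exactly on a finite probability space. In the sketch below (ours, for illustration) each random u.s.c. function is "rectangular", i.e. its α-level set is the same interval for every α, so the α-integrals in the definitions drop out.

```python
import numpy as np

# Finite probability space: three equally likely outcomes w = 0, 1, 2.
# Each random u.s.c. function is "rectangular": [X]_alpha = [lo, hi]
# for every alpha, so the alpha-integrals of Section 2 disappear.
lo_X, hi_X = np.array([0., 1., 2.]), np.array([1., 3., 4.])
lo_Y, hi_Y = np.array([2., 1., 0.]), np.array([5., 2., 1.])

def inner(loA, hiA, loB, hiB):
    # <a, b> = (a1*b1 + a2*b2)/2 for intervals a = [a1; a2], b = [b1; b2]
    return 0.5 * (loA * loB + hiA * hiB)

def cov(loA, hiA, loB, hiB):
    # Cov(X, Y) = E<X, Y> - <EX, EY>
    return (inner(loA, hiA, loB, hiB).mean()
            - inner(loA.mean(), hiA.mean(), loB.mean(), hiB.mean()))

cov_XY = cov(lo_X, hi_X, lo_Y, hi_Y)
# Proposition 2.3(1): Cov(X,Y) = (Cov(X^(1),Y^(1)) + Cov(X^(2),Y^(2)))/2
endpoint_form = 0.5 * (np.cov(lo_X, lo_Y, bias=True)[0, 1]
                       + np.cov(hi_X, hi_Y, bias=True)[0, 1])
assert np.isclose(cov_XY, endpoint_form)

# Proposition 2.3(4): Var(X+Y) = VarX + VarY + 2Cov(X,Y),
# using [X+Y]_alpha = [X]_alpha + [Y]_alpha (endpoints add).
var = lambda lo, hi: cov(lo, hi, lo, hi)
lhs = var(lo_X + lo_Y, hi_X + hi_Y)
rhs = var(lo_X, hi_X) + var(lo_Y, hi_Y) + 2 * cov_XY
assert np.isclose(lhs, rhs)
```

Note that cov_XY here is negative, in line with the level-wise negative dependence of the chosen endpoints.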

3 Negatively associated sequences in U

Definition 3.1. Let {Xi, 1 ≤ i ≤ n} be a finite collection of K-valued random variables. Then {Xi, 1 ≤ i ≤ n} is said to be negatively associated if {Xi^(1), 1 ≤ i ≤ n} and {Xi^(2), 1 ≤ i ≤ n} are sequences of NA real-valued random variables. An infinite collection of K-valued random variables is NA if every finite subfamily is NA. Let {Xn, n ≥ 1} be a sequence of U-valued random variables. Then {Xn, n ≥ 1} is said to be level-wise negatively associated (level-wise NA) if {[Xn]α, n ≥ 1} is a sequence of NA K-valued random variables for every α ∈ (0; 1].
Example 3.2. (1) Let {Xn, n ≥ 1} be a collection of level-wise independent U-valued random variables. Then {[Xn]α^(1), n ≥ 1} and {[Xn]α^(2), n ≥ 1} are collections of independent real-valued random variables for every α ∈ (0; 1], and hence they are collections of NA real-valued random variables. Therefore, {Xn, n ≥ 1} is a sequence of level-wise NA U-valued random variables. On the other hand, it is not hard to construct sequences that are level-wise NA but not level-wise independent via their end points. Thus, the class of level-wise NA U-valued random variables is strictly larger than the class of level-wise independent U-valued random variables.



(2) We consider a family of functions as follows: for i = 1, 2,

fn^(i) : [0; 1] × R → R,  (α, x) ↦ fn^(i)(α, x),

satisfying the following conditions:
- For each α ∈ [0; 1], the functions fn^(1)(α, ·), n ≥ 1, are simultaneously nondecreasing (or simultaneously nonincreasing), and the functions fn^(2)(α, ·), n ≥ 1, are simultaneously nondecreasing (or simultaneously nonincreasing); the monotonicity types of fn^(1)(α, ·) and fn^(2)(α, ·) may be chosen independently of each other.
- For each x ∈ R and each n ≥ 1, the functions fn^(1)(·, x) and fn^(2)(·, x) satisfy conditions (1)-(4) of Proposition 2.1 (where fn^(1)(·, x) is regarded as u^(1) and fn^(2)(·, x) as u^(2)).

Many functions satisfy the above conditions, for instance fn^(1)(α, x) = α·e^{x/n} and fn^(2)(α, x) = (2 − α)·e^{(x+1)/n}, etc.
Define the mappings fn : R → U as follows: for t ∈ R,

[fn(t)]α = [ fn^(1)(α, t); fn^(2)(α, t) ]  for all α ∈ [0; 1].

If {Xn, n ≥ 1} is a sequence of NA real-valued random variables, then {fn(Xn), n ≥ 1} is a sequence of level-wise NA U-valued random variables. In the special case when fn(t) is the indicator function of {t} (i.e. fn(t) = I_{t} ∈ U for t ∈ R and all n ≥ 1), we have fn(Xn) = I_{Xn} and [fn(Xn)]α = {Xn} = [Xn; Xn] for all α ∈ [0; 1]. Hence, a sequence {Xn, n ≥ 1} of NA real-valued random variables can be regarded as a sequence of level-wise NA U-valued random variables.
Proposition 3.3. Let X, Y ∈ L¹_∞(U) ∩ L²_*(U) be two level-wise NA U-valued random variables. Then Cov(X, Y) ≤ 0, or in other words E⟨X, Y⟩ ≤ ⟨EX, EY⟩.

Proof. It follows from the property of NA real-valued random variables that

Cov([X]α^(i), [Y]α^(i)) ≤ 0

for all α ∈ (0; 1] and i = 1, 2. Combining this with Proposition 2.3(1), we obtain Cov(X, Y) ≤ 0 and E⟨X, Y⟩ ≤ ⟨EX, EY⟩.

The following theorem establishes a Hájek-Rényi type inequality for level-wise NA U-valued random variables; it plays a key role in establishing the laws of large numbers. A version for real-valued random variables was given in [17]. Note that this result is obtained in the setting of the metric D*.

Theorem 3.4. Let {bn, n ≥ 1} be a sequence of positive nondecreasing real numbers. Assume that {Xn, n ≥ 1} is a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1. Then, for every ε > 0 and any 1 ≤ m ≤ n we have

P( max_{m≤k≤n} (1/bk) D*(Sk, ESk) ≥ ε ) ≤ (4/(ε²bm²)) ∑_{i=1}^m VarXi + (32/ε²) ∑_{j=m+1}^n VarXj/bj²,   (3.1)

where Sn = ∑_{i=1}^n Xi.

Proof. We have

P( max_{m≤k≤n} (1/bk) D*(Sk, ESk) ≥ ε )
 ≤ P( max_{m≤k≤n} (1/bk) D*(Sm, ESm) ≥ ε/2 ) + P( max_{m≤k≤n} (1/bk) D*( ∑_{j=m+1}^k Xj, ∑_{j=m+1}^k EXj ) ≥ ε/2 )
 = P( (1/bm) D*(Sm, ESm) ≥ ε/2 ) + P( max_{m≤k≤n} (1/bk) D*( ∑_{j=m+1}^k Xj, ∑_{j=m+1}^k EXj ) ≥ ε/2 )
 := (I₁) + (I₂).

For (I₁), by Markov's inequality and Proposition 2.3(1), we have

(I₁) ≤ (4/(ε²bm²)) ED*²(Sm, ESm) = (4/(ε²bm²)) · (1/2) ∫₀¹ [ Var([Sm]α^(1)) + Var([Sm]α^(2)) ] dα.

Since for each α ∈ (0; 1], {[Xn]α^(1), n ≥ 1} and {[Xn]α^(2), n ≥ 1} are sequences of NA real-valued random variables,

(I₁) ≤ (4/(ε²bm²)) · (1/2) ∫₀¹ ∑_{i=1}^m [ Var([Xi]α^(1)) + Var([Xi]α^(2)) ] dα = (4/(ε²bm²)) ∑_{i=1}^m ED*²(Xi, EXi).

For (I₂), by putting S̃k = ∑_{j=1}^k X_{j+m}, we have

(1/b²_{k+m}) D*²(S̃k, ES̃k) = (1/b²_{k+m}) ∫₀¹ d*²([S̃k]α, [ES̃k]α) dα
 = (1/(2b²_{k+m})) ∫₀¹ [ ( ∑_{j=1}^k ([X_{j+m}]α^(1) − E[X_{j+m}]α^(1)) )² + ( ∑_{j=1}^k ([X_{j+m}]α^(2) − E[X_{j+m}]α^(2)) )² ] dα.   (3.2)

Denote Yj⁻ = [X_{j+m}]α^(1) and Yj⁺ = [X_{j+m}]α^(2). Since bm + ∑_{i=1}^k (b_{i+m} − b_{i+m−1}) = b_{k+m}, Abel's summation gives

(1/b_{k+m}) | ∑_{j=1}^k (Yj^± − EYj^±) |
 = (1/b_{k+m}) | bm ∑_{j=1}^k (Yj^± − EYj^±)/b_{j+m} + ∑_{i=1}^k (b_{i+m} − b_{i+m−1}) ∑_{j=i}^k (Yj^± − EYj^±)/b_{j+m} |
 ≤ (1/b_{k+m}) [ bm + ∑_{i=1}^k (b_{i+m} − b_{i+m−1}) ] · 2 max_{1≤i≤k} | ∑_{j=1}^i (Yj^± − EYj^±)/b_{j+m} |
 = 2 max_{1≤i≤k} | ∑_{j=1}^i (Yj^± − EYj^±)/b_{j+m} |.   (3.3)

It follows from (3.2) and (3.3) that

(1/b²_{k+m}) D*²(S̃k, ES̃k) ≤ 2 ∫₀¹ [ max_{1≤i≤k} ( ∑_{j=1}^i (Yj⁻ − EYj⁻)/b_{j+m} )² + max_{1≤i≤k} ( ∑_{j=1}^i (Yj⁺ − EYj⁺)/b_{j+m} )² ] dα.

Therefore,

max_{m≤k≤n} (1/bk²) D*²( ∑_{j=m+1}^k Xj, ∑_{j=m+1}^k EXj ) ≤ 2 ∫₀¹ [ max_{1≤i≤n−m} ( ∑_{j=1}^i (Yj⁻ − EYj⁻)/b_{j+m} )² + max_{1≤i≤n−m} ( ∑_{j=1}^i (Yj⁺ − EYj⁺)/b_{j+m} )² ] dα.


This implies that

(I₂) = P( max_{m≤k≤n} (1/bk²) D*²( ∑_{j=m+1}^k Xj, ∑_{j=m+1}^k EXj ) ≥ ε²/4 )
 ≤ P( ∫₀¹ max_{1≤i≤n−m} ( ∑_{j=1}^i (Yj⁻ − EYj⁻)/b_{j+m} )² dα ≥ ε²/16 ) + P( ∫₀¹ max_{1≤i≤n−m} ( ∑_{j=1}^i (Yj⁺ − EYj⁺)/b_{j+m} )² dα ≥ ε²/16 )
 ≤ (16/ε²) ∫₀¹ [ E max_{1≤i≤n−m} ( ∑_{j=1}^i (Yj⁻ − EYj⁻)/b_{j+m} )² + E max_{1≤i≤n−m} ( ∑_{j=1}^i (Yj⁺ − EYj⁺)/b_{j+m} )² ] dα  (by Markov's inequality and the Fubini theorem).

Again, for each α ∈ (0; 1], {[Xn]α^(1), n ≥ 1} and {[Xn]α^(2), n ≥ 1} are sequences of NA real-valued random variables, and so are {b⁻¹_{j+m} Yj⁻, 1 ≤ j ≤ n − m} and {b⁻¹_{j+m} Yj⁺, 1 ≤ j ≤ n − m}. It follows from Theorem 2 of [24] that

(I₂) ≤ (16/ε²) ∫₀¹ ∑_{j=1}^{n−m} [ E(Yj⁻ − EYj⁻)² + E(Yj⁺ − EYj⁺)² ] / b²_{j+m} dα
 = (32/ε²) ∫₀¹ ∑_{j=1}^{n−m} E d*²([X_{j+m}]α, [EX_{j+m}]α) / b²_{j+m} dα
 = (32/ε²) ∑_{j=m+1}^n ED*²(Xj, EXj) / bj².

The proof is completed.
Theorem 3.5. Let 0 < bk ↑ ∞ and let {Xn, n ≥ 1} be a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, such that ∑_{k=1}^∞ bk⁻² VarXk < ∞. Put Sn = ∑_{i=1}^n Xi. Then:
(a) bn⁻¹ D*(Sn, ESn) → 0 a.s. as n → ∞, i.e. the strong law of large numbers holds.
(b) E( sup_{n≥1} bn⁻¹ D*(Sn, ESn) )^r < ∞ for all r ∈ (0; 2).
Proof. (a) For ε > 0, by Theorem 3.4 we obtain

P( sup_{k≥m} bk⁻¹ D*(Sk, ESk) ≥ ε ) = lim_{n→∞} P( max_{m≤k≤n} bk⁻¹ D*(Sk, ESk) ≥ ε )
 ≤ lim_{n→∞} [ (4/(ε²bm²)) ∑_{i=1}^m VarXi + (32/ε²) ∑_{k=m+1}^n VarXk/bk² ]
 = (4/(ε²bm²)) ∑_{i=1}^m VarXi + (32/ε²) ∑_{k=m+1}^∞ VarXk/bk².

Letting m → ∞ in the above estimate, the first term on the right-hand side tends to zero by Kronecker's lemma and the second term tends to zero by the hypothesis. So we get conclusion (a).

(b) We have

E( sup_{n≥1} bn⁻¹ D*(Sn, ESn) )^r ≤ 1 + ∑_{k=1}^∞ P( sup_{n≥1} bn⁻¹ D*(Sn, ESn) ≥ k^{1/r} )
 ≤ 1 + ∑_{k=1}^∞ (32/k^{2/r}) ∑_{n=1}^∞ VarXn/bn²  (by the estimate in part (a) with m = 1)
 = 1 + 32 ( ∑_{n=1}^∞ VarXn/bn² ) ∑_{k=1}^∞ k^{−2/r} < ∞,

since 2/r > 1. The conclusion (b) is proved.
From Theorem 3.5, we immediately deduce the following corollary by taking bn = n^{1/t}, 0 < t < 2.

Corollary 3.6. Let {Xn, n ≥ 1} be a sequence of D∞-integrable and level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, such that sup_n VarXn < ∞. Put Sn = ∑_{i=1}^n Xi. Then for 0 < t < 2,

D*(Sn, ESn)/n^{1/t} → 0 a.s. as n → ∞  and  E( sup_{n≥1} D*(Sn, ESn)/n^{1/t} )^r < ∞ for all r ∈ (0; 2).
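A Monte Carlo sketch of Corollary 3.6 with t = 1 (ours, not from the paper): i.i.d. "rectangular" random sets are in particular level-wise NA with bounded variances, and the normalized distance D*(Sn, ESn)/n becomes small for large n. The interval model and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_distance(n):
    # i.i.d. rectangular random sets [U_i, U_i + 1], U_i ~ Uniform(0, 1);
    # independence implies level-wise NA, and sup_n Var X_n < infinity.
    u = rng.random(n)
    lo_sum, hi_sum = u.sum(), (u + 1.0).sum()     # endpoints of S_n
    e_lo, e_hi = 0.5 * n, 1.5 * n                 # endpoints of ES_n
    # For rectangular sets, D_* reduces to d_* of the endpoint pairs.
    d_star = np.sqrt(0.5 * ((lo_sum - e_lo) ** 2 + (hi_sum - e_hi) ** 2))
    return d_star / n                             # b_n = n, i.e. t = 1

d_small, d_large = normalized_distance(100), normalized_distance(100_000)
assert d_large < 0.01      # already very close to 0 at n = 100000
```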

For the weak law of large numbers, we obtain the following theorem.

Theorem 3.7. Let {Xn, n ≥ 1} be a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, and let 0 < bn ↑ ∞. Put Sn = ∑_{i=1}^n Xi. If bn⁻² ∑_{k=1}^n VarXk → 0 as n → ∞, then the weak law of large numbers bn⁻¹ max_{1≤k≤n} D*(Sk, ESk) → 0 in probability holds as n → ∞; in particular, bn⁻¹ D*(Sn, ESn) → 0 in probability as n → ∞.
Proof. For arbitrary ε > 0, by Markov's inequality we have

P( bn⁻¹ max_{1≤k≤n} D*(Sk, ESk) > ε ) ≤ (1/(ε²bn²)) E( max_{1≤k≤n} D*(Sk, ESk) )².

So the weak law of large numbers holds if the inequality

E( max_{1≤k≤n} D*(Sk, ESk) )² ≤ C ∑_{k=1}^n VarXk

holds, where C is some constant that does not depend on n. Indeed,

E( max_{1≤k≤n} D*(Sk, ESk) )² = E max_{1≤k≤n} D*²(Sk, ESk) = E max_{1≤k≤n} ∫₀¹ d*²([Sk]α, [ESk]α) dα
 ≤ E ∫₀¹ max_{1≤k≤n} d*²([Sk]α, [ESk]α) dα
 ≤ (1/2) ∫₀¹ E max_{1≤k≤n} ( ∑_{i=1}^k ([Xi]α^(1) − E[Xi]α^(1)) )² dα + (1/2) ∫₀¹ E max_{1≤k≤n} ( ∑_{i=1}^k ([Xi]α^(2) − E[Xi]α^(2)) )² dα
 ≤ (C/2) ∫₀¹ ∑_{k=1}^n E( [Xk]α^(1) − E[Xk]α^(1) )² dα + (C/2) ∫₀¹ ∑_{k=1}^n E( [Xk]α^(2) − E[Xk]α^(2) )² dα  (by Theorem 2 of Shao [24])
 = C ∑_{k=1}^n ED*²(Xk, EXk) = C ∑_{k=1}^n VarXk.

This completes the proof.

4 Negatively dependent sequences in U

In this section, we discuss the notions of negative dependence, extended negative dependence and pairwise negative dependence for random variables taking values in U.

Definition 4.1. Let {Xi, 1 ≤ i ≤ n} be a finite collection of K-valued random variables. Then {Xi, 1 ≤ i ≤ n} is said to be negatively dependent (extended negatively dependent, pairwise negatively dependent) if {Xi^(1), 1 ≤ i ≤ n} and {Xi^(2), 1 ≤ i ≤ n} are sequences of ND (resp. END, PND) real-valued random variables. An infinite collection of K-valued random variables is ND (END, PND) if every finite subfamily is ND (resp. END, PND). Let {Xn, n ≥ 1} be a sequence of U-valued random variables. Then {Xn, n ≥ 1} is said to be level-wise ND (level-wise END, level-wise PND) if {[Xn]α, n ≥ 1} is a sequence of ND (resp. END, PND) K-valued random variables for every α ∈ (0; 1].


Illustrative examples for these notions can be constructed analogously to Example 3.2.
Now we consider partial order relations ≼k (≼u) on K (resp. U) defined as follows:
- For a, b ∈ K with a = [a^(1); a^(2)] and b = [b^(1); b^(2)], a ≼k b if and only if a^(1) ≤ b^(1) and a^(2) ≤ b^(2).
- For v, w ∈ U, v ≼u w if and only if [v]α ≼k [w]α for all α ∈ (0; 1].
It follows from Proposition 2.1(1)-(2) that if v ≼u w then [v]α+ ≼k [w]α+ for all α ∈ [0; 1).
Proposition 4.2. (1) Let X, Y ∈ L¹_∞(U) ∩ L²_*(U) be two level-wise PND U-valued random variables. Then Cov(X, Y) ≤ 0, or in other words E⟨X, Y⟩ ≤ ⟨EX, EY⟩.
(2) If X, Y are two U-valued random variables satisfying, for each α ∈ (0; 1],

P( [X]α ≼k a, [Y]α ≼k b ) ≤ P( [X]α ≼k a ) P( [Y]α ≼k b )

for all a, b ∈ K, then X, Y are level-wise PND.
Proof. (1) From Proposition 3.3, this conclusion is clear.
(2) For each α ∈ (0; 1] and any x, y ∈ R, we have

P( [X]α^(1) ≤ x, [Y]α^(1) ≤ y ) = P( ∪_{n=1}^∞ { [X]α ≼k [x; n], [Y]α ≼k [y; n] } )
 = lim_{n→∞} P( [X]α ≼k [x; n], [Y]α ≼k [y; n] )
 ≤ lim_{n→∞} P( [X]α ≼k [x; n] ) P( [Y]α ≼k [y; n] )
 = P( [X]α^(1) ≤ x ) P( [Y]α^(1) ≤ y ).

Similarly,

P( [X]α^(2) ≤ x, [Y]α^(2) ≤ y ) = P( [X]α ≼k [x; x], [Y]α ≼k [y; y] )
 ≤ P( [X]α ≼k [x; x] ) P( [Y]α ≼k [y; y] ) = P( [X]α^(2) ≤ x ) P( [Y]α^(2) ≤ y ).

Combining the above arguments, we obtain the conclusion (2).
By applying the corresponding results for sequences of ND real-valued random variables (Theorem 3.1 and Corollary 3.2 of [12]) and repeating the steps of the proof of Theorem 3.4 above, we obtain the following theorem.
Theorem 4.3. Assume that {Xn, n ≥ 1} ⊂ L¹_∞(U) ∩ L²_*(U) is a sequence of level-wise ND U-valued random variables. Put Sn = ∑_{i=1}^n Xi. Then there exists a constant C which does not depend on n such that

ED*²(Sn, ESn) ≤ C ∑_{i=1}^n VarXi,

E max_{1≤k≤n} D*²(Sk, ESk) ≤ C ( log n / log 3 + 2 )² ∑_{i=1}^n VarXi ≤ C log²n ∑_{i=1}^n VarXi,

P( max_{m≤k≤n} (1/bk) D*(Sk, ESk) ≥ ε ) ≤ (32/ε²) ( log n / log 3 + 2 )² [ (1/bm²) ∑_{i=1}^m VarXi + ∑_{j=m+1}^n VarXj/bj² ]
 ≤ (C/ε²) log²n [ (1/bm²) ∑_{i=1}^m VarXi + ∑_{j=m+1}^n VarXj/bj² ].

Theorem 4.3 provides inequalities in the form of a maximal inequality and a Hájek-Rényi type inequality for U-valued random variables with respect to D*. Therefore, in the same way as in Theorems 3.5 and 3.7, we deduce the following laws of large numbers.



Theorem 4.4. Let 0 < bk ↑ ∞ and let {Xn, n ≥ 1} be a sequence of D∞-integrable, level-wise ND U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, such that ∑_{k=1}^∞ bk⁻² VarXk < ∞. Put Sn = ∑_{i=1}^n Xi. Then:
(a) (bn log n)⁻¹ D*(Sn, ESn) → 0 a.s. as n → ∞, i.e. the strong law of large numbers holds.
(b) E( sup_{n≥1} (bn log n)⁻¹ D*(Sn, ESn) )^r < ∞ for all r ∈ (0; 2).
Theorem 4.5. Let {Xn, n ≥ 1} be a sequence of D∞-integrable and level-wise ND U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1. Assume 0 < bn ↑ ∞ and put Sn = ∑_{i=1}^n Xi. If bn⁻² ∑_{k=1}^n VarXk → 0 as n → ∞, then
(a) bn⁻¹ D*(Sn, ESn) → 0 in probability as n → ∞.
(b) (bn log n)⁻¹ max_{1≤k≤n} D*(Sk, ESk) → 0 in probability as n → ∞.

In the next part, we will establish some laws of large numbers for level-wise PND and level-wise END U-valued random variables. Since both level-wise PND and level-wise END are extensions of level-wise ND, these results also obviously hold for level-wise ND U-valued random variables. Furthermore, the metric considered here is D∞, which is stronger than D*. We begin by giving some auxiliary results.

Lemma 4.6. If {Xi, 1 ≤ i ≤ n} is a sequence of level-wise END (PND, ND) and level-wise identically distributed U-valued random variables, then {[Xi]α+, 1 ≤ i ≤ n} is a sequence of END (resp. PND, ND) and identically distributed K-valued random variables for every α ∈ [0; 1).

Proof. Since the proofs for level-wise END, PND and ND are similar, we only present the one for level-wise END.
For each α ∈ [0; 1), we choose a sequence {αm, m ≥ 1} ⊂ (0; 1) satisfying αm ↓ α and αm > α for all m ≥ 1. Then [Xi]_{αm}^(1) ↓ [Xi]_{α+}^(1) and [Xi]_{αm}^(2) ↑ [Xi]_{α+}^(2) as m → ∞. For arbitrary xi ∈ R, we have

P( [Xi]_{α+}^(1) > xi, 1 ≤ i ≤ n ) = P( ∪_{k=1}^∞ ∩_{i=1}^n { [Xi]_{α+}^(1) > xi + 1/k } )
 = lim_{k→∞} P( [Xi]_{α+}^(1) > xi + 1/k, 1 ≤ i ≤ n )
 ≤ lim_{k→∞} lim_{m→∞} P( [Xi]_{αm}^(1) > xi + 1/k, 1 ≤ i ≤ n )
 ≤ M lim_{k→∞} lim_{m→∞} ∏_{i=1}^n P( [Xi]_{αm}^(1) > xi + 1/k )  (since {[Xi]_{αm}^(1), 1 ≤ i ≤ n} satisfies (1.1))
 ≤ M lim_{k→∞} ∏_{i=1}^n P( [Xi]_{α+}^(1) ≥ xi + 1/k )
 ≤ M ∏_{i=1}^n P( [Xi]_{α+}^(1) > xi ).

Therefore, {[Xi]_{α+}^(1), 1 ≤ i ≤ n} satisfies inequality (1.1). In order to prove that it satisfies inequality (1.2), by Lemma 2.2(b) of Chen et al. [3] it suffices to show that {−[Xi]_{α+}^(1), 1 ≤ i ≤ n} satisfies inequality (1.1). Indeed, for arbitrary xi ∈ R again, we obtain

P( −[Xi]_{α+}^(1) > xi, 1 ≤ i ≤ n ) = P( −lim_{m→∞} [Xi]_{αm}^(1) > xi, 1 ≤ i ≤ n )
 = P( ∪_{m=1}^∞ ∩_{i=1}^n { −[Xi]_{αm}^(1) > xi } )
 = lim_{m→∞} P( −[Xi]_{αm}^(1) > xi, 1 ≤ i ≤ n )
 ≤ M lim_{m→∞} ∏_{i=1}^n P( −[Xi]_{αm}^(1) > xi )  (since {[Xi]_{αm}^(1), 1 ≤ i ≤ n} satisfies (1.2))
 ≤ M ∏_{i=1}^n P( −[Xi]_{α+}^(1) > xi ).

Thus, inequality (1.2) holds for {[Xi]_{α+}^(1), 1 ≤ i ≤ n}. It follows from the above arguments that {[Xi]_{α+}^(1), 1 ≤ i ≤ n} is END for each α ∈ [0; 1). Similarly, {[Xi]_{α+}^(2), 1 ≤ i ≤ n} is END for each α ∈ [0; 1).
The identical distribution of {[Xi]_{α+}, 1 ≤ i ≤ n} is clear by taking the limit of each [Xi]_{αm}. The proof is completed.


The following lemma is a very useful characterization of D∞-convergence due to Theorem 2 of Molchanov [20] (the reader can also refer to Proposition 1 of [26]).

Lemma 4.7. Let u ∈ U. Then there exists a countable subset A ⊂ (0; 1] such that for every sequence {un, n ≥ 1} ⊂ U, the following are equivalent:
(i) D∞(un, u) → 0;
(ii) dH([un]α, [u]α) → 0 for all α ∈ A, and dH([un]α+, [u]α+) → 0 for all α ∈ A\{1}.
Now we establish the Kolmogorov strong law of large numbers for level-wise PND and level-wise identically distributed U-valued random variables.

Theorem 4.8. Let {Xn, n ≥ 1} be a sequence of level-wise PND and level-wise identically distributed U-valued random variables. Then

D∞( (1/n) ∑_{i=1}^n Xi, u ) → 0 a.s. as n → ∞   (4.1)

for some u.s.c. function u if and only if E‖X1‖∞ < ∞, and in that case u = EX1.

Proof . Since {Xn , n 1} is a sequence of PND and level-wise identically distributed U-valued random
variables, it follows from Lemma 4.6 that {[Xn ]α , n 1} (for each α ∈ (0, 1]) and {[Xn ]α+ , n 1} (for
each α ∈ [0, 1)) are sequences of PND and identically distributed K-valued random variables.
Necessity: If (4.1) holds, then for each α ∈ [0; 1] we have
1
n

n
(1)
[Xi ](1)
a.s. and
α → [u]α
i=1

1
n

n

(2)
[Xi ](2)
a.s. as n → ∞.
α → [u]α
i=1

Then the converse of Theorem 1 of Matula [18] implies that for each α ∈ [0; 1]
(1)
(2)
(2)
< ∞.
< ∞, E [X1 ](2)
[u](1)
and E [X1 ](1)
α
α
α = E[X1 ]α , [u]α = E[X1 ]α

Thus, E X1 ∞ < ∞ and by applying Proposition 2.1 we can conclude that u = EX1 .
Sufficiency: Suppose that E‖X_1‖_∞ < ∞. In order to prove (4.1), by Lemma 4.7 we will show that

d_H( (1/n) Σ_{i=1}^{n} [X_i]_α , [EX_1]_α ) → 0 a.s. as n → ∞ for each α ∈ (0; 1]    (4.2)

and

d_H( (1/n) Σ_{i=1}^{n} [X_i]_{α+} , [EX_1]_{α+} ) → 0 a.s. as n → ∞ for each α ∈ [0; 1).    (4.3)
For (4.2), it follows from the above arguments and Theorem 1 of [18] that

d_H( (1/n) Σ_{i=1}^{n} [X_i]_α , [EX_1]_α ) = max{ |(1/n) Σ_{i=1}^{n} [X_i]^{(1)}_α − [EX_1]^{(1)}_α| , |(1/n) Σ_{i=1}^{n} [X_i]^{(2)}_α − [EX_1]^{(2)}_α| }
≤ |(1/n) Σ_{i=1}^{n} [X_i]^{(1)}_α − [EX_1]^{(1)}_α| + |(1/n) Σ_{i=1}^{n} [X_i]^{(2)}_α − [EX_1]^{(2)}_α| → 0 a.s. as n → ∞.

Similarly, conclusion (4.3) is also satisfied. Combining the above parts, the theorem is proved. ∎
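A toy simulation can make Theorem 4.8 concrete. The construction below is ours, not the paper's: interval-valued random variables X_i = [W_i, W_i + 1] whose left endpoints come in counter-monotone pairs (V², (1 − V)²) built from one uniform V (one coordinate increasing and the other decreasing in V, a standard example of negative dependence). The Minkowski sample average should then converge in d_H to EX_1 = [1/3, 4/3].

```python
import random

random.seed(0)
n_pairs = 100_000
w = []  # left endpoints of the intervals X_i = [W_i, W_i + 1]
for _ in range(n_pairs):
    v = random.random()
    w.append(v * v)            # increasing in v
    w.append((1.0 - v) ** 2)   # decreasing in v: the pair is negatively dependent

# Minkowski average of intervals = interval of the endpoint averages.
lo = sum(w) / len(w)           # should approach E W = 1/3
hi = lo + 1.0                  # right endpoints are W_i + 1
d_H = max(abs(lo - 1 / 3), abs(hi - 4 / 3))
```

With 100,000 pairs the Hausdorff distance between the sample mean interval and [1/3, 4/3] is far below 0.01.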
Remark 4.9. Denote U_C = {u ∈ U : [u]^{(1)}_α and [u]^{(2)}_α are continuous when considered as functions of α}. In [13], Kim obtained the sufficiency part of Kolmogorov's strong law of large numbers for U-valued random variables with respect to the metric D∞. That result was given in the setting where the random variables are level-wise independent and level-wise identically distributed; however, the extra condition EX_1 ∈ U_C was added. After that, Joo and Kim [11] gave another version of Kolmogorov's strong law of large numbers in which the condition EX_1 ∈ U_C was removed; however, that result requires the random variables to be independent. In Theorem 4.8 above, we only need the random variables to be level-wise PND, and EX_1 ∈ U_C is not necessarily satisfied. On the other hand, Theorem 4.8 also generalizes Theorem 1 of Matula [18] from the real-valued to the U-valued setting. Hence, this theorem not only improves but also extends those results to a more general case.
The next result treats the case where the sequence of level-wise PND U-valued random variables is not identically distributed. We begin by recalling the following lemma.
Lemma 4.10. Let {ξ_n, n ≥ 1} be a sequence of pairwise negatively dependent real-valued random variables with Σ_{n=1}^{∞} n^{-2} Var ξ_n < ∞. Then the strong law of large numbers n^{-1} Σ_{i=1}^{n} (ξ_i − Eξ_i) → 0 a.s. as n → ∞ holds if
(a) sup_{n≥1} E|ξ_n − Eξ_n| < ∞,
or (b) n^{-1} Σ_{i=1}^{n} E|ξ_i − Eξ_i| = O(1).
Proof. By decomposing ξ = max{ξ, 0} − max{−ξ, 0} and applying Theorem 1 of [4], conclusion (a) holds immediately. Conclusion (b) was presented in Theorem 1 of [8]. ∎
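The lemma can be checked numerically on a toy sequence (our construction, not taken from [4] or [8]): with V uniform on (0, 1), the pair (−log(1 − V), −log V) is counter-monotone, hence negatively dependent; each coordinate is Exp(1)-distributed, so Var ξ_n = 1 gives Σ n^{-2} Var ξ_n < ∞ and condition (a) holds, and the sample mean should settle near Eξ_1 = 1.

```python
import math
import random

random.seed(1)
n_pairs = 100_000
xs = []
for _ in range(n_pairs):
    v = random.random()
    while v == 0.0:            # guard: log(0) is undefined
        v = random.random()
    xs.append(-math.log(1.0 - v))  # increasing in v
    xs.append(-math.log(v))        # decreasing in v: negatively dependent pair

mean = sum(xs) / len(xs)       # should approach E xi_1 = 1
```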
Theorem 4.11. Let {X_n, n ≥ 1} be a sequence of level-wise PND U-valued random variables with Σ_{n=1}^{∞} n^{-2} E D∞^2(X_n, EX_n) < ∞. Suppose that for each ε > 0, there exist n(ε) ∈ N and a partition 0 = α_0 < α_1 < · · · < α_m = 1 of [0; 1] such that

max_{1≤k≤m} d_H([ES_n]_{α_{k−1}+}, [ES_n]_{α_k}) ≤ nε for all n ≥ n(ε).    (4.4)

Then the strong law of large numbers n^{-1} D∞(S_n, ES_n) → 0 a.s. as n → ∞ holds if
(a) sup_{n≥1} E D∞(X_n, EX_n) < ∞,
or (b) n^{-1} Σ_{i=1}^{n} E D∞(X_i, EX_i) = O(1).
Proof. By condition (4.4), for ε > 0 there exist n(ε) ∈ N and a partition 0 = α_0 < α_1 < · · · < α_m = 1 of [0; 1] such that

max_{1≤k≤m} d_H([ES_n]_{α_{k−1}+}, [ES_n]_{α_k}) ≤ nε for all n ≥ n(ε).

By using the property that if A_1 ⊂ A ⊂ A_2 and B_1 ⊂ B ⊂ B_2 (all sets compact), then

d_H(A, B) ≤ max{d_H(A_1, B_2), d_H(A_2, B_1)} ≤ d_H(A_1, B_2) + d_H(A_2, B_1),

we have

sup_{α_{k−1}<α≤α_k} d_H([S_n]_α, [ES_n]_α) ≤ d_H([S_n]_{α_k}, [ES_n]_{α_{k−1}+}) + d_H([S_n]_{α_{k−1}+}, [ES_n]_{α_k})
≤ d_H([S_n]_{α_k}, [ES_n]_{α_k}) + d_H([S_n]_{α_{k−1}+}, [ES_n]_{α_{k−1}+}) + 2 d_H([ES_n]_{α_{k−1}+}, [ES_n]_{α_k})
≤ d_H([S_n]_{α_k}, [ES_n]_{α_k}) + d_H([S_n]_{α_{k−1}+}, [ES_n]_{α_{k−1}+}) + 2nε.
Therefore,

n^{-1} D∞(S_n, ES_n) = max_{1≤k≤m} sup_{α_{k−1}<α≤α_k} n^{-1} d_H([S_n]_α, [ES_n]_α)
≤ max_{1≤k≤m} n^{-1} d_H([S_n]_{α_k}, [ES_n]_{α_k}) + max_{1≤k≤m} n^{-1} d_H([S_n]_{α_{k−1}+}, [ES_n]_{α_{k−1}+}) + 2ε.
For each k = 1, . . . , m, we have

n^{-1} d_H([S_n]_{α_k}, [ES_n]_{α_k}) ≤ |(1/n) Σ_{i=1}^{n} ([X_i]^{(1)}_{α_k} − E[X_i]^{(1)}_{α_k})| + |(1/n) Σ_{i=1}^{n} ([X_i]^{(2)}_{α_k} − E[X_i]^{(2)}_{α_k})|.

For each j = 1, 2, it follows from the hypothesis that

Σ_{n=1}^{∞} n^{-2} E([X_n]^{(j)}_{α_k} − E[X_n]^{(j)}_{α_k})^2 ≤ Σ_{n=1}^{∞} n^{-2} E d_H^2([X_n]_{α_k}, E[X_n]_{α_k}) ≤ Σ_{n=1}^{∞} n^{-2} E D∞^2(X_n, EX_n) < ∞.
If condition (a) is satisfied, then

sup_{n≥1} E|[X_n]^{(j)}_{α_k} − E[X_n]^{(j)}_{α_k}| ≤ sup_{n≥1} E d_H([X_n]_{α_k}, [EX_n]_{α_k}) ≤ sup_{n≥1} E D∞(X_n, EX_n) < ∞,

or if condition (b) is satisfied, then

(1/n) Σ_{i=1}^{n} E|[X_i]^{(j)}_{α_k} − E[X_i]^{(j)}_{α_k}| ≤ (1/n) Σ_{i=1}^{n} E d_H([X_i]_{α_k}, [EX_i]_{α_k}) ≤ (1/n) Σ_{i=1}^{n} E D∞(X_i, EX_i) = O(1).

So, by applying Lemma 4.10, we deduce that both (1/n) Σ_{i=1}^{n} ([X_i]^{(1)}_{α_k} − E[X_i]^{(1)}_{α_k}) and (1/n) Σ_{i=1}^{n} ([X_i]^{(2)}_{α_k} − E[X_i]^{(2)}_{α_k}) tend to zero a.s. as n → ∞. Hence, n^{-1} d_H([S_n]_{α_k}, [ES_n]_{α_k}) → 0 a.s. as n → ∞ and consequently max_{1≤k≤m} n^{-1} d_H([S_n]_{α_k}, [ES_n]_{α_k}) → 0 a.s. Similarly, we also obtain max_{1≤k≤m} n^{-1} d_H([S_n]_{α_{k−1}+}, [ES_n]_{α_{k−1}+}) → 0 a.s. as n → ∞. Therefore,

lim sup_{n→∞} n^{-1} D∞(S_n, ES_n) ≤ 2ε a.s.

By letting ε → 0, we obtain the conclusion of the theorem. ∎
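The nested-compact inequality d_H(A, B) ≤ max{d_H(A_1, B_2), d_H(A_2, B_1)} used in the proof can be stress-tested on random closed intervals, where d_H([a, b], [c, d]) = max(|a − c|, |b − d|). The helper below is an illustrative sketch of ours, not code from the paper.

```python
import random

def d_H(I, J):
    # Hausdorff distance between two closed intervals
    return max(abs(I[0] - J[0]), abs(I[1] - J[1]))

def nested_triple(rng):
    # build intervals inner ⊂ mid ⊂ outer
    lo, hi = sorted(rng.uniform(-5.0, 5.0) for _ in range(2))
    hi += 1.0                                       # width of mid is at least 1
    inner = (lo + rng.uniform(0.0, 0.5), hi - rng.uniform(0.0, 0.5))
    outer = (lo - rng.uniform(0.0, 1.0), hi + rng.uniform(0.0, 1.0))
    return inner, (lo, hi), outer

rng = random.Random(2)
violations = 0
for _ in range(10_000):
    A1, A, A2 = nested_triple(rng)
    B1, B, B2 = nested_triple(rng)
    if d_H(A, B) > max(d_H(A1, B2), d_H(A2, B1)) + 1e-12:
        violations += 1
# the inequality is a theorem, so no violations should occur
```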

Remark 4.12. We give some sufficient conditions for (4.4).
(1) If {X_n, n ≥ 1} is a sequence of identically distributed U-valued random variables, then for all n ≥ 1 we have n^{-1} ES_n = EX_1, and it follows from Lemma 3.3 of Joo and Kim [11] that {X_n, n ≥ 1} satisfies (4.4).
(2) In the non-identically distributed case: if there exists an element u ∈ U such that [u]^{(1)}_α, [u]^{(2)}_α are continuous functions of α on [0; 1] and, for j = 1, 2 and each α ∈ [0; 1],

(1/n) Σ_{i=1}^{n} [EX_i]^{(j)}_α → [u]^{(j)}_α as n → ∞,

then {X_n, n ≥ 1} satisfies (4.4). Indeed, put f_n^{(j)}(α) = n^{-1} Σ_{i=1}^{n} [EX_i]^{(j)}_α, α ∈ [0; 1]. Note that the [EX_i]^{(j)}_α are monotonic functions on [0; 1], and so are the f_n^{(j)}. Moreover, f_n^{(j)}(α) converges pointwise to the continuous function [u]^{(j)}_α on [0; 1], so it follows from Dini's lemma in classical analysis that f_n^{(j)} converges to [u]^{(j)}_α uniformly. This means

sup_{α∈(0;1]} |(1/n) Σ_{i=1}^{n} [EX_i]^{(j)}_α − [u]^{(j)}_α| = sup_{α∈[0;1]} |f_n^{(j)}(α) − [u]^{(j)}_α| → 0 as n → ∞.
Now for fixed ε > 0, there exists n_1 := n_1(ε) such that

|(1/n) Σ_{i=1}^{n} [EX_i]^{(j)}_α − [u]^{(j)}_α| ≤ ε/3 for all α ∈ (0; 1], whenever n ≥ n_1.

On the other hand,

sup_{α∈[0;1)} |(1/n) Σ_{i=1}^{n} [EX_i]^{(j)}_{α+} − [u]^{(j)}_{α+}| ≤ sup_{α∈(0;1]} |(1/n) Σ_{i=1}^{n} [EX_i]^{(j)}_α − [u]^{(j)}_α| → 0 as n → ∞,

so there also exists n_2 := n_2(ε) such that

|(1/n) Σ_{i=1}^{n} [EX_i]^{(j)}_{α+} − [u]^{(j)}_{α+}| ≤ ε/3 for all α ∈ [0; 1), whenever n ≥ n_2.
By Lemma 3.3 of [11] again, there exists a finite partition 0 = α_0 < α_1 < · · · < α_m = 1 of [0; 1] such that

max{ |[u]^{(1)}_{α_{k−1}+} − [u]^{(1)}_{α_k}| , |[u]^{(2)}_{α_{k−1}+} − [u]^{(2)}_{α_k}| } ≤ ε/3 for all k = 1, 2, . . . , m.

Thus, by the triangle inequality, we have for all n ≥ max{n_1, n_2} that

d_H([ES_n]_{α_{k−1}+}, [ES_n]_{α_k}) ≤ d_H([ES_n]_{α_{k−1}+}, n[u]_{α_{k−1}+}) + d_H(n[u]_{α_{k−1}+}, n[u]_{α_k}) + d_H([ES_n]_{α_k}, n[u]_{α_k}) ≤ nε

for all k = 1, 2, . . . , m, since each of the three terms is at most nε/3. Thus, {X_n, n ≥ 1} satisfies (4.4).
(3) Combining the two cases above, we have a more general condition: (4.4) holds whenever {n^{-1} ES_n, n ≥ 1} is a convergent sequence with respect to the metric D∞.
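The Dini-type step in part (2) — monotone functions converging pointwise to a continuous limit converge uniformly — can be illustrated numerically (a toy example of ours, not the paper's): take f_n(α) = α^{1+1/n} on [0; 1], which is monotone in α for each n and converges pointwise to the continuous limit f(α) = α.

```python
grid = [k / 1000 for k in range(1001)]

def sup_diff(n):
    # grid approximation of sup over [0, 1] of |f_n(alpha) - f(alpha)|,
    # with f_n(alpha) = alpha**(1 + 1/n) and f(alpha) = alpha
    return max(abs(a ** (1 + 1 / n) - a) for a in grid)

sups = [sup_diff(n) for n in (1, 10, 100, 1000)]
# the suprema shrink toward 0: the convergence is uniform, not just pointwise
```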
By a similar method as in Theorem 4.8, we also obtain the following theorem, which extends Theorem 1.1 of Chen et al. [3] to U-valued random variables.
Theorem 4.13. Let {X_n, n ≥ 1} be a sequence of level-wise END and level-wise identically distributed U-valued random variables. Then

D∞( (1/n) Σ_{i=1}^{n} X_i , u ) → 0 a.s. as n → ∞

for some u.s.c. function u if and only if E‖X_1‖_∞ < ∞, and in each case u = EX_1.

Acknowledgments
This work was completed while the authors were visiting the Vietnam Institute for Advanced Study
in Mathematics (VIASM). We would like to thank the VIASM for their support and hospitality.

References
[1] K. Alam, K.M.L. Saxena, Positive dependence in multivariate distributions, Comm. Statist. A-Theory Methods 10 (1981) 1183-1196.
[2] R. Burton, A.R. Dabrowski, H. Dehling, An invariance principle for weakly associated random vectors, Stoch. Process. Appl. 23 (1986) 301-306.
[3] Y. Chen, A. Chen, K.W. Ng, The strong law of large numbers for extended negatively dependent random variables, J. Appl. Prob. 47 (2010) 908-922.
[4] N. Etemadi, On the laws of large numbers for nonnegative random variables, J. Multivariate Anal. 13 (1983) 187-193.
[5] Y. Feng, An approach to generalize laws of large numbers for fuzzy random variables, Fuzzy Sets Syst. 128 (2002) 237-245.
[6] Y. Feng, L. Hu, H. Shu, The variance and covariance of fuzzy random variables and their applications, Fuzzy Sets Syst. 120 (2001) 487-497.
[7] K.A. Fu, A note on strong laws of large numbers for dependent random sets and fuzzy random sets, J. Inequal. Appl. (2010), Article ID 286862, 10 pages, DOI:10.1155/2010/286862.
[8] M.Yu. Gerasimov, The strong law of large numbers for pairwise negatively dependent random variables, Moscow Univ. Comput. Math. Cybernet. 33 (2009) 51-58.
[9] R. Goetschel, W. Voxman, Elementary fuzzy calculus, Fuzzy Sets Syst. 18 (1986) 31-43.
[10] K. Joag-Dev, F. Proschan, Negative association of random variables, with applications, Ann. Statist. 11 (1983) 286-295.
[11] S.Y. Joo, Y.K. Kim, Kolmogorov's strong law of large numbers for fuzzy random variables, Fuzzy Sets Syst. 120 (2001) 499-503.
[12] H.-C. Kim, The Hájek-Rényi inequality for weighted sums of negatively orthant dependent random variables, Int. J. Contemp. Math. Sci. 1 (2006) 297-303.
[13] Y.K. Kim, A strong law of large numbers for fuzzy random variables, Fuzzy Sets Syst. 111 (2000) 319-323.
[14] M.H. Ko, T.S. Kim, K.H. Han, A note on the almost sure convergence for dependent random variables in a Hilbert space, J. Theor. Probab. 22 (2009) 506-513.
[15] E. Lehmann, Some concepts of dependence, Ann. Math. Statist. 37 (1966) 1137-1153.
[16] L. Liu, Precise large deviations for dependent random variables with heavy tails, Statist. Probab. Lett. 79 (2009) 1290-1298.
[17] J. Liu, S. Gan, P. Chen, The Hájek-Rényi inequality for the NA random variables and its application, Statist. Probab. Lett. 43 (1999) 99-105.
[18] P. Matula, A note on the almost sure convergence of sums of negatively dependent random variables, Statist. Probab. Lett. 15 (1992) 209-213.
[19] I.S. Molchanov, Grey-scale images and random sets, in: Mathematical Morphology and Its Applications to Image and Signal Processing, Computational Imaging and Vision, Vol. 12, Kluwer, Dordrecht, 1998, pp. 247-257.
[20] I.S. Molchanov, On strong laws of large numbers for random upper semicontinuous functions, J. Math. Anal. Appl. 235 (1999) 349-355.
[21] N.V. Quang, D.X. Giap, SLLN for double array of mixing dependence random sets and fuzzy random sets in a separable Banach space, J. Convex Anal. 20 (4) (2013).
[22] N.V. Quang, N.T. Thuan, Strong laws of large numbers for adapted arrays of set-valued and fuzzy-valued random variables in Banach space, Fuzzy Sets Syst. 209 (2012) 14-32.
[23] B.L.S. Prakasa Rao, Associated Sequences, Demimartingales and Nonparametric Inference, Birkhäuser/Springer, Basel, 2012.
[24] Q.M. Shao, A comparison theorem on moment inequalities between negatively associated and independent random variables, J. Theoret. Probab. 13 (2000) 343-356.
[25] J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982.
[26] P. Terán, A strong law of large numbers for random upper semicontinuous functions under exchangeability conditions, Statist. Probab. Lett. 65 (2003) 251-258.