
ACTA MATHEMATICA VIETNAMICA
Volume 37, Number 1, 2012, pp. 95–107


A NEW EXTRAGRADIENT ITERATION ALGORITHM FOR
BILEVEL VARIATIONAL INEQUALITIES
PHAM NGOC ANH

Abstract. In this paper, we introduce an approximation extragradient iteration method for solving bilevel variational inequalities involving two variational inequalities, and we show that these problems can be solved by projection sequences and fixed point techniques. We obtain strong convergence of the three iteration sequences generated by this method in a real Hilbert space.

1. Introduction
Let H be a real Hilbert space with an inner product ⟨·, ·⟩ and the induced norm ‖·‖, and let C be a nonempty closed convex subset of H. We consider the bilevel variational inequalities (shortly BVI):

Find x* ∈ Sol(G, C) such that ⟨F(x*), x − x*⟩ ≥ 0 ∀x ∈ Sol(G, C),

where G : H → H, Sol(G, C) denotes the set of solutions of the following variational inequalities:

Find y* ∈ C such that ⟨G(y*), y − y*⟩ ≥ 0 ∀y ∈ C,

and F : C → H. We denote by Sol(BVI) the set of solutions of (BVI).
The problems (BVI) are also called quasivariational inequalities (see [8, 9, 10]). These problems are very interesting because they cover a class of mathematical programs with equilibrium constraints (see [12]), bilevel minimization problems (see [16]), and variational inequalities and complementarity problems (see [1, 2, 5, 7, 13]).
If F ≡ 0, then the bilevel variational inequalities (BVI) become the following variational inequalities, shortly VI(G, C):

Find x* ∈ C such that ⟨G(x*), x − x*⟩ ≥ 0 ∀x ∈ C.
Suppose that f : H → R. It is well-known in convex programming that if f is convex and differentiable on Sol(G, C), then x* is a solution to

min{f(x) | x ∈ Sol(G, C)}


Received November 19, 2010; in revised form July 7, 2011.
2010 Mathematics Subject Classification. 65K10, 90C25.
Key words and phrases. Bilevel variational inequalities, monotonicity, Lipschitz continuous,
extragradient algorithm.
This work is supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED).



if and only if x* is a solution to the variational inequalities VI(∇f, Sol(G, C)), where ∇f is the gradient of f. Then the bilevel variational inequalities (BVI) can be written in the form of a mathematical program with equilibrium constraints:

min f(x) subject to x ∈ {y* | ⟨G(y*), z − y*⟩ ≥ 0 ∀z ∈ C}.
If f, g are two convex and differentiable functions, then the problems (BVI) (where F := ∇f and G := ∇g) become the following bilevel minimization problem (see [16]):

min f(x) subject to x ∈ argmin{g(x) | x ∈ C}.
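As a small worked instance of this bilevel form (our own toy example, not from the paper), the lower-level minimizers can be non-unique, and the upper level then selects among them:

```latex
% Lower level: take C = \mathbb{R}^2 and g(x) = (x_1 + x_2 - 2)^2, so that
\operatorname{argmin}\{g(x) \mid x \in \mathbb{R}^2\}
  = \{x \in \mathbb{R}^2 \mid x_1 + x_2 = 2\}.
% Upper level: f(x) = \|x\|^2. Minimizing f over the line x_1 + x_2 = 2
% (i.e., projecting the origin onto it) gives the unique bilevel solution
x^* = (1, 1), \qquad f(x^*) = 2.
```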
In recent years, variational inequalities have become an attractive field for many researchers and have many important applications in electricity markets, transportation, economics, and nonlinear analysis (see [6, 9, 19]). Methods for solving variational inequalities have been studied extensively. The extragradient algorithm
for solving the variational inequalities V I(G, C) was introduced by Korpelevich
in [11], where the iteration sequence {x^k} is defined by

x^0 ∈ C,
y^k = Pr_C(x^k − c_k G(x^k)),
x^{k+1} = Pr_C(x^k − c_k G(y^k)),

and was extended by many other authors (see [5, 9, 14, 18]). One of the main conditions ensuring the convergence of this method is that the cost mapping enjoys the Lipschitzian continuity property. However, such a condition is rather restrictive. In order to avoid it, an Armijo-backtracking linesearch has been used to construct a hyperplane separating x^k from the solution set; the new iterate x^{k+1} is then the projection of x^k onto this hyperplane. Recently, Anh and Kuno in [4] extended these results to generalized monotone nonlipschitzian multivalued variational inequalities. Precisely, the authors first used an interior proximal function to develop a convergent algorithm for the multivalued variational inequalities VI(F, C), where F is a generalized monotone multifunction. Next, the authors constructed an appropriate hyperplane which separates the current iterate from the solution set. The next iterate is then the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set.
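As a quick illustration of the extragradient scheme above (ours, not from the paper), the following sketch runs Korpelevich's two projection steps on a toy instance: C is the Euclidean unit ball (so Pr_C has a closed form), G(x) = Ax is monotone and Lipschitz with constant L = ‖A‖, and the step size c_k ≡ c is a constant below 1/L. All names here are our own.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Euclidean projection onto the closed ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def extragradient(G, x0, c, iters=500):
    """Korpelevich's extragradient method for VI(G, C), C the unit ball."""
    x = x0
    for _ in range(iters):
        y = proj_ball(x - c * G(x))   # prediction step
        x = proj_ball(x - c * G(y))   # correction step
    return x

# G(x) = A x with symmetric part I: monotone, Lipschitz with L = ||A|| = sqrt(5);
# the solution of VI(G, C) is x* = 0, since G(0) = 0 and 0 lies in C.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
x = extragradient(lambda v: A @ v, np.array([0.9, -0.4]), c=0.3)
# x is now (numerically) the solution x* = 0
```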
Note that since the constraint set Sol(G, C), being the solution set of the problem VI(G, C), is not given explicitly, the existing algorithms for variational inequalities cannot be applied directly, because the subproblems cannot be implemented by the available algorithms of convex programming. In this paper we extend the results of [3] to the bilevel variational inequalities (BVI), but in a real Hilbert space. We are interested in finding a solution to the bilevel variational inequalities (BVI) where the functions F and G satisfy the following usual conditions:



(A1) G is monotone on C and F is β-strongly monotone on C,
(A2) F is L1-Lipschitz continuous on C,
(A3) G is L2-Lipschitz continuous on C,
(A4) the solution set of (BVI), denoted by Sol(BVI), is nonempty.
In the next section, we give a new approximation extragradient algorithm for solving problems (BVI).
2. Preliminaries
We list some known definitions and properties of the projection under the
Euclidean norm which will be required in our following analysis.
Definition 2.1. Let C be a nonempty closed convex subset of a real Hilbert space H. We denote the projection on C by Pr_C(·), with

Pr_C(x) = {y ∈ C | ‖y − x‖ = min_{v∈C} ‖v − x‖} ∀x ∈ H.

The function ϕ : C → H is said to be
(i) γ-strongly monotone on C if for any x, y ∈ C, we have ⟨ϕ(x) − ϕ(y), x − y⟩ ≥ γ‖x − y‖²;
(ii) monotone on C if for any x, y ∈ C, we have ⟨ϕ(x) − ϕ(y), x − y⟩ ≥ 0;
(iii) Lipschitz on C with constant L > 0 (shortly L-Lipschitz) if for any x, y ∈ C, we have ‖ϕ(x) − ϕ(y)‖ ≤ L‖x − y‖.
If ϕ : C → C and L = 1, then ϕ is called nonexpansive on C.
The projection Pr_C(·) has the following basic properties:
(Proj1) ‖Pr_C(x) − Pr_C(y)‖ ≤ ‖x − y‖ ∀x, y ∈ H.
(Proj2) ‖Pr_C(x) − Pr_C(y)‖² ≤ ⟨Pr_C(x) − Pr_C(y), x − y⟩ ∀x, y ∈ H.
(Proj3) ⟨x − Pr_C(x), y − Pr_C(x)⟩ ≤ 0 ∀y ∈ C, x ∈ H.
(Proj4) ‖Pr_C(x) − y‖² ≤ ‖x − y‖² − ‖Pr_C(x) − x‖² ∀y ∈ C, x ∈ H.
(Proj5) ‖Pr_C(x) − Pr_C(y)‖² ≤ ‖x − y‖² − ‖Pr_C(x) − x + y − Pr_C(y)‖² ∀x, y ∈ H.
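These properties are easy to check numerically. The sketch below (our own sanity check, not part of the paper) verifies the nonexpansiveness (Proj1) and the inequality (Proj4) for the projection onto a closed ball, for which Pr_C has the closed form used here:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection onto the closed ball {v : ||v|| <= r}."""
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

rng = np.random.default_rng(0)
for _ in range(100):
    x, u = rng.normal(size=2), rng.normal(size=2)
    y = proj_ball(rng.normal(size=2))          # a point of C = unit ball
    px, pu = proj_ball(x), proj_ball(u)
    # (Proj1): nonexpansiveness of the projection
    assert np.linalg.norm(px - pu) <= np.linalg.norm(x - u) + 1e-12
    # (Proj4): ||Pr(x) - y||^2 <= ||x - y||^2 - ||Pr(x) - x||^2 for y in C
    assert (np.linalg.norm(px - y) ** 2
            <= np.linalg.norm(x - y) ** 2 - np.linalg.norm(px - x) ** 2 + 1e-9)
```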
Now we are in a position to propose a new extragradient-type algorithm for
(BV I).
Algorithm 2.2. Initialization. Choose k = 0, x^0 ∈ H, 0 < λ ≤ 2β/L1², and positive sequences {ε_k}, {β_k}, {γ_k}, {δ_k}, {λ_k}, {α_k} and {ε̄_k} such that

{α_k} ⊂ [m, n] for some m, n ∈ (0, 1), λ_k ≤ 1/L2 ∀k ≥ 0,
∑_{k=0}^∞ ε̄_k < ∞, 0 < lim inf_{k→∞} β_k < lim sup_{k→∞} β_k < 1, lim_{k→∞} δ_k = 0,
ε_k + β_k + γ_k = 1 ∀k ≥ 0, lim_{k→∞} ε_k = 0, ∑_{k=0}^∞ ε_k = ∞.


Step 1. If x^k ∈ Sol(BVI), then stop. Otherwise, compute

y^k = Pr_C(x^k − λ_k G(x^k)) and z^k = Pr_C(x^k − λ_k G(y^k)).

Step 2. Inner iterations j = 0, 1, .... Compute

x^{k,0} = z^k − λF(z^k),
y^{k,j} = Pr_C(x^{k,j} − δ_j G(x^{k,j})),
x^{k,j+1} = ε_j x^{k,0} + β_j x^{k,j} + γ_j Pr_C(x^{k,j} − δ_j G(y^{k,j})).

Find h^k such that ‖h^k − lim_{j→∞} x^{k,j}‖ ≤ ε̄_k and set x^{k+1} = α_k x^k + (1 − α_k)h^k.

Step 3. Increase k by 1 and go to Step 1.
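The two-level structure of Algorithm 2.2 can be sketched as follows. This is our own illustration with hypothetical parameter choices, not the authors' code: C is a box so that Pr_C is componentwise clipping, the inner loop is truncated at a fixed j_max instead of being run to its limit (so the last inner iterate plays the role of h^k within the tolerance ε̄_k), and the stopping test of Step 1 is replaced by a fixed outer iteration budget. The toy data are chosen so that Sol(G, C) = {0} × [−1, 1] and the upper-level map F selects the point of Sol(G, C) closest to (0.7, 0.7), i.e. x* = (0, 0.7).

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Pr_C for the box C = [lo, hi]^2: componentwise clipping."""
    return np.clip(x, lo, hi)

G = lambda x: np.array([x[0], 0.0])       # monotone, 1-Lipschitz lower-level map
F = lambda x: x - np.array([0.7, 0.7])    # 1-strongly monotone, 1-Lipschitz

lam, lam_k, alpha_k = 0.5, 0.5, 0.5       # lam <= 2*beta/L1^2, lam_k <= 1/L2

def inner(z, j_max=200):
    """Step 2: approximate Pr_{Sol(G,C)}(z - lam*F(z)) by the inner scheme."""
    x0 = z - lam * F(z)                   # anchor point x^{k,0}
    x = x0.copy()
    for j in range(j_max):
        eps, beta = 1.0 / (j + 2), 0.5    # eps_j -> 0, sum eps_j = infinity
        gamma = 1.0 - eps - beta
        delta = 1.0 / np.sqrt(j + 2)      # delta_j -> 0
        y = proj_box(x - delta * G(x))
        x = eps * x0 + beta * x + gamma * proj_box(x - delta * G(y))
    return x                              # plays the role of h^k

x = np.array([0.9, -0.4])
for k in range(50):                       # Step 1's membership test replaced
    y = proj_box(x - lam_k * G(x))        # by a fixed number of outer steps
    z = proj_box(x - lam_k * G(y))
    h = inner(z)
    x = alpha_k * x + (1 - alpha_k) * h   # Step 2's averaging update
# x is now close to the bilevel solution (0, 0.7)
```

With the truncated inner loop, the first coordinate retains a small residual (the inner Halpern-type anchoring converges slowly), while the second coordinate converges geometrically to 0.7.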
Remark 2.3. If x^{k+1} = α_k x^k + (1 − α_k)h^k is replaced by x^{k+1} = ᾱ_k u + β̄_k x^k + γ̄_k h^k, where ᾱ_k, β̄_k, γ̄_k ∈ [0, 1] for all k ≥ 0, u ∈ R^n and ᾱ_k + β̄_k + γ̄_k = 1, then Algorithm 2.2 becomes Algorithm 2.1 in R^n proposed by Anh et al. in [3]. Using this fixed point technique allows us to extend the result from the finite-dimensional space R^n to a real Hilbert space H.
Remark 2.4. Suppose that α_k = δ_k = λ = 0. Then we can choose h^k = z^k, and it is easy to see that the sequence {x^k} in Algorithm 2.2 is the well-known extragradient iteration sequence first introduced by Korpelevich in [11].
3. Convergence results
Let C be a nonempty closed convex subset of H, G : H → H be monotone and L2-Lipschitz on C, and S : C → C be a nonexpansive mapping such that Sol(G, C) ∩ Fix(S) ≠ ∅, where Fix(S) := {x ∈ C | S(x) = x} is the set of fixed points of S. Let the sequences {x^k} and {y^k} be generated by

x^0 ∈ H,
y^k = Pr_C(x^k − δ_k G(x^k)),
x^{k+1} = ε_k x^0 + β_k x^k + γ_k S Pr_C(x^k − δ_k G(y^k)) ∀k ≥ 0,

where {ε_k}, {β_k}, {γ_k} and {δ_k} satisfy the following conditions:

δ_k > 0 ∀k ≥ 0, lim_{k→∞} δ_k = 0,
ε_k + β_k + γ_k = 1 ∀k ≥ 0,
∑_{k=1}^∞ ε_k = ∞, lim_{k→∞} ε_k = 0,
0 < lim inf_{k→∞} β_k < lim sup_{k→∞} β_k < 1.

Under these conditions, Yao et al. showed in [18] that the sequences {x^k} and {y^k} converge strongly to the same point Pr_{Sol(G,C)∩Fix(S)}(x^0).

Applying these iteration sequences with S being the identity mapping, we have the following lemma.



Lemma 3.1. Suppose that Assumptions (A1)-(A4) hold. Then the sequence {x^{k,j}} generated by Algorithm 2.2 converges strongly to the point Pr_{Sol(G,C)}(z^k − λF(z^k)) as j → ∞. Consequently, we have

‖h^k − Pr_{Sol(G,C)}(z^k − λF(z^k))‖ ≤ ε̄_k ∀k ≥ 0.

Lemma 3.2. Let the sequences {x^k} and {z^k} be generated by Algorithm 2.2, G be L2-Lipschitz and monotone on C, and x* ∈ Sol(G, C). Then, we have

(3.1) ‖z^k − x*‖² ≤ ‖x^k − x*‖² − (1 − λ_k L2)‖x^k − y^k‖² − (1 − λ_k L2)‖y^k − z^k‖².

Proof. Let x* be a solution to problems VI(G, C), i.e., x* ∈ C and

⟨G(x*), x − x*⟩ ≥ 0 ∀x ∈ C.

Then, for each λ_k > 0, x* is a fixed point of the mapping T(x) = Pr_C(x − λ_k G(x)) on C (see [9]), i.e.,

x* = Pr_C(x* − λ_k G(x*)).
Substituting x by x^k − λ_k G(y^k) and y by x* into (Proj4), we get

‖z^k − x*‖² ≤ ‖x^k − λ_k G(y^k) − x*‖² − ‖x^k − λ_k G(y^k) − z^k‖²
= ‖x^k − x*‖² − 2λ_k⟨G(y^k), x^k − x*⟩ + λ_k²‖G(y^k)‖² − ‖x^k − z^k‖² + 2λ_k⟨G(y^k), x^k − z^k⟩ − λ_k²‖G(y^k)‖²
= ‖x^k − x*‖² − ‖x^k − z^k‖² + 2λ_k⟨G(y^k), x* − z^k⟩
= ‖x^k − x*‖² − ‖x^k − z^k‖² + 2λ_k⟨G(y^k) − G(x*), x* − y^k⟩ + 2λ_k⟨G(x*), x* − y^k⟩ + 2λ_k⟨G(y^k), y^k − z^k⟩
(3.2) ≤ ‖x^k − x*‖² − ‖x^k − z^k‖² + 2λ_k⟨G(y^k), y^k − z^k⟩.

The last inequality holds because y^k ∈ C, x* ∈ Sol(G, C) and G is monotone on C.
Substituting x by x^k − λ_k G(x^k) and y by z^k into (Proj3), we have

⟨x^k − λ_k G(x^k) − y^k, z^k − y^k⟩ ≤ 0.



Combining this with (3.2) and the Lipschitzian continuity of G on C with constant L2, we obtain

‖z^k − x*‖² ≤ ‖x^k − x*‖² − ‖(x^k − y^k) + (y^k − z^k)‖² + 2λ_k⟨G(y^k), y^k − z^k⟩
= ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² − 2⟨x^k − y^k, y^k − z^k⟩ + 2λ_k⟨G(y^k), y^k − z^k⟩
= ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² − 2⟨x^k − λ_k G(y^k) − y^k, y^k − z^k⟩
= ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² − 2⟨x^k − λ_k G(x^k) − y^k, y^k − z^k⟩ + 2λ_k⟨G(x^k) − G(y^k), z^k − y^k⟩
≤ ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² + 2λ_k⟨G(x^k) − G(y^k), z^k − y^k⟩
≤ ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² + 2λ_k‖G(x^k) − G(y^k)‖ ‖z^k − y^k‖
≤ ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² + 2λ_k L2 ‖x^k − y^k‖ ‖z^k − y^k‖
≤ ‖x^k − x*‖² − ‖x^k − y^k‖² − ‖y^k − z^k‖² + λ_k L2 (‖x^k − y^k‖² + ‖z^k − y^k‖²)
= ‖x^k − x*‖² − (1 − λ_k L2)‖x^k − y^k‖² − (1 − λ_k L2)‖y^k − z^k‖².

This implies (3.1).
Lemma 3.3. Suppose that Assumptions (A1)-(A4) hold. Then the sequence {x^k} generated by Algorithm 2.2 is bounded.
Proof. Suppose that x* is a solution to problems (BVI), i.e.,

⟨F(x*), x − x*⟩ ≥ 0 ∀x ∈ Sol(G, C).

Then we have

x* = Pr_{Sol(G,C)}(x* − λF(x*)).

It follows from (Proj1), the β-strong monotonicity and L1-Lipschitz continuity of F, and 0 < λ ≤ 2β/L1² that

‖Pr_{Sol(G,C)}(z^k − λF(z^k)) − x*‖² = ‖Pr_{Sol(G,C)}(z^k − λF(z^k)) − Pr_{Sol(G,C)}(x* − λF(x*))‖²
≤ ‖z^k − λF(z^k) − x* + λF(x*)‖²
= ‖z^k − x*‖² − 2λ⟨F(z^k) − F(x*), z^k − x*⟩ + λ²‖F(z^k) − F(x*)‖²
≤ (1 − 2βλ + λ²L1²)‖z^k − x*‖²
(3.3) ≤ ‖z^k − x*‖².
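Numerically, the mechanism behind (3.3) is that the factor 1 − 2βλ + λ²L1² lies in [0, 1) whenever 0 < λ < 2β/L1², so z ↦ Pr_{Sol(G,C)}(z − λF(z)) shrinks distances to x*. The sketch below (our own toy check, not from the paper) verifies this for F(z) = z − a (β = L1 = 1), with Sol(G, C) taken to be the x-axis so that its projection has a closed form:

```python
import numpy as np

# Toy instance (our choice): F(z) = z - a is beta-strongly monotone and
# L1-Lipschitz with beta = L1 = 1; Sol(G, C) is taken to be the x-axis.
beta, L1 = 1.0, 1.0
a = np.array([0.3, 0.5])
F = lambda z: z - a
proj_sol = lambda u: np.array([u[0], 0.0])   # Pr onto the x-axis
lam = 0.8                                    # 0 < lam <= 2*beta/L1**2 = 2
T = lambda z: proj_sol(z - lam * F(z))
x_star = np.array([0.3, 0.0])                # the fixed point of T

q = np.sqrt(1 - 2 * beta * lam + lam**2 * L1**2)  # contraction factor (= 0.2)
rng = np.random.default_rng(1)
for _ in range(100):
    z = rng.normal(size=2)
    # the distance to x_star shrinks at least by the factor q, as (3.3) predicts
    assert np.linalg.norm(T(z) - x_star) <= q * np.linalg.norm(z - x_star) + 1e-12
```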


It follows from λ_k ≤ 1/L2 and (3.1) that ‖z^k − x*‖ ≤ ‖x^k − x*‖. Combining this with (3.3) and the assumptions 0 < λ ≤ 2β/L1² and ∑_{k=0}^∞ ε̄_k < +∞, we have

‖x^{k+1} − x*‖ = ‖α_k x^k + (1 − α_k)h^k − x*‖
≤ α_k‖x^k − x*‖ + (1 − α_k)‖h^k − x*‖
≤ α_k‖x^k − x*‖ + (1 − α_k)‖h^k − Pr_{Sol(G,C)}(z^k − λF(z^k))‖ + (1 − α_k)‖Pr_{Sol(G,C)}(z^k − λF(z^k)) − x*‖
≤ α_k‖x^k − x*‖ + (1 − α_k)ε̄_k + (1 − α_k)‖z^k − x*‖
≤ α_k‖x^k − x*‖ + (1 − α_k)ε̄_k + (1 − α_k)‖x^k − x*‖
= ‖x^k − x*‖ + (1 − α_k)ε̄_k
(3.4) < ‖x^k − x*‖ + ε̄_k
≤ ‖x^0 − x*‖ + ∑_{k=0}^{+∞} ε̄_k < +∞.

Therefore, the sequence {x^k} is bounded.



Lemma 3.4 (see [9]). Let {a_k} and {b_k} be two positive real sequences such that

a_{k+1} ≤ a_k + b_k ∀k ≥ 0 and ∑_{k=0}^∞ b_k < +∞.

Then there exists lim_{k→∞} a_k = c.
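A quick numerical illustration of Lemma 3.4 (ours, not from the paper): in the extreme case a_{k+1} = a_k + b_k with the summable sequence b_k = 2^{-k}, the sequence {a_k} converges to a_0 + ∑_{k=0}^∞ b_k = a_0 + 2.

```python
# Extreme case of Lemma 3.4: a_{k+1} = a_k + b_k with b_k = 2^{-k},
# which is summable (sum = 2), so a_k must converge, here to a_0 + 2.
a = 1.0
for k in range(60):
    a += 2.0 ** -k          # b_k = 2^{-k}
# a is now 3.0 up to rounding: the limit c = a_0 + 2 exists
```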

Lemma 3.5. Suppose that Assumptions (A1)-(A4) hold and the sequences {x^k} and {z^k} are generated by Algorithm 2.2. Then, we have

(3.5) ‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² + 2(1 − α_k)ε̄_k‖z^k − x*‖ + (1 − α_k)ε̄_k² − (1 − α_k)(1 − λ_k L2)‖x^k − y^k‖² − (1 − α_k)(1 − λ_k L2)‖y^k − z^k‖².

Consequently, we have

lim_{k→∞} ‖x^k − y^k‖ = lim_{k→∞} ‖y^k − z^k‖ = lim_{k→∞} ‖x^k − z^k‖ = 0.

Proof. For each k ≥ 0, Lemma 3.1 shows that there exists

lim_{j→∞} x^{k,j} = Pr_{Sol(G,C)}(z^k − λF(z^k)).




Combining this with 0 < λ ≤
k ≥ 0 we have


,
L21

xk+1 − x∗

2

(3.1), Lemma 3.1 and x∗ ∈ Sol(BV I), for

= αk xk + (1 − αk )hk − x∗

2

≤αk xk − x∗

2

+ (1 − αk ) hk − x∗

≤αk xk − x∗

2

+ (1 − αk )

=αk xk − x∗


2

+ (1 − αk )

2
2

P rSol(G,C) z k − λF (z k ) − x∗ + ¯k

× { P rSol(G,C) z k − λF (z k ) − P rSol(G,C) x∗ − λF (x∗ )
≤αk xk − x∗

2

+ (1 − αk )

≤αk xk − x∗

2

+ (1 − αk )

=αk xk − x∗

2

+ (1 − αk ) z k − x∗

1 − 2ηλ + λ2 L21 z k − x∗ + ¯k

z k − x∗ + ¯k

+ ¯k }2
2

2

2

+ 2(1 − αk )¯k z k − x∗ + (1 − αk )¯2k
2

≤αk xk − x∗
×

xk − x∗

= xk − x∗

2

+ 2(1 − αk )¯k z k − x∗ + (1 − αk )¯2k + (1 − αk )
2

− (1 − λk L2 ) xk − y k

2

− (1 − λk L2 ) y k − z k


2

+ 2(1 − αk )¯k z k − x∗ + (1 − αk )¯2k

− (1 − αk )(1 − λk L2 ) xk − y k

2

− (1 − αk )(1 − λk L2 ) y k − z k

2

.

This implies (3.5). It follows from (3.4) that

‖x^{k+1} − x*‖ ≤ ‖x^k − x*‖ + ε̄_k.

Combining this with ∑_{k=0}^∞ ε̄_k < +∞ and Lemma 3.4, there exists

(3.6) lim_{k→∞} ‖x^k − x*‖ = c.

Hence by (3.5), we have ‖x^k − y^k‖ → 0 as k → ∞. Since λ_k ≤ 1/L2 and {α_k} ⊂ [m, n] for some m, n ∈ (0, 1), from (3.5) and (3.6) we obtain

(1 − α_k)(1 − λ_k L2)‖z^k − y^k‖² ≤ ‖x^k − x*‖² + 2(1 − α_k)ε̄_k‖z^k − x*‖ + (1 − α_k)ε̄_k² − ‖x^{k+1} − x*‖²,

and hence ‖z^k − y^k‖ → 0 as k → ∞. Consequently,

‖x^k − z^k‖ ≤ ‖x^k − y^k‖ + ‖y^k − z^k‖ ⇒ lim_{k→∞} ‖x^k − z^k‖ = 0.


Lemma 3.6 (see [15]). Let H be a real Hilbert space, {α_k} be a sequence of real numbers such that 0 < a ≤ α_k ≤ b < 1 for all k ≥ 0, and let {x^k}, {y^k} be two sequences in H such that

lim sup_{k→∞} ‖x^k‖ ≤ c,
lim sup_{k→∞} ‖y^k‖ ≤ c,
lim_{k→∞} ‖α_k x^k + (1 − α_k)y^k‖ = c.

Then lim_{k→∞} ‖x^k − y^k‖ = 0.

Lemma 3.7 (see [17]). Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let {x^k} be a sequence in H. Suppose that, for all x* ∈ C,

‖x^{k+1} − x*‖ ≤ ‖x^k − x*‖ ∀k ≥ 0.

Then the sequence {Pr_C(x^k)} converges strongly to some x̄ ∈ C.
Theorem 3.8. Suppose that Assumptions (A1)-(A4) hold. Then the three sequences {x^k}, {y^k} and {z^k} generated by Algorithm 2.2 converge strongly to a solution x* of problems (BVI). Moreover, we have

x* = lim_{k→∞} Pr_{Sol(G,C)}(x^k).


Proof. It follows from (3.1), (3.3) and (3.6) that

lim sup_{k→∞} ‖h^k − x*‖ ≤ lim sup_{k→∞} {‖h^k − Pr_{Sol(G,C)}(z^k − λF(z^k))‖ + ‖Pr_{Sol(G,C)}(z^k − λF(z^k)) − x*‖}
≤ lim sup_{k→∞} {ε̄_k + ‖z^k − x*‖}
≤ lim sup_{k→∞} {ε̄_k + ‖x^k − x*‖}
(3.7) = c.

Using x^{k+1} = α_k x^k + (1 − α_k)h^k and {α_k} ⊂ [m, n] ⊂ (0, 1), we have

(3.8) lim_{k→∞} ‖α_k(x^k − x*) + (1 − α_k)(h^k − x*)‖ = lim_{k→∞} ‖x^{k+1} − x*‖ = c.

Combining Lemma 3.6, (3.7) and (3.8), we have

lim_{k→∞} ‖h^k − x^k‖ = 0.

Consequently, we get

(3.9) lim_{k→∞} ‖x^{k+1} − x^k‖ = lim_{k→∞} (1 − α_k)‖h^k − x^k‖ = 0.



From (Proj1), it follows that

‖Pr_{Sol(G,C)}(y^k − λF(y^k)) − x^{k+1}‖
≤ ‖Pr_{Sol(G,C)}(y^k − λF(y^k)) − Pr_{Sol(G,C)}(z^k − λF(z^k))‖ + ‖Pr_{Sol(G,C)}(z^k − λF(z^k)) − h^k‖ + ‖h^k − x^{k+1}‖
≤ (1 + λL1)‖y^k − z^k‖ + ε̄_k + ‖h^k − x^{k+1}‖
= (1 + λL1)‖y^k − z^k‖ + ε̄_k + (α_k/(1 − α_k))‖x^k − x^{k+1}‖.

Then, we have

‖Pr_{Sol(G,C)}(x^k − λF(x^k)) − x^k‖
≤ ‖Pr_{Sol(G,C)}(x^k − λF(x^k)) − Pr_{Sol(G,C)}(z^k − λF(z^k))‖ + ‖Pr_{Sol(G,C)}(y^k − λF(y^k)) − Pr_{Sol(G,C)}(z^k − λF(z^k))‖ + ‖Pr_{Sol(G,C)}(y^k − λF(y^k)) − x^{k+1}‖ + ‖x^{k+1} − x^k‖
≤ (1 + λL1)‖x^k − z^k‖ + (1 + λL1)‖y^k − z^k‖ + ‖x^{k+1} − x^k‖ + ‖Pr_{Sol(G,C)}(y^k − λF(y^k)) − x^{k+1}‖
≤ (1 + λL1)‖x^k − z^k‖ + (1 + λL1)‖y^k − z^k‖ + ‖x^{k+1} − x^k‖ + (1 + λL1)‖y^k − z^k‖ + ε̄_k + (α_k/(1 − α_k))‖x^k − x^{k+1}‖
(3.10) ≤ (1 + λL1)‖x^k − z^k‖ + 2(1 + λL1)‖y^k − z^k‖ + ε̄_k + (1/(1 − α_k))‖x^k − x^{k+1}‖.

It follows from (3.9), (3.10) and Lemma 3.5 that

(3.11) lim_{k→∞} ‖Pr_{Sol(G,C)}(x^k − λF(x^k)) − x^k‖ = 0.

Lemma 3.3 shows that the sequence {x^k} is bounded. Then, there exists M > 0 such that

(3.12) ‖Pr_{Sol(G,C)}(x^k − λF(x^k)) − x*‖ ≤ M ∀k ≥ 0.

Since (Proj1) holds and F is β-strongly monotone and L1-Lipschitz continuous, we have

‖Pr_{Sol(G,C)}(x^k − λF(x^k)) − x*‖² = ‖Pr_{Sol(G,C)}(x^k − λF(x^k)) − Pr_{Sol(G,C)}(x* − λF(x*))‖²
≤ ‖x^k − λF(x^k) − (x* − λF(x*))‖²
= ‖x^k − x*‖² − 2λ⟨F(x^k) − F(x*), x^k − x*⟩ + λ²‖F(x^k) − F(x*)‖²
≤ ‖x^k − x*‖² − 2λβ‖x^k − x*‖² + λ²L1²‖x^k − x*‖².



Combining this and (3.12), we have

‖x^k − x*‖² = ‖x^k − Pr_{Sol(G,C)}(x^k − λF(x^k))‖² + ‖x* − Pr_{Sol(G,C)}(x^k − λF(x^k))‖² + 2⟨x^k − Pr_{Sol(G,C)}(x^k − λF(x^k)), Pr_{Sol(G,C)}(x^k − λF(x^k)) − x*⟩
(3.13) ≤ ‖x^k − Pr_{Sol(G,C)}(x^k − λF(x^k))‖² + 2M‖x^k − Pr_{Sol(G,C)}(x^k − λF(x^k))‖ + ‖x^k − x*‖² − 2λβ‖x^k − x*‖² + λ²L1²‖x^k − x*‖².

Using this, (3.11) and λ < 2β/L1², we get

λ(2β − λL1²)‖x^k − x*‖² ≤ ‖x^k − Pr_{Sol(G,C)}(x^k − λF(x^k))‖² + 2M‖x^k − Pr_{Sol(G,C)}(x^k − λF(x^k))‖ → 0 as k → ∞.

Thus, the sequence {x^k} converges strongly to x* ∈ Sol(BVI). Then Lemma 3.5 implies that the sequences {x^k}, {y^k} and {z^k} must converge strongly to the unique solution x* of problems (BVI).
Now, we set t^k = Pr_{Sol(G,C)}(x^k). Then, since x^k → x* as k → ∞, it follows from (Proj3) and x* ∈ Sol(G, C) that

⟨x* − t^k, t^k − x^k⟩ ≥ 0.

By Lemma 3.7 and (3.1), {t^k} converges strongly to some x̄ ∈ Sol(G, C). Therefore, we have

lim_{k→∞} ⟨x* − t^k, t^k − x^k⟩ ≥ 0 ⇒ ⟨x* − x̄, x̄ − x*⟩ ≥ 0,

and hence x* ≡ x̄. Thus the sequences {x^k}, {y^k} and {z^k} converge strongly to x*, where

x* = lim_{k→∞} Pr_{Sol(G,C)}(x^k).

As a direct consequence of Theorem 3.8, we obtain the following corollary.
Corollary 3.9. Let C be a nonempty closed convex subset of H, and let G : H → H be monotone and L-Lipschitz continuous. Let {x^k} and {y^k} be the sequences generated by

x^0 ∈ H,
y^k = Pr_C(x^k − λ_k G(x^k)),
x^{k+1} = α_k x^k + (1 − α_k) Pr_C(x^k − λ_k G(y^k)) ∀k ≥ 0,

where {α_k} and {λ_k} satisfy the following conditions:

0 < λ_k ≤ 1/L ∀k ≥ 0, ∑_{k=1}^∞ α_k = ∞, lim_{k→∞} α_k = 0.

Then {x^k} and {y^k} converge strongly to the same x̄ ∈ Sol(G, C).
Acknowledgments
The author would like to thank the referees for their useful comments, remarks
and suggestions.
References
[1] P. N. Anh, An interior-quadratic proximal method for solving monotone generalized variational inequalities, East-West J. Math. 10 (2008), 81-100.
[2] P. N. Anh, An interior proximal method for solving pseudomonotone nonlipschitzian multivalued variational inequalities, Nonlinear Anal. Forum 14 (2009), 27-42.
[3] P. N. Anh, J. K. Kim and L. D. Muu, An extragradient algorithm for solving bilevel variational inequalities, J. Global Optim. 52 (2012), 627-639.
[4] P. N. Anh and T. Kuno, A cutting hyperplane method for generalized monotone nonlipschitzian multivalued variational inequalities, in: Modeling, Simulation and Optimization of Complex Processes, Eds: H. G. Bock, H. X. Phu, R. Rannacher and J. P. Schloder, Springer, 2012.
[5] P. N. Anh, L. D. Muu and J. J. Strodiot, Generalized projection method for non-Lipschitz multivalued monotone variational inequalities, Acta Math. Vietnam. 34 (2009), 67-79.
[6] P. N. Anh, L. D. Muu, V. H. Nguyen and J. J. Strodiot, Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities, J. Optim. Theory Appl. 124 (2005), 285-306.
[7] T. Q. Bao and P. Q. Khanh, A projection-type algorithm for pseudomonotone nonlipschitzian multivalued variational inequalities, in: Generalized Convexity, Generalized Monotonicity and Applications, Nonconvex Optim. Appl. 77, Springer, New York, 2005, 113-129.
[8] T. Q. Bao and P. Q. Khanh, Some algorithms for solving mixed variational inequalities, Acta Math. Vietnam. 31 (2006), 83-103.
[9] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York, 2003.
[10] F. Giannessi, A. Maugeri and P. M. Pardalos, Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models, Kluwer, 2004.
[11] G. M. Korpelevich, Extragradient method for finding saddle points and other problems, Ekonomika i Matematicheskie Metody 12 (1976), 747-756.
[12] Z. Q. Luo, J. S. Pang and D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, Cambridge, 1996.
[13] A. Moudafi, Proximal methods for a class of bilevel monotone equilibrium programs, J. Global Optim. 47 (2010), 287-292.
[14] N. Nadezhkina and W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl. 128 (2006), 191-201.
[15] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, Bull. Austral. Math. Soc. 43 (1991), 153-159.
[16] M. Solodov, An explicit descent method for bilevel convex optimization, J. Convex Anal. 14 (2007), 227-237.
[17] W. Takahashi and M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl. 118 (2003), 417-428.
[18] Y. Yao, Y. C. Liou and J. C. Yao, An extragradient method for fixed point problems and variational inequality problems, J. Inequal. Appl. (2007), Article ID 38752, 12 pages, doi:10.1155/2007/38752.
[19] L. C. Zeng and J. C. Yao, Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems, Taiwanese J. Math. 10 (2006), 1293-1303.

Department of Scientific Fundamentals
Posts and Telecommunications Institute of Technology, Hanoi, Vietnam.
E-mail address:


