
Hindawi Publishing Corporation
Journal of Inequalities and Applications
Volume 2007, Article ID 19270, 14 pages
doi:10.1155/2007/19270
Research Article
Hybrid Steepest Descent Method with Variable Parameters for
General Variational Inequalities
Yanrong Yu and Rudong Chen
Received 16 April 2007; Accepted 2 August 2007
Recommended by Yeol Je Cho
We study the strong convergence of a hybrid steepest descent method with variable pa-
rameters for the general variational inequality GVI(F,g,C). Consequently, as an applica-
tion, we obtain some results concerning the constrained generalized pseudoinverse. Our
results extend and improve the result of Yao and Noor (2007) and many others.
Copyright © 2007 Y. Yu and R. Chen. This is an open access article distributed under the
Creative Commons Attribution License, which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Let $F : H \to H$ be an operator such that, for some constants $k, \eta > 0$, $F$ is $k$-Lipschitzian and $\eta$-strongly monotone on $C$; that is, $F$ satisfies the following inequalities:
$$\|Fx - Fy\| \le k\|x - y\|, \qquad \langle Fx - Fy,\, x - y\rangle \ge \eta\|x - y\|^2$$
for all $x, y \in C$, respectively. Recall that $T$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$.
We consider the following variational inequality problem: find a point $u^* \in C$ such that
$$\mathrm{VI}(F,C):\quad \bigl\langle F(u^*),\, v - u^*\bigr\rangle \ge 0, \quad \forall v \in C. \tag{1.1}$$
Variational inequalities were introduced and studied by Stampacchia [1] in 1964. It is now
well known that a wide class of problems arising in various branches of pure and applied
sciences can be studied in the general and unified framework of variational inequalities.
Several numerical methods, including the projection method and its variant forms, Wiener-Hopf equations, the auxiliary principle, and descent-type methods, have been developed for solving variational inequalities and related optimization problems. The reader is referred to [1–18]
and the references therein.
It is well known that when $F$ is strongly monotone on $C$, the $\mathrm{VI}(F,C)$ has a unique solution and $\mathrm{VI}(F,C)$ is equivalent to the fixed point problem
$$u^* = P_C\bigl(u^* - \mu F(u^*)\bigr), \tag{1.2}$$
where $\mu > 0$ is an arbitrarily fixed constant and $P_C$ is the (nearest point) projection from $H$ onto $C$. From (1.2), one can suggest a so-called projection method. Using the projection
method, one establishes the equivalence between the variational inequalities and fixed-
point problem. This alternative equivalence has been used to study the existence theory
of the solution and to develop several iterative-type algorithms for solving variational
inequalities. Under certain conditions, projection methods and their variant forms can be implemented for solving variational inequalities. However, there are some drawbacks of this method which limit its use in applications; for instance, the projection method involves the projection $P_C$, which may not be easily computed due to the complexity of the convex set $C$.
In order to reduce the complexity probably caused by the projection $P_C$, Yamada [11] introduced the following hybrid steepest descent method for solving the $\mathrm{VI}(F,C)$.
Algorithm 1.1. For a given $u_0 \in H$, calculate the approximate solution $u_n$ by the iterative scheme
$$u_{n+1} = Tu_n - \lambda_{n+1}\mu F\bigl(Tu_n\bigr), \quad n \ge 0, \tag{1.3}$$
where $\mu \in (0, 2\eta/k^2)$ and $\lambda_n \in (0,1)$ satisfy the following conditions:
(1) $\lim_{n\to\infty}\lambda_n = 0$;
(2) $\sum_{n=1}^{\infty}\lambda_n = \infty$;
(3) $\lim_{n\to\infty}\bigl(\lambda_n - \lambda_{n+1}\bigr)/\lambda_{n+1}^2 = 0$.
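To make the scheme concrete, the following is a minimal numerical sketch (not from the paper) of iteration (1.3) on a toy problem. All specific choices are our assumptions for illustration only: $F(x) = Ax - b$ with $A$ symmetric positive definite (hence Lipschitzian and strongly monotone), $C$ the closed unit ball, $T = P_C$ so that $\mathrm{Fix}(T) = C$, and $\lambda_n = n^{-1/2}$, which satisfies conditions (1)–(3).

```python
# A hedged illustration (not the authors' code) of Yamada's scheme (1.3).
# Assumed toy setting: F(x) = A x - b with A symmetric positive definite,
# C = closed unit ball, T = P_C (so Fix(T) = C), lambda_n = n**(-1/2).
import numpy as np

rng = np.random.default_rng(0)
dim = 5
M = rng.standard_normal((dim, dim))
A = M @ M.T + dim * np.eye(dim)            # symmetric positive definite
b = rng.standard_normal(dim)
F = lambda x: A @ x - b                    # eta-strongly monotone, k-Lipschitzian

eta = np.linalg.eigvalsh(A).min()          # strong monotonicity constant
k = np.linalg.eigvalsh(A).max()            # Lipschitz constant
mu = eta / k**2                            # mu in (0, 2*eta/k^2)

def T(x, r=1.0):                           # T = P_C, projection onto the ball C
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

u = rng.standard_normal(dim)               # u_0
for n in range(1, 20001):
    lam = n ** -0.5                        # satisfies conditions (1)-(3)
    Tu = T(u)
    u = Tu - lam * mu * F(Tu)              # scheme (1.3)

# Fixed-point test from (1.2): at the solution u* = P_C(u* - mu F(u*)).
print(np.linalg.norm(u - T(u - mu * F(u))))   # should be close to 0
```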
Yamada [11] proved that the approximate solution $\{u_n\}$, obtained from Algorithm 1.1, converges strongly to the unique solution of the $\mathrm{VI}(F,C)$.
Furthermore, Xu and Kim [12] and Zeng et al. [15] considered and studied the con-
vergence of the hybrid steepest descent Algorithm 1.1 and its variant form. For details,
please see [12, 15].
Let $F : H \to H$ be a nonlinear operator and let $g : H \to H$ be a continuous mapping. Now, we consider the following general variational inequality problem: find a point $u^* \in H$ such that $g(u^*) \in C$ and
$$\mathrm{GVI}(F,g,C):\quad \bigl\langle F(u^*),\, g(v) - g(u^*)\bigr\rangle \ge 0, \quad \forall v \in H,\ g(v) \in C. \tag{1.4}$$
If $g$ is the identity mapping of $H$, then the GVI(F,g,C) reduces to the VI(F,C).
Although iterative algorithm (1.3) has successfully been applied to finding the unique solution of the VI(F,C), it clearly cannot be directly applied to computing a solution of the GVI(F,g,C) due to the presence of $g$. Therefore, an important problem is how to apply the hybrid steepest descent method to solving the GVI(F,g,C). For this purpose, Zeng et al. [13] introduced a hybrid steepest descent method for solving the GVI(F,g,C) as follows.
Algorithm 1.2. Let $\{\lambda_n\} \subset (0,1)$, $\{\theta_n\} \subset (0,1]$, and $\mu \in (0, 2\eta/k^2)$. For a given $u_0 \in H$, calculate the approximate solution $u_n$ by the iterative scheme
$$u_{n+1} = \bigl(1+\theta_{n+1}\bigr)Tu_n - \theta_{n+1}\,g\bigl(Tu_n\bigr) - \lambda_{n+1}\mu F\bigl(Tu_n\bigr), \quad n \ge 0, \tag{1.5}$$
where $F$ is $\eta$-strongly monotone and $k$-Lipschitzian and $g$ is $\sigma$-Lipschitzian and $\delta$-strongly monotone on $C$.
They also proved that the approximate solution $\{u_n\}$ obtained from (1.5) converges strongly to the solution of the GVI(F,g,C) under some assumptions on the parameters. Subsequently, Yao and Noor [7] presented a modified iterative algorithm for approximating a solution of the GVI(F,g,C). We note, however, that all of the above work imposes some additional assumptions on the parameters or on the iterative sequence $\{u_n\}$. A natural question arises: can these assumptions be relaxed?
Our purpose in this paper is to suggest and analyze a hybrid steepest descent method with variable parameters for solving general variational inequalities. It is shown that the convergence of the proposed method can be proved under some mild conditions on the parameters. We also give an application of the proposed method to the constrained generalized pseudoinverse problem.
2. Preliminaries
In the sequel, we will make use of the following results.
Lemma 2.1 [12]. Let $\{s_n\}$ be a sequence of nonnegative numbers satisfying the condition
$$s_{n+1} \le \bigl(1 - \alpha_n\bigr)s_n + \alpha_n\beta_n, \quad n \ge 0, \tag{2.1}$$
where $\{\alpha_n\}$, $\{\beta_n\}$ are sequences of real numbers such that
(i) $\{\alpha_n\} \subset [0,1]$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$,
(ii) $\limsup_{n\to\infty}\beta_n \le 0$ or $\sum_{n=0}^{\infty}\alpha_n\beta_n$ is convergent.
Then, $\lim_{n\to\infty}s_n = 0$.
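As a quick sanity check, the following toy computation (an illustration only, not part of the paper) runs the recursion of Lemma 2.1 in the equality case with $\alpha_n = \beta_n = 1/(n+1)$, which satisfies (i) and (ii), and observes $s_n$ approaching $0$.

```python
# Toy illustration of Lemma 2.1 (equality case): s_{n+1} = (1 - a_n) s_n + a_n b_n
# with a_n in [0,1], sum a_n divergent, and b_n -> 0 (so limsup b_n <= 0).
s = 10.0
for n in range(1, 200001):
    a_n = 1.0 / (n + 1)      # a_n in [0,1], sum a_n = infinity
    b_n = 1.0 / (n + 1)      # b_n -> 0
    s = (1 - a_n) * s + a_n * b_n
print(s)                      # tends to 0 as the number of steps grows
```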
Lemma 2.2 [19]. Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose $x_{n+1} = (1 - \beta_n)y_n + \beta_nx_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty}\bigl(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0$. Then, $\lim_{n\to\infty}\|y_n - x_n\| = 0$.
Lemma 2.3 [20] (demiclosedness principle). Assume that $T$ is a nonexpansive self-mapping of a closed convex subset $C$ of a Hilbert space $H$. If $T$ has a fixed point, then $I - T$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in $C$ weakly converging to some $x \in C$ and the sequence $\{(I - T)x_n\}$ strongly converges to some $y$, it follows that $(I - T)x = y$. Here, $I$ is the identity operator of $H$.
The following lemma is an immediate consequence of the properties of the inner product.
Lemma 2.4. In a real Hilbert space $H$, there holds the inequality
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y,\, x + y\rangle, \quad \forall x, y \in H. \tag{2.2}$$
3. Modified hybrid steepest descent method
Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Let $F : H \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone mapping on $C$ and let $g : H \to H$ be a $\sigma$-Lipschitzian and $\delta$-strongly monotone mapping on $C$ for some constants $\sigma > 0$ and $\delta > 1$. Assume also that the unique solution $u^*$ of the $\mathrm{VI}(F,C)$ is a fixed point of $g$.
Denote by $P_C$ the projection of $H$ onto $C$. Namely, for each $x \in H$, $P_Cx$ is the unique element in $C$ satisfying
$$\bigl\|x - P_Cx\bigr\| = \min\bigl\{\|x - y\| : y \in C\bigr\}. \tag{3.1}$$
It is known that the projection $P_C$ is characterized by the inequality
$$\bigl\langle x - P_Cx,\, y - P_Cx\bigr\rangle \le 0, \quad \forall y \in C. \tag{3.2}$$
Thus, it follows that the GVI(F,g,C) is equivalent to the fixed point problem $g(u^*) = P_C(I - \mu F)g(u^*)$, where $\mu > 0$ is an arbitrary constant.
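As a small illustration (not from the paper), the characterization (3.2) can be checked numerically for the simple case where $C$ is the closed unit ball, for which $P_C$ has an explicit radial form; the choices below are ours, made only for this sketch.

```python
# Numerical check of the projection characterization (3.2) for C = closed unit ball:
# <x - P_C x, y - P_C x> <= 0 for every y in C.
import numpy as np

rng = np.random.default_rng(3)
P_C = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)

x = 5.0 * rng.standard_normal(3)                                # an arbitrary point of H
px = P_C(x)
ys = [P_C(5.0 * rng.standard_normal(3)) for _ in range(1000)]   # sample points of C
print(max(np.dot(x - px, y - px) for y in ys))                  # should be <= 0 (up to rounding)
```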
In this section, assume that $T_i : H \to H$ is a nonexpansive mapping for each $1 \le i \le N$ with $\bigcap_{i=1}^{N}\mathrm{Fix}(T_i) \ne \emptyset$. Let $\delta_{n1}, \delta_{n2},\ldots,\delta_{nN} \in (0,1]$, $n \ge 1$. We define, for each $n \ge 1$, mappings $U_{n1}, U_{n2},\ldots,U_{nN}$ by
$$
\begin{aligned}
U_{n1} &= \delta_{n1}T_1 + \bigl(1 - \delta_{n1}\bigr)I, \\
U_{n2} &= \delta_{n2}T_2U_{n1} + \bigl(1 - \delta_{n2}\bigr)I, \\
&\;\;\vdots \\
U_{n,N-1} &= \delta_{n,N-1}T_{N-1}U_{n,N-2} + \bigl(1 - \delta_{n,N-1}\bigr)I, \\
W_n := U_{nN} &= \delta_{nN}T_NU_{n,N-1} + \bigl(1 - \delta_{nN}\bigr)I.
\end{aligned}
\tag{3.3}
$$
Such a mapping $W_n$ is called the $W$-mapping generated by $T_1,\ldots,T_N$ and $\delta_{n1},\delta_{n2},\ldots,\delta_{nN}$. Nonexpansivity of each $T_i$ yields the nonexpansivity of $W_n$. Moreover, [21, Lemma 3.1] shows that
$$\mathrm{Fix}\bigl(W_n\bigr) = \bigcap_{i=1}^{N}\mathrm{Fix}\bigl(T_i\bigr). \tag{3.4}$$
This property of $W_n$ will be crucial in the proof of our result.
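The recursive construction (3.3) is easy to code. The sketch below (our illustration, not from the paper) builds $W_n$ from a list of nonexpansive mappings and weights; the example mappings are projections onto balls, chosen only because they are simple nonexpansive maps.

```python
# Sketch of the W-mapping (3.3) generated by T_1,...,T_N and delta_{n,1},...,delta_{n,N}.
import numpy as np

def make_W(T_list, d_list):
    """W_n built via U_{n,i}(x) = d_i * T_i(U_{n,i-1}(x)) + (1 - d_i) * x, with U_{n,0} = I."""
    def W(x):
        u = x
        for T_i, d_i in zip(T_list, d_list):
            u = d_i * T_i(u) + (1 - d_i) * x
        return u
    return W

def proj_ball(c, r=1.0):
    """Projection onto the closed ball B(c, r), a nonexpansive mapping."""
    def P(x):
        d = x - c
        nd = np.linalg.norm(d)
        return x if nd <= r else c + r * d / nd
    return P

# Two overlapping balls; by [21, Lemma 3.1], Fix(W_n) is the intersection of the Fix(T_i).
T1 = proj_ball(np.array([0.0, 0.0]))
T2 = proj_ball(np.array([0.5, 0.0]))
W = make_W([T1, T2], [0.5, 0.5])
print(W(np.array([3.0, 3.0])))
```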
Now we suggest the following iterative algorithm for solving GVI(F,g,C).
Algorithm 3.1. Let $\{\alpha_n\} \subset [a,b] \subset (0,1)$, $\{\lambda_n\} \subset (0,1)$, $\{\theta_n\} \subset (0,1]$, and $\{\mu_n\} \subset (0, 2\eta/k^2)$. For a given $u_0 \in H$, compute the approximate solution $\{u_n\}$ by the iterative scheme
$$u_{n+1} = W_nu_n - \lambda_{n+1}\mu_{n+1}F\bigl(W_nu_n\bigr) + \alpha_{n+1}\bigl(u_n - W_nu_n\bigr) + \theta_{n+1}\bigl(W_nu_n - g\bigl(W_nu_n\bigr)\bigr), \quad n \ge 0. \tag{3.5}$$
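Before the convergence analysis, the following sketch (our illustration only, with contrived assumptions) runs scheme (3.5) on a toy problem: $N = 1$ with $T_1 = P_C$ and $\delta_{n1} = 1$, so $W_n = P_C$; $F(x) = A(x - x^\star)$ with $A$ symmetric positive definite and $x^\star$ in the interior of the unit ball $C$; and $g$ is a $1.5$-Lipschitzian, $1.5$-strongly monotone map constructed so that its fixed point is the (known) solution $x^\star$, matching the hypothesis $u^* \in \mathrm{Fix}(g)$. The parameter choices follow conditions (i)–(iii) of Theorem 3.2 below.

```python
# Hedged numerical sketch of Algorithm 3.1 / scheme (3.5); all modeling choices
# here are assumptions for illustration, not the authors' setting.
import numpy as np

rng = np.random.default_rng(1)
dim = 4
M = rng.standard_normal((dim, dim))
A = M @ M.T + dim * np.eye(dim)               # symmetric positive definite
x_star = np.array([0.2, -0.1, 0.3, 0.0])      # interior of the unit ball C
F = lambda x: A @ (x - x_star)                # k-Lipschitzian, eta-strongly monotone
g = lambda x: x_star + 1.5 * (x - x_star)     # sigma = delta = 1.5, Fix(g) = {x_star}

eta, k = np.linalg.eigvalsh(A).min(), np.linalg.eigvalsh(A).max()
mu = eta / k**2                               # constant mu_n in (0, 2*eta/k^2)

def W(x):                                     # N = 1, delta_{n,1} = 1: W_n = P_C
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

a = 0.5                                       # alpha_n = a = b = 0.5
u = rng.standard_normal(dim)                  # u_0
for n in range(1, 50001):
    lam = n ** -0.5                           # condition (i)
    theta = 0.4 * n ** -0.25                  # conditions (ii)-(iii): theta_n <= 0.4, lam/theta -> 0
    Wu = W(u)
    u = Wu - lam * mu * F(Wu) + a * (u - Wu) + theta * (Wu - g(Wu))   # scheme (3.5)

print(np.linalg.norm(u - x_star))             # should shrink toward 0
```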
At this point, we state and prove our main result.
Theorem 3.2. Assume that $0 < a \le \alpha_n \le b < 1$, $0 < \mu_n < 2\eta/k^2$, and $u^* \in \mathrm{Fix}(g)$. Let $\delta_{n1},\delta_{n2},\ldots,\delta_{nN}$ be real numbers such that $\lim_{n\to\infty}\bigl(\delta_{n+1,i} - \delta_{n,i}\bigr) = 0$ for all $i = 1,2,\ldots,N$. Assume $\{\lambda_n\}$ and $\{\theta_n\}$ satisfy the following conditions:
(i) $\lim_{n\to\infty}\lambda_n = 0$, $\sum_{n=1}^{\infty}\lambda_n = \infty$;
(ii) $\theta_n \in \bigl(0,\, 2(1-a)(\delta-1)/(\sigma^2-1)\bigr]$;
(iii) $\lim_{n\to\infty}\theta_n = 0$, $\lim_{n\to\infty}\lambda_n/\theta_n = 0$.
Then the sequence $\{u_n\}$ generated by Algorithm 3.1 converges strongly to $u^*$, which is a solution of the GVI(F,g,C).
Proof. We divide the proof into the following steps.
Step 1. First, we prove that $\{u_n\}$ is bounded. From (3.5), we have
$$
\begin{aligned}
\bigl\|u_{n+1} - u^*\bigr\|
&= \bigl\|\bigl(1 - \alpha_{n+1} + \theta_{n+1}\bigr)W_nu_n + \alpha_{n+1}u_n - \theta_{n+1}g\bigl(W_nu_n\bigr) - \lambda_{n+1}\mu_{n+1}F\bigl(W_nu_n\bigr) - u^*\bigr\| \\
&= \bigl\|\bigl(1 - \alpha_{n+1}\bigr)\bigl(W_nu_n - u^*\bigr) - \theta_{n+1}\bigl[g\bigl(W_nu_n\bigr) - u^*\bigr] + \alpha_{n+1}\bigl(u_n - u^*\bigr) \\
&\qquad + \theta_{n+1}\bigl(W_nu_n - u^*\bigr) - \lambda_{n+1}\mu_{n+1}\bigl[F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr)\bigr] - \lambda_{n+1}\mu_{n+1}F\bigl(u^*\bigr)\bigr\| \\
&\le \bigl\|\bigl(1 - \alpha_{n+1}\bigr)\bigl(W_nu_n - u^*\bigr) - \theta_{n+1}\bigl[g\bigl(W_nu_n\bigr) - u^*\bigr]\bigr\|
  + \bigl\|\theta_{n+1}\bigl(W_nu_n - u^*\bigr) - \lambda_{n+1}\mu_{n+1}\bigl[F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr)\bigr]\bigr\| \\
&\qquad + \alpha_{n+1}\bigl\|u_n - u^*\bigr\| + \lambda_{n+1}\mu_{n+1}\bigl\|F\bigl(u^*\bigr)\bigr\|.
\end{aligned}
\tag{3.6}
$$
Observe that
$$
\begin{aligned}
\bigl\|\bigl(1 - \alpha_{n+1}\bigr)&\bigl(W_nu_n - u^*\bigr) - \theta_{n+1}\bigl[g\bigl(W_nu_n\bigr) - u^*\bigr]\bigr\|^2 \\
&= \bigl(1 - \alpha_{n+1}\bigr)^2\bigl\|W_nu_n - u^*\bigr\|^2
 - 2\bigl(1 - \alpha_{n+1}\bigr)\theta_{n+1}\bigl\langle g\bigl(W_nu_n\bigr) - g\bigl(u^*\bigr),\, W_nu_n - u^*\bigr\rangle
 + \theta_{n+1}^2\bigl\|g\bigl(W_nu_n\bigr) - u^*\bigr\|^2 \\
&\le \Bigl[\bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2\Bigr]\bigl\|W_nu_n - u^*\bigr\|^2 \\
&\le \Bigl[\bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2\Bigr]\bigl\|u_n - u^*\bigr\|^2,
\end{aligned}
$$
$$
\begin{aligned}
\bigl\|\theta_{n+1}\bigl(W_nu_n - u^*\bigr) &- \lambda_{n+1}\mu_{n+1}\bigl[F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr)\bigr]\bigr\|^2 \\
&= \theta_{n+1}^2\bigl\|W_nu_n - u^*\bigr\|^2
 - 2\theta_{n+1}\lambda_{n+1}\mu_{n+1}\bigl\langle F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr),\, W_nu_n - u^*\bigr\rangle
 + \lambda_{n+1}^2\mu_{n+1}^2\bigl\|F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr)\bigr\|^2 \\
&\le \Bigl[\theta_{n+1}^2 - 2\mu_{n+1}\eta\theta_{n+1}\lambda_{n+1} + \mu_{n+1}^2k^2\lambda_{n+1}^2\Bigr]\bigl\|W_nu_n - u^*\bigr\|^2 \\
&\le \Bigl[\theta_{n+1}^2 - 2\mu_{n+1}\eta\theta_{n+1}\lambda_{n+1} + \mu_{n+1}^2k^2\lambda_{n+1}^2\Bigr]\bigl\|u_n - u^*\bigr\|^2 \\
&= \theta_{n+1}^2\biggl[\Bigl(1 - \frac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}}\Bigr)^2 + \frac{2\lambda_{n+1}\mu_{n+1}(k-\eta)}{\theta_{n+1}}\biggr]\bigl\|u_n - u^*\bigr\|^2.
\end{aligned}
\tag{3.7}
$$
From (3.7), we have
$$
\begin{aligned}
\bigl\|u_{n+1} - u^*\bigr\|
&\le \Bigl[\sqrt{\bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2} + \alpha_{n+1}\Bigr]\bigl\|u_n - u^*\bigr\| \\
&\qquad + \theta_{n+1}\sqrt{\Bigl(1 - \frac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}}\Bigr)^2 + \frac{2\lambda_{n+1}\mu_{n+1}(k-\eta)}{\theta_{n+1}}}\;\bigl\|u_n - u^*\bigr\|
 + \lambda_{n+1}\mu_{n+1}\bigl\|F\bigl(u^*\bigr)\bigr\| \\
&= \Bigl[\sqrt{\bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2} + \alpha_{n+1}\Bigr]\bigl\|u_n - u^*\bigr\| \\
&\qquad + \theta_{n+1}\Bigl(1 - \frac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}}\Bigr)\sqrt{1 + \frac{2\lambda_{n+1}\mu_{n+1}(k-\eta)}{\theta_{n+1}\bigl(1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}\bigr)^2}}\;\bigl\|u_n - u^*\bigr\|
 + \lambda_{n+1}\mu_{n+1}\bigl\|F\bigl(u^*\bigr)\bigr\|.
\end{aligned}
\tag{3.8}
$$
Now we can see that (iii) yields
$$\lim_{n\to\infty}\frac{\dfrac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}} - \dfrac{\eta}{k}}{1 - \dfrac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}}} = -\frac{\eta}{k}. \tag{3.9}$$
Hence, we infer that there exists an integer $N_0 \ge 0$ such that for all $n \ge N_0$, $\tfrac{1}{2}\lambda_{n+1}\mu_{n+1}\eta < 1$ and
$$\frac{\lambda_{n+1}\mu_{n+1}k/\theta_{n+1} - \eta/k}{1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}} < -\frac{\eta}{2k}.$$
Thus we deduce that, for all $n \ge N_0$,
$$
\begin{aligned}
\theta_{n+1}&\Bigl(1 - \frac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}}\Bigr)\sqrt{1 + \frac{2\lambda_{n+1}\mu_{n+1}(k-\eta)}{\theta_{n+1}\bigl(1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}\bigr)^2}} \\
&\le \theta_{n+1}\Bigl(1 - \frac{\lambda_{n+1}\mu_{n+1}k}{\theta_{n+1}}\Bigr)\Bigl(1 + \frac{\lambda_{n+1}\mu_{n+1}(k-\eta)}{\theta_{n+1}\bigl(1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}\bigr)^2}\Bigr) \\
&= \theta_{n+1} - \lambda_{n+1}\mu_{n+1}k + \frac{\lambda_{n+1}\mu_{n+1}(k-\eta)}{1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}} \\
&= \theta_{n+1} + \frac{-\lambda_{n+1}\mu_{n+1}k + \bigl(\lambda_{n+1}\mu_{n+1}k\bigr)^2/\theta_{n+1} + \lambda_{n+1}\mu_{n+1}k - \lambda_{n+1}\mu_{n+1}\eta}{1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}} \\
&= \theta_{n+1} + \lambda_{n+1}\mu_{n+1}k\,\frac{\lambda_{n+1}\mu_{n+1}k/\theta_{n+1} - \eta/k}{1 - \lambda_{n+1}\mu_{n+1}k/\theta_{n+1}} \\
&\le \theta_{n+1} - \frac{1}{2}\lambda_{n+1}\mu_{n+1}\eta.
\end{aligned}
\tag{3.10}
$$
From (ii) and (iii), we can choose sufficiently small $\theta_{n+1}$ such that
$$
\begin{aligned}
0 < \theta_{n+1} \le \frac{2\bigl(1 - \alpha_{n+1}\bigr)(\delta - 1)}{\sigma^2 - 1}
&\;\Longrightarrow\; \theta_{n+1}\bigl(\sigma^2 - 1\bigr) \le 2\bigl(1 - \alpha_{n+1}\bigr)(\delta - 1) \\
&\;\Longrightarrow\; \sigma^2\theta_{n+1} - 2\bigl(1 - \alpha_{n+1}\bigr)\delta \le \theta_{n+1} - 2\bigl(1 - \alpha_{n+1}\bigr) \\
&\;\Longrightarrow\; \sigma^2\theta_{n+1}^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} \le \theta_{n+1}^2 - 2\theta_{n+1}\bigl(1 - \alpha_{n+1}\bigr) \\
&\;\Longrightarrow\; \bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2 \le \bigl(1 - \alpha_{n+1}\bigr)^2 - 2\theta_{n+1}\bigl(1 - \alpha_{n+1}\bigr) + \theta_{n+1}^2 \\
&\;\Longrightarrow\; \sqrt{\bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2} \le 1 - \alpha_{n+1} - \theta_{n+1} \\
&\;\Longrightarrow\; \sqrt{\bigl(1 - \alpha_{n+1}\bigr)^2 - 2\bigl(1 - \alpha_{n+1}\bigr)\delta\theta_{n+1} + \sigma^2\theta_{n+1}^2} + \alpha_{n+1} + \theta_{n+1} \le 1.
\end{aligned}
\tag{3.11}
$$
Consequently, it follows from (3.6) and (3.8)–(3.11), for all $n \ge N_0$, that
$$\bigl\|u_{n+1} - u^*\bigr\| \le \Bigl(1 - \frac{1}{2}\lambda_{n+1}\mu_{n+1}\eta\Bigr)\bigl\|u_n - u^*\bigr\| + \lambda_{n+1}\mu_{n+1}\bigl\|F\bigl(u^*\bigr)\bigr\|. \tag{3.12}$$
By induction, it is easy to see that
$$\bigl\|u_n - u^*\bigr\| \le \max\Bigl\{\max_{0\le i\le N_0}\bigl\|u_i - u^*\bigr\|,\; \frac{2}{\eta}\bigl\|F\bigl(u^*\bigr)\bigr\|\Bigr\}, \quad n \ge 0. \tag{3.13}$$
Hence, $\{u_n\}$ is bounded, and so are $\{W_nu_n\}$, $\{g(W_nu_n)\}$, and $\{F(W_nu_n)\}$. We will use $M$ to denote the possibly different constants appearing in the following reasoning.
Define
$$u_{n+1} = \alpha_{n+1}u_n + \bigl(1 - \alpha_{n+1}\bigr)y_n. \tag{3.14}$$
From the definition of $y_n$, we obtain
$$
\begin{aligned}
y_{n+1} - y_n
&= \frac{u_{n+2} - \alpha_{n+2}u_{n+1}}{1 - \alpha_{n+2}} - \frac{u_{n+1} - \alpha_{n+1}u_n}{1 - \alpha_{n+1}} \\
&= \frac{\bigl(1 - \alpha_{n+2} + \theta_{n+2}\bigr)W_{n+1}u_{n+1} - \theta_{n+2}g\bigl(W_{n+1}u_{n+1}\bigr)}{1 - \alpha_{n+2}}
 - \frac{\lambda_{n+2}\mu_{n+2}F\bigl(W_{n+1}u_{n+1}\bigr)}{1 - \alpha_{n+2}} \\
&\qquad + \frac{\lambda_{n+1}\mu_{n+1}F\bigl(W_nu_n\bigr)}{1 - \alpha_{n+1}}
 - \frac{\bigl(1 - \alpha_{n+1} + \theta_{n+1}\bigr)W_nu_n - \theta_{n+1}g\bigl(W_nu_n\bigr)}{1 - \alpha_{n+1}} \\
&= W_{n+1}u_{n+1} - W_nu_n + \frac{\theta_{n+2}}{1 - \alpha_{n+2}}W_{n+1}u_{n+1} - \frac{\theta_{n+1}}{1 - \alpha_{n+1}}W_nu_n
 + \frac{\theta_{n+1}}{1 - \alpha_{n+1}}g\bigl(W_nu_n\bigr) - \frac{\theta_{n+2}}{1 - \alpha_{n+2}}g\bigl(W_{n+1}u_{n+1}\bigr) \\
&\qquad + \frac{\lambda_{n+1}\mu_{n+1}}{1 - \alpha_{n+1}}F\bigl(W_nu_n\bigr) - \frac{\lambda_{n+2}\mu_{n+2}}{1 - \alpha_{n+2}}F\bigl(W_{n+1}u_{n+1}\bigr) \\
&= W_{n+1}u_{n+1} - W_{n+1}u_n + W_{n+1}u_n - W_nu_n + \frac{\theta_{n+2}}{1 - \alpha_{n+2}}W_{n+1}u_{n+1} - \frac{\theta_{n+1}}{1 - \alpha_{n+1}}W_nu_n \\
&\qquad + \frac{\theta_{n+1}}{1 - \alpha_{n+1}}g\bigl(W_nu_n\bigr) - \frac{\theta_{n+2}}{1 - \alpha_{n+2}}g\bigl(W_{n+1}u_{n+1}\bigr)
 + \frac{\lambda_{n+1}\mu_{n+1}}{1 - \alpha_{n+1}}F\bigl(W_nu_n\bigr) - \frac{\lambda_{n+2}\mu_{n+2}}{1 - \alpha_{n+2}}F\bigl(W_{n+1}u_{n+1}\bigr).
\end{aligned}
\tag{3.15}
$$
It follows that
$$
\begin{aligned}
\bigl\|y_{n+1} - y_n\bigr\| - \bigl\|u_{n+1} - u_n\bigr\|
&\le \bigl\|W_{n+1}u_n - W_nu_n\bigr\|
 + \frac{\theta_{n+2}}{1 - \alpha_{n+2}}\bigl\|W_{n+1}u_{n+1}\bigr\|
 + \frac{\theta_{n+1}}{1 - \alpha_{n+1}}\bigl\|W_nu_n\bigr\| \\
&\qquad + \frac{\theta_{n+1}}{1 - \alpha_{n+1}}\bigl\|g\bigl(W_nu_n\bigr)\bigr\|
 + \frac{\theta_{n+2}}{1 - \alpha_{n+2}}\bigl\|g\bigl(W_{n+1}u_{n+1}\bigr)\bigr\| \\
&\qquad + \frac{\lambda_{n+1}\mu_{n+1}}{1 - \alpha_{n+1}}\bigl\|F\bigl(W_nu_n\bigr)\bigr\|
 + \frac{\lambda_{n+2}\mu_{n+2}}{1 - \alpha_{n+2}}\bigl\|F\bigl(W_{n+1}u_{n+1}\bigr)\bigr\|.
\end{aligned}
\tag{3.16}
$$
From (3.3), since $T_i$ and $U_{n,i}$ for all $i = 1,2,\ldots,N$ are nonexpansive,
$$
\begin{aligned}
\bigl\|W_{n+1}u_n - W_nu_n\bigr\|
&= \bigl\|\delta_{n+1,N}T_NU_{n+1,N-1}u_n + \bigl(1 - \delta_{n+1,N}\bigr)u_n - \delta_{n,N}T_NU_{n,N-1}u_n - \bigl(1 - \delta_{n,N}\bigr)u_n\bigr\| \\
&\le \bigl|\delta_{n+1,N} - \delta_{n,N}\bigr|\,\bigl\|u_n\bigr\| + \bigl\|\delta_{n+1,N}T_NU_{n+1,N-1}u_n - \delta_{n,N}T_NU_{n,N-1}u_n\bigr\| \\
&\le \bigl|\delta_{n+1,N} - \delta_{n,N}\bigr|\,\bigl\|u_n\bigr\| + \delta_{n+1,N}\bigl\|T_NU_{n+1,N-1}u_n - T_NU_{n,N-1}u_n\bigr\| + \bigl|\delta_{n+1,N} - \delta_{n,N}\bigr|\,\bigl\|T_NU_{n,N-1}u_n\bigr\| \\
&\le 2M\bigl|\delta_{n+1,N} - \delta_{n,N}\bigr| + \delta_{n+1,N}\bigl\|U_{n+1,N-1}u_n - U_{n,N-1}u_n\bigr\|.
\end{aligned}
\tag{3.17}
$$
Again, from (3.3),
$$
\begin{aligned}
\bigl\|U_{n+1,N-1}u_n - U_{n,N-1}u_n\bigr\|
&= \bigl\|\delta_{n+1,N-1}T_{N-1}U_{n+1,N-2}u_n + \bigl(1 - \delta_{n+1,N-1}\bigr)u_n - \delta_{n,N-1}T_{N-1}U_{n,N-2}u_n - \bigl(1 - \delta_{n,N-1}\bigr)u_n\bigr\| \\
&\le \bigl|\delta_{n+1,N-1} - \delta_{n,N-1}\bigr|\,\bigl\|u_n\bigr\| + \bigl\|\delta_{n+1,N-1}T_{N-1}U_{n+1,N-2}u_n - \delta_{n,N-1}T_{N-1}U_{n,N-2}u_n\bigr\| \\
&\le \bigl|\delta_{n+1,N-1} - \delta_{n,N-1}\bigr|\,\bigl\|u_n\bigr\| + \delta_{n+1,N-1}\bigl\|T_{N-1}U_{n+1,N-2}u_n - T_{N-1}U_{n,N-2}u_n\bigr\| + \bigl|\delta_{n+1,N-1} - \delta_{n,N-1}\bigr|M \\
&\le 2M\bigl|\delta_{n+1,N-1} - \delta_{n,N-1}\bigr| + \delta_{n+1,N-1}\bigl\|U_{n+1,N-2}u_n - U_{n,N-2}u_n\bigr\| \\
&\le 2M\bigl|\delta_{n+1,N-1} - \delta_{n,N-1}\bigr| + \bigl\|U_{n+1,N-2}u_n - U_{n,N-2}u_n\bigr\|.
\end{aligned}
\tag{3.18}
$$
Therefore, we have
$$
\begin{aligned}
\bigl\|U_{n+1,N-1}u_n - U_{n,N-1}u_n\bigr\|
&\le 2M\bigl|\delta_{n+1,N-1} - \delta_{n,N-1}\bigr| + 2M\bigl|\delta_{n+1,N-2} - \delta_{n,N-2}\bigr| + \bigl\|U_{n+1,N-3}u_n - U_{n,N-3}u_n\bigr\| \\
&\le \cdots \le 2M\sum_{i=2}^{N-1}\bigl|\delta_{n+1,i} - \delta_{n,i}\bigr| + \bigl\|U_{n+1,1}u_n - U_{n,1}u_n\bigr\| \\
&= \bigl\|\delta_{n+1,1}T_1u_n + \bigl(1 - \delta_{n+1,1}\bigr)u_n - \delta_{n,1}T_1u_n - \bigl(1 - \delta_{n,1}\bigr)u_n\bigr\| + 2M\sum_{i=2}^{N-1}\bigl|\delta_{n+1,i} - \delta_{n,i}\bigr|,
\end{aligned}
\tag{3.19}
$$
then
$$
\begin{aligned}
\bigl\|U_{n+1,N-1}u_n - U_{n,N-1}u_n\bigr\|
&\le \bigl|\delta_{n+1,1} - \delta_{n,1}\bigr|\,\bigl\|u_n\bigr\| + \bigl\|\delta_{n+1,1}T_1u_n - \delta_{n,1}T_1u_n\bigr\| + 2M\sum_{i=2}^{N-1}\bigl|\delta_{n+1,i} - \delta_{n,i}\bigr| \\
&\le 2M\sum_{i=1}^{N-1}\bigl|\delta_{n+1,i} - \delta_{n,i}\bigr|.
\end{aligned}
\tag{3.20}
$$
Substituting (3.20) into (3.17), we have
$$
\bigl\|W_{n+1}u_n - W_nu_n\bigr\|
\le 2M\bigl|\delta_{n+1,N} - \delta_{n,N}\bigr| + 2\delta_{n+1,N}M\sum_{i=1}^{N-1}\bigl|\delta_{n+1,i} - \delta_{n,i}\bigr|
\le 2M\sum_{i=1}^{N}\bigl|\delta_{n+1,i} - \delta_{n,i}\bigr|.
\tag{3.21}
$$
Since $\{u_n\}$, $\{F(W_nu_n)\}$, and $\{g(W_nu_n)\}$ are all bounded, it follows from (3.16), (3.21), (i), and (iii) that
$$\limsup_{n\to\infty}\bigl(\bigl\|y_{n+1} - y_n\bigr\| - \bigl\|u_{n+1} - u_n\bigr\|\bigr) \le 0. \tag{3.22}$$
Hence, by Lemma 2.2, we know that
$$\lim_{n\to\infty}\bigl\|y_n - u_n\bigr\| = 0. \tag{3.23}$$
Consequently,
$$\lim_{n\to\infty}\bigl\|u_{n+1} - u_n\bigr\| = \lim_{n\to\infty}\bigl(1 - \alpha_{n+1}\bigr)\bigl\|y_n - u_n\bigr\| = 0. \tag{3.24}$$
On the other hand,
$$
\begin{aligned}
\bigl\|u_n - W_nu_n\bigr\|
&\le \bigl\|u_{n+1} - W_nu_n\bigr\| + \bigl\|u_{n+1} - u_n\bigr\| \\
&\le \alpha_{n+1}\bigl\|u_n - W_nu_n\bigr\| + \theta_{n+1}\bigl\|W_nu_n\bigr\| + \theta_{n+1}\bigl\|g\bigl(W_nu_n\bigr)\bigr\| + \lambda_{n+1}\mu_{n+1}\bigl\|F\bigl(W_nu_n\bigr)\bigr\| + \bigl\|u_{n+1} - u_n\bigr\|;
\end{aligned}
\tag{3.25}
$$
this, together with conditions (i), (iii), and (3.24), implies
$$\lim_{n\to\infty}\bigl\|u_n - W_nu_n\bigr\| = 0. \tag{3.26}$$
We next show that
$$\limsup_{n\to\infty}\bigl\langle F\bigl(u^*\bigr),\, u^* - u_n\bigr\rangle \le 0. \tag{3.27}$$
To prove this, we pick a subsequence $\{u_{n_i}\}$ of $\{u_n\}$ such that
$$\limsup_{n\to\infty}\bigl\langle F\bigl(u^*\bigr),\, u^* - u_n\bigr\rangle = \lim_{i\to\infty}\bigl\langle F\bigl(u^*\bigr),\, u^* - u_{n_i}\bigr\rangle. \tag{3.28}$$
Without loss of generality, we may further assume that $u_{n_i} \rightharpoonup z$ weakly for some $z \in H$. By Lemma 2.3 and (3.26), we have
$$z \in \mathrm{Fix}\bigl(W_n\bigr), \tag{3.29}$$
which implies that
$$z \in \bigcap_{i=1}^{N}\mathrm{Fix}\bigl(T_i\bigr). \tag{3.30}$$
Since $u^*$ solves the $\mathrm{VI}(F,C)$, we obtain
$$\limsup_{n\to\infty}\bigl\langle F\bigl(u^*\bigr),\, u^* - u_n\bigr\rangle = \bigl\langle F\bigl(u^*\bigr),\, u^* - z\bigr\rangle \le 0. \tag{3.31}$$
Finally, we show that $u_n \to u^*$ in norm. From (3.7)–(3.10) and Lemma 2.4, we have
$$
\begin{aligned}
\bigl\|u_{n+1} - u^*\bigr\|^2
&= \bigl\|\bigl(1 - \alpha_{n+1}\bigr)\bigl(W_nu_n - u^*\bigr) - \theta_{n+1}\bigl[g\bigl(W_nu_n\bigr) - u^*\bigr] + \alpha_{n+1}\bigl(u_n - u^*\bigr) + \theta_{n+1}\bigl(W_nu_n - u^*\bigr) \\
&\qquad\quad - \lambda_{n+1}\mu_{n+1}\bigl[F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr)\bigr] - \lambda_{n+1}\mu_{n+1}F\bigl(u^*\bigr)\bigr\|^2 \\
&\le \bigl\|\bigl(1 - \alpha_{n+1}\bigr)\bigl(W_nu_n - u^*\bigr) - \theta_{n+1}\bigl[g\bigl(W_nu_n\bigr) - u^*\bigr] + \alpha_{n+1}\bigl(u_n - u^*\bigr) + \theta_{n+1}\bigl(W_nu_n - u^*\bigr) \\
&\qquad\quad - \lambda_{n+1}\mu_{n+1}\bigl[F\bigl(W_nu_n\bigr) - F\bigl(u^*\bigr)\bigr]\bigr\|^2 + 2\lambda_{n+1}\mu_{n+1}\bigl\langle F\bigl(u^*\bigr),\, u^* - u_{n+1}\bigr\rangle \\
&\le \Bigl(1 - \frac{1}{2}\lambda_{n+1}\mu_{n+1}\eta\Bigr)\bigl\|u_n - u^*\bigr\|^2 + 2\lambda_{n+1}\mu_{n+1}\bigl\langle F\bigl(u^*\bigr),\, u^* - u_{n+1}\bigr\rangle.
\end{aligned}
\tag{3.32}
$$
An application of Lemma 2.1 combined with (3.31) yields that $\|u_n - u^*\| \to 0$. This completes the proof.
4. Application to constrained generalized pseudoinverse
Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be a bounded linear operator on $H$. Given an element $b \in H$, consider the minimization problem
$$\min_{x\in K}\|Ax - b\|^2. \tag{4.1}$$
Let $S_b$ denote the solution set. Then $S_b$ is closed and convex. It is known that $S_b$ is nonempty if and only if $P_{A(K)}(b) \in A(K)$. In this case, $S_b$ has a unique element with minimum norm; that is, there exists a unique point $\hat{x} \in S_b$ satisfying
$$\|\hat{x}\|^2 = \min\bigl\{\|x\|^2 : x \in S_b\bigr\}. \tag{4.2}$$
Definition 4.1 [22]. The $K$-constrained pseudoinverse of $A$ (symbol $\widehat{A}_K$) is defined as
$$D\bigl(\widehat{A}_K\bigr) = \bigl\{b \in H : P_{A(K)}(b) \in A(K)\bigr\}, \qquad \widehat{A}_K(b) = \hat{x}, \quad b \in D\bigl(\widehat{A}_K\bigr), \tag{4.3}$$
where $\hat{x} \in S_b$ is the unique solution of (4.2).
Now we recall the $K$-constrained generalized pseudoinverse of $A$.
Let $\theta : H \to \mathbb{R}$ be a differentiable convex function such that $\theta'$ is a $k$-Lipschitzian and $\eta$-strongly monotone operator for some $k > 0$ and $\eta > 0$. Under these assumptions, there exists a unique point $x_0 \in S_b$ for $b \in D(\widehat{A}_K)$ such that
$$\theta\bigl(x_0\bigr) = \min\bigl\{\theta(x) : x \in S_b\bigr\}. \tag{4.4}$$
Definition 4.2. The $K$-constrained generalized pseudoinverse of $A$ associated with $\theta$ (symbol $\widehat{A}_{K,\theta}$) is defined as $D(\widehat{A}_{K,\theta}) = D(\widehat{A}_K)$ and $\widehat{A}_{K,\theta}(b) = x_0$ for $b \in D(\widehat{A}_{K,\theta})$, where $x_0 \in S_b$ is the unique solution to (4.4). Note that if $\theta(x) = \|x\|^2/2$, then the $K$-constrained generalized pseudoinverse $\widehat{A}_{K,\theta}$ of $A$ associated with $\theta$ reduces to the $K$-constrained pseudoinverse $\widehat{A}_K$ of $A$ in Definition 4.1.
We now apply the result in Section 3 to construct the $K$-constrained generalized pseudoinverse $\widehat{A}_{K,\theta}$ of $A$. First observe that $\tilde{x} \in K$ satisfies the minimization problem (4.1) if and only if there holds the optimality condition $\langle A^*(A\tilde{x} - b),\, x - \tilde{x}\rangle \ge 0$, $x \in K$, where $A^*$ is the adjoint of $A$. This, for each $\lambda > 0$, is equivalent to
$$
\bigl\langle \lambda A^*b + \bigl(I - \lambda A^*A\bigr)\tilde{x} - \tilde{x},\, x - \tilde{x}\bigr\rangle \le 0, \quad x \in K,
\qquad
P_K\bigl(\lambda A^*b + \bigl(I - \lambda A^*A\bigr)\tilde{x}\bigr) = \tilde{x}.
\tag{4.5}
$$
Define a mapping $T : H \to H$ by
$$Tx = P_K\bigl(\lambda A^*b + \bigl(I - \lambda A^*A\bigr)x\bigr), \quad x \in H. \tag{4.6}$$
Lemma 4.3 [12]. If $\lambda \in \bigl(0, 2\|A\|^{-2}\bigr)$ and if $b \in D\bigl(\widehat{A}_K\bigr)$, then $T$ is attracting nonexpansive and $\mathrm{Fix}(T) = S_b$.
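As a concrete (and purely illustrative) instance of (4.6) and Lemma 4.3, the sketch below takes $K$ to be the nonnegative orthant, so that $P_K$ is a componentwise clip; iterating $T$ is then a projected Landweber/gradient step whose fixed points form $S_b$. All specific choices are assumptions made for this example.

```python
# Sketch of the mapping T in (4.6) for min_{x in K} ||A x - b||^2 with
# K = nonnegative orthant (our illustrative choice).
import numpy as np

rng = np.random.default_rng(2)
m, d = 8, 3
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)
lam = 1.0 / np.linalg.norm(A, 2) ** 2          # lambda in (0, 2 ||A||^{-2})

P_K = lambda x: np.maximum(x, 0.0)             # projection onto K
T = lambda x: P_K(lam * A.T @ b + (np.eye(d) - lam * A.T @ A) @ x)   # mapping (4.6)

x = np.zeros(d)
for _ in range(5000):                          # iterating T drives x toward S_b = Fix(T)
    x = T(x)
print(x, np.linalg.norm(x - T(x)))             # fixed-point residual should be tiny
```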
The proofs of the following Theorems 4.4 and 4.5 are obtained easily; we omit them.
Theorem 4.4. Assume that $0 < \mu_n < 2\eta/k^2$. Assume $\{\lambda_n\}$ and $\{\theta_n\}$ satisfy the following conditions:
(i) $\lim_{n\to\infty}\lambda_n = 0$, $\sum_{n=1}^{\infty}\lambda_n = \infty$;
(ii) $\theta_n \in \bigl(0,\, 2(1-a)(\delta-1)/(\sigma^2-1)\bigr]$;
(iii) $\lim_{n\to\infty}\theta_n = 0$, $\lim_{n\to\infty}\lambda_n/\theta_n = 0$.
Given an initial guess $u_0 \in H$, let $\{u_n\}$ be the sequence generated by the algorithm
$$u_{n+1} = Tu_n - \lambda_{n+1}\mu_{n+1}\theta'\bigl(Tu_n\bigr) + \alpha_{n+1}\bigl(u_n - Tu_n\bigr) + \theta_{n+1}\bigl(Tu_n - g\bigl(Tu_n\bigr)\bigr), \quad n \ge 0, \tag{4.7}$$
where $T$ is given in (4.6). Suppose that the unique solution $x_0$ of (4.4) is also a fixed point of $g$. Then $\{u_n\}$ converges strongly to $\widehat{A}_{K,\theta}(b)$.
Theorem 4.5. Assume that $0 < \mu_n < 2\eta/k^2$. Assume that the restrictions (ii) and (iii) hold for $\{\theta_n\}$ and also that the control condition (i) holds for $\{\lambda_n\}$. Given an initial guess $u_0 \in H$, suppose that the unique solution $x_0$ of (4.4) is also a fixed point of $g$. Then the sequence $\{u_n\}$ generated by the algorithm
$$u_{n+1} = W_nu_n - \lambda_{n+1}\mu_{n+1}\theta'\bigl(W_nu_n\bigr) + \alpha_{n+1}\bigl(u_n - W_nu_n\bigr) + \theta_{n+1}\bigl(W_nu_n - g\bigl(W_nu_n\bigr)\bigr), \quad n \ge 0, \tag{4.8}$$
converges to $\widehat{A}_{K,\theta}(b)$.
References
[1] G. Stampacchia, “Formes bilinéaires coercitives sur les ensembles convexes,” Comptes Rendus de l'Académie des Sciences, vol. 258, pp. 4413–4416, 1964.
[2] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Ap-
plications, vol. 88 of Pure and Applied Mathematics, Academic Press, New York, NY, USA, 1980.
[3] M. A. Noor, “General variational inequalities,” Applied Mathematics Letters, vol. 1, no. 2, pp.
119–122, 1988.
[4] M. A. Noor, “Some developments in general variational inequalities,” Applied Mathematics and
Computation, vol. 152, no. 1, pp. 199–277, 2004.
[5] M. A. Noor and K. I. Noor, “Self-adaptive projection algorithms for general variational inequal-
ities,” Applied Mathematics and Computation, vol. 151, no. 3, pp. 659–670, 2004.
[6] M. A. Noor, “Wiener-Hopf equations and variational inequalities,” Journal of Optimization The-
ory and Applications, vol. 79, no. 1, pp. 197–206, 1993.
[7] Y. Yao and M. A. Noor, “On modified hybrid steepest-descent methods for general variational
inequalities,” Journal of Mathematical Analysis and Applications, vol. 334, no. 2, pp. 1276–1289,
2007.

[8] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer Series in Com-
putational Physics, Springer, New York, NY, USA, 1984.
[9] P. Jaillet, D. Lamberton, and B. Lapeyre, “Variational inequalities and the pricing of American
options,” Acta Applicandae Mathematicae, vol. 21, no. 3, pp. 263–289, 1990.
[10] E. Zeidler, Nonlinear Functional Analysis and Its Applications. III: Variational Methods and Opti-
mization, Springer, New York, NY, USA, 1985.
[11] I. Yamada, “The hybrid steepest-descent method for variational inequality problems over the
intersection of the fixed point sets of nonexpansive mappings,” in Inherently Parallel Algorithms
in Feasibility and Optimization and Their Applications (Haifa, 2000), D. Butnariu, Y. Censor, and
S. Reich, Eds., vol. 8, pp. 473–504, North-Holland, Amsterdam, The Netherlands, 2001.
[12] H. K. Xu and T. H. Kim, “Convergence of hybrid steepest-descent methods for variational in-
equalities,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185–201, 2003.
[13] L. C. Zeng, N. C. Wong, and J. C. Yao, “Convergence of hybrid steepest-descent methods for
generalized variational inequalities,” Acta Mathematica Sinica, vol. 22, no. 1, pp. 1–12, 2006.
[14] M. A. Noor, “Wiener-Hopf equations and variational inequalities,” Journal of Optimization The-
ory and Applications, vol. 79, no. 1, pp. 197–206, 1993.
[15] L. C. Zeng, N. C. Wong, and J. C. Yao, “Convergence analysis of modified hybrid steepest-descent
methods with variable parameters for variational inequalities,” Journal of Optimization Theory
and Applications, vol. 132, no. 1, pp. 51–69, 2007.
[16] Y. Song and R. Chen, “An approximation method for continuous pseudocontractive mappings,”
Journal of Inequalities and Applications, vol. 2006, Article ID 28950, 9 pages, 2006.
[17] R. Chen and H. He, “Viscosity approximation of common fixed points of nonexpansive semi-
groups in Banach space,” Applied Mathematics Letters, vol. 20, no. 7, pp. 751–757, 2007.
[18] R. Chen and Z. Zhu, “Viscosity approximation fixed points for nonexpansive and m-accretive
operators,” Fixed Point Theory and Applications, vol. 2006, Article ID 81325, 10 pages, 2006.
[19] T. Suzuki, “Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter
nonexpansive semigroups without Bochner integrals,” Journal of Mathematical Analysis and Ap-
plications, vol. 305, no. 1, pp. 227–239, 2005.
[20] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press,
Cambridge, UK, 1990.

[21] S. Atsushiba and W. Takahashi, “Strong convergence theorems for a finite family of nonexpan-
sive mappings and applications,” Indian Journal of Mathematics, vol. 41, no. 3, pp. 435–453,
1999.
[22] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, vol. 13, Kluwer
Academic Publishers, Dordrecht, The Netherlands, 2000.
Yanrong Yu: Department of Mathematics, Tianjin Polytechnic University, Tianjin 300160, China
Email address:
Rudong Chen: Department of Mathematics, Tianjin Polytechnic University, Tianjin 300160, China
Email address:
