
VIETNAM NATIONAL UNIVERSITY-HO CHI MINH CITY
UNIVERSITY OF SCIENCES

PHAN TU VUONG

MATHEMATICAL METHODS FOR
SOLVING EQUILIBRIUM,
VARIATIONAL INEQUALITY
AND FIXED POINT PROBLEMS

PhD Thesis in Mathematics

Ho Chi Minh City - 2014


VIETNAM NATIONAL UNIVERSITY-HO CHI MINH CITY
UNIVERSITY OF SCIENCES

PHAN TU VUONG

MATHEMATICAL METHODS FOR SOLVING
EQUILIBRIUM, VARIATIONAL INEQUALITY
AND FIXED POINT PROBLEMS

Speciality: Mathematical Optimization
Code: 62 46 20 01

Reviewer 1: Prof. NGUYEN XUAN TAN
Reviewer 2: Assoc. Prof. NGUYEN DINH PHU
Reviewer 3: Assoc. Prof. LAM QUOC ANH


Anonymous Reviewer 1: Prof. NGUYEN XUAN TAN
Anonymous Reviewer 2: Dr. DUONG DANG XUAN THANH

Supervisor 1: Assoc. Prof. NGUYEN DINH
Supervisor 2: Prof. VAN HIEN NGUYEN

Ho Chi Minh City - 2014


DECLARATION

I hereby declare that the work contained in this thesis has never previously been submitted for a degree, diploma or other qualifications in any University or Institution and
that, to the best of my knowledge and belief, the thesis contains no material previously
published or written by another person except when due reference is made in the thesis
itself.

Ph.D. Student

Phan Tu Vuong


Acknowledgements
There have been many people who have helped me through my graduate studies.
In the next few lines, I would like to single out a few of these people to whom I am
especially indebted.
First and foremost, I would like to express my deepest appreciation to my supervisors, Associate Professor Nguyen Dinh (VNU-HCM, International University) and Professor Van Hien Nguyen (Institute for Computational Science and Technology (ICST)
and University of Namur, Belgium).
The completion of the academic research work that led to the results published
in this thesis would have not been possible without the constant encouragement, support, and advice received from Professor Van Hien Nguyen and Professor Jean Jacques
Strodiot (both at ICST and the University of Namur) and, as such, my gratitude goes to them.
I would like also to thank all members of the dissertation committee and two anonymous reviewers for their useful comments and suggestions.
This thesis presents the results of the research carried out at ICST during the period
March 2011 - November 2013. This research work was funded by the Department
of Science and Technology at Ho Chi Minh City. Support provided by the ICST is
gratefully acknowledged.
I am grateful to my former advisor, Associate Professor Nguyen Bich Huy (University of Pedagogy), who introduced me to research and helped me start my graduate studies.
I would also like to thank my friends at ICST and University of Technical Education
Ho Chi Minh City for their kind help.
Finally, to my family, I owe much more than a few words can capture. I thank
them for all their love and support over the years.

Ho Chi Minh City, May 2014
Phan Tu Vuong



Contents

1 Introduction

2 Preliminaries
   2.1 Elements of convex analysis
   2.2 The projection operator and useful lemmas
   2.3 Fixed point problems
   2.4 Variational inequalities
   2.5 Equilibrium problems
       2.5.1 Some particular equilibrium problems
       2.5.2 Solution methods for solving equilibrium problems
   2.6 Previous works
       2.6.1 The hybrid projection method
       2.6.2 The shrinking projection method
       2.6.3 The viscosity approximation method
       2.6.4 The extragradient method

3 Hybrid Projection Extragradient Methods
   3.1 A hybrid extragradient algorithm
   3.2 Extragradient algorithms with linesearches
   3.3 Shrinking projection methods
   3.4 The particular case of variational inequalities
   3.5 Numerical illustrations

4 Extragradient Viscosity Methods
   4.1 An extragradient viscosity algorithm
   4.2 A linesearch extragradient viscosity algorithm
   4.3 Applications to variational inequality problems
   4.4 Numerical illustrations

5 A Mathematical Analysis of Subgradient Viscosity Methods
   5.1 Previous works and motivation
   5.2 A general algorithm
   5.3 Two projected subgradient algorithms
   5.4 Some interesting special cases

6 Conclusions and Future Work


Basic Notation and Terminology

$H$: a real Hilbert space
$\langle \cdot, \cdot \rangle$: the inner product of the space
$\|\cdot\|$: the norm of the space
$\operatorname{dom} f$: the domain of a function $f$
$\partial f$: the subdifferential of a convex function $f$
$\partial C$: the boundary of a set $C$
$\operatorname{Fix}(S)$: the set of fixed points of an operator $S$
$VI(F, C)$: the variational inequality problem whose objective operator is $F$ and whose feasible set is $C$
$\operatorname{Sol} VI(F, C)$: the solution set of $VI(F, C)$
$EP(f, C)$: the equilibrium problem whose objective function is $f$ and whose feasible set is $C$
$\operatorname{Sol} EP(f, C)$: the solution set of $EP(f, C)$



Chapter 1
Introduction
Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$.
Consider a nonempty closed and convex set $C \subset H$ and a bifunction $f : C \times C \to \mathbb{R}$
such that $f(x, x) = 0$ for all $x \in C$. The equilibrium problem, denoted by $EP(f, C)$,
consists of finding $x^* \in C$ such that
\[ f(x^*, y) \ge 0 \quad \text{for all } y \in C. \]
The solution set of $EP(f, C)$ will be denoted by $\operatorname{Sol} EP(f, C)$. To the best of our
knowledge, the term “equilibrium problem” was coined in [16] (see also [66]), but the
problem itself was studied by Ky Fan in [35] (for historical comments, see [36]).
Equilibrium problems have been extensively studied in recent years (see, for example, [13, 15, 16, 28, 32, 45, 51, 52, 57, 60, 61, 63, 65, 67, 74, 75, 81, 82, 85, 89, 97] and the
references therein). It is well known that they include, as particular cases, scalar and
vector optimization problems, saddle-point problems, variational inequalities (monotone or otherwise), Nash equilibrium problems, complementarity problems, fixed point
problems, and other problems of interest in many applications (see, for instance, the
recent books [51, 40]).
Let $F : H \to H$ be a given mapping. If $f(x, y) := \langle Fx, y - x \rangle$ for all $x, y \in C$,
then each solution $x^* \in C$ of the equilibrium problem $EP(f, C)$ is a solution of the
variational inequality
\[ \langle F x^*, y - x^* \rangle \ge 0 \quad \text{for all } y \in C, \]
and vice versa.
Variational inequalities have proven to be important mathematical models in the
study of many real-world problems, in particular in network equilibrium models ranging from
spatial price equilibrium problems and imperfect competitive oligopolistic market equilibrium problems to general financial or traffic equilibrium problems (see, for example,
the recent monographs [70, 34]).


A point $x^* \in C$ is called a fixed point of a mapping $S : H \to H$ if $S x^* = x^*$. The
set of fixed points of $S$ is the set $\operatorname{Fix}(S) := \{x \in H : Sx = x\}$. The computation
of fixed points is important in the study of many problems, including inverse problems
in science and engineering (see, for example, [11]). The construction of fixed points of
nonexpansive mappings (i.e., mappings with $\|Sx - Sy\| \le \|x - y\|$ for all $x, y \in H$) is an important
subject in nonlinear operator theory and its applications [21], in particular in image
recovery and signal processing [20].
In 2007, S. Takahashi and W. Takahashi [93] introduced an iterative scheme based on the
viscosity approximation method for finding a common element of the solution set of
$EP(f, C)$ and the set of fixed points of a nonexpansive mapping $S$ in a real Hilbert space,
and obtained, under appropriate conditions, a strong convergence theorem for
this scheme.
Motivated and inspired by the ongoing efforts to obtain strong convergence theorems for the approximation of common elements of equilibrium problems and fixed point
problems [80, 25, 94, 48, 49, 87, 31, 46, 3], we introduce, as the first contribution of this
thesis, a new algorithm that differs from the existing algorithms in the literature.
Indeed, the method used in most papers for solving the equilibrium problem $EP(f, C)$
is the proximal point method [22, 53, 54, 81, 86, 92]. This method requires solving
at each iteration a nonlinear variational inequality problem, which is generally not easy to
solve [63, 67]. In this thesis, we propose instead to use an extragradient method with
or without the incorporation of a linesearch. At each iteration, one or two convex minimization problems must be solved, depending on whether a linesearch is present.
Working in a Hilbert space, these methods usually generate sequences of iterates that
converge only weakly to a solution of the problem, while it is well known that strongly
convergent algorithms are of fundamental importance for solving problems in infinite-dimensional spaces [10]. To obtain strong convergence from weak convergence
without additional assumptions on the data of the problem, we propose to use
the hybrid projection method [47, 68, 69, 71]. In this method, the solution set of the
problem is outer approximated by a sequence of polyhedral subsets, and the sequence
of iterates converges to the orthogonal projection of a given point onto the solution
set. We report some preliminary numerical tests to show the behavior of the proposed
algorithms. Chapter 3 provides a detailed description of this first contribution of
the thesis.
Chapter 4 and Chapter 5, which constitute the second and third contributions of this
thesis, consider a class of ‘hierarchical optimization’ problems: a variational inequality
problem constrained by a fixed point problem and/or an equilibrium problem. This
class of problems (also known as ‘bilevel problems’) has been studied extensively in the
literature (see, for example, [2, 32] and the references cited therein). Such hierarchical
and equilibrium models are of interest in energy markets, particularly in electricity
and natural gas markets [40].
More precisely, our aim in the second contribution is to study new numerical algorithms for finding a solution of a variational inequality problem whose constraint set
is the set of common elements of the set of fixed points of a mapping and the set of
solutions of an equilibrium problem in a real Hilbert space. The strategy is to use the
extragradient methods with or without linesearch instead of the proximal methods to
solve equilibrium problems. To obtain the strong convergence of the iterates generated
by these algorithms, a regularization procedure is added (the so-called viscosity approximation method; see, for example, [1, 55, 57, 64, 93]) after an extragradient method.
Preliminary numerical tests are presented to show the behavior of the extragradient
methods when a viscosity step is added. For more details, please see Chapter 4.
The third contribution of this thesis, Chapter 5, contains some numerical methods
for finding a solution of a variational inequality problem over the solution set of an
equilibrium problem defined on a subset C of a real Hilbert space. The strategy used
in this chapter is to combine viscosity-type approximations with projected subgradient
techniques to obtain the strong convergence of the iterates to a solution of the problem.
First a general scheme is considered, and afterwards two practical realizations are
studied depending on the characteristics of the feasible set C. When this set is simple,
the projections onto C can be easily computed and all the iterates remain in C. On
the other hand, when C is described by convex inequalities, the projections onto C are
replaced by projections onto half-spaces containing C with the consequence that most
iterates are outside the feasible set C.
This strategy has been recently used in [56], and partly in [12, 13, 85], for finding a
solution of a variational inequality problem over the solution set of another variational
inequality problem defined on C. Here we develop a similar approach but for equilibrium constraints instead of variational inequality constraints. For more details, please
see Chapter 5.
The results presented in this dissertation have been published in
• Journal of Optimization Theory and Applications (SCI) and Vietnam Journal of
Mathematics for Chapter 3 ([98, 90]);
• Optimization (SCIE) for Chapter 4 ([99]);
• Journal of Global Optimization (SCI) for Chapter 5 ([100]).




Chapter 2
Preliminaries
In this chapter, we recall some definitions and fundamental results from convex analysis and the theory of nonlinear mappings in Hilbert spaces. We focus mainly on
the background material needed for our work, especially the existing results for
fixed point problems, variational inequalities and equilibrium problems. The interested
reader can find more comprehensive information on these fields, for example, in [33,
34, 51, 83, 109]. Throughout this work, $H$ denotes a real Hilbert space equipped with
the scalar product $\langle \cdot, \cdot \rangle$ and the associated norm $\|\cdot\|$.

2.1 Elements of convex analysis

Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a real function. The effective domain of $f$ is the set
\[ \operatorname{dom} f = \{x \in H : f(x) < +\infty\}. \]
The function $f$ is said to be proper if its effective domain is nonempty. We say that $f$
is convex if, for any $x, y \in H$ and any $\lambda \in [0, 1]$,
\[ f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y). \]
If this inequality is strict whenever $x \ne y$, the function $f$ is strictly convex.
Moreover, $f$ is said to be strongly convex on $H$ if there exists a constant $\alpha > 0$ such
that, for any $x, y \in H$ and any $\lambda \in [0, 1]$,
\[ f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y) - \frac{1}{2}\, \alpha \lambda (1 - \lambda) \|x - y\|^2. \]
A function $f$ is said to be lower semi-continuous on $H$ if, for each $x \in H$,
\[ x_n \to x \ \text{ as } n \to \infty \quad \Longrightarrow \quad \liminf_{n \to \infty} f(x_n) \ge f(x). \]


When we consider the weak topology in H, the corresponding notion is the weak lower
semi-continuity. Obviously, any weakly lower semi-continuous function is lower semicontinuous. The converse is not true in general, but we have the following valuable
property:
Proposition 2.1.1. ([33], Chap. I, Corollary 2.2)
Any convex function f : H → R ∪ {+∞} is weakly lower semi-continuous if and only
if it is lower semi-continuous.
We now recall the concept of Gâteaux-differentiability.
Definition 2.1.1. Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a real function. The directional derivative
of $f$ at $x$ in the direction $d$, denoted by $f'(x, d)$, is the limit as $\lambda \to 0^+$, if it exists, of
the quotient
\[ \frac{f(x + \lambda d) - f(x)}{\lambda}. \tag{2.1} \]
If there exists $s \in H$ such that $f'(x, d) = \langle d, s \rangle$ for all $d \in H$, then we say that $f$ is
Gâteaux-differentiable (or G-differentiable) at $x$; we call $s$ the Gâteaux-derivative (or
G-derivative) of $f$ at $x$, and we denote it by $\nabla f(x)$.
The uniqueness of the G-derivative follows directly. It is characterized by
\[ \lim_{\lambda \to 0^+} \frac{f(x + \lambda d) - f(x)}{\lambda} = \langle d, \nabla f(x) \rangle \quad \forall d \in H. \]
If $f$ is convex, the quotient (2.1) is a non-decreasing function of $\lambda$, so the limit
always exists.
We next introduce the notion of the subgradient of a convex function, which generalizes Gâteaux differentiability.
Definition 2.1.2. Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a real convex function. An element
$s \in H$ is called a subgradient of $f$ at $x \in H$ if $f(x) \in \mathbb{R}$ and
\[ f(y) \ge f(x) + \langle s, y - x \rangle \quad \forall y \in H. \]
The set of all subgradients of $f$ at $x$ is called the subdifferential of $f$ at $x$ and is denoted
by $\partial f(x)$. If no subgradient exists at $x$, we say that $f$ is not subdifferentiable at $x$, and
we set $\partial f(x) = \emptyset$.
The following proposition gives basic properties of the subdifferential of a lower
semi-continuous and convex function.



Proposition 2.1.2. ([8], Chap. 4, Sect. 3, Theorem 17)
Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, lower semi-continuous function. Then $f$
is subdifferentiable on $\operatorname{int}(\operatorname{dom} f)$ and, for any $x \in \operatorname{int}(\operatorname{dom} f)$, $\partial f(x)$ is bounded, closed
and convex.
The following proposition shows that the subdifferential generalizes the G-derivative.

Proposition 2.1.3. ([33], Chap. I, Proposition 5.3)
Let $f : H \to \mathbb{R} \cup \{+\infty\}$ be a convex function. If $f$ is G-differentiable at $x \in H$, then
$f$ is subdifferentiable at $x$ and $\partial f(x) = \{\nabla f(x)\}$. Conversely, if at a point $x \in H$ the function $f$
is continuous, finite and has only one subgradient, then $f$ is G-differentiable at $x$ and
$\partial f(x) = \{\nabla f(x)\}$.
To end this section, we give some results that generalize, to infinite-dimensional spaces, Theorem 10.8 and Corollary 10.8.1 in [83]. They will be used in the next
chapters.
Proposition 2.1.4. Let $H$ be a real Hilbert space. Consider a sequence $\{\varphi_n\}$ of continuous convex functions from $H$ into $\mathbb{R}$, and a continuous convex function $\varphi$ from $H$
into $\mathbb{R}$. If $\{\varphi_n\}$ converges pointwise to $\varphi$ on $H$, then there exists $\eta > 0$ such that $\{\varphi_n\}$
converges uniformly to $\varphi$ on $\eta B$, where $B$ denotes the closed unit ball in $H$.

Proof. Since, by assumption, the sequence $\{\varphi_n(x)\}$ is bounded for every $x \in H$, it
follows from Theorem 2.2.22 in [109] that the sequence $\{\varphi_n\}$ is locally equi-Lipschitz
on $H$, and thus also locally equi-bounded on $H$. So, there exist $\delta > 0$ and $M > 0$ such
that, for every $x \in \delta B$ and $n \in \mathbb{N}$, we have $|\varphi_n(x)| \le M$.
Now, let $\mathcal{S}$ be the collection of singletons in $H$. Since the sequence $\{\varphi_n\}$ converges
pointwise to $\varphi$, it also $\mathcal{S}$-converges to $\varphi$ (see Lemma 1.4 in [17]). Then the assumptions
of Lemma 1.5 in [17] are satisfied with $W = \{0\}$, and consequently $\{\varphi_n\}$ converges
uniformly to $\varphi$ on $\eta B$ for some $0 < \eta < \delta$.

Corollary 2.1.1. Let $\{\varphi_n\}$ be a sequence of continuous convex functions from $H$ into
$\mathbb{R}$, and let $\varphi$ be a continuous convex function from $H$ into $\mathbb{R}$ such that
\[ \limsup_{n \to \infty} \varphi_n(x) \le \varphi(x) \quad \forall x \in H. \]
Then there exists $\eta > 0$ such that, for every $\varepsilon > 0$, there exists $n_0 \in \mathbb{N}$ such that
\[ \varphi_n(x) \le \varphi(x) + \varepsilon \quad \forall n \ge n_0, \ \forall x \in \eta B, \]
where $B$ denotes the closed unit ball in $H$.


Proof. Let $\psi_n = \max\{\varphi_n, \varphi\}$. Then the sequence $\{\psi_n\}$ of continuous convex functions
converges pointwise to $\varphi$ on $H$. So, by Proposition 2.1.4, there exists $\eta > 0$ such that
$\{\psi_n\}$ converges uniformly to $\varphi$ on $\eta B$. Hence, for every $\varepsilon > 0$, there exists $n_0 \in \mathbb{N}$ such
that
\[ |\psi_n(x) - \varphi(x)| < \varepsilon \quad \forall n \ge n_0, \ \forall x \in \eta B. \]
Since $\varphi_n \le \psi_n$ and $\psi_n(x) - \varphi(x) \ge 0$, we obtain the desired result:
\[ \varphi_n(x) \le \psi_n(x) < \varphi(x) + \varepsilon \quad \forall n \ge n_0, \ \forall x \in \eta B. \]

2.2 The projection operator and useful lemmas

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. For each $x \in H$,
there exists a unique point in $C$ [50], denoted by $P_C x$, such that
\[ \|x - P_C x\| \le \|x - y\| \quad \forall y \in C. \]
Some well-known properties of the metric projection $P_C : H \to C$ are given in the
following theorem.
Theorem 2.2.1. (a) $P_C(\cdot)$ is a nonexpansive mapping, i.e., for all $x, y \in H$,
\[ \|P_C x - P_C y\| \le \|x - y\|. \]
(b) For any $x \in H$ and $y \in C$, it holds that $\langle x - P_C x, y - P_C x \rangle \le 0$. Conversely, if
$u \in C$ and $\langle x - u, y - u \rangle \le 0$ for all $y \in C$, then $u = P_C x$.
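For intuition, $P_C$ and property (b) of Theorem 2.2.1 can be checked numerically whenever $C$ admits a closed-form projection, as for balls and boxes. The following Python sketch is an illustration added to this text (not part of the thesis; all helper names are ours):

```python
import numpy as np

def proj_ball(x, center, r):
    # Projection onto the closed ball B(center, r): rescale the excess part.
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= r else center + (r / nd) * d

def proj_box(x, lo, hi):
    # Projection onto the box {z : lo <= z <= hi}: componentwise clipping.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=4)                 # a point typically outside C
px = proj_box(x, -1.0, 1.0)                  # P_C x for C = [-1, 1]^4
pb = proj_ball(x, np.zeros(4), 1.0)          # P_C x for C the unit ball
for _ in range(100):                         # Theorem 2.2.1(b) on random y in C
    y_box = rng.uniform(-1.0, 1.0, size=4)
    y_ball = proj_ball(rng.normal(size=4), np.zeros(4), 1.0)
    assert np.dot(x - px, y_box - px) <= 1e-12
    assert np.dot(x - pb, y_ball - pb) <= 1e-12
```

Part (a) of the theorem can be tested in the same way by comparing $\|P_C x - P_C y\|$ with $\|x - y\|$ for random pairs.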
Definition 2.2.1. A sequence $\{x_n\} \subset H$ is said to be:
(i) strongly convergent to $x \in H$ if and only if $\lim_{n \to \infty} \|x_n - x\| = 0$;
(ii) weakly convergent to $x \in H$ if and only if $\lim_{n \to \infty} \langle x_n - x, y \rangle = 0$ for every
$y \in H$.
We denote the weak convergence and the strong convergence of a sequence $\{x_n\}$ to $x$
in $H$ by $x_n \rightharpoonup x$ and $x_n \to x$, respectively.
The following lemmas are useful in the next chapters.



Lemma 2.2.1. ([47], Lemma 2.5) Let $K$ be a nonempty closed convex subset of $H$.
Let $u \in H$ and let $\{x_n\}$ be a sequence in $H$. If any weak limit point of $\{x_n\}$ belongs to
$K$ and if $\|x_n - u\| \le \|u - P_K u\|$ for all $n \in \mathbb{N}$, then $x_n \to P_K u$.
Lemma 2.2.2. ([47], Lemma 2.2) For any $t \in [0, 1]$ and for any $x, y \in H$, the following
identity holds:
\[ \|t x + (1 - t) y\|^2 = t \|x\|^2 + (1 - t) \|y\|^2 - t(1 - t) \|x - y\|^2. \]

Lemma 2.2.3. ([27], Lemma 3.1) Let $\{a_n\}$ and $\{b_n\}$ be two nonnegative sequences
satisfying the conditions
\[ a_{n+1} \le a_n + b_n \quad \forall n \ge n_0 \qquad \text{and} \qquad \sum_{n=0}^{\infty} b_n < +\infty, \]
where $n_0$ is some nonnegative integer. Then $\lim_{n \to \infty} a_n$ exists.
Lemma 2.2.4. ([55], Lemma 2.1) Let $\{a_n\}$ and $\{b_n\}$ be two nonnegative sequences
satisfying the conditions
\[ \sum_{n=0}^{\infty} a_n = \infty, \qquad \sum_{n=0}^{\infty} a_n^2 < \infty, \qquad \sum_{n=0}^{\infty} a_n b_n < \infty. \]
Then the following two results hold:
(i) There exists a subsequence $\{b_{n_k}\}$ of $\{b_n\}$ such that $\lim_{k \to \infty} b_{n_k} = 0$.
(ii) If, moreover, $b_{n+1} - b_n \le \theta a_n$ for some $\theta > 0$, then $\lim_{n \to \infty} b_n = 0$.

Lemma 2.2.5. ([55], Lemma 3.1) Let $\{b_n\}$ be a sequence of real numbers that does
not decrease at infinity, in the sense that there exists a subsequence $\{b_{n_j}\}$ of $\{b_n\}$ such
that
\[ b_{n_j} < b_{n_j + 1} \quad \text{for all } j \ge 0. \]
Consider also the sequence of integers $\{\tau(n)\}_{n \ge n_0}$ defined, for all $n \ge n_0$, by
\[ \tau(n) = \max\{k \le n \,:\, b_k < b_{k+1}\}. \]
Then $\{\tau(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying
\[ \lim_{n \to \infty} \tau(n) = \infty, \]
and, for all $n \ge n_0$, the following two estimates hold:
\[ b_{\tau(n)} \le b_{\tau(n)+1} \qquad \text{and} \qquad b_n \le b_{\tau(n)+1}. \]



2.3 Fixed point problems

Let $S : H \to H$ be a mapping. The fixed point problem associated with $S$ is to find
a point $x^* \in H$ such that $x^* = S x^*$. The fixed point set of $S$ is denoted by $\operatorname{Fix}(S)$.

Definition 2.3.1. Let $C$ be a subset of $H$. The mapping $S$ is said to be
(i) nonexpansive on $C$ if
\[ \|Sx - Sy\| \le \|x - y\| \quad \forall x, y \in C; \]
(ii) quasi-nonexpansive on $C$ if $\operatorname{Fix}(S) \ne \emptyset$ and
\[ \|Sx - x^*\| \le \|x - x^*\| \quad \text{for all } (x, x^*) \in C \times \operatorname{Fix}(S); \]
(iii) $\xi$-strict pseudo-contractive on $C$ if there exists $\xi \in [0, 1)$ such that
\[ \|Sx - Sy\|^2 \le \|x - y\|^2 + \xi \|(I - S)x - (I - S)y\|^2 \quad \text{for all } x, y \in C, \]
where $I$ denotes the identity mapping;
(iv) $\beta$-demicontractive (or $\beta$-quasi-strict pseudo-contractive) on $C$ if $\operatorname{Fix}(S) \ne \emptyset$ and
there exists $\beta \in [0, 1)$ such that
\[ \|Sx - x^*\|^2 \le \|x - x^*\|^2 + \beta \|x - Sx\|^2 \quad \text{for all } (x, x^*) \in C \times \operatorname{Fix}(S); \]
(v) demiclosed at zero on $C$ if, for every sequence $\{x_n\}$ contained in $C$,
\[ x_n \rightharpoonup x \ \text{ and } \ Sx_n - x_n \to 0 \quad \Longrightarrow \quad Sx = x. \]

It is well known that if $C$ is a nonempty bounded closed convex subset of $H$ and if $S$
is nonexpansive, then $\operatorname{Fix}(S)$ is a nonempty closed convex subset of $C$. Moreover, we
have:
Proposition 2.3.1. [47] Let $C$ be a nonempty closed convex subset of $H$ and let $S : C \to C$ be a mapping.
1) If $S$ is $\beta$-demicontractive, then the fixed point set $\operatorname{Fix}(S)$ of $S$ is closed and
convex.
2) If $S$ is $\beta$-strict pseudo-contractive, then $S$ is $\beta$-demicontractive
and the mapping $I - S$ is demiclosed at zero.


Lemma 2.3.1. [55, Remark 4.2] Let $\beta \in [0, 1)$, let $C$ be a nonempty closed convex subset
of $H$, and let $S : C \to C$ be a $\beta$-demicontractive mapping such that $\operatorname{Fix}(S) \ne \emptyset$. Then
$S_w = (1 - w)I + wS$ is a quasi-nonexpansive mapping over $C$ for every $w \in [0, 1 - \beta]$.
Furthermore,
\[ \|S_w x - x^*\|^2 \le \|x - x^*\|^2 - w(1 - \beta - w) \|Sx - x\|^2 \quad \text{for all } (x, x^*) \in H \times \operatorname{Fix}(S). \]

Many methods used in the literature for solving the fixed point problem are derived
from Mann's iterative algorithm: given $x_0 \in C$, compute, for all $n \in \mathbb{N}$,
\[ x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S x_n, \tag{2.2} \]
where the sequence $\{\alpha_n\}$ lies in the interval $[0, 1]$. This method has been extensively investigated for nonexpansive mappings. In particular, it was proven that if the sequence
$\{\alpha_n\}$ is chosen such that $\sum_{n=0}^{\infty} \alpha_n (1 - \alpha_n) = +\infty$, then the sequence $\{x_n\}$ generated by
(2.2) converges weakly to a point of $\operatorname{Fix}(S)$. The main drawback of Mann's iteration
is that, in general, it generates only a weakly convergent sequence.
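As a small illustration (ours, not from the thesis), take for $S$ a plane rotation: it is nonexpansive (indeed an isometry) with $\operatorname{Fix}(S) = \{0\}$, the plain Picard iteration $x_{n+1} = S x_n$ merely circles around the fixed point, but the averaged iteration (2.2) converges:

```python
import numpy as np

theta = 0.5                                   # rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = lambda x: R @ x                           # nonexpansive, Fix(S) = {0}

x = np.array([2.0, 1.0])
for n in range(300):
    alpha = 0.5          # constant alpha_n: sum alpha_n (1 - alpha_n) = +infinity
    x = alpha * x + (1 - alpha) * S(x)        # Mann iteration (2.2)
print(np.linalg.norm(x))                      # ~ 1e-4: the iterates approach Fix(S)
# With alpha = 0 (Picard iteration), ||x_n|| would remain equal to ||x_0||.
```

Of course, in $\mathbb{R}^2$ weak and strong convergence coincide, so a finite-dimensional example cannot exhibit the weak-only convergence discussed above.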
Recently, several modifications of Mann's iteration have been proposed to obtain the
strong convergence of the iterates. A first one is the CQ projection method introduced
by Nakajo and Takahashi [71], defined as follows: given $x_0 \in C$, compute, for all $n \in \mathbb{N}$,
\[ \begin{cases} y_n = \alpha_n x_n + (1 - \alpha_n) S x_n, \\ C_n = \{z \in C : \|y_n - z\| \le \|x_n - z\|\}, \\ Q_n = \{z \in C : \langle x_n - z, x_0 - x_n \rangle \ge 0\}, \\ x_{n+1} = P_{C_n \cap Q_n} x_0, \end{cases} \tag{2.3} \]
where $\{\alpha_n\} \subset [0, 1]$.
It was proved that if the sequence $\{\alpha_n\}$ is bounded above away from one, then the
sequence $\{x_n\}$ generated by (2.3) converges strongly to $P_{\operatorname{Fix}(S)} x_0$.
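To make (2.3) concrete, note that when $C = \mathbb{R}^d$ both $C_n$ and $Q_n$ are half-spaces, so $x_{n+1}$ is the projection of $x_0$ onto the intersection of two half-spaces, computable by enumerating the possible active constraints. The sketch below is our illustration (the helper names are ours, and $C = \mathbb{R}^2$ is assumed to keep the projection elementary); it runs the CQ method for the rotation map used above:

```python
import numpy as np

def proj_two_halfspaces(x0, a1, b1, a2, b2, tol=1e-10):
    # Project x0 onto {z : <a1,z> <= b1} ∩ {z : <a2,z> <= b2} by enumerating
    # active sets; the projection is the feasible candidate nearest to x0.
    cands = [x0]
    for a, b in ((a1, b1), (a2, b2)):              # one constraint active
        if np.dot(a, a) > tol:
            cands.append(x0 - (np.dot(a, x0) - b) / np.dot(a, a) * a)
    A = np.array([a1, a2])
    lam, *_ = np.linalg.lstsq(A @ A.T, np.array([b1, b2]) - A @ x0, rcond=None)
    cands.append(x0 + A.T @ lam)                   # both constraints active
    feas = [z for z in cands
            if np.dot(a1, z) <= b1 + tol and np.dot(a2, z) <= b2 + tol]
    return min(feas, key=lambda z: np.linalg.norm(z - x0)) if feas else cands[-1]

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = lambda x: R @ x                                # nonexpansive, Fix(S) = {0}

x0 = np.array([2.0, 1.0]); xn = x0.copy(); alpha = 0.5
for _ in range(200):
    yn = alpha * xn + (1 - alpha) * S(xn)
    # C_n: ||yn - z|| <= ||xn - z||  <=>  <xn - yn, z> <= (||xn||^2 - ||yn||^2)/2
    a1, b1 = xn - yn, (xn @ xn - yn @ yn) / 2.0
    # Q_n: <xn - z, x0 - xn> >= 0    <=>  <x0 - xn, z> <= <x0 - xn, xn>
    a2, b2 = x0 - xn, (x0 - xn) @ xn
    xn = proj_two_halfspaces(x0, a1, b1, a2, b2)
print(np.linalg.norm(xn))                          # approaches 0 = P_{Fix(S)} x0
```

The fallback in the return statement of the helper is only a numerical safeguard for nearly degenerate constraints.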

A second modification of Mann's iterative method is the shrinking projection method
devised by Takahashi, Takeuchi and Kubota [95]. The projection of $x_0$ onto the intersection $C_n \cap Q_n$ used in the CQ projection method is replaced by the projection onto
a closed convex set which is reduced after each iteration and which contains $\operatorname{Fix}(S)$.
More precisely, the iteration in the shrinking projection method is the following:
given $x_0 \in C_0 := C$, compute, for all $n \in \mathbb{N}$,
\[ \begin{cases} y_n = \alpha_n x_n + (1 - \alpha_n) S x_n, \\ C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \end{cases} \tag{2.4} \]
where $\{\alpha_n\} \subset [0, 1]$. The strong convergence of the sequence $\{x_n\}$ to $P_{\operatorname{Fix}(S)} x_0$ is obtained under the same condition on the parameters $\{\alpha_n\}$ as in the CQ projection method.
Another modification of Mann's iterative method is the viscosity approximation
method [9, 18, 64, 105]. The strong convergence of the iterates is obtained by combining
the nonexpansive mapping $S$ with a contraction mapping $g$ over $C$. The sequence
generated by the viscosity approximation method is given by the iteration
\[ x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) S x_n, \]
where $\{\alpha_n\} \subset (0, 1)$ is a slowly vanishing sequence, i.e., $\alpha_n \to 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$. The
sequence $\{x_n\}$ converges strongly to a fixed point of $S$. Furthermore, when $g(x) = u$
for every $x \in C$, the sequence $\{x_n\}$ converges strongly to the projection of $u$ onto the
fixed point set $\operatorname{Fix}(S)$.
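As an illustration of the last statement (our sketch, not from the thesis), take $S = P_C$ with $C$ the closed unit ball, so that $S$ is nonexpansive with $\operatorname{Fix}(S) = C$, and take $g(x) \equiv u$; the iterates should then approach $P_C u = u/\|u\|$ when $\|u\| > 1$:

```python
import numpy as np

def S(x):                               # S = P_C with C the closed unit ball:
    nx = np.linalg.norm(x)              # nonexpansive, Fix(S) = C
    return x if nx <= 1.0 else x / nx

u = np.array([3.0, -1.0])               # g(x) = u: a (trivial) contraction
x = np.array([2.0, 1.0])
for n in range(20000):
    alpha = 1.0 / (n + 2)               # alpha_n -> 0 and sum alpha_n = +infinity
    x = alpha * u + (1 - alpha) * S(x)  # viscosity approximation step
print(x, u / np.linalg.norm(u))         # x approaches P_{Fix(S)} u = u / ||u||
```

With this constant $g$ one recovers the classical Halpern iteration; note the slow approach typical of vanishing step sizes.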

2.4 Variational inequalities

Let $C$ be a nonempty closed convex subset of $H$ and let $F : H \to H$ be a mapping.
The problem of finding $x^* \in C$ such that
\[ \langle F x^*, y - x^* \rangle \ge 0 \quad \forall y \in C \tag{2.5} \]
is called a variational inequality (VI, for short). We denote problem (2.5) by
$VI(F, C)$ and the corresponding solution set by $\operatorname{Sol} VI(F, C)$. One often considers VIs
with additional properties imposed on the mapping $F$, such as continuity, strong
monotonicity, monotonicity or pseudomonotonicity of $F$. Let us recall some well-known
definitions.
Definition 2.4.1. The mapping $F : H \to H$ is said to be
(a) strongly monotone on $C$ if there exists $\gamma > 0$ such that
\[ \langle Fx - Fy, x - y \rangle \ge \gamma \|x - y\|^2 \quad \forall x, y \in C; \]
(b) monotone on $C$ if
\[ \langle Fx - Fy, x - y \rangle \ge 0 \quad \forall x, y \in C; \]
(c) pseudomonotone on $C$ if
\[ \langle Fx, y - x \rangle \ge 0 \ \Longrightarrow \ \langle Fy, y - x \rangle \ge 0 \quad \text{for all } x, y \in C; \]
(d) $L$-Lipschitz continuous on $C$ (for some $L > 0$) if
\[ \|Fx - Fy\| \le L \|x - y\| \quad \text{for all } x, y \in C. \]
The implications (a) $\Rightarrow$ (b) and (b) $\Rightarrow$ (c) are obvious.
Proposition 2.4.1. [42] Let $F : H \to H$ be an $L$-Lipschitz continuous and $\gamma$-strongly
monotone mapping and let $C$ be a nonempty closed convex subset of $H$. Then the variational
inequality problem $VI(F, C)$ has a unique solution.
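Under the assumptions of Proposition 2.4.1, the unique solution can moreover be computed by the classical fixed-point iteration $x \mapsto P_C(x - \lambda F x)$, which is a contraction for $0 < \lambda < 2\gamma/L^2$. A minimal Python sketch (ours; the operator, the set $C$ and all names are illustrative assumptions):

```python
import numpy as np

N = np.array([[0.0, 1.0, 0.0],          # skew-symmetric part: <Nx, x> = 0
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
A = np.eye(3) + N                       # <Ax, x> = ||x||^2: gamma = 1, non-symmetric
b = np.array([1.0, -2.0, 0.5])
F = lambda x: A @ x + b                 # gamma-strongly monotone and L-Lipschitz
gamma, L = 1.0, np.linalg.norm(A, 2)    # here L = sqrt(2)

proj_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1, 1]^3
lam = gamma / L**2                          # any lam in (0, 2*gamma/L^2) contracts
x = np.zeros(3)
for _ in range(200):
    x = proj_C(x - lam * F(x))              # fixed-point iteration for VI(F, C)

rng = np.random.default_rng(0)
ys = rng.uniform(-1.0, 1.0, size=(2000, 3))
print(min((y - x) @ F(x) for y in ys))      # >= 0 up to rounding: (2.5) on samples
```

The fixed points of $x \mapsto P_C(x - \lambda Fx)$ are exactly the solutions of $VI(F, C)$, by Theorem 2.2.1(b).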

2.5 Equilibrium problems

Let $C$ be a nonempty closed convex subset of $H$ and let $f$ be a function from $H \times H$
to $\mathbb{R}$ such that $f(x, x) = 0$ for all $x \in C$. The equilibrium problem (EP, for short)
associated with $f$ and $C$, in the sense of [35] or [16] (see also [66]), is denoted by $EP(f, C)$
and consists in finding a point $x^* \in C$ such that
\[ f(x^*, y) \ge 0 \quad \text{for every } y \in C. \]
The set of solutions of $EP(f, C)$ is denoted by $\operatorname{Sol} EP(f, C)$. For an excellent survey on
the existence of solutions and on methods for solving equilibrium problems, we refer the reader to [15].

Definition 2.5.1. Let $C$ be a subset of a Hilbert space $H$. A bifunction $f : H \times H \to \mathbb{R}$
is called
(i) $\gamma$-strongly monotone on $C$ (with $\gamma > 0$) if
\[ f(x, y) + f(y, x) \le -\gamma \|x - y\|^2 \quad \forall x, y \in C; \]
(ii) strictly monotone on $C$ if
\[ f(x, y) + f(y, x) < 0 \quad \forall x, y \in C, \ x \ne y; \]
(iii) monotone on $C$ if
\[ f(x, y) + f(y, x) \le 0 \quad \forall x, y \in C; \]
(iv) pseudomonotone on $C$ if
\[ f(x, y) \ge 0 \ \Longrightarrow \ f(y, x) \le 0 \quad \forall x, y \in C. \]
It is obvious that (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv).

Let $\varepsilon > 0$ and let $f : H \times H \to \mathbb{R}$ be a bifunction satisfying the two properties:
$f(x, x) = 0$ and $f(x, \cdot)$ is convex for every $x \in H$. Let also $x \in H$. The $\varepsilon$-subdifferential
of $f(x, \cdot)$ at $x$ is denoted by $\partial_2^\varepsilon f(x, x)$ and defined by
\[ \partial_2^\varepsilon f(x, x) = \{u \in H : f(x, y) - f(x, x) \ge \langle u, y - x \rangle - \varepsilon \ \forall y \in H\} = \{u \in H : f(x, y) \ge \langle u, y - x \rangle - \varepsilon \ \forall y \in H\}. \tag{2.6} \]
Let us mention that $\partial_2^\varepsilon f(x, x)$ with $\varepsilon > 0$ and $x \in H$ is an extension, or enlargement, of
$\partial_2 f(x, x)$ (the Fenchel subdifferential of $f(x, \cdot)$ at $x$ with respect to the second variable)
in the sense that $\partial_2 f(x, x) = \partial_2^0 f(x, x) \subset \partial_2^\varepsilon f(x, x)$. The use of elements of $\partial_2^\varepsilon f$ allows
an extra degree of freedom compared with those of $\partial_2 f$.
The following two properties of $\partial_2 f$ will be used in the next chapters.
Lemma 2.5.1. Assume that $f(x, x) = 0$ and $f(x, \cdot)$ is convex on $H$ for every $x \in H$.
Then:
(i) For every $x^* \in \operatorname{Sol} EP(f, C)$, there exists $\bar{g} \in \partial_2 f(x^*, x^*)$ such that
$\langle \bar{g}, z - x^* \rangle \ge 0$ for every $z \in C$.
(ii) Let $\varepsilon > 0$. If, in addition, $f$ is monotone on $H$, then for every $x, y \in H$,
$g \in \partial_2^\varepsilon f(x, x)$, and $\hat{g} \in \partial_2 f(y, y)$, one has
\[ \langle g - \hat{g}, y - x \rangle \le \varepsilon. \]
Proof. (i) Let $x^* \in \operatorname{Sol} EP(f, C)$. Then $x^* \in C$ and $f(x^*, y) \ge 0$ for every $y \in C$. Hence
$x^*$ is a minimum of the convex function $f(x^*, \cdot)$ over $C$ and thus, by the optimality
condition,
\[ 0 \in \partial_2 f(x^*, x^*) + N_C(x^*), \]
where $N_C(x^*)$ denotes the normal cone to $C$ at $x^*$. Consequently, using the definition
of the normal cone, we obtain that there exists $\bar{g} \in \partial_2 f(x^*, x^*)$ such that $\langle \bar{g}, z - x^* \rangle \ge 0$
for every $z \in C$.
(ii) Let $x, y \in H$. Since $g \in \partial_2^\varepsilon f(x, x)$ and $\hat{g} \in \partial_2 f(y, y)$, we have successively
\[ f(x, y) \ge f(x, x) + \langle g, y - x \rangle - \varepsilon, \qquad f(y, x) \ge f(y, y) + \langle \hat{g}, x - y \rangle. \]
Adding these two inequalities and noting that, by assumption, $f(x, x) = f(y, y) = 0$, we
get
\[ \langle g - \hat{g}, y - x \rangle \le f(x, y) + f(y, x) + \varepsilon \le \varepsilon, \]
where we have used the monotonicity of $f$ to conclude.

2.5.1 Some particular equilibrium problems

In this subsection, we briefly show how some of the main mathematical models can
be formulated as equilibrium problems.
(a) Optimization problems
Let $C$ be a nonempty closed convex subset of $H$ and let $\varphi : C \to \mathbb{R}$ be a convex
function. The optimization problem is
\[ \min \varphi(x) \quad \text{subject to } x \in C. \]
Let the function $f$ be defined by $f(x, y) = \varphi(y) - \varphi(x)$ for every $x, y \in C$. Then
the optimization problem can be rewritten as the equilibrium problem $EP(f, C)$.
(b) Pareto optimization problems
Given $m$ real-valued functions $\varphi_i : \mathbb{R}^n \to \mathbb{R}$, a weak Pareto global minimum of the
vector function $\varphi = (\varphi_1, \dots, \varphi_m)$ over a nonempty closed convex set $C \subset \mathbb{R}^n$ is any
$x^* \in C$ such that for any $y \in C$ there exists an index $i$ such that $\varphi_i(y) - \varphi_i(x^*) \ge 0$.
Finding a weak Pareto global minimum amounts to solving $EP(f, C)$ with
\[ f(x, y) = \max_{i=1,\dots,m} \{\varphi_i(y) - \varphi_i(x)\}. \]

(c) Saddle point problems
Given two closed sets $C_1 \subset \mathbb{R}^{n_1}$ and $C_2 \subset \mathbb{R}^{n_2}$, a saddle point of a function
$L : C_1 \times C_2 \to \mathbb{R}$ is any $x^* = (x_1^*, x_2^*) \in C_1 \times C_2$ such that
\[ L(x_1^*, y_2) \le L(x_1^*, x_2^*) \le L(y_1, x_2^*) \]
holds for any $y = (y_1, y_2) \in C_1 \times C_2$. Finding a saddle point of $L(\cdot, \cdot)$ amounts
to solving $EP(f, C)$ with $C = C_1 \times C_2$ and
\[ f((x_1, x_2), (y_1, y_2)) = L(y_1, x_2) - L(x_1, y_2). \]
(d) Complementarity problems and systems of equations
Given a nonempty closed convex cone $C \subset \mathbb{R}^n$ and a mapping $F : \mathbb{R}^n \to \mathbb{R}^n$, the
complementarity problem asks to determine a point $x^* \in C$ such that $x^* \perp F x^*$
and $\langle F x^*, y \rangle \ge 0$ for any $y \in C$, i.e., $F x^* \in C^*$, where $C^*$ denotes the dual cone
of $C$. The system of equations $Fx = 0$ is a special complementarity problem with
$C = \mathbb{R}^n$. Solving the complementarity problem amounts to solving $EP(f, C)$
with
\[ f(x, y) = \langle Fx, y - x \rangle. \]
(e) Variational inequality problems
Given a nonempty closed convex set $C \subset \mathbb{R}^n$ and a mapping $F : \mathbb{R}^n \to \mathbb{R}^n$, the
Stampacchia variational inequality problem asks to determine a point $x^* \in C$
such that $\langle F x^*, y - x^* \rangle \ge 0$ for any $y \in C$. Solving this problem amounts to
solving $EP(f, C)$ with
\[ f(x, y) = \langle Fx, y - x \rangle. \]
If $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is a set-valued mapping with compact values, then finding
$x^* \in C$ and $u^* \in F x^*$ such that $\langle u^*, y - x^* \rangle \ge 0$ for any $y \in C$ amounts to solving
$EP(f, C)$ with
\[ f(x, y) = \max_{u \in Fx} \langle u, y - x \rangle. \]
Given two mappings $F, g : \mathbb{R}^n \to \mathbb{R}^n$ and a function $h : \mathbb{R}^n \to (-\infty, +\infty]$, another
kind of generalized variational inequality problem asks to find a point $x^* \in \mathbb{R}^n$
such that
\[ \langle F x^*, y - g(x^*) \rangle + h(y) - h(g(x^*)) \ge 0 \quad \text{for every } y \in \mathbb{R}^n. \]
Solving this problem amounts to solving $EP(f, C)$ with $C = \mathbb{R}^n$
and
\[ f(x, y) = \langle Fx, y - g(x) \rangle + h(y) - h(g(x)). \]
(f) Fixed point problems
Given a closed set $C \subset H$, a fixed point of a mapping $S : C \to C$ is any $x^*$ such
that $S x^* = x^*$. Finding a fixed point amounts to solving $EP(f, C)$ with
\[ f(x, y) = \langle x - Sx, y - x \rangle. \]
If $S : C \rightrightarrows C$ is a set-valued mapping with compact values, then finding $x^* \in C$
such that $x^* \in S x^*$ amounts to solving $EP(f, C)$ with
\[ f(x, y) = \max_{u \in Sx} \langle x - u, y - x \rangle. \]

(g) Nash equilibrium problems
Assume there are $N$ players, each controlling the variables $x_i \in \mathbb{R}^{n_i}$. Each player
$i$ has a set of possible strategies $K_i \subset \mathbb{R}^{n_i}$. Denote by $x$ the overall vector of all
variables, $x = (x_1, \dots, x_N)$, and set $K = K_1 \times \cdots \times K_N$. The aim of player $i$, given
the other players' strategies, is to choose an $x_i \in K_i$ that minimizes the
loss function $f_i : K \to \mathbb{R}$. A solution of the Nash equilibrium problem (NEP) is
a feasible point $x^* \in K$ such that, for all $i$,
\[ f_i(x^*) \le f_i(x^*(y_i)) \quad \forall y_i \in K_i, \]
where $x^*(y_i)$ denotes the vector obtained from $x^*$ by replacing $x_i^*$ with $y_i$. Finding
a Nash equilibrium amounts to solving $EP(f, K)$ with the function $f$ defined as
\[ f(x, y) = \sum_{i=1}^{N} \left[ f_i(x(y_i)) - f_i(x) \right]. \]
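To illustrate reduction (g) on a toy instance (ours, not from the thesis), consider two players with scalar strategies $x_i \in K_i = [0, 1]$ and losses $f_i(x) = (x_i - c_i x_j)^2$, $j \ne i$. For $c_1 c_2 < 1$ the unique Nash equilibrium is $x^* = (0, 0)$, and one can check numerically that the associated bifunction satisfies $f(x^*, y) \ge 0$ on $K$:

```python
import numpy as np

c = (0.5, 0.8)                       # reaction coefficients with c1 * c2 < 1

def f(x, y):
    # f(x, y) = sum_i [ f_i(x with x_i replaced by y_i) - f_i(x) ]
    loss = lambda i, xi, xj: (xi - c[i] * xj) ** 2
    return ((loss(0, y[0], x[1]) - loss(0, x[0], x[1])) +
            (loss(1, y[1], x[0]) - loss(1, x[1], x[0])))

x_star = np.zeros(2)                 # the Nash equilibrium x* = (0, 0)
rng = np.random.default_rng(0)
ys = rng.uniform(0.0, 1.0, size=(1000, 2))
print(min(f(x_star, y) for y in ys))    # >= 0: x* solves EP(f, K)
```

Here $f(x^*, y) = y_1^2 + y_2^2 \ge 0$ exactly, confirming that $x^*$ solves $EP(f, K)$.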

2.5.2 Solution methods for solving equilibrium problems

For our purposes in the next chapters, we recall in this subsection some well-known solution methods
for equilibrium problems. Other interesting solution methods for
equilibrium problems can be found in [15]. To begin with, let us recall the two basic
assumptions on the function $f$ associated with the equilibrium problem $EP(f, C)$:
(i) $f(x, x) = 0$ for all $x \in C$;
(ii) $f(x, \cdot)$ is convex, subdifferentiable and lower semicontinuous for all $x \in C$.

(a) Fixed point method
This method was introduced by Mastroeni [61] for solving strongly monotone
equilibrium problems. It can be expressed as follows: given $x_n \in C$, find $x_{n+1} \in C$ as
the solution of the strongly convex optimization problem
\[ \min_{y \in C} \left\{ \lambda_n f(x_n, y) + \frac{1}{2} \|y - x_n\|^2 \right\}. \]
The sequence $\{x_n\}$ converges to the unique solution of $EP(f, C)$ provided that
there exist $c_1, c_2 > 0$ such that
\[ f(x, y) + f(y, z) \ge f(x, z) - c_1 \|y - x\|^2 - c_2 \|z - y\|^2 \tag{2.7} \]
holds for every $x, y, z \in C$ and that $\lambda_n = \lambda \in \left( 0, \min\left\{ \frac{1}{2c_1}, \frac{1}{2c_2} \right\} \right)$. The rate of
convergence of this method is linear [67], i.e., there exists $q \in (0, 1)$ such that
\[ \|x_{n+1} - x^*\| \le q \|x_n - x^*\| \quad \forall n \in \mathbb{N}, \]
where $x^*$ is the unique solution of $EP(f, C)$.
It is worth noting that the uniqueness of the solution follows from the strong
monotonicity assumption, which is rather restrictive. Actually, convergence can
also be achieved if $f$ is pseudomonotone and $f(x, \cdot)$ is Lipschitz continuous on $C$
uniformly in $x$, i.e., there exists $L > 0$ such that
\[ |f(x, y) - f(x, z)| \le L \|y - z\| \quad \forall x, y, z \in C, \]
and the sequence $\{\lambda_n\}$ is chosen such that the series $\sum_{n=0}^{\infty} \lambda_n^2$ is finite; see [75].
(b) Extragradient methods
Introduced by Korpelevich [52] for finding saddle points, the extragradient method
has been extended to variational inequalities. In the corresponding
method, two projections are computed per iteration: given $x_n \in C$, compute
\[ y_n = P_C(x_n - \lambda_n F(x_n)) \qquad \text{and} \qquad x_{n+1} = P_C(x_n - \lambda_n F(y_n)), \tag{2.8} \]
where $P_C$ denotes the orthogonal projection onto $C$. Let us recall that this
method is convergent when $F$ is pseudomonotone and Lipschitz continuous (a numerical sketch of iteration (2.8) is given at the end of this subsection). Recently, the extragradient method has been generalized in [97] for solving equilibrium problems in $\mathbb{R}^n$. In this case, the two steps (2.8) become:
given $x_n \in C$, find successively $y_n$ and $x_{n+1}$ as follows:
\[ \begin{cases} y_n = \operatorname{arg\,min}_{y \in C} \left\{ \lambda_n f(x_n, y) + \frac{1}{2} \|y - x_n\|^2 \right\}, \\ x_{n+1} = \operatorname{arg\,min}_{y \in C} \left\{ \lambda_n f(y_n, y) + \frac{1}{2} \|y - x_n\|^2 \right\}, \end{cases} \]
where $\{\lambda_n\} \subset (0, 1]$. This method has been proven to converge to some $x^* \in \operatorname{Sol} EP(f, C)$ when $f$ is pseudomonotone and satisfies the Lipschitz-type property
(2.7). However, this latter condition is strong and difficult to check. So, in
[97], the authors replaced the computation of $x_{n+1}$ by an Armijo backtracking
linesearch, followed by a projection onto a hyperplane. More precisely, given
$x_n \in C$, the iterates $y_n$, $z_n$, and $x_{n+1}$ are calculated as follows:
\[ \begin{cases} y_n = \operatorname{arg\,min}_{y \in C} \left\{ \lambda_n f(x_n, y) + \frac{1}{2} \|y - x_n\|^2 \right\}, \\ z_n = (1 - \gamma^m) x_n + \gamma^m y_n, \ \text{where } m \text{ is the smallest nonnegative integer such that} \\ \qquad f(z_n, x_n) - f(z_n, y_n) \ge \dfrac{\alpha}{2 \lambda_n} \|x_n - y_n\|^2, \\ x_{n+1} = P_C(x_n - \sigma_n g_n), \ \text{with } g_n \in \partial_2 f(z_n, x_n) = \partial [f(z_n, \cdot)](x_n) \ \text{and} \ \sigma_n = f(z_n, x_n) / \|g_n\|^2, \end{cases} \]
where the parameters $\alpha$, $\gamma$ and $\lambda_n$ are chosen as explained in [97]. In this way,
the convergence is obtained without the Lipschitz-type condition (2.7).

(c) Proximal point method
The basic idea of the proximal point method for $EP(f, C)$ comes from the proximal point method for optimization problems. It consists in finding the next
iterate $x_{n+1} \in C$ such that
\[ f(x_{n+1}, y) + \frac{1}{r_n} \langle y - x_{n+1}, x_{n+1} - x_n \rangle \ge 0 \quad \text{for every } y \in C, \tag{2.9} \]
where the sequence $\{r_n\}$ is positive. Under some conditions on $f$, it
is proven that the sequence $\{x_n\}$ is well defined and converges to a solution
of $EP(f, C)$ [65]. We mention, however, that finding
$x_{n+1}$ satisfying (2.9) is not an easy task. Recently, Mordukhovich et al. [63] suggested solving this
inequality by using descent methods based on gap functions, while in [67] another
subproblem must be solved to obtain $x_{n+1}$ satisfying (2.9).
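As announced after (2.8), here is a minimal numerical sketch (ours, with illustrative data) of the extragradient method for a variational inequality, i.e., for $f(x, y) = \langle Fx, y - x \rangle$, in which case both argmin subproblems of the extragradient scheme reduce to the two projections in (2.8); the same subproblem structure also underlies the fixed point method of (a). The operator below is monotone and Lipschitz but not strongly monotone, a case where the one-projection iteration $x \mapsto P_C(x - \lambda Fx)$ fails while the extragradient iteration converges:

```python
import numpy as np

N = np.array([[0.0, 1.0],                  # F x = N x, N skew-symmetric:
              [-1.0, 0.0]])                # monotone, L = 1, not strongly monotone
F = lambda x: N @ x
proj_C = lambda x: np.clip(x, -2.0, 2.0)   # C = [-2, 2]^2; the solution set is {0}

lam = 0.5                                  # step size with lam < 1/L
x = np.array([1.5, 1.0])
for _ in range(150):
    y = proj_C(x - lam * F(x))             # prediction: first projection of (2.8)
    x = proj_C(x - lam * F(y))             # correction: second projection of (2.8)
print(np.linalg.norm(x))                   # -> 0, the unique solution of VI(F, C)

z = np.array([1.5, 1.0])                   # for comparison: one-projection iteration
for _ in range(150):
    z = proj_C(z - lam * F(z))
print(np.linalg.norm(z))                   # spirals outward and stays away from 0
```

For a general bifunction $f$, the two argmin subproblems are strongly convex programs and can be handled by any convex solver; they reduce to projections precisely in the variational inequality case.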

2.6 Previous works

For finding a common solution of an equilibrium problem and a fixed point problem,
the strategy is to combine a method for solving equilibrium problems with a method
for solving fixed point problems. Most of the methods for solving equilibrium problems
in the literature are based on the proximal point method. These methods require
the function $f$ to satisfy the following conditions:
(E1) $f(x, x) = 0$ for all $x \in C$.
(E2) $f$ is monotone, i.e., $f(x, y) + f(y, x) \le 0$ for all $x, y \in C$.
(E3) $\limsup_{t \to 0^+} f(tz + (1 - t)x, y) \le f(x, y)$ for all $x, y, z \in C$.
(E4) $f(x, \cdot)$ is convex, subdifferentiable and lower semicontinuous for all $x \in C$.