
Optimization
Vol. 59, No. 1, January 2010, 63–76

One step from DC optimization to DC mixed variational inequalities

Le Dung Muu^a* and Tran Dinh Quoc^b

^a Hanoi Institute of Mathematics, VAST, 18 Hoang Quoc Viet road, Cau Giay district, Hanoi, Vietnam; ^b Hanoi University of Science, 334-Nguyen Trai road, Thanh Xuan district, Hanoi, Vietnam

(Received 10 April 2008; final version received 15 March 2009)
We apply the proximal point method to mixed variational inequalities by using DC decompositions of the cost function. An estimation for the iterative sequence is given and then applied to prove the convergence of the obtained sequence to a stationary point. A linear convergence rate is achieved when the cost function is strongly convex. For the nonconvex case, global algorithms are proposed to search for a global equilibrium point. A Cournot–Nash oligopolistic market model with concave cost functions, which motivates our consideration, is presented.

Keywords: mixed variational inequality; splitting proximal point method; DC decomposition; local and global equilibria; Cournot–Nash model

1. Introduction
Let $\emptyset \neq C \subseteq \mathbb{R}^n$ be a closed convex subset, $F$ be a mapping from $\mathbb{R}^n$ to $C$ and $\varphi$ be a real-valued (not necessarily convex) function defined on $\mathbb{R}^n$. We consider the following mixed variational inequality problem (MVIP):

Find $x^* \in C$ such that $F(x^*)^T(y - x^*) + \varphi(y) - \varphi(x^*) \ge 0$ for all $y \in C$.   (1)

We call such a point $x^*$ a global solution, in contrast to a local solution defined by a point $x^* \in C$ that satisfies

$F(x^*)^T(y - x^*) + \varphi(y) - \varphi(x^*) \ge 0, \quad \forall y \in C \cap U$,   (2)

where $U$ is an open neighbourhood of $x^*$. These points are sometimes referred to as local and global equilibrium points, respectively. Note that when $\varphi$ is convex on $C$, a local solution is a global one; when $\varphi$ is not convex, a local solution may not be a global one.
MVIPs of the form (1) are extensively studied in the literature. Results on existence, stability and solution approaches when $\varphi$ is convex have been obtained in many research papers (see, e.g. [2,5,7,8,10,12,19] and the references quoted therein). However, when $\varphi$ is nonconvex, these results might no longer be preserved.


The proximal point method was first introduced by Martinet [9] for variational inequalities and then extended by Rockafellar [18] to finding a zero point of a maximal monotone operator. Sun et al. [20] applied the proximal point method to DC optimization. It is observed that, with a suitable DC decomposition, the DC algorithm introduced by Pham [13] for DC optimization problems becomes a proximal point algorithm. Recently, DC optimization has been successfully applied to many practical problems (see [1,14–16] and the references therein).

Our study is motivated by the well-known Cournot–Nash oligopolistic market model, together with the observation that the cost function is not always linear or convex, but can be concave when the amount of production increases. In this article, we further apply the proximal point method to the mixed variational inequality (1) by using a DC decomposition of the cost function $\varphi$. The DC decomposition $\varphi = g - h$ allows us to develop a splitting proximal point algorithm to find a stationary point of (1). This splitting algorithm is useful when $g$ is a convex function such that the resulting convex subproblem is easy to minimize, since the resolvent is defined by using only the subgradient of $g$.
The rest of the article is organized as follows. In Section 2, a splitting proximal point method for mixed variational inequalities with a DC cost function is proposed. An estimation for the iterative sequence with respect to a stationary point is given. It is then used to prove convergence to a stationary point when the cost function happens to be convex. A linear convergence rate is achieved when the cost function is strongly convex. In order to handle nonconvex cases, global algorithms are presented in Section 3 to search for a global solution. We close the article with the Cournot–Nash oligopolistic market model, which gives evidence for our consideration.

2. The proximal point method to DC mixed variational inequality
In this section, firstly, we investigate the properties regarding local and global solutions to MVIP (1) where $\varphi$ is a DC function. Next, we extend the proximal point method to find a stationary point of this problem. Finally, we prove convergence results when $\varphi$ happens to be convex and $F$ is strongly monotone.


2.1. Condition for equilibrium
As usual, for problem (1), we call $C$ the feasible domain, $F$ the cost operator and $\varphi$ the cost function. Since $C$ is closed convex, it is easy to see that any local solution to (1) is a global one provided that $\varphi$ is convex on $C$. Motivated by this fact, we name problem (1) a convex mixed variational inequality when $\varphi$ is convex, in contrast to nonconvex mixed variational inequalities where the cost function is not convex.
Let us denote

$\mathcal{N}_C := \{(x, U) : x \in C,\ U \text{ is a neighbourhood of } x\}$,

and define the mapping $S : \mathcal{N}_C \to 2^C$ and the function $m : \mathcal{N}_C \to \mathbb{R}$ by taking


$S(x, U) := \operatorname{argmin}\{F(x)^T(y - x) + \varphi(y) : y \in C \cap U\}$,   (3)

$m(x, U) := \min\{F(x)^T(y - x) + \varphi(y) - \varphi(x) : y \in C \cap U\}$,   (4)

respectively. As usual, we refer to $m(x, U)$ as a local gap function for problem (1). The following proposition gives necessary and sufficient conditions for a point to be a solution (local or global) to (1).

PROPOSITION 2.1  Suppose that $S(x, U) \neq \emptyset$ for every $(x, U) \in \mathcal{N}_C$. Then the following statements are equivalent:

(a) $x^*$ is a local solution to (1);
(b) $x^* \in C$ and $x^* \in S(x^*, U)$;
(c) $x^* \in C$ and $m(x^*, U) = 0$.

Proof  We first prove that (a) is equivalent to (b). Suppose $x^* \in C$ and $x^* \in S(x^*, U)$. Then

$0 = F(x^*)^T(x^* - x^*) + \varphi(x^*) - \varphi(x^*) \le F(x^*)^T(y - x^*) + \varphi(y) - \varphi(x^*), \quad \forall y \in C \cap U$.

Hence, $x^*$ is a local solution to problem (1). Conversely, if

$F(x^*)^T(y - x^*) + \varphi(y) - \varphi(x^*) \ge 0, \quad \forall y \in C \cap U$,   (5)

then it is clear that $x^* \in S(x^*, U)$.
It is observed that $m(x, U) \le 0$ for every $x \in C \cap U$. Thus $x^* \in C \cap U$ and $m(x^*, U) = 0$ if and only if $F(x^*)^T(y - x^*) + \varphi(y) - \varphi(x^*) \ge 0$ for all $y \in C \cap U$, which means that (a) and (c) are equivalent.  □
Clearly, if, in Proposition 2.1, $U$ contains $C$ then $x^*$ is a global solution to (1). Unlike a convex mixed variational inequality (including convex optimization and variational inequalities), a DC mixed variational inequality may fail to have a solution even when all the related functions are continuous and the feasible domain is compact. For example, if we take $C := [-1, 1] \subset \mathbb{R}$, $F(x) = x$ and $\varphi(x) = -x^2$ (a concave function), then problem (1) has no solution. Conditions for the existence of solutions of MVIPs lacking convexity have been considered in some recent papers (see, e.g. [7,12]). In this article, however, we focus only on solution approaches to (1).
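To make the nonexistence example concrete, the following short numerical check (an added sketch, not part of the original text) evaluates the global gap $m(x) = \min_{y \in C}\{F(x)(y - x) + \varphi(y) - \varphi(x)\}$ on a grid for $C = [-1, 1]$, $F(x) = x$, $\varphi(x) = -x^2$, and confirms that the gap stays strictly negative, so no point of $C$ can be a global solution.

```python
import numpy as np

# Example from the text: C = [-1, 1], F(x) = x, phi(x) = -x^2 (concave).
F = lambda x: x
phi = lambda x: -x ** 2

ys = np.linspace(-1.0, 1.0, 2001)           # dense grid over C

def gap(x):
    # global gap m(x) = min_{y in C} F(x)*(y - x) + phi(y) - phi(x); m(x) = 0 iff x solves (1)
    return np.min(F(x) * (ys - x) + phi(ys) - phi(x))

xs = np.linspace(-1.0, 1.0, 201)
print("largest gap value over C:", max(gap(x) for x in xs))   # about -1: strictly negative everywhere
```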
We denote by $\Gamma_C$ the set of proper, lower semicontinuous, subdifferentiable convex functions on $C$ and suppose that both convex functions $g$ and $h$ belong to $\Gamma_C$. Moreover, since $\varphi(x) = (g(x) + g_1(x)) - (h(x) + g_1(x))$ for an arbitrary function $g_1 \in \Gamma_C$ (for instance, a strongly convex one such as $g_1(x) = \|x\|^2$), we may assume that both $g$ and $h$ are strongly convex on $C$.
By Proposition 2.1, $x$ is a solution to (1) if and only if $x$ solves the following optimization problem:

$\min\{F(x)^T(y - x) + g(y) - h(y) : y \in C\}$.

Motivated by this fact, we can borrow the concept of a stationary point from optimization to MVIP (1).



Definition 2.2  A point $x \in C$ is called a stationary point to problem (1) if

$0 \in F(x) + \partial g(x) - \partial h(x) + N_C(x)$,   (6)

where

$N_C(x) := \{w : w^T(y - x) \le 0,\ \forall y \in C\}$

denotes the (outward) normal cone of $C$ at $x \in C$, and $\partial g(x)$ and $\partial h(x)$ are the subgradients at $x$ of $g$ and $h$, respectively.
Since $N_C$ is a cone, for every $c > 0$, inclusion (6) is equivalent to

$0 \in c\{F(x) + \partial g(x) - \partial h(x)\} + N_C(x)$.   (7)

Let us define $g_1(x) := g(x) + \delta_C(x)$, where $\partial\delta_C(x)$ is the subgradient of the indicator function $\delta_C$ of $C$ at $x$. Then, applying the well-known Moreau–Rockafellar theorem, we have $\partial g_1(x) = \partial g(x) + \partial\delta_C(x)$. Thus, by definition, $x$ is a stationary point if and only if

$0 \in c\big(F(x) + \partial g(x) - \partial h(x)\big) + \partial\delta_C(x)$,

where $c > 0$ is referred to as a regularization parameter in the algorithm to be described below.
From Proposition 2.1 it follows, as in optimization, that every local solution to problem (1) is a stationary point. Since both $g$ and $\delta_C$ are proper, convex and closed, so is the function $g_1$. Thus $\partial g_1(x) = \partial g(x) + N_C(x)$ for every $x \in C$.
PROPOSITION 2.3  A necessary and sufficient condition for $x$ to be a stationary point to problem (1) is that

$x \in \big(I + c\,\partial g_1\big)^{-1}\big(x - cF(x) + c\,\partial h(x)\big)$,   (8)

where $c > 0$ and $I$ stands for the identity mapping.

Proof  Since $g_1$ is proper, closed and convex, $(I + c\,\partial g_1)^{-1}$ is single valued and defined everywhere [18]. Hence, $x$ satisfies (8) if and only if $x - cF(x) + cv(x) \in (I + c\,\partial g_1)(x)$ for some $v(x) \in \partial h(x)$. Since $N_C(x)$ is a cone and $\partial g_1(x) = \partial g(x) + \partial\delta_C(x) = \partial g(x) + N_C(x)$, the inclusion $x - cF(x) + cv(x) \in (I + c\,\partial g_1)(x)$ is equivalent to $0 \in F(x) + \partial g(x) - \partial h(x) + N_C(x)$, which proves (8).  □

2.2. The algorithm and its convergence
If we denote the right-hand side of (8) by $Z(x)$, then inclusion (8) becomes $x \in Z(x)$. Proposition 2.3 suggests that finding a stationary point of (1) amounts to finding a fixed point of the splitting proximal point mapping $Z$. According to the framework of proximal point methods, we can construct an iterative sequence as follows:

• Take an arbitrary $x^0 \in C$ and set $k := 0$.
• For each $k = 0, 1, \ldots$, for a given $x^k$, compute $x^{k+1}$ by taking

$x^{k+1} = \big(I + c_k\,\partial g_1\big)^{-1}\big(x^k - c_k F(x^k) + c_k v(x^k)\big)$,   (9)

where $v(x^k) \in \partial h(x^k)$.


If we denote $y^k := x^k - c_k F(x^k) + c_k v(x^k)$, then finding $x^{k+1}$ reduces to solving the strongly convex programming problem

$\min\Big\{ g(x) + \frac{1}{2c_k}\|x - y^k\|^2 : x \in C \Big\}$.   (10)

Indeed, by the well-known optimality condition for convex programming, $x^{k+1}$ is the optimal solution to the convex problem (10) if and only if

$0 \in \partial g(x^{k+1}) + \frac{1}{c_k}\big(x^{k+1} - y^k\big) + N_C(x^{k+1})$.

Note that when $F \equiv 0$ and $h \equiv 0$, the process (9) becomes the well-known proximal point algorithm for convex programming problems, whereas when only $F \equiv 0$ it is the proximal method for DC optimization [14,18,20].
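As an illustration of how (9)–(10) can be implemented, the sketch below (not taken from the paper; the data $F$, $g$, $h$, the box $C$ and the solver choice are assumptions made for the example) performs the splitting proximal iteration by solving the strongly convex subproblem (10) with SciPy at each step.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (assumed, not from the paper): C = [0, 2]^2, F affine and monotone,
# DC decomposition phi = g - h with g(x) = ||x||^2 and h(x) = 0.5*||x||^2.
M = np.array([[2.0, 0.5], [0.5, 1.0]])      # symmetric positive definite, so F is cocoercive
q = np.array([-1.0, -2.0])
F = lambda x: M @ x + q
g = lambda x: float(np.dot(x, x))           # strongly convex part (modulus 2)
grad_h = lambda x: x                        # h(x) = 0.5*||x||^2, so grad h is 1-Lipschitz
bounds = [(0.0, 2.0), (0.0, 2.0)]           # the box C

def prox_step(xk, ck):
    """One step of (9): minimize g(x) + (1/(2*ck)) * ||x - yk||^2 over C, cf. subproblem (10)."""
    yk = xk - ck * F(xk) + ck * grad_h(xk)
    obj = lambda x: g(x) + np.dot(x - yk, x - yk) / (2.0 * ck)
    return minimize(obj, xk, method="L-BFGS-B", bounds=bounds).x

x, c = np.array([2.0, 2.0]), 0.5            # starting point x^0 in C and a fixed c_k
for k in range(200):
    x_new = prox_step(x, c)
    if np.linalg.norm(x_new - x) < 1e-8:    # x is (numerically) a fixed point of the mapping Z
        break
    x = x_new
print("approximate stationary point:", x)
```

With this illustrative choice, the strong convexity modulus of $g$ (equal to 2) dominates the Lipschitz constant of $\nabla h$ (equal to 1), which is precisely the situation covered by Corollary 2.5 below.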
In order to prove the convergence of the proximal point method defined by (9), we recall the following well-known definitions (see, e.g. [7,17,21]). For any mapping $T : C \to 2^{\mathbb{R}^n}$:

• $T$ is said to be monotone on $C$ if

$(u - v)^T(x - y) \ge 0, \quad \forall x, y \in C,\ \forall u \in T(x),\ v \in T(y)$;

• $T$ is said to be maximal monotone if its graph is not contained properly in the graph of another monotone mapping;

• $T$ is said to be strongly monotone with modulus $\gamma > 0$ (shortly, $\gamma$-strongly monotone) on $C$ if

$(u - v)^T(x - y) \ge \gamma\|x - y\|^2, \quad \forall x, y \in C,\ \forall u \in T(x),\ v \in T(y)$;

• $T$ is said to be cocoercive with modulus $\delta > 0$ (shortly, $\delta$-cocoercive) on $C$ if

$(u - v)^T(x - y) \ge \delta\|u - v\|^2, \quad \forall x, y \in C,\ \forall u \in T(x),\ v \in T(y)$.   (11)

Clearly, if $T$ is single valued and $\delta$-cocoercive, then it is $(1/\delta)$-Lipschitz.
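As a quick illustration of the last two notions (an added sketch, not from the paper), an affine map $F(x) = Mx$ with $M$ symmetric positive definite is strongly monotone with modulus $\lambda_{\min}(M)$ and cocoercive with modulus $1/\lambda_{\max}(M)$; the check below verifies both defining inequalities on random pairs of points.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
M = B @ B.T + np.eye(3)                     # symmetric positive definite
lam_min, lam_max = np.linalg.eigvalsh(M)[[0, -1]]

gam, delta = lam_min, 1.0 / lam_max         # strong monotonicity / cocoercivity moduli of F(x) = Mx
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    u, v = M @ x, M @ y
    lhs = (u - v) @ (x - y)
    assert lhs >= gam * np.dot(x - y, x - y) - 1e-9       # gamma-strong monotonicity
    assert lhs >= delta * np.dot(u - v, u - v) - 1e-9     # delta-cocoercivity
print("both inequalities hold on all sampled pairs")
```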
PROPOSITION 2.4  Suppose that the set of stationary points $S^*$ of problem (1) is nonempty, that $F$ is $\delta$-cocoercive, $g$ is strongly convex on $C$ with modulus $\eta > 0$, and $h$ is $L$-Lipschitz differentiable on $C$. Then, for each $x^* \in S^*$, we have

$\rho_k\|x^k - x^*\|^2 - \tau_k\|x^{k+1} - x^*\|^2 \ge c_k(2\delta - c_k)\|F(x^k) - F(x^*)\|^2$,   (12)

where $\rho_k = 1 + c_k L t$, $\tau_k = 1 + 2c_k\eta - c_k L/t$ and $t > 0$.
Proof  First we note that, since $g$ is proper, closed and convex and $C$ is nonempty, closed and convex, the mapping $(I + c_k\,\partial g_1)^{-1}$ is single valued and defined everywhere for every $c_k > 0$ [18]. Thus the sequence $\{x^k\}$ constructed by (9) is well-defined. It follows from (9) that

$x^{k+1} = x^k - c_k F(x^k) + c_k v^k - c_k z^{k+1}$,   (13)

where $v^k = \nabla h(x^k)$ and $z^{k+1} \in \partial g_1(x^{k+1}) = \partial g(x^{k+1}) + N_C(x^{k+1})$. For simplicity of notation, in the remainder of the proof we write $F^k$ for $F(x^k)$ and $F^*$ for $F(x^*)$. By definition, if $x^*$ is a stationary point to MVIP (1), then $0 = z^* + F(x^*) - v^*$, where $z^* \in \partial g_1(x^*)$ and $v^* = \nabla h(x^*)$. The cocoercivity of $F$ on $C$ implies that

$0 \le (F^k - F^*)^T(x^k - x^*) - \delta\|F^k - F^*\|^2 = (F^k - F^*)^T(x^k - x^* - c_k F^k + c_k F^*) - D_k$,

where $D_k = (\delta - c_k)\|F^k - F^*\|^2$. Since $F^* = v^* - z^*$ and $x^{k+1} = x^k - c_k F^k + c_k v^k - c_k z^{k+1}$, we obtain from the last inequality that

$0 \le (F^k - F^*)^T(x^k - c_k F^k + c_k v^k - c_k z^{k+1} - x^*) - c_k(F^k - F^*)^T(v^k - v^* - z^{k+1} + z^*) - D_k$
$= (F^k - F^*)^T(x^{k+1} - x^*) - c_k(F^k - F^*)^T(v^k - v^* - z^{k+1} + z^*) - D_k$.   (14)

On the other hand, since $g$ is strongly convex with modulus $\eta > 0$, it is obvious that $\partial g$ is strongly monotone with modulus $\eta$, which implies that the mapping $\partial g_1 := \partial g + N_C$ is also strongly monotone with modulus $\eta$. Thus, from $z^{k+1} \in \partial g_1(x^{k+1})$, we can write

$(x^{k+1} - x^*)^T(z^{k+1} - z^*) - \eta\|x^{k+1} - x^*\|^2 \ge 0$.   (15)

Adding (14) to (15), and then using $z^* + F(x^*) = v^*$ and (13), we get

$0 \le (F^k - F^* + z^{k+1} - z^*)^T(x^{k+1} - x^*) - c_k(F^k - F^*)^T(v^k - v^* - z^{k+1} + z^*) - \eta\|x^{k+1} - x^*\|^2 - D_k$
$= (x^{k+1} - x^*)^T\Big(v^k + \frac{(x^k - x^*) - (x^{k+1} - x^*)}{c_k} - v^*\Big) - c_k(F^k - F^*)^T(v^k - v^* - z^{k+1} + z^*) - \eta\|x^{k+1} - x^*\|^2 - D_k$.

Now, we denote by $\hat{x}^k := x^k - x^*$, $\hat{x}^{k+1} := x^{k+1} - x^*$, $\hat{v}^k := v^k - v^*$, $\hat{z}^{k+1} := z^{k+1} - z^*$ and $\hat{F}^k := F^k - F^*$. We can write the last inequality as

$2c_k(\hat{x}^{k+1})^T\hat{v}^k - 2(\hat{x}^{k+1})^T(\hat{x}^{k+1} - \hat{x}^k) - 2c_k^2(\hat{F}^k)^T(\hat{v}^k - \hat{z}^{k+1}) - 2c_k\eta\|\hat{x}^{k+1}\|^2 - 2c_k D_k \ge 0$.   (16)

From (13), we have

$\|\hat{x}^{k+1} - \hat{x}^k\|^2 = c_k^2\|\hat{F}^k\|^2 + c_k^2\|\hat{z}^{k+1} - \hat{v}^k\|^2 - 2c_k^2(\hat{F}^k)^T(\hat{v}^k - \hat{z}^{k+1})$.

Then, using the identity

$2(\hat{x}^{k+1})^T(\hat{x}^{k+1} - \hat{x}^k) = \|\hat{x}^{k+1} - \hat{x}^k\|^2 + \|\hat{x}^{k+1}\|^2 - \|\hat{x}^k\|^2$,

we obtain from (16) that

$2c_k(\hat{x}^{k+1})^T\hat{v}^k - (1 + 2c_k\eta)\|\hat{x}^{k+1}\|^2 + \|\hat{x}^k\|^2 - c_k^2\|\hat{F}^k\|^2 - c_k^2\|\hat{v}^k - \hat{z}^{k+1}\|^2 - 2c_k D_k \ge 0$.   (17)

Since $\nabla h$ is $L$-Lipschitz continuous, we have $\|\hat{v}^k\| \le L\|\hat{x}^k\|$. Using the elementary inequality $2ab \le t a^2 + b^2/t$, it is easy to show that

$2(\hat{x}^{k+1})^T\hat{v}^k \le 2\|\hat{x}^{k+1}\|\,\|\hat{v}^k\| \le 2L\|\hat{x}^k\|\,\|\hat{x}^{k+1}\| \le L\Big(t\|\hat{x}^k\|^2 + \frac{\|\hat{x}^{k+1}\|^2}{t}\Big), \quad \forall t > 0$.


Thus, replacing $2(\hat{x}^{k+1})^T\hat{v}^k$ in (17) by $L\big(t\|\hat{x}^k\|^2 + \|\hat{x}^{k+1}\|^2/t\big)$ and using the definition of $D_k$, we obtain

$\rho_k\|\hat{x}^k\|^2 - \tau_k\|\hat{x}^{k+1}\|^2 \ge c_k(2\delta - c_k)\|\hat{F}^k\|^2 + c_k^2\|\hat{v}^k - \hat{z}^{k+1}\|^2$,   (18)

where $\rho_k = 1 + c_k L t$, $\tau_k = 1 + 2c_k\eta - c_k L/t$ and $t > 0$. The proposition is proved.  □

The following corollary proves the convergence of the proximal point sequence $\{x^k\}$ by using estimation (12).

COROLLARY 2.5  Under the assumptions of Proposition 2.4, we suppose further that $\eta \ge L$. Then the sequence $\{x^k\}$ generated by (9) converges to a stationary point of problem (1). Moreover, if either $\eta > L$ or $F$ is $\gamma$-strongly monotone, then the sequence $\{x^k\}$ converges linearly to a stationary point of (1).
Proof  Suppose $\eta \ge L$. Let $m$ and $M$ be two real numbers such that $0 < m \le c_k \le M < 2\delta$. If we choose $t = 1$, it follows from (18) that

$\|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2 \ge \frac{m(2\delta - M)}{1 + ML}\|F(x^k) - F(x^*)\|^2 \ge 0$.

Hence, $\{x^k\}$ is bounded and the sequence $\{\|x^k - x^*\|^2\}$ is convergent, since it is nonincreasing and bounded from below by 0. Moreover, this inequality implies that $\lim_{k\to\infty} F(x^k) = F(x^*)$. Note that, by (13), we have

$0 = \lim_{k\to\infty}\frac{x^{k+1} - x^k}{c_k} = \lim_{k\to\infty}\big(v^k - z^{k+1} - F^k\big) = \lim_{k\to\infty}\big(v^k - z^{k+1}\big) - F^*$,

which, by the fact $-F^* = z^* - v^*$, implies that $\lim_{k\to\infty}(v^k - v^* - z^{k+1} + z^*) = 0$.

Let $x^\infty$ be a limit point of the bounded sequence $\{x^k\}$ and $\{x^k : k \in K\}$ be a subsequence converging to $x^\infty$. Since $F$ is cocoercive, it is continuous. Thus $\lim_{k\to\infty} F(x^k) = F(x^*)$ implies $F(x^\infty) = F(x^*)$. By the assumption that $\nabla h$ is $L$-Lipschitz, we have $\|v^k - v^*\| \le L\|x^k - x^*\|$. Thus $\{v^k\}$ is bounded too. By taking again a subsequence, if necessary, we may assume that the subsequence $\{v^k : k \in K\}$ converges to $v^\infty$. Using the continuity of $\nabla h$, we have $v^\infty = \nabla h(x^\infty)$. Now, we show that $v^\infty - F(x^\infty) \in \partial g_1(x^\infty)$. To this end, let $z \in \partial g_1(x)$. Then it follows from the strong monotonicity of $\partial g_1$ that

$0 \le \eta\|x^k - x\|^2 \le (z^k - z)^T(x^k - x) = (z^k - z)^T(x^k - x^\infty) + (z^k - z)^T(x^\infty - x)$
$= (z^k - z)^T(x^k - x^\infty) + (z^k - v^{k-1} - z)^T(x^\infty - x) + (v^{k-1})^T(x^\infty - x)$.

Note that, from $\lim_{k\to\infty}(v^k - v^* - z^{k+1} + z^*) = 0$, we have

$z^{k+1} - v^k \to z^* - v^* = -F(x^*)$.

Then, taking the limit on the left-hand side of the last inequality, we obtain

$(v^\infty - F(x^*) - z)^T(x^\infty - x) \ge 0$,

which, by the maximal monotonicity of $\partial g_1$ [17], implies that $v^\infty - F(x^*) \in \partial g_1(x^\infty)$. Since $F(x^\infty) = F(x^*)$, we have $v^\infty - F(x^\infty) \in \partial g_1(x^\infty)$. Noting that $v^\infty = \nabla h(x^\infty)$, we have $0 \in \partial g_1(x^\infty) + F(x^\infty) - \nabla h(x^\infty)$, which means that $x^\infty$ is a stationary point of problem (1). Substituting $x^*$ in (12) by $x^\infty$ and observing that $\{\|x^k - x^\infty\|\}$ is convergent, we conclude that the whole sequence $\{x^k\}$ converges to $x^\infty$, because it has a subsequence converging to $x^\infty$.
Next, it follows from (12) that $\rho_k\|x^k - x^*\|^2 \ge \tau_k\|x^{k+1} - x^*\|^2$. Thus, if $L < \eta$ then $0 < r_k := \sqrt{\rho_k/\tau_k} < 1$ for every $k \ge 0$. Hence, $\|x^{k+1} - x^*\| \le r_k\|x^k - x^*\|$, which shows that the sequence $\{x^k\}$ converges linearly to $x^*$.

If $F$ is strongly monotone with modulus $\gamma > 0$ then

$\|F(x^k) - F(x^*)\|\,\|x^k - x^*\| \ge \big(F(x^k) - F(x^*)\big)^T(x^k - x^*) \ge \gamma\|x^k - x^*\|^2$.

Consequently,

$\|F(x^k) - F(x^*)\|^2 \ge \gamma^2\|x^k - x^*\|^2$.

Substituting this inequality into (12), after a simple rearrangement, we get

$\big[\rho_k - \gamma^2 c_k(2\delta - c_k)\big]\|x^k - x^*\|^2 \ge \tau_k\|x^{k+1} - x^*\|^2$.

Using the assumption $0 < m \le c_k \le M < 2\delta$ and taking $t = 1$, we obtain from the last inequality that

$\big[1 + c_k L - \gamma^2 c_k(2\delta - c_k)\big]\|x^k - x^*\|^2 \ge \big(1 + 2c_k\eta - c_k L\big)\|x^{k+1} - x^*\|^2$.

Since $\eta + \gamma^2(\delta - M/2) > L$, we have $1 + c_k L - \gamma^2 c_k(2\delta - c_k) < 1 + 2c_k\eta - c_k L$, and it is easy to see that $\{x^k\}$ converges linearly to $x^*$.  □
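The linear rate in Corollary 2.5 can be observed numerically. The sketch below (illustrative data chosen for the example, not from the paper) runs iteration (9) on a one-dimensional instance with $C = [0, 2]$, $F(x) = 2x - 1$, $g(x) = x^2$ and $h(x) = 0.5x^2$, for which subproblem (10) has a closed form, and prints the ratios $|x^{k+1} - x^*|/|x^k - x^*|$, which stay below 1.

```python
import numpy as np

# Illustrative 1-D instance (assumed data): C = [0, 2], F(x) = 2x - 1 (strongly monotone,
# cocoercive with modulus 1/2), g(x) = x^2 (strongly convex, modulus 2), h(x) = 0.5*x^2
# (grad h is 1-Lipschitz), and a fixed regularization parameter c = 0.5 < 2*(1/2) = 1.
F, grad_h, c = (lambda x: 2 * x - 1), (lambda x: x), 0.5
prox = lambda y: np.clip(y / (1 + 2 * c), 0.0, 2.0)   # argmin_{x in [0,2]} x^2 + (x - y)^2 / (2c)

x, x_star = 2.0, 1.0 / 3.0                            # x* solves 0 = F(x) + g'(x) - h'(x) inside C
for k in range(10):
    x_next = prox(x - c * F(x) + c * grad_h(x))       # iteration (9)
    print(k, abs(x_next - x_star) / abs(x - x_star))  # contraction ratio, stays below 1
    x = x_next
```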

3. Global solution methods
In this section, we propose solution methods for finding a global solution of MVIP (1), where the cost function $\varphi$ may not be convex but is a DC function. The first method uses the convex envelope of the cost function to convert a nonconvex MVIP into a convex one. The second method is devoted to the case when the cost function $\varphi$ is concave. In this case, a global solution is attained at an extreme point of the feasible domain. This fact suggests that outer approximation techniques, which have been widely used in global optimization, can be applied to nonconvex mixed variational inequalities.
First, we recall that the convex envelope of a function $\psi$ on a convex set $C$ is a convex function $\operatorname{conv}\psi$ on $C$ that satisfies the following conditions:

(i) $\operatorname{conv}\psi(x) \le \psi(x)$ for every $x \in C$;
(ii) if $l$ is convex on $C$ and $l(x) \le \psi(x)$ for all $x \in C$, then $l(x) \le \operatorname{conv}\psi(x)$ for all $x \in C$.

We need the following lemma.
LEMMA 3.1 [6]  Let $\psi := l + \psi_1$ with $l$ being an affine function. Suppose that $C$ is a polyhedral convex set. Then the convex envelope $\operatorname{conv}\psi$ of $\psi$ on $C$ is $l + \operatorname{conv}\psi_1$, where $\operatorname{conv}\psi_1$ denotes the convex envelope of $\psi_1$ on $C$.

Using Lemma 3.1 we can prove the following proposition, which states that problem (1) is equivalent to a convex MVIP whenever it admits a solution.


PROPOSITION 3.2  Suppose that problem (1) is solvable. Then a point $x$, for which $\operatorname{conv}\varphi(x) = \varphi(x)$, is a global solution to problem (1) if and only if it is a solution of the following convex MVIP:

Find $x \in C$ such that $F(x)^T(y - x) + \operatorname{conv}\varphi(y) - \operatorname{conv}\varphi(x) \ge 0$ for all $y \in C$.   (19)

Proof  For simplicity of notation, let us denote the bifunction of problem (1) by $f$, i.e.

$f(x, y) := F(x)^T(y - x) + \varphi(y) - \varphi(x)$,

and the bifunction of (19) by $\tilde{f}$, i.e.

$\tilde{f}(x, y) := F(x)^T(y - x) + \operatorname{conv}\varphi(y) - \operatorname{conv}\varphi(x)$.

It follows from our assumption that

$\tilde{f}(x, y) = F(x)^T(y - x) + \operatorname{conv}\varphi(y) - \varphi(x)$.

Since, for each fixed $x$, the function $F(x)^T(\cdot - x)$ is affine, by Lemma 3.1, $\tilde{f}(x, \cdot)$ is the convex envelope of $f(x, \cdot)$ on $C$. Suppose that $x^*$ is a global solution to (1). Then $f(x^*, y) \ge 0$ for every $y \in C$. In virtue of Proposition 2.1, we have

$m(x^*) = \min\{f(x^*, y) : y \in C\} = 0$.

Thus $m(x^*) \le f(x^*, y)$ for every $y \in C$. Since the constant function $m(x^*)$ (with respect to the variable $y$) is convex, we have $\tilde{f}(x^*, y) \ge m(x^*) = 0$ for every $y \in C$, which means that $x^*$ is a global solution to the problem (19).

Conversely, if $x^*$ is a solution to (19) then, again by Proposition 2.1, one has

$0 = \tilde{m}(x^*) := \min_{y \in C} \tilde{f}(x^*, y)$.

Since $\tilde{f}(x^*, y) \le f(x^*, y)$ and $m(x^*) = \min_{y \in C} f(x^*, y)$, it follows that $0 = \tilde{m}(x^*) \le m(x^*)$. This inequality, together with $m(x^*) \le 0$, implies $m(x^*) = 0$. Again by Proposition 2.1, we conclude that $x^*$ is a global solution to (1).  □
Proposition 3.2 suggests that instead of solving the nonconvex MVIP (1) we can solve the convex MVIP (19). However, computing the convex envelope of a function on a convex set is, in general, difficult except in special cases. As a particular case, we consider an important special class of $\varphi$, namely $\varphi = g - h$ with $g$ affine and $-h$ concave. In this case, the convex envelope of $f(x, \cdot)$ is

$\operatorname{conv} f(x, y) = F(x)^T(y - x) + g(y) + \operatorname{conv}(-h(y)) - \big(g(x) - h(x)\big)$.

Since $-h$ is concave, its convex envelope on a polyhedral convex set can be computed with reasonable effort, even explicitly when, for instance, the polyhedron is a simplex [6].
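For instance, on an interval (the one-dimensional simplex) the convex envelope of a concave function is simply the chord joining its endpoint values; the sketch below (an added illustration with an assumed $h$) computes $\operatorname{conv}(-h)$ this way and checks property (i) of the envelope on a grid.

```python
import numpy as np

# Convex envelope of a concave function on the 1-D simplex [a, b]:
# it is the affine interpolation of the endpoint values (the chord).
a, b = -1.0, 2.0
minus_h = lambda x: -(x ** 2)                          # -h is concave (h(x) = x^2 here, an assumption)

def conv_env(x):
    t = (x - a) / (b - a)
    return (1 - t) * minus_h(a) + t * minus_h(b)       # chord through (a, -h(a)) and (b, -h(b))

xs = np.linspace(a, b, 1001)
assert np.all(conv_env(xs) <= minus_h(xs) + 1e-12)     # property (i): conv psi <= psi on C
print("envelope / -h at the midpoint:", conv_env(0.5 * (a + b)), minus_h(0.5 * (a + b)))
```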
Another global solution approach to problem (1), where $\varphi$ is a concave function, is outer approximation, which has been widely used in global optimization. This approach is based upon the fact that, in our specific case, a global solution to problem (1) is attained at an extreme point of the feasible set $C$. In fact, by Proposition 2.1, $x$ is a global solution to MVIP (1) if and only if it is a solution to the optimization problem

$\min_y\big\{F(x)^T(y - x) + \varphi(y) - \varphi(x) : y \in C\big\}$.   (20)

Since $\varphi$ is concave, this mathematical programme attains its optimal solution at an extreme point of $C$ whenever it admits a solution. Thus, for each fixed $x$, problem (20) has a solution at an extreme point of $C$.
Suppose that $C$ is a compact convex set. Then we can describe an outer approximation procedure for globally solving MVIP (1). As in global optimization, the procedure starts from a simple polyhedral convex set $S_0$ containing the feasible domain $C$, and a nested sequence $\{S_k\}$ of polyhedral convex sets is constructed such that

$S_0 \supset S_1 \supset \cdots \supset S_k \supset \cdots \supseteq C$.

At each iteration $k$, we solve the following relaxed problem:

Find $v^k \in S_k$ such that $F(v^k)^T(y - v^k) + \varphi(y) - \varphi(v^k) \ge 0$ for all $y \in S_k$.   (21)

If it happens that $v^k \in C$, we are done. Otherwise, we continue the process by constructing a new polyhedron $S_{k+1}$ that contains $C$ but does not contain $v^k$, and solve the new relaxed problem. Namely, we can describe the algorithm in detail as follows.
Algorithm 1

Initialization. Take a simple polyhedral convex set $S_0$, for example a simplex, containing $C$. Let $V(S_0)$ denote the set of vertices of $S_0$.

Iteration k (k = 0, 1, ...). At the beginning of iteration $k$, we have a polyhedral convex set $S_k$ whose vertex set $V_k$ is known. For each $v \in V_k$, solve the following optimization problem:

$m_{S_k}(v) := \min\big\{F(v)^T(y - v) + \varphi(y) - \varphi(v) : y \in V_k\big\}$.   (22)

Let $v^k \in V_k$ be such that $m_{S_k}(v^k) := \max_{v \in V_k} m_{S_k}(v)$. If $v^k \in C$ then terminate: $v^k$ is a global solution to (1). Otherwise, construct a cutting hyperplane $l_k(y)$ such that $l_k(v^k) > 0$ and $l_k(y) \le 0$ for every $y \in C$, and then define

$S_{k+1} := \big\{y \in S_k : l_k(y) \le 0\big\}$.   (23)

Compute $V_{k+1}$, the set of vertices of $S_{k+1}$. Increase $k$ by 1 and repeat.
Convergence of Algorithm 1 depends upon the construction of the cutting hyperplane. For example, when $C$ is defined by $C = \{y : c(y) \le 0\}$ with $c$ being a closed, convex, subdifferentiable function, one can determine a cutting hyperplane by taking

$l_k(y) := c(v^k) + (p^k)^T(y - v^k)$,   (24)

where $p^k \in \partial c(v^k)$.
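A small prototype of Algorithm 1 with the cut (24) is sketched below. Everything concrete in it is an assumption made for the illustration: $C$ is the unit disc described by $c(y) = \|y\|^2 - 1$, $F$ is affine, $\varphi(y) = -\|y\|^2$ is concave, $S_0$ is a triangle containing $C$, and the vertex sets $V_k$ are obtained with SciPy's halfspace intersection routine.

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# Assumed data: C = {y : c(y) <= 0} with c(y) = ||y||^2 - 1, F(v) = M v + q, phi(y) = -||y||^2.
M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([1.0, -1.0])
F = lambda v: M @ v + q
phi = lambda y: -np.dot(y, y)
c = lambda y: np.dot(y, y) - 1.0
dc = lambda y: 2.0 * y                                     # gradient of c, used as p^k in (24)

# SciPy stores the half-space a1*y1 + a2*y2 + b <= 0 as the row [a1, a2, b].
halfspaces = [np.array([0.0, -1.0, -1.5]),                 # y2 >= -1.5
              np.array([1.0, 1.0, -3.0]),                  # y1 + y2 <= 3
              np.array([-1.0, 1.0, -3.0])]                 # -y1 + y2 <= 3  (S_0 is this triangle)
interior = np.zeros(2)                                     # the origin is interior to C and every S_k

for k in range(50):
    V = HalfspaceIntersection(np.array(halfspaces), interior).intersections   # vertex set V_k
    # m_{S_k}(v) as in (22); phi is concave, so the inner minimum may be taken over V_k only
    mvals = np.array([min(F(v) @ (y - v) + phi(y) - phi(v) for y in V) for v in V])
    vk = V[np.argmax(mvals)]
    if c(vk) <= 1e-8:                                      # v^k in C: stop, candidate global solution
        break
    p = dc(vk)                                             # cut (24): c(v^k) + p^T (y - v^k) <= 0
    halfspaces.append(np.concatenate([p, [c(vk) - p @ vk]]))
print("after", k + 1, "iterations: v^k =", vk, " c(v^k) =", c(vk))
```

In line with the remark after Theorem 3.4 below, the number of vertices can grow quickly, so such a prototype is only practical in low dimension.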
In the case where $C$ has an interior point, we can use the cutting plane determined in the following lemma.

LEMMA 3.3 [6]  Let $\{v^k\} \subset \mathbb{R}^n \setminus C$ be a bounded sequence. Let $v^0 \in \operatorname{int} C$, $y^k \in [v^0, v^k] \setminus \operatorname{int} C$, $p^k \in \partial c(y^k)$ and $0 \le \theta_k \le c(y^k)$. If, for every $k$, the affine function $l_k(x) := (p^k)^T(x - y^k) + \theta_k$ satisfies

$l_k(v^k) > 0, \qquad l_k(x) \le 0, \quad \forall x \in C$,

then every cluster point of the sequence $\{v^k\}$ belongs to $C$.


The following theorem shows the convergence of the outer approximation algorithm.

THEOREM 3.4  Suppose that problem (1) is solvable. Suppose, in addition, that $F$ and $\varphi$ are continuous on $S_0$ and that the cutting hyperplane used in the outer approximation algorithm is given as in Lemma 3.3. Then any cluster point of the sequence $\{v^k\}$ is a global solution to problem (1).

We omit the proof of this theorem, since it can be carried out in a similar way to the proofs of outer approximation schemes in global optimization (see, e.g. [6]). We notice, as is well known in global optimization, that the number of newly generated vertices of a polyhedron obtained by adding a cutting hyperplane to a given polyhedron may increase very quickly in high-dimensional spaces. So this outer approximation method is expected to work well only for problems of moderate dimension.

4. A Cournot–Nash oligopolistic equilibrium model

As an example of DC mixed variational inequalities, we consider in this section a Cournot–Nash oligopolistic market equilibrium model. In this model, it is assumed that there are $n$ firms producing a common homogeneous commodity and that the price $p_i$ of firm $i$ depends on the total quantity $\sigma := \sum_{i=1}^n x_i$ of the commodity. Let $h_i(x_i)$ denote the cost of firm $i$ when its production level is $x_i$. Suppose that the profit of firm $i$ is given by

$f_i(x_1, \ldots, x_n) = x_i\, p_i\Big(\sum_{i=1}^n x_i\Big) - h_i(x_i) \quad (i = 1, \ldots, n)$,   (25)

where $h_i$ is the cost function of firm $i$, which is assumed to depend only on its production level.
Let $C_i \subset \mathbb{R}$ $(i = 1, \ldots, n)$ denote the strategy set of firm $i$. Each firm seeks to maximize its own profit by choosing the corresponding production level under the presumption that the production of the other firms is a parametric input. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its controlled variables. Thus, under this equilibrium concept, each firm determines its best response given the other firms' actions. Mathematically, a point $x^* = (x_1^*, \ldots, x_n^*) \in C := C_1 \times \cdots \times C_n$ is said to be a Nash equilibrium point if

$f_i(x_1^*, \ldots, x_{i-1}^*, y_i, x_{i+1}^*, \ldots, x_n^*) \le f_i(x_1^*, \ldots, x_n^*), \quad \forall y_i \in C_i,\ \forall i = 1, \ldots, n$.   (26)

When $h_i$ is affine, this market problem can be formulated as a special Nash equilibrium problem in $n$-person non-cooperative game theory, which in turn is a strongly monotone variational inequality (see, e.g. [7]).


Let

$\Psi(x, y) := -\sum_{i=1}^n f_i(x_1, \ldots, x_{i-1}, y_i, x_{i+1}, \ldots, x_n)$,   (27)

and

$\Phi(x, y) := \Psi(x, y) - \Psi(x, x)$.   (28)

Then, it has been proved [7] that the problem of finding an equilibrium point of this model can be formulated as the following equilibrium problem in the sense of Blum and Oettli [3] (see also [11]):

Find $x^* \in C$ such that $\Phi(x^*, y) \ge 0$ for all $y \in C$.   (EP)

In classical Cournot–Nash models [4,7], the price and the cost functions for each firm are assumed to be affine, of the forms

$p_i(\sigma) \equiv p(\sigma) = \alpha_0 - \beta\sigma, \quad \alpha_0 \ge 0,\ \beta > 0, \quad \text{with } \sigma = \sum_{i=1}^n x_i$,

$h_i(x_i) = \mu_i x_i + \xi_i, \quad \mu_i \ge 0,\ \xi_i \ge 0 \quad (i = 1, \ldots, n)$.
In this case, using (25)–(28) it is easy to check that

$\Phi(x, y) = \big(\tilde{A}x + \mu - \alpha\big)^T(y - x) + y^T A y - x^T A x$,

where

$A = \begin{bmatrix} \beta & 0 & \cdots & 0 \\ 0 & \beta & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & \beta \end{bmatrix}$,

$\alpha = (\alpha_0, \ldots, \alpha_0)^T$ and
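The quadratic form above can be checked against the definitions. In the sketch below (illustrative parameter values; the names $\alpha_0$, $\beta$, $\mu_i$, $\xi_i$ follow the reconstruction used here, and $\tilde{A}$ is taken to be $\beta(E - I)$ with $E$ the all-ones matrix, which is what expanding (25)–(28) yields for the affine model), $\Phi$ is built directly from (25)–(28) and compared with the closed-form expression.

```python
import numpy as np

# Illustrative affine Cournot-Nash data (assumed values): price p(s) = alpha0 - beta*s,
# cost h_i(x_i) = mu_i*x_i + xi_i, as in the classical model recalled above.
n, alpha0, beta = 4, 10.0, 0.5
rng = np.random.default_rng(1)
mu, xi = rng.uniform(0.5, 2.0, n), rng.uniform(0.0, 1.0, n)

def f(i, x):
    # profit (25) of firm i at the joint production vector x
    return x[i] * (alpha0 - beta * x.sum()) - (mu[i] * x[i] + xi[i])

def Phi(x, y):
    # Phi(x, y) = Psi(x, y) - Psi(x, x), with Psi defined in (27)
    psi = lambda u, w: -sum(f(i, np.concatenate([u[:i], [w[i]], u[i + 1:]])) for i in range(n))
    return psi(x, y) - psi(x, x)

# Closed form: A = beta*I, A_tilde = beta*(E - I), alpha = (alpha0, ..., alpha0).
A = beta * np.eye(n)
A_tilde = beta * (np.ones((n, n)) - np.eye(n))
alpha = alpha0 * np.ones(n)
x, y = rng.uniform(0.0, 3.0, n), rng.uniform(0.0, 3.0, n)
closed_form = (A_tilde @ x + mu - alpha) @ (y - x) + y @ A @ y - x @ A @ x
print(Phi(x, y), closed_form)   # the two numbers agree up to rounding
```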