
Some Special Cases of Optimizing over
the Efficient Set of a Generalized Convex
Multiobjective Programming Problem
Tran Ngoc Thang
School of Applied Mathematics and Informatics
Hanoi University of Science and Technology

Tran Thi Hue
Faculty of Management Information System
The Banking Academy of Vietnam

Abstract—Optimizing over the efficient set is a very hard and interesting task in global optimization, in which local optima are in general different from global optima. At the same time, this problem has some important applications in finance, economics, engineering, and other fields. In this article, we investigate some special cases of optimization problems over the efficient set of a generalized convex multiobjective programming problem. Preliminary computational experiments are reported and show that the proposed algorithms work well.

AMS Subject Classification: 90C29; 90C26

Keywords: Global optimization, Efficient set, Generalized convexity, Multiobjective programming problem

I. INTRODUCTION

The generalized convex multiobjective programming problem (GMOP) is given as follows

Min f(x) = (f1(x), ..., fm(x))^T   s.t.   x ∈ X,    (GMOP)

where X ⊂ R^n is a nonempty convex compact set and fi, i = 1, ..., m, are generalized convex functions on X. In the case m = 2, problem (GMOP) is called a generalized convex biobjective programming problem. The special cases of problem (GMOP) where fi, i = 1, ..., m, are linear (resp. convex), called a linear multiobjective programming problem (resp. a convex multiobjective programming problem), have received special attention in the literature (see the survey in [21] and references therein). However, to the best of our knowledge, there are few results on numerical methods for the nonconvex case, where fi, i = 1, ..., m, are nonconvex (see [4], [16], ...).

The main problem in this paper is formulated as

min Φ(x)   s.t.   x ∈ XE,    (PX)

where Φ : X → R is a continuous function and XE is the efficient solution set of problem (GMOP), i.e.

XE = {x^0 ∈ X | there is no x ∈ X such that f(x^0) ≥ f(x) and f(x^0) ≠ f(x)}.

As usual, the notation y^1 ≥ y^2, where y^1, y^2 ∈ R^m, is used to indicate y^1_i ≥ y^2_i for all i = 1, ..., m.

It is well known that, in general, the set XE is nonconvex and is given only implicitly, in the form of a standard mathematical programming problem, even in the case m = 2 with linear objective functions f1, f2 and a polyhedral feasible set X. Hence, problem (PX) is a global optimization problem and belongs to the class of NP-hard problems. This problem has many applications in economics, finance, engineering, and other fields. Recently it has received a great deal of attention from researchers (for instance, see [1], [5], [6], [7], [8], [12], [15], [16], [17], [19] and references therein). As for problem (GMOP), there are only a few numerical algorithms for solving problem (PX) in the nonconvex case (see [1], [16], ...). In this article, simple convex programming procedures are proposed for solving three special cases of problem (PX), where XE is the efficient solution set of problem (GMOP) in the nonconvex case. These special-case procedures require considerably less computational effort than algorithms for the general problem (PX).

In Section 2, the theoretical preliminaries are presented to analyze three special cases of optimization over the efficient solution set of problem (GMOP). Section 3 proposes algorithms to solve these cases, together with some computational experiments that illustrate them. Some conclusions are given in the last section.
II. THEORETICAL PRELIMINARIES

First, recall that a differentiable function h : X → R is called pseudoconvex on X if

⟨∇h(x^2), x^1 − x^2⟩ ≥ 0 ⇒ h(x^1) − h(x^2) ≥ 0

for all x^1, x^2 ∈ X. For example, by Proposition 5.20 in [2], the fractional function r(x)/l(x), where r : R^n → R is convex on X and l : R^n → R is linear such that l(x) > 0 for all x ∈ X, is a pseudoconvex function.

By the definition in [10, p. 132], a function h is called quasiconvex on X if

h(x^1) − h(x^2) ≤ 0 ⇒ h(λx^1 + (1 − λ)x^2) ≤ h(x^2)

for all x^1, x^2 ∈ X and 0 ≤ λ ≤ 1. If h is quasiconvex then g := −h is quasiconcave.

In the case h is differentiable, if h is quasiconvex on X, we have

h(x^1) − h(x^2) ≤ 0 ⇒ ⟨∇h(x^2), x^1 − x^2⟩ ≤ 0    (1)

for all x^1, x^2 ∈ X, where ∇h(x^2) is the gradient vector of h at x^2 (see Theorem 9.1.4 in [10]).

A vector function f(x) = (f1(x), ..., fm(x))^T is called convex (resp. pseudoconvex, quasiconvex) on X if its component functions fi(x), i = 1, ..., m, are convex (resp. pseudoconvex, quasiconvex) functions on X. Recall that a vector function f is called scalarly pseudoconvex on X if Σ_{i=1}^{m} λi fi is pseudoconvex on X for every λ = (λ1, ..., λm) ≥ 0 (see [16]). By definition, if f is convex then it is scalarly pseudoconvex, and if f is scalarly pseudoconvex then it is pseudoconvex. Hence, the convex multiobjective programming problem is a special case of problem (GMOP).

Example II.1. Consider the vector function f(x) over the set X = {x ∈ R^2 | Ax ≥ b, x ≥ 0}, where

f(x) = ( (−x1² − 0.6x1 + 0.5x2)/(x1 − x2 − 2), (x2² + x1)/(x2 − x1 + 2) )^T

and

A = [ −1  1 ; 3  2 ; −2  −1 ],   b = (2, 6, −10)^T.

We have

λ^T f(x) = [ λ1(x1² + 0.6x1 − 0.5x2) + λ2(x2² + x1) ] / (x2 − x1 + 2).

It is easily seen that r(x) = λ1(x1² + 0.6x1 − 0.5x2) + λ2(x2² + x1) is convex because λ1, λ2 ≥ 0, and l(x) = x2 − x1 + 2 > 0 for all x ∈ X. Therefore, by Proposition 5.20 in [2], the function λ^T f(x) = r(x)/l(x) is pseudoconvex, i.e. f(x) is scalarly pseudoconvex.
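As a quick numerical companion to Example II.1, the sketch below samples pairs of feasible points and checks the defining implication of pseudoconvexity for h = λ^T f. This is only a random spot check, not a proof; the sampling box, the tolerances, and the choice λ = (0.3, 0.7) are ours and not part of the paper.

```python
import numpy as np

# Feasible set of Example II.1: X = {x >= 0 : Ax >= b}.
A = np.array([[-1.0, 1.0], [3.0, 2.0], [-2.0, -1.0]])
b = np.array([2.0, 6.0, -10.0])
lam = np.array([0.3, 0.7])          # any fixed lambda >= 0 (our choice)

def h(x):                           # h(x) = lambda^T f(x) = r(x) / l(x)
    x1, x2 = x
    r = lam[0] * (x1**2 + 0.6*x1 - 0.5*x2) + lam[1] * (x2**2 + x1)
    return r / (x2 - x1 + 2.0)

def grad_h(x, eps=1e-6):            # central finite differences, enough for a spot check
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = eps
        g[i] = (h(x + e) - h(x - e)) / (2.0 * eps)
    return g

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 5.0, size=(20000, 2))
feasible = [x for x in samples if np.all(A @ x >= b - 1e-12) and np.all(x >= 0.0)]

violations = 0
for _ in range(2000):
    u = feasible[rng.integers(len(feasible))]
    v = feasible[rng.integers(len(feasible))]
    # pseudoconvexity:  <grad h(v), u - v> >= 0  should force  h(u) >= h(v)
    if grad_h(v) @ (u - v) >= 0.0 and h(u) < h(v) - 1e-9:
        violations += 1
print("pseudoconvexity violations:", violations)   # expect 0
```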

Let

Y = {y ∈ R^m | y = f(x) for some x ∈ X}.

As usual, the set Y is said to be the outcome set for problem (GMOP).

Let y_i^I = min{yi | y ∈ Y}, i = 1, ..., m. It is clear that y_i^I is also the optimal value of the following programming problem

min fi(x)   s.t.   x ∈ X.    (Pi)

For each i ∈ {1, ..., m}, if fi is pseudoconvex, we can apply convex programming algorithms to solve problem (Pi) (Remark 2.3 in [3]). The point y^I = (y_1^I, ..., y_m^I) is called the ideal point of the set Y. Notice that the ideal point y^I need not belong to Y.

Fig. 1. The ideal point y^I

Consider a generalized convex biobjective programming problem

Min f(x) = (f1(x), f2(x))^T   s.t.   x ∈ X,    (GBOP)

where X := {x ∈ R^n | gj(x) ≤ 0, j = 1, ..., k}, the functions gj(x), j = 1, ..., k, are differentiable quasiconvex on R^n and the objective function f is scalarly pseudoconvex on X.

Now we will describe the three sets of conditions associated with the three special cases of problem (PX) under consideration.

Case 1. The feasible set XE is the efficient solution set of problem (GMOP), the ideal point y^I belongs to the outcome set Y, and the objective function Φ(x) of problem (PX) is pseudoconvex on X.

Case 2. The feasible set XE is the efficient solution set of problem (GBOP) and the objective function Φ(x) has the form Φ(x) = ϕ(f(x)), where ϕ : Y → R and

ϕ(y) = ⟨λ, y⟩    (2)

with λ = (λ1, λ2)^T ∈ R^2. This case could happen in certain common situations, for instance, when the objective function of problem (PX) represents the linear composition of the criteria fi(x), i ∈ {1, 2} with the weighted coefficients λi, i ∈ {1, 2}.

Case 3. The feasible set XE is the efficient solution set of problem (GBOP) and the objective function Φ(x) has the form Φ(x) = ϕ(f(x)), where ϕ : Y → R is quasiconcave and monotonically decreasing.

Let Q ⊂ R^m be a nonempty set. A point q^0 ∈ Q is called an efficient point of the set Q if there is no q ∈ Q such that q^0 ≥ q and q^0 ≠ q. The set of all efficient points of Q is denoted by MinQ. It is clear that a point q^0 ∈ MinQ if Q ∩ (q^0 − R^m_+) = {q^0}, where R^m_+ = {y ∈ R^m | yi ≥ 0, i = 1, ..., m}.

Since the functions fi, i = 1, ..., m, are continuous and X ⊂ R^n is a nonempty compact set, the outcome set Y is also a compact set in R^m. Therefore, the efficient set MinY is nonempty [9]. Let

YE = {f(x) | x ∈ XE}.    (3)

The set YE is called the efficient outcome set for problem (GMOP). By definition, it is easy to see that

YE = MinY.    (4)
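The efficiency notions above are easy to make concrete on finite sets. The small helper below is our own illustration, not taken from the paper: it returns MinQ for a finite Q ⊂ R^m by directly applying the definition that q^0 is efficient when no q ∈ Q satisfies q ≤ q^0 with q ≠ q^0.

```python
import numpy as np

def min_points(Q, tol=1e-12):
    """Efficient points MinQ of a finite set Q (one point per row of Q)."""
    Q = np.asarray(Q, dtype=float)
    efficient = []
    for i, q0 in enumerate(Q):
        dominated = any(
            j != i and np.all(q <= q0 + tol) and np.any(q < q0 - tol)
            for j, q in enumerate(Q)
        )
        if not dominated:
            efficient.append(q0)
    return np.array(efficient)

# Tiny illustration: (2.5, 2.5) is dominated by (2.0, 2.0); the other points are efficient.
Q = [[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5]]
print(min_points(Q))
```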


The relationship between the efficient solution set XE and the efficient set MinY is described as follows.

Proposition II.1.
i) For any y^0 ∈ MinY, if x^0 ∈ X satisfies f(x^0) ≤ y^0, then x^0 ∈ XE.
ii) For any x^0 ∈ XE, if y^0 = f(x^0), then y^0 ∈ MinY.

Proof: i) Since y^0 ∈ MinY, by definition, we have (y^0 − R^m_+) ∩ Y = {y^0}. Moreover, f(x^0) ∈ (y^0 − R^m_+) and f(x^0) ∈ Y because x^0 ∈ X and f(x^0) ≤ y^0. Therefore, f(x^0) = y^0 ∈ MinY. Combining this fact with (3) and (4), we imply x^0 ∈ XE.
ii) This fact follows immediately from (3) and (4).

Fig. 2. The efficient set MinY

Let

Z = Y + R^m_+ = {z ∈ R^m | z ≥ y for some y ∈ Y}.

It is clear that Z is a nonempty, full-dimensional closed set, but it is nonconvex in general.

Fig. 3. The set Z

The following interesting property of Z (see Theorem 3.2 in [20]) will be used in the sequel.

Proposition II.2. MinZ = MinY.

Now we will consider the first special case, where the ideal point y^I belongs to the outcome set Y.

Proposition II.3. If y^I ∈ Y then MinY = {y^I}.

Proof: Since y^I ∈ Y, there exists x^I ∈ X such that y^I = f(x^I), i.e. y_i^I = fi(x^I) for all i = 1, 2, ..., m. For each i ∈ {1, 2, ..., m}, since y_i^I is the optimal value of problem (Pi), x^I is an optimal solution of problem (Pi). Hence, x^I ∈ Argmin{fi(x) | x ∈ X} for all i = 1, ..., m. By definition, x^I is an efficient solution of problem (GMOP), i.e. x^I ∈ XE. From Proposition II.1(ii), it implies y^I ∈ MinY. Since y_i^I is the optimal value of problem (Pi) for i = 1, 2, ..., m, by the definition of efficient points, y^I is the only efficient point of Y.

Let

X^id = {x ∈ X | fi(x) ≤ y_i^I, i = 1, 2, ..., m}.

By [10, Theorem 9.3.5], for each i = 1, 2, ..., m, if fi is a pseudoconvex function, then fi is quasiconvex. Therefore, X^id is a convex set because every lower level set of a continuous quasiconvex function is convex. The following assertion provides a property to detect whether y^I belongs to Y.

Proposition II.4. If X^id is not empty then y^I ∈ Y and XE = X^id. Otherwise, y^I does not belong to Y.

Proof: By definition, if X^id = ∅ then y^I ∉ Z, since any x ∈ X with f(x) ≤ y^I would belong to X^id. Since Y ⊆ Z, y^I ∉ Y. Otherwise, X^id is not empty and y^I ∈ Z. Therefore, y^I ∈ Y because y_i^I is the optimal value of problem (Pi) for i = 1, 2, ..., m. By Proposition II.3 and Proposition II.1(i), we get MinY = {y^I} and XE = {x ∈ X | f(x) ≤ y^I} = X^id.
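Proposition II.4 suggests a simple numerical test for the first special case: solve the problems (Pi) to obtain y^I, then try to find a point of X^id with one more convex program. The sketch below does this with SciPy's SLSQP solver, used here only as a convenient stand-in for the convex programming algorithms cited in [3]; the helper names are ours, and the data plugged in at the end is that of Example III.2 from Section III, where X^id indeed turns out to be empty.

```python
import numpy as np
from scipy.optimize import minimize

def ideal_point(fs, X_cons, x0):
    """Solve the problems (P_i): y_i^I = min{ f_i(x) : x in X }."""
    return np.array([minimize(f, x0, constraints=X_cons, method="SLSQP").fun for f in fs])

def find_x_id(fs, X_cons, x0, tol=1e-6):
    """Look for a point of X^id = {x in X : f_i(x) <= y_i^I}; return (y^I, point or None)."""
    yI = ideal_point(fs, X_cons, x0)
    cons = list(X_cons) + [
        {"type": "ineq", "fun": lambda x, f=f, yi=yi: yi + tol - f(x)}
        for f, yi in zip(fs, yI)
    ]
    res = minimize(lambda x: 0.0, x0, constraints=cons, method="SLSQP")   # pure feasibility problem
    feasible = res.success and all(c["fun"](res.x) >= -1e-6 for c in cons)
    return yI, (res.x if feasible else None)

# Data of Example III.2 (Section III): f1 = x1^2 + 1, f2 = (x2 - 3)^2 + 1 on a disk with a linear cut.
f1 = lambda x: x[0]**2 + 1.0
f2 = lambda x: (x[1] - 3.0)**2 + 1.0
X_cons = [{"type": "ineq", "fun": lambda x: 1.0 - (x[0] - 1.0)**2 - (x[1] - 2.0)**2},
          {"type": "ineq", "fun": lambda x: 1.0 - 2.0*x[0] + x[1]}]
yI, x_id = find_x_id([f1, f2], X_cons, np.array([1.0, 2.0]))
print("y^I =", yI, "| X^id appears empty:", x_id is None)
```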
In the next two cases, we consider problem (PX) where XE is the efficient solution set of problem (GBOP) and Φ(x) has the form Φ(x) = ϕ(f(x)), where ϕ : Y → R. Then the outcome-space reformulation of problem (PX) can be given by

min ϕ(y)   s.t.   y ∈ YE.    (PY)

Combining (4) and Proposition II.2, problem (PY) can be rewritten as follows

min ϕ(y)   s.t.   y ∈ MinZ.    (PZ)

Therefore, instead of solving problem (PY), we solve problem (PZ).

Proposition II.5. If f is scalarly pseudoconvex, the set Z = f(X) + R^2_+ is a convex set in R^2.

Proof: Let w̄ be an arbitrary point in the boundary ∂Z of the set Z. By geometry, there exists ȳ ∈ MinZ such that ȳ ≤ w̄. From Proposition II.2, we imply that ȳ ∈ MinY. Hence, by Proposition II.1(ii), there exists x̄ ∈ X such that ȳ = f(x̄) and x̄ is an efficient solution of problem (GMOP).

Let J = {j ∈ {1, ..., k} | gj(x̄) = 0} and s = |J|. For any vector a ∈ R^s, we denote aJ := {aj, j ∈ J}. Since x̄ ∈ XE, by [11, Corollary 3.1.6], there exist a vector λ̄ ∈ R^2_+ \ {0} and a vector μ̄J ∈ R^s_+ such that λ̄^T ∇f(x̄) + μ̄J^T ∇gJ(x̄) = 0, which means

λ̄^T ∇f(x̄) = −μ̄J^T ∇gJ(x̄).    (5)

Since μ̄J ≥ 0, gJ(x) ≤ 0 for all x ∈ X and gJ(x̄) = 0, we have μ̄J^T gJ(x) − μ̄J^T gJ(x̄) ≤ 0 for all x ∈ X. Combining this fact with the condition that gj, j = 1, ..., k, are differentiable quasiconvex and with (1), we get ⟨μ̄J^T ∇gJ(x̄), x − x̄⟩ ≤ 0 for all x ∈ X. Thus, by (5), one has ⟨λ̄^T ∇f(x̄), x − x̄⟩ ≥ 0, or

⟨∇(λ̄^T f)(x̄), x − x̄⟩ ≥ 0   ∀x ∈ X.    (6)

Moreover, λ̄^T f is pseudoconvex on X because f is scalarly pseudoconvex on X. Therefore, (6) implies that

λ̄^T f(x) − λ̄^T f(x̄) ≥ 0   ∀x ∈ X,

i.e. ⟨λ̄, y − ȳ⟩ ≥ 0 for all y ∈ Y. For i = 1, 2, set λ̂i = λ̄i if ȳi = w̄i and λ̂i = 0 if ȳi ≠ w̄i. It is easy to check that

⟨λ̂, w̄ − ȳ⟩ = 0 and λ̂ ≥ 0, λ̂ ≠ 0.    (7)

Hence, ⟨λ̂, y − w̄⟩ ≥ 0 for all y ∈ Z. Set H(w̄) = {y ∈ R^m | ⟨λ̂, y − w̄⟩ ≥ 0}. Then Z ⊂ H(w̄) for all w̄ ∈ ∂Z. By [14, Theorem 6.20], we imply that Z is a convex set.

Since Z is a nonempty convex subset of R^2 by Proposition II.5, it is well known [13] that the efficient set MinZ is homeomorphic to a nonempty closed interval of R. By geometry, it is easily seen that the problem

min{y2 : y ∈ Z, y1 = y_1^I}    (PS)

has a unique optimal solution y^S and the problem

min{y1 : y ∈ Z, y2 = y_2^I}    (PE)

has a unique optimal solution y^E. Since Z is convex, problems (PS) and (PE) are convex programming problems. If y^I ∈ Y then, by Propositions II.3 and II.4, y^I is the only optimal solution to problem (PZ) and X^id is the optimal solution set of problem (PX).

Fig. 4. The efficient curve MinZ

If y^I ∉ Y then y^S ≠ y^E, and the efficient set MinZ is a curve on the boundary of Z with starting point y^S and end point y^E such that

y_1^E > y_1^S and y_2^S > y_2^E.    (8)

Note that we also obtain the efficient solutions x^S, x^E ∈ XE such that y^S = f(x^S) and y^E = f(x^E) while solving problems (PS) and (PE). For convenience, x^S and x^E are called the efficient solutions with respect to y^S and y^E, respectively.

In the second case, we consider problem (PZ) where ϕ(y) = ⟨λ, y⟩. Direct computation shows that the equation of the line through y^S and y^E is ⟨c, y⟩ = α, where

c = ( 1/(y_1^E − y_1^S), 1/(y_2^S − y_2^E) ),   α = y_1^E/(y_1^E − y_1^S) + y_2^E/(y_2^S − y_2^E).    (9)

From (8), it is easily seen that the vector c is strictly positive. Now, let

Z̃ = {y ∈ Z | ⟨c, y⟩ ≤ α}

and

Γ = ∂Z̃ \ (y^S, y^E),

where (y^S, y^E) = {y = t y^S + (1 − t) y^E | 0 < t < 1} and ∂Z̃ is the boundary of the set Z̃.

Fig. 5. The convex set Z̃

It is clear that Z̃ is a compact convex set because Z is convex. By the definition and geometry, we can see that Γ contains the set of all extreme points of Z̃ and

MinZ = Γ.    (10)

Consider the following convex problem

min ⟨λ, y⟩   s.t.   y ∈ Z̃    (CP^0)

that has the explicit reformulation as follows

min ⟨λ, y⟩
s.t. f(x) − y ≤ 0,
     x ∈ X,
     ⟨c, y⟩ ≤ α,    (CP^1)

where the vector c ∈ R^2 and the real number α are determined by (9).

Proposition II.6. Suppose that (x*, y*) is an optimal solution of problem (CP^1). Then x* is an optimal solution of problem (PX).

Proof: It is well known that a convex programming problem with a linear objective function has an optimal solution which belongs to the extreme point set of the feasible solution set [18]. Therefore, problem (CP^0) has an optimal solution y* ∈ Γ. This fact and (10) imply that y* ∈ MinZ. Since MinZ ⊂ Z̃, it implies that y* is an optimal solution of problem (PZ). Since MinZ = YE = MinY, by definition, we have ⟨λ, y*⟩ ≤ ⟨λ, y⟩ for all y ∈ YE and y* ∈ MinY. Then

⟨λ, y*⟩ ≤ ⟨λ, f(x)⟩,   ∀x ∈ XE.    (11)

Since (x*, y*) is a feasible solution of problem (CP^1), we have f(x*) ≤ y*. By Proposition II.1, x* ∈ XE. Furthermore, we have f(x*) ∈ Y and y* ∈ MinY. The definition of efficient points infers that y* = f(x*). Combining this fact and (11), we get Φ(x) ≥ Φ(x*) for all x ∈ XE, which means x* is an optimal solution of problem (PX).
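To make the second case operational, note that (9) is a closed-form computation and (CP^1) is a single convex program in the joint variable z = (x, y). The sketch below assembles (CP^1) for two objectives and solves it with SciPy's SLSQP; the helper names, the solver choice, and the starting point are ours, a general NLP solver is used only as a stand-in for a convex programming code, and the endpoints must be passed in the orientation assumed by (8) (y_1^E > y_1^S, y_2^S > y_2^E). The data used at the end is that of Example III.2 in Section III, for which Table I reports x* ≈ (0.2929, 2.7071) at λ = (0.5, 0.5).

```python
import numpy as np
from scipy.optimize import minimize

def solve_CP1(lam, f1, f2, X_cons, yS, yE, z0):
    """min <lam, y>  s.t.  f(x) <= y,  x in X,  <c, y> <= alpha   -- problem (CP^1)."""
    c = np.array([1.0 / (yE[0] - yS[0]), 1.0 / (yS[1] - yE[1])])          # formula (9)
    alpha = yE[0] / (yE[0] - yS[0]) + yE[1] / (yS[1] - yE[1])             # formula (9)
    obj = lambda z: lam[0] * z[2] + lam[1] * z[3]                          # z = (x1, x2, y1, y2)
    cons = [{"type": "ineq", "fun": lambda z, g=g: g(z[:2])} for g in X_cons]
    cons += [{"type": "ineq", "fun": lambda z: z[2] - f1(z[:2])},          # f1(x) <= y1
             {"type": "ineq", "fun": lambda z: z[3] - f2(z[:2])},          # f2(x) <= y2
             {"type": "ineq", "fun": lambda z: alpha - c @ z[2:]}]         # <c, y> <= alpha
    res = minimize(obj, z0, constraints=cons, method="SLSQP")
    return res.x[:2], res.x[2:]

# Example III.2 data, endpoints ordered as in (8): here y^S = (1.0, 1.9326), y^E = (1.9412, 1.0).
f1 = lambda x: x[0]**2 + 1.0
f2 = lambda x: (x[1] - 3.0)**2 + 1.0
X_cons = [lambda x: 1.0 - (x[0] - 1.0)**2 - (x[1] - 2.0)**2,
          lambda x: 1.0 - 2.0*x[0] + x[1]]
x_opt, y_opt = solve_CP1(np.array([0.5, 0.5]), f1, f2, X_cons,
                         yS=np.array([1.0, 1.9326]), yE=np.array([1.9412, 1.0]),
                         z0=np.array([0.5, 2.5, 1.5, 1.5]))
print("x* =", np.round(x_opt, 4), " y* =", np.round(y_opt, 4))
```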


In the last case, we consider problem (PZ) where the function ϕ : Y → R is quasiconcave and monotonically decreasing. The following assertion presents the special property of the optimal solution to problem (PZ).

Fig. 6. The illustration to Case 3

Proposition II.7. If the function ϕ is quasiconcave and monotonically decreasing, then the optimal solution of problem (PZ) is attained at either y^S or y^E.

Proof: Let Z^△ = conv{y^S, y^I, y^E}, where conv{y^S, y^I, y^E} stands for the convex hull of the points {y^S, y^I, y^E}. Since MinZ ⊂ Z^△, we have

min{ϕ(y) | y ∈ MinZ} ≥ min{ϕ(y) | y ∈ Z^△}.    (12)

It is obvious that the optimal solution of the problem min{ϕ(y) | y ∈ Z^△}, where the objective function ϕ is quasiconcave, belongs to the extreme point set of Z^△ [18]. Therefore,

Argmin{ϕ(y) | y ∈ Z^△} ∈ {y^S, y^I, y^E}.

Moreover, since ϕ is also decreasing, we have

Argmin{ϕ(y) | y ∈ Z^△} ∈ {y^S, y^E}.

Since y^S, y^E ∈ MinZ, this fact and (12) imply

min{ϕ(y) | y ∈ MinZ} = min{ϕ(y) | y ∈ Z^△}

and Argmin{ϕ(y) | y ∈ MinZ} ∈ {y^S, y^E}.

III. PROCEDURES AND COMPUTING EXPERIMENTS

Case 1. The feasible set XE is the efficient solution set of problem (GMOP), the ideal point y^I belongs to the outcome set Y and the objective function Φ(x) of problem (PX) is pseudoconvex on X.

By Proposition II.4, to detect whether the ideal point y^I belongs to Y and to solve problem (PX) in this case, we solve the following problem

min Φ(x)   s.t.   x ∈ X^id.    (CP^id)

Since Φ(x) is pseudoconvex on X and X^id is convex, we can apply convex programming algorithms to solve problem (CP^id) (Remark 2.3 in [3]). The procedure for this case is described as follows.

Procedure 1.
Step 1. For each i = 1, ..., m, find the optimal value y_i^I of problem (Pi).
Step 2. Solve problem (CP^id).
If problem (CP^id) is not feasible Then STOP (Case 1 does not apply)
Else Find an optimal solution x* to problem (CP^id). STOP (x* is an optimal solution to problem (PX)).

Below we present a numerical example to illustrate Procedure 1.

Example III.1. Consider the problem (PX), where XE is the efficient solution set to the following problem

Min (f1(x), f2(x)) = (0.5x1² − x1 + 0.3x2, x2² + x1)
s.t. x1² + x2² − 4x1 − 4x2 ≤ −6
     −x1 + x2 ≥ α
     x1 + x2 ≥ 2
     x1 ≥ 1

and Φ(x) = min{0.5x1² + x2 + 0.2; 2x2² − 4.6x1 + 5.8}.

In the case α = 0:
Step 1. Solving problems (P1) and (P2), we obtain the ideal point y^I = (−0.2000, 2.0000).
Step 2. Solving problem (CP^id), we find an optimal solution x* = (1.0000, 1.0000). Then x* is the optimal solution to problem (PX) and Φ(x*) = 1.7000 is the optimal value of problem (PX).

In the case α = −1:
Step 1. Solving problems (P1) and (P2), we obtain the ideal point y^I = (−0.2299, 1.8917).
Step 2. Solving problem (CP^id), we find that it is not feasible. It means that the ideal point y^I does not belong to Y.
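The following sketch runs Procedure 1 on Example III.1 with α = 0: Step 1 solves (P1) and (P2) for the ideal point, and Step 2 solves (CP^id) over X^id. With the paper's data this should reproduce x* ≈ (1, 1) and Φ(x*) = 1.7, up to solver tolerance. SLSQP and the starting point are our own pragmatic choices; note that Φ here is a pointwise minimum and hence nonsmooth, so a general NLP solver is only a stand-in for the convex programming codes referred to in the text.

```python
import numpy as np
from scipy.optimize import minimize

alpha = 0.0
f1 = lambda x: 0.5*x[0]**2 - x[0] + 0.3*x[1]
f2 = lambda x: x[1]**2 + x[0]
Phi = lambda x: min(0.5*x[0]**2 + x[1] + 0.2, 2.0*x[1]**2 - 4.6*x[0] + 5.8)
X_cons = [{"type": "ineq", "fun": lambda x: -6.0 - (x[0]**2 + x[1]**2 - 4.0*x[0] - 4.0*x[1])},
          {"type": "ineq", "fun": lambda x: -x[0] + x[1] - alpha},
          {"type": "ineq", "fun": lambda x: x[0] + x[1] - 2.0},
          {"type": "ineq", "fun": lambda x: x[0] - 1.0}]
x0 = np.array([1.5, 2.0])

# Step 1: ideal point y^I from the scalar problems (P1), (P2).
yI = [minimize(f, x0, constraints=X_cons, method="SLSQP").fun for f in (f1, f2)]

# Step 2: problem (CP^id): minimize Phi over X^id = {x in X : f_i(x) <= y_i^I}.
cons_id = X_cons + [{"type": "ineq", "fun": lambda x, f=f, yi=yi: yi - f(x) + 1e-6}
                    for f, yi in zip((f1, f2), yI)]
res = minimize(Phi, x0, constraints=cons_id, method="SLSQP")
print("y^I =", np.round(yI, 4), "| x* =", np.round(res.x, 4), "| Phi(x*) =", round(float(res.fun), 4))
```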

Case 2. The feasible set XE is the efficient solution set of problem (GBOP) and the objective function Φ(x) has the form (2).

In this case, the procedure for solving problem (PX) is established by Proposition II.4 and Proposition II.6. Recall that if y^I ∈ Y, then X^id = Argmin(PX). Therefore, we can obtain an optimal solution of problem (PX) by solving the following convex programming problem

min ⟨e, x⟩   s.t.   x ∈ X^id,    (CP2^id)

where e = (1, ..., 1) ∈ R^n.

Procedure 2.
Step 1. For each i = 1, 2, find the optimal value y_i^I of problem (Pi).
Step 2. Solve problem (CP2^id).
If problem (CP2^id) is not feasible Then Go to Step 3 (y^I ∉ Y).
Else Find an optimal solution x* to problem (CP2^id). STOP (x* is an optimal solution to problem (PX)).
Step 3. Solve problem (PS) and problem (PE) to find the efficient points y^S, y^E and the efficient solutions x^S, x^E with respect to y^S, y^E, respectively.
Step 4. Solve problem (CP^1) to find an optimal solution (x*, y*). STOP (x* is an optimal solution to (PX)).

Below are some examples to illustrate Procedure 2.

Example III.2. Consider problem (PX), where XE is the efficient solution set to problem (GBOP) with

f1(x) = x1² + 1,  f2(x) = (x2 − 3)² + 1,
X = {x ∈ R^2 | (x1 − 1)² + (x2 − 2)² ≤ 1, 2x1 − x2 ≤ 1},

and Φ(x) = λ1 f1(x) + λ2 f2(x).

It is easily seen that the function f is scalarly pseudoconvex on X because f1, f2 are convex on X. Therefore, we can apply Procedure 2 to solve this problem.

Step 1. The optimal values of problems (P1) and (P2) are, respectively, y_1^I = 1.0000 and y_2^I = 1.0000.
Step 2. Solving problem (CP2^id), we find that it is not feasible. Then go to Step 3.
Step 3. Solve problems (PS) and (PE) to obtain

y^S = (1.9412, 1.0000),  y^E = (1.0000, 1.9326)

and

x^S = (0.9612, 2.9991),  x^E = (0.0011, 2.0435)

with respect to y^S, y^E, respectively.
Step 4. For each λ = (λ1, λ2) ∈ R^2, solve problem (CP^1) to find the optimal solution (x*, y*). Then x* is an optimal solution and Φ(x*) is the optimal value of (PX). The computational results are shown in Table I.

TABLE I
COMPUTATIONAL RESULTS OF EXAMPLE III.2

λ            | x*                | y*                | Φ(x*)
(0.0, 1.0)   | (0.9612, 2.9991)  | (1.9412, 1.0000)  | 1.0000
(0.2, 0.8)   | (0.4460, 2.8325)  | (1.1989, 1.0281)  | 1.0622
(0.5, 0.5)   | (0.2929, 2.7071)  | (1.0858, 1.0858)  | 1.0858
(0.8, 0.2)   | (0.1675, 2.5540)  | (1.0208, 1.1989)  | 1.0622
(1.0, 0.0)   | (0.0011, 2.0435)  | (1.0000, 1.9326)  | 1.0000
(−0.2, 0.8)  | (0.9654, 2.9992)  | (1.9412, 1.0000)  | 0.4118
(0.8, −0.2)  | (0.0011, 2.0435)  | (1.0000, 1.9326)  | 0.4146

Example III.3. Consider problem (PX), where Φ(x) = λ1 f1(x) + λ2 f2(x) and XE is the efficient solution set to problem (GBOP) with

f(x) = ( (−x1² − 0.6x1 + 0.5x2)/(x1 − x2 − 2), (x2² + x1)/(x2 − x1 + 2) )^T

and X = {x ∈ R^2 | Ax ≥ b, x ≥ 0}, where

A = [ −1  1 ; 3  2 ; −2  −1 ],   b = (2, 6, −10)^T.

By Example II.1, it is verified that f is scalarly pseudoconvex on X. Therefore, we can apply Procedure 2 to solve this problem.

Step 1. The optimal values of problems (P1) and (P2) are, respectively, y_1^I = −0.4167 and y_2^I = 1.5400.
Step 2. Solving problem (CP2^id), we find that it is not feasible. Then go to Step 3.
Step 3. Solve problems (PS) and (PE) to obtain

y^S = (−0.2000, 1.5400),  y^E = (−0.4167, 8.3332)

and

x^S = (0.4000, 2.4000),  x^E = (0.0000, 9.9997)

with respect to y^S, y^E, respectively.
Step 4. For each λ = (λ1, λ2) ∈ R^2, solve problem (CP^1) to find the optimal solution (x*, y*). Then x* is an optimal solution and Φ(x*) is the optimal value of (PX). The computational results are shown in Table II.

TABLE II
COMPUTATIONAL RESULTS OF EXAMPLE III.3

λ            | x*                | y*                 | Φ(x*)
(0.0, 1.0)   | (0.4000, 2.4000)  | (−0.2000, 1.5400)  | 1.5400
(0.2, 0.8)   | (0.4000, 2.4000)  | (−0.2000, 1.5400)  | 1.1920
(0.5, 0.5)   | (0.4000, 2.4000)  | (−0.2000, 1.5400)  | 0.6700
(0.8, 0.2)   | (0.0900, 2.8650)  | (−0.2870, 1.7378)  | 0.1180
(1.0, 0.0)   | (0.0000, 9.9997)  | (−0.4167, 8.3332)  | −0.4167
(−0.2, 0.8)  | (0.4000, 2.4000)  | (−0.2000, 1.5400)  | 1.2720
(0.8, −0.2)  | (0.0000, 9.9997)  | (−0.4167, 8.3332)  | −2.0000

Case 3. The feasible set XE is the efficient solution set of problem (GBOP) and the objective function Φ(x) has the form Φ(x) = ϕ(f(x)), where ϕ : Y → R is quasiconcave and monotonically decreasing.

In this case, the procedure for solving the problem is established by Proposition II.4 and Proposition II.7. Let xopt be an optimal solution of problem (PX).

Procedure 3.
Step 1. For each i = 1, 2, find the optimal value y_i^I of problem (Pi).
Step 2. Solve problem (CP2^id).
If problem (CP2^id) is not feasible Then Go to Step 3 (y^I ∉ Y).
Else Find an optimal solution x* to problem (CP2^id). STOP (x* is an optimal solution to problem (PX)).
Step 3. Solve problem (PS) and problem (PE) to find the efficient points y^S, y^E and the efficient solutions x^S, x^E with respect to y^S, y^E, respectively.
Step 4. If ϕ(y^S) > ϕ(y^E) Then xopt = x^E Else xopt = x^S. STOP (xopt is an optimal solution to problem (PX)).
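Since Procedure 3 ends with a single comparison, its last step needs no further optimization once y^S, y^E, x^S, x^E are known. The tiny helper below is our own wrapper around Step 4; the sample call uses the endpoint values reported for Example III.4 below, where ϕ(y^S) < ϕ(y^E) and hence xopt = x^S.

```python
import numpy as np

def procedure3_step4(phi, yS, yE, xS, xE):
    """Step 4 of Procedure 3: pick the endpoint with the smaller value of phi."""
    return (xE, yE) if phi(yS) > phi(yE) else (xS, yS)

# Endpoint data reported in Example III.4 (see Step 3 below), with phi(y) = -y1^2 - y2^2.
phi = lambda y: -y[0]**2 - y[1]**2
x_opt, y_opt = procedure3_step4(phi,
                                yS=np.array([9.2894, -1.2623]), yE=np.array([2.2192, -0.4012]),
                                xS=np.array([2.7848, 0.2406]),  xE=np.array([1.1432, 0.2892]))
print("x_opt =", x_opt, "| Phi(x_opt) =", round(float(phi(y_opt)), 4))   # about -87.88
```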
We give below an example to illustrate Procedure 3.

Example III.4. Consider problem (PX), where XE is the efficient solution set to problem (GBOP) with

f1(x) = x1² + 2x1x2 + 3x2²,  f2(x) = x2² − 0.5x1 + 0.3x2,
X = {x ∈ R^2 | g1(x) ≤ 0, g2(x) ≤ 0},
g1(x) = 9(x1 − 2)² + 4(x2 − 3)² − 36,  g2(x) = x1 − 2x2 − 3,

and Φ(x) = ϕ(f(x)) with ϕ(y) = −y1² − y2².

It is easily verified that the vector function f is scalarly pseudoconvex on X because f1, f2 are convex on X. Moreover, the function ϕ is quasiconcave and monotonically decreasing on R^2. Therefore, we can apply Procedure 3 to solve this problem.

Step 1. The optimal values of problems (P1) and (P2) are, respectively, y_1^I = 2.2192 and y_2^I = −1.2623.
Step 2. Solving problem (CP2^id), we find that it is not feasible. Then go to Step 3.


Step 3. Solve problem (PS) and problem (PE) to obtain

y^S = (9.2894, −1.2623),  y^E = (2.2192, −0.4012)

and

x^S = (2.7848, 0.2406),  x^E = (1.1432, 0.2892)

with respect to y^S, y^E, respectively.
Step 4. Since ϕ(y^S) < ϕ(y^E), the optimal solution to problem (PX) is xopt = x^S = (2.7848, 0.2406) and the optimal value of problem (PX) is Φ(x^S) = −87.8812.

IV. CONCLUSION

In this article, we have developed simple convex programming procedures for solving three special cases of the optimization problem over the efficient set (PX). These special-case procedures require quite little computational effort in comparison to that required to solve the general case, because only a few convex programming problems need to be solved. Therefore, they can be used as screening devices to detect and solve these special cases.

ACKNOWLEDGMENT

This research is funded by Hanoi University of Science and Technology under grant number T2016-TC-205.

REFERENCES

[1] L. T. H. An, P. D. Tao, N. C. Nam and L. D. Muu, "Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program", Optim., vol. 59, no. 1, pp. 77-93, 2010.
[2] M. Avriel, W. E. Diewert, S. Schaible and I. Zang, Generalized Concavity, Plenum Press, New York, 1998.
[3] H. P. Benson, "On the Global Optimization of Sums of Linear Fractional Functions over a Convex Set", J. Optim. Theory Appl., vol. 121, no. 1, pp. 19-39, 2004.
[4] H. P. Benson, "A global optimization approach for generating efficient points for multiobjective concave fractional programs", J. Multi Criteria Decis. Anal., vol. 13, pp. 15-28, 2005.
[5] H. P. Benson, "An outcome space algorithm for optimization over the weakly efficient set of a multiple objective nonlinear programming problem", J. Glob. Optim., vol. 52, pp. 553-574, 2012.
[6] J. Fulop and L. D. Muu, "Branch-and-bound variant of an outcome-based algorithm for optimizing over the efficient set of a bicriteria linear programming problem", J. Optim. Theory Appl., vol. 105, pp. 37-54, 2000.
[7] R. Horst, N. V. Thoai, Y. Yamamoto and D. Zenke, "On optimization over the efficient set in linear multicriteria programming", J. Optim. Theory Appl., vol. 134, pp. 433-443, 2007.
[8] N. T. B. Kim and T. N. Thang, "Optimization over the Efficient Set of a Bicriteria Convex Programming Problem", Pacific J. Optim., vol. 9, pp. 103-115, 2013.
[9] D. T. Luc, Theory of Vector Optimization, Springer-Verlag, Berlin, Germany, 1989.
[10] O. L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.
[11] K. Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, Boston, 1999.
[12] L. D. Muu and L. Q. Thuy, "Smooth optimization algorithms for optimizing over the Pareto efficient set and their application to minmax flow problems", Vietnam J. Math., vol. 39, no. 1, pp. 31-48, 2011.
[13] H. X. Phu, "On efficient sets in R2", Vietnam J. Math., vol. 33, pp. 463-468, 2005.
[14] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer-Verlag, Berlin, Germany, 2010.
[15] T. N. Thang and N. T. B. Kim, "Outcome space algorithm for generalized multiplicative problems and optimization over the efficient set", J. Ind. Manag. Optim., vol. 12, no. 4, pp. 1417-1433, 2016.
[16] T. N. Thang, D. T. Luc and N. T. B. Kim, "Solving generalized convex multiobjective programming problems by a normal direction method", Optim., vol. 65, no. 12, pp. 2269-2292, 2016.
[17] N. V. Thoai, "Reverse convex programming approach in the space of extreme criteria for optimization over efficient sets", J. Optim. Theory Appl., vol. 147, pp. 263-277, 2010.
[18] H. Tuy, Convex Analysis and Global Optimization, Kluwer, 1998.
[19] Y. Yamamoto, "Optimization over the efficient set: overview", J. Global Optim., vol. 22, pp. 285-317, 2002.
[20] P. L. Yu, Multiple-Criteria Decision Making, Plenum Press, New York and London, 1985.
[21] M. M. Wiecek, M. Ehrgott and A. Engau, "Continuous multiobjective programming", in Multiple Criteria Decision Analysis: State of the Art Surveys, Oper. Res. Manag. Sci., vol. 233, Springer, New York, pp. 739-815, 2016.