
Hindawi Publishing Corporation
Advances in Difference Equations
Volume 2007, Article ID 40160, 17 pages
doi:10.1155/2007/40160
Research Article
Necessary Conditions of Optimality for Second-Order Nonlinear
Impulsive Differential Equations
Y. Peng, X. Xiang, and W. Wei
Received 2 February 2007; Accepted 5 July 2007
Recommended by Paul W. Eloe
We discuss the existence of optimal controls for a Lagrange problem of systems governed
by the second-order nonlinear impulsive differential equations in infinite dimensional
spaces. We apply a direct approach to derive the maximum principle for the problem at
hand. An example is also presented to demonstrate the theory.
Copyright © 2007 Y. Peng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
It is well known that the Pontryagin maximum principle plays a central role in optimal control theory. In 1960, Pontryagin derived the maximum principle for optimal control problems in finite dimensional spaces (see [1]). Since then, the maximum principle for optimal control problems involving first-order nonlinear impulsive differential equations in finite (or infinite) dimensional spaces has been extensively studied (see [2–10]). However, only a few papers address the existence of optimal controls for systems governed by second-order nonlinear impulsive differential equations. By reducing the wave equation to the customary vector form, Fattorini obtained the maximum principle for the time optimal control problem of semilinear wave equations (see [6, Chapter 6]). Recently, Peng and Xiang [11, 12] applied semigroup theory to establish the existence of optimal controls for a class of second-order nonlinear differential equations in infinite dimensional spaces.
Let Y be a reflexive Banach space from which the controls u take their values. We denote the class of nonempty closed and convex subsets of Y by $P_f(Y)$. Assume that the multifunction $\omega : I = [0,T] \to P_f(Y)$ is measurable and $\omega(\cdot) \subset E$, where E is a bounded set of Y, and define the admissible control set $U_{ad} = \{u \in L^p([0,T],Y) \mid u(t) \in \omega(t) \text{ a.e.}\}$; then $U_{ad} \neq \emptyset$ (see [13, Page 142, Proposition 1.7 and Page 174, Lemma 3.2]). In this paper, we develop a direct technique to derive the maximum principle for a Lagrange problem of systems governed by a class of second-order nonlinear impulsive differential equations in infinite dimensional spaces. Consider the following second-order nonlinear impulsive differential equation:
\[
\begin{aligned}
\ddot{x}(t) &= A\dot{x}(t) + f(t,x(t),\dot{x}(t)) + B(t)u(t), && t \in (0,T] \setminus \Theta, \\
x(0) &= x_0, \quad \Delta_l x(t_i) = J_i^0(x(t_i)), && t_i \in \Theta,\ i = 1,2,\ldots,n, \\
\dot{x}(0) &= x_1, \quad \Delta_l \dot{x}(t_i) = J_i^1(\dot{x}(t_i)), && t_i \in \Theta,\ i = 1,2,\ldots,n,
\end{aligned}
\tag{1.1}
\]
where A is the infinitesimal generator of a $C_0$-semigroup in a Banach space X, $\Theta = \{t_i \in I \mid 0 = t_0 < t_1 < \cdots < t_n < t_{n+1} = T\}$, $J_i^0$, $J_i^1$ ($i = 1,2,\ldots,n$) are nonlinear maps, and $\Delta_l x(t_i) = x(t_i+0) - x(t_i)$, $\Delta_l \dot{x}(t_i) = \dot{x}(t_i+0) - \dot{x}(t_i)$ denote the jumps in the state x and in $\dot{x}$ at time $t_i$, respectively, with $J_i^0$, $J_i^1$ determining the size of the jump at time $t_i$.
As a first step, we use the semigroup $\{S(t), t \ge 0\}$ generated by A to construct the semigroup generated by the operator matrix $\mathcal{A}$ (see Lemma 2.2). Then the existence and uniqueness of the $PC_l$-mild solution of (1.1) are proved. Next, we consider a Lagrange problem of systems governed by (1.1) and prove the existence of optimal controls. In order to derive the optimality conditions for the system (1.1), we consider the associated adjoint equation and convert it to a first-order backward impulsive integro-differential equation with unbounded impulsive conditions. We note that the resulting integro-differential equation cannot be turned into the original problem by the simple transformation $s = T-t$ (see (4.9)). Subsequently, we introduce a suitable mild solution for the adjoint equation and give a generalized backward Gronwall inequality to obtain an a priori estimate on the solution of the adjoint equation. Finally, we make use of the Yosida approximation to derive the optimality conditions.
The paper is organized as follows. In Section 2, we give associated notations and preliminaries. In Section 3, the mild solution of second-order nonlinear impulsive differential equations is introduced and the existence result is presented; in addition, the existence of optimal controls for a Lagrange problem (P) is given. In Section 4, we discuss the corresponding adjoint equation and directly derive the necessary conditions by the calculus of variations and the Yosida approximation. At last, an example is given for demonstration.
2. Preliminaries
In this section, we present some basic notations and terminologies. Let £(X) be the class of (not necessarily bounded) linear operators in the Banach space X, and let $£_b(X)$ stand for the family of bounded linear operators on X. For $A \in £(X)$, let $\rho(A)$ denote the resolvent set and $R(\lambda,A)$ the resolvent corresponding to $\lambda \in \rho(A)$. Define $PC_l(I,X)$ ($PC_r(I,X)$) $= \{x : I \to X \mid x$ is continuous at $t \in I \setminus \Theta$, x is continuous from the left (right) and has right- (left-) hand limits at $t_i \in \Theta\}$, $PC^1_l(I,X) = \{x \in PC_l(I,X) \mid \dot{x} \in PC_l(I,X)\}$, and $PC^1_r(I,X) = \{x \in PC_r(I,X) \mid \dot{x} \in PC_r(I,X)\}$. Set
\[
\|x\|_{PC} = \max\Bigl\{\sup_{t\in I}\|x(t+0)\|,\ \sup_{t\in I}\|x(t-0)\|\Bigr\}, \qquad \|x\|_{PC^1} = \|x\|_{PC} + \|\dot{x}\|_{PC}.
\tag{2.1}
\]
It can be seen that, endowed with the norm $\|\cdot\|_{PC}$ ($\|\cdot\|_{PC^1}$), the spaces $PC_l(I,X)$ ($PC^1_l(I,X)$) and $PC_r(I,X)$ ($PC^1_r(I,X)$) are Banach spaces.
In order to construct the $C_0$-semigroup generated by the operator matrix $\mathcal{A}$, we need the following lemma ([14, Theorem 5.2.2]).
Lemma 2.1. Let A be a densely defined linear operator in X with $\rho(A) \neq \emptyset$. Then the Cauchy problem
\[
\dot{x}(t) = Ax(t), \quad t > 0, \qquad x(0) = x_0
\tag{2.2}
\]
has a unique classical solution for each $x_0 \in D(A)$ if, and only if, A is the infinitesimal generator of a $C_0$-semigroup $\{S(t), t \ge 0\}$ in X.

In the following lemma we construct the $C_0$-semigroup generated by $\mathcal{A}$.
Lemma 2.2 [12, Lemma 1]. Suppose A is the infinitesimal generator of a $C_0$-semigroup $\{S(t), t \ge 0\}$ on X. Then
\[
\mathcal{A} = \begin{pmatrix} 0 & I \\ 0 & A \end{pmatrix}
\]
is the infinitesimal generator of a $C_0$-semigroup $\{\mathcal{S}(t), t \ge 0\}$ on $X \times X$, given by
\[
\mathcal{S}(t) = \begin{pmatrix} I & \int_0^t S(\tau)\,d\tau \\ 0 & S(t) \end{pmatrix}.
\tag{2.3}
\]
Proof. Obviously, $\mathcal{A}$ is a densely defined linear operator in $X \times X$ with $\rho(\mathcal{A}) \neq \emptyset$ according to the assumption.
Consider the following initial value problem:
\[
\ddot{x}(t) = A\dot{x}(t), \quad t \in (0,T], \qquad x(0) = x_0, \quad \dot{x}(0) = x_1 \in D(A).
\tag{2.4}
\]
It is easy to see that the classical solution of (2.4) can be given by
\[
x(t) = x_0 + \int_0^t S(\tau)x_1\,d\tau, \qquad \dot{x}(t) = S(t)x_1.
\tag{2.5}
\]
Setting $v_0(t) = x(t)$, $v_1(t) = \dot{x}(t)$, $v(t) = \binom{v_0(t)}{v_1(t)}$, $v^0 = \binom{x_0}{x_1} \in D(\mathcal{A}) = X \times D(A)$, (2.4) can be rewritten as
\[
\dot{v}(t) = \mathcal{A}v(t), \quad t \in (0,T], \qquad v(0) = v^0 \in D(\mathcal{A}),
\tag{2.6}
\]
and (2.6) has a unique classical solution v given by
\[
v(t) = \begin{pmatrix} I & \int_0^t S(\tau)\,d\tau \\ 0 & S(t) \end{pmatrix} v^0.
\tag{2.7}
\]
Using Lemma 2.1, $\mathcal{A}$ generates a $C_0$-semigroup $\{\mathcal{S}(t), t \ge 0\}$. □
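As an aside for readers who want to see formula (2.3) in action, the following sketch (not part of the original paper) checks the block-semigroup formula numerically in the finite-dimensional case $X = R^2$, where $S(t) = e^{tA}$; the particular matrix A and the test times are assumptions made only for this illustration.

```python
# Finite-dimensional sanity check of Lemma 2.2 / formula (2.3): X = R^2, S(t) = exp(tA).
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

A = np.array([[0.0, 1.0], [-2.0, -0.5]])      # assumed generator on X = R^2
I2 = np.eye(2)

def S(t):
    return expm(t * A)                         # S(t) = e^{tA}

def int_S(t):
    # entrywise quadrature of int_0^t S(tau) d tau
    return np.array([[quad(lambda tau: S(tau)[i, j], 0, t)[0] for j in range(2)]
                     for i in range(2)])

def calS(t):
    # the block operator of (2.3): [[I, int_0^t S], [0, S(t)]]
    return np.block([[I2, int_S(t)], [np.zeros((2, 2)), S(t)]])

t, s = 0.7, 0.4
print(np.allclose(calS(t + s), calS(t) @ calS(s)))             # semigroup property holds

calA = np.block([[np.zeros((2, 2)), I2], [np.zeros((2, 2)), A]])
v0 = np.array([1.0, -1.0, 0.5, 2.0])
h = 1e-6
deriv = (calS(t + h) @ v0 - calS(t) @ v0) / h                  # d/dt [calS(t) v0]
print(np.allclose(deriv, calA @ (calS(t) @ v0), atol=1e-4))    # calS(t) v0 solves (2.6)
```

In infinite dimensions the same conclusion is, of course, what Lemma 2.1 delivers abstractly.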
In order to study the existence of optimal controls and the necessary conditions of optimality, we also need some important lemmas. For the reader's convenience, we state the following results.
Lemma 2.3 [7, Lemma 3.2]. Suppose A is the infinitesimal generator of a compact semigroup $\{S(t), t \ge 0\}$ in X. Then the operator $Q : L^p([0,T],X) \to C([0,T],X)$ with $p > 1$ given by
\[
(Qf)(t) = \int_0^t S(t-\tau)f(\tau)\,d\tau
\tag{2.8}
\]
is strongly continuous.
Lemma 2.4 [15, Lemma 1.1]. Let $\varphi \in C([0,T],X)$ satisfy the following inequality:
\[
\|\varphi(t)\| \le a + b\int_0^t \|\varphi(s)\|\,ds + c\int_0^t \|\varphi_s\|_B\,ds \quad \forall t \in [0,T],
\tag{2.9}
\]
where $a, b, c \ge 0$ are constants and $\|\varphi_s\|_B = \sup_{0\le\tau\le s}\|\varphi(\tau)\|$. Then
\[
\|\varphi(t)\| \le a e^{(b+c)t}.
\tag{2.10}
\]
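For intuition (this is not part of the paper), the short numerical sketch below checks the bound (2.10) on a scalar example in which (2.9) holds with equality, so that $\|\varphi_s\|_B = \varphi(s)$; the constants a, b, c, T are arbitrary illustrative choices.

```python
# Numerical check of the Gronwall-type bound (2.10) for a nondecreasing scalar phi
# satisfying phi(t) = a + (b + c) * int_0^t phi(s) ds (i.e., (2.9) with equality).
import numpy as np

a, b, c, T = 1.0, 0.8, 0.3, 2.0                 # illustrative constants
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

phi = np.empty_like(t)
phi[0] = a
for k in range(1, len(t)):
    integral = np.trapz(phi[:k], t[:k]) + 0.5 * dt * phi[k - 1]   # int_0^{t_k}, O(dt) error
    phi[k] = a + (b + c) * integral

bound = a * np.exp((b + c) * t)
print(np.all(phi <= bound * (1 + 1e-3)))        # phi stays below a*exp((b+c)t)
print(phi[-1], bound[-1])                        # both close to a*exp((b+c)*T)
```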
3. Existence of optimal controls
In this section, we not only present the existence of the $PC_l$-mild solution of the controlled system (1.1) but also give the existence of optimal controls of systems governed by (1.1).
We consider the following controlled system:
\[
\begin{aligned}
\ddot{x}(t) &= A\dot{x}(t) + f(t,x(t),\dot{x}(t)) + B(t)u(t), && t \in (0,T] \setminus \Theta, \\
\Delta_l x(t_i) &= J_i^0(x(t_i)), \quad \Delta_l \dot{x}(t_i) = J_i^1(\dot{x}(t_i)), && t_i \in \Theta, \\
x(0) &= x_0, \quad \dot{x}(0) = x_1, \quad u \in U_{ad},
\end{aligned}
\tag{3.1}
\]
and naturally introduce its mild solution.
Definition 3.1. A function $x \in PC^1_l(I,X)$ is said to be a $PC_l$-mild solution of the system (3.1) if x satisfies the following integral equation:
\[
\begin{aligned}
x(t) ={}& x_0 + \int_0^t S(s)x_1\,ds + \int_0^t\!\!\int_\tau^t S(s-\tau)\bigl[f(\tau,x(\tau),\dot{x}(\tau)) + B(\tau)u(\tau)\bigr]\,ds\,d\tau \\
&+ \sum_{0<t_i<t}\Bigl[J_i^0(x(t_i)) + \int_{t_i}^t S(s-t_i)J_i^1(\dot{x}(t_i))\,ds\Bigr].
\end{aligned}
\tag{3.2}
\]
For the forthcoming analysis, we need the following assumptions:
[B]: $B \in L^\infty(I,£(Y,X))$;
[F]: (1) $f : I \times X \times X \to X$ is measurable in $t \in I$ and locally Lipschitz continuous with respect to the last two variables; that is, for all $x_1, x_2, y_1, y_2 \in X$ satisfying $\|x_1\|, \|x_2\|, \|y_1\|, \|y_2\| \le \rho$, we have
\[
\|f(t,x_1,y_1) - f(t,x_2,y_2)\| \le L(\rho)\bigl(\|x_1-x_2\| + \|y_1-y_2\|\bigr);
\tag{3.3}
\]
(2) there exists a constant $a > 0$ such that
\[
\|f(t,x,y)\| \le a\bigl(1 + \|x\| + \|y\|\bigr) \quad \forall x, y \in X;
\tag{3.4}
\]
[J]: (1) $J_i^0$ ($J_i^1$): $X \to X$ ($i = 1,2,\ldots,n$) map bounded sets of X into bounded sets of X;
(2) there exist constants $e_i^0, e_i^1 \ge 0$ such that the maps $J_i^0, J_i^1 : X \to X$ satisfy
\[
\|J_i^0(x) - J_i^0(y)\| \le e_i^0\|x-y\|, \qquad \|J_i^1(x) - J_i^1(y)\| \le e_i^1\|x-y\| \quad \forall x, y \in X\ (i = 1,2,\ldots,n).
\tag{3.5}
\]
Similar to the proof of the existence of mild solutions for first-order impulsive evolution equations (see [16]), one can verify the following basic existence result; here we have to deal with the space $PC^1_l(I,X)$ instead.
Theorem 3.2. Suppose that A is the infinitesimal generator of a $C_0$-semigroup. Under assumptions [B], [F], and [J](1), the system (3.1) has a unique $PC_l$-mild solution for every $u \in U_{ad}$.

Proof. Consider the map H given by
\[
(Hx)(t) = x_0 + \int_0^t S(s)x_1\,ds + \int_0^t\!\!\int_\tau^t S(s-\tau)\bigl[f(\tau,x(\tau),\dot{x}(\tau)) + B(\tau)u(\tau)\bigr]\,ds\,d\tau
\tag{3.6}
\]
on
\[
B(x_0,x_1,1) = \bigl\{x \in C^1([0,T_1],X) \mid \|\dot{x}(t) - x_1\| + \|x(t) - x_0\| \le 1,\ 0 \le t \le T_1\bigr\},
\tag{3.7}
\]
where $T_1$ is to be chosen. Using the assumptions and the properties of the semigroup, we can show that H is a contraction map and obtain the local existence of a mild solution for the following differential equation without impulses:
\[
\ddot{x}(t) = A\dot{x}(t) + f(t,x(t),\dot{x}(t)) + B(t)u(t), \quad t \in (0,T], \qquad x(0) = x_0, \quad \dot{x}(0) = x_1, \quad u \in U_{ad}.
\tag{3.8}
\]
The global existence comes from an a priori estimate of the mild solution in the space $C^1(I,X)$, which can be proved by the Gronwall lemma.
Step by step, the existence of the $PC_l$-mild solution of (3.1) can be derived. □
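To illustrate the contraction (Picard) argument used in the proof above (again not taken from the paper), the sketch below iterates the impulse-free integral map (3.6) on toy data: X = R, A = 0 (so S(t) = I), a globally Lipschitz f, and a fixed control u; all concrete choices are assumptions made for the demonstration.

```python
# Picard iteration for the impulse-free mild solution (3.6) on toy data:
# X = R, A = 0 (hence S(t) = I), f(t, x, v) = sin(x) + 0.1*v, B(t) = 1, u(t) = cos(t).
import numpy as np

T, N = 1.0, 400
t = np.linspace(0.0, T, N + 1)
x0, x1 = 0.5, -0.2

def f(tau, x, v):
    return np.sin(x) + 0.1 * v                  # globally Lipschitz in (x, v)

u = np.cos(t)

# iterate (Hx)(t) = x0 + t*x1 + int_0^t (t - tau) * [f(tau, x, x') + u(tau)] d tau
x = np.full_like(t, x0)
v = np.full_like(t, x1)                          # v approximates x'
for _ in range(30):
    g = f(t, x, v) + u
    x_new = np.array([x0 + ti * x1 + np.trapz((ti - t[:k + 1]) * g[:k + 1], t[:k + 1])
                      for k, ti in enumerate(t)])
    v_new = np.array([x1 + np.trapz(g[:k + 1], t[:k + 1]) for k in range(N + 1)])
    if max(np.max(np.abs(x_new - x)), np.max(np.abs(v_new - v))) < 1e-10:
        x, v = x_new, v_new
        break
    x, v = x_new, v_new

# consistency check: the limit should solve x'' = f(t, x, x') + u with x(0)=x0, x'(0)=x1
print(np.allclose(np.gradient(x, t), v, atol=1e-3))
```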
Let $x^u$ denote the $PC_l$-mild solution of the system (3.1) corresponding to the control $u \in U_{ad}$. We then consider the Lagrange problem (P):
find $u^0 \in U_{ad}$ such that
\[
J(u^0) \le J(u) \quad \forall u \in U_{ad},
\tag{3.9}
\]
where
\[
J(u) = \int_0^T l\bigl(t, x^u(t), \dot{x}^u(t), u(t)\bigr)\,dt.
\tag{3.10}
\]
Suppose that
[L]: (1) the functional $l : I \times X \times X \times Y \to R \cup \{\infty\}$ is Borel measurable;
(2) $l(t,\cdot,\cdot,\cdot)$ is sequentially lower semicontinuous on $X \times X \times Y$ for almost all $t \in I$;
(3) $l(t,x,y,\cdot)$ is convex on Y for each $(x,y) \in X \times X$ and almost all $t \in I$;
(4) there exist constants $b \ge 0$, $c > 0$ and $\varphi \in L^1(I,R)$ such that
\[
l(t,x,y,u) \ge \varphi(t) + b\bigl(\|x\| + \|y\|\bigr) + c\|u\|_Y^p \quad \forall x, y \in X,\ u \in Y.
\tag{3.11}
\]
Now we can give the following result on the existence of optimal controls for problem (P).
Theorem 3.3. Suppose that A is the infinitesimal generator of a compact semigroup. Under assumptions [F], [L], and [J](2), the problem (P) has a solution.
Proof. If $\inf\{J(u) \mid u \in U_{ad}\} = +\infty$, there is nothing to prove.
We assume that $\inf\{J(u) \mid u \in U_{ad}\} = m < +\infty$. By assumption [L], we have $m > -\infty$. By the definition of the infimum, there exists a sequence $\{u^n\} \subset U_{ad}$ such that $J(u^n) \to m$. Since $\{u^n\}$ is bounded in $L^p(I,Y)$, there exist a subsequence, relabeled as $\{u^n\}$, and $u^0 \in L^p(I,Y)$ such that
\[
u^n \xrightarrow{\,w\,} u^0 \quad \text{in } L^p(I,Y).
\tag{3.12}
\]
Since $U_{ad}$ is closed and convex, from the Mazur lemma we have $u^0 \in U_{ad}$.
Suppose $x^n$ is the $PC_l$-mild solution of (3.1) corresponding to $u^n$ ($n = 0,1,2,\ldots$). Then $x^n$ satisfies the following integral equation:
\[
\begin{aligned}
x^n(t) ={}& x_0 + \int_0^t S(s)x_1\,ds + \int_0^t\!\!\int_\tau^t S(s-\tau)\bigl[f(\tau,x^n(\tau),\dot{x}^n(\tau)) + B(\tau)u^n(\tau)\bigr]\,ds\,d\tau \\
&+ \sum_{0<t_i<t} J_i^0(x^n(t_i)) + \sum_{0<t_i<t}\int_{t_i}^t S(s-t_i)J_i^1(\dot{x}^n(t_i))\,ds.
\end{aligned}
\tag{3.13}
\]
Using the boundedness of $\{u^n\}$ and Theorem 3.2, there exists a number $\rho > 0$ such that $\|x^n\|_{PC^1_l(I,X)} \le \rho$.
Define
\[
\eta_n(t) = \int_0^t\!\!\int_\tau^t S(s-\tau)B(\tau)u^n(\tau)\,ds\,d\tau - \int_0^t\!\!\int_\tau^t S(s-\tau)B(\tau)u^0(\tau)\,ds\,d\tau.
\tag{3.14}
\]
According to Lemma 2.3, we have
\[
\eta_n \longrightarrow 0 \quad \text{in } C(I,X) \text{ as } u^n \xrightarrow{\,w\,} u^0.
\tag{3.15}
\]
By assumptions [F], [J](2), Theorem 3.2, and the Gronwall lemma with impulses (see [17, Lemma 1.7.1]), there exists a constant $M > 0$ such that
\[
\|x^n(t) - x^0(t)\| + \|\dot{x}^n(t) - \dot{x}^0(t)\| \le M\|\eta_n\|_{C^1(I,X)},
\tag{3.16}
\]
that is,
\[
x^n \longrightarrow x^0 \quad \text{in } PC^1_l(I,X) \text{ as } n \longrightarrow \infty.
\tag{3.17}
\]
Since $PC^1_l(I,X) \hookrightarrow L^1(I,X)$, using assumption [L] and Balder's theorem (see [18]), we can obtain
\[
m = \lim_{n\to\infty}\int_0^T l\bigl(t,x^n(t),\dot{x}^n(t),u^n(t)\bigr)\,dt \ge \int_0^T l\bigl(t,x^0(t),\dot{x}^0(t),u^0(t)\bigr)\,dt = J(u^0) \ge m.
\tag{3.18}
\]
This means that J attains its minimum at $u^0 \in U_{ad}$. □
4. Necessary conditions of optimality
In this section, we present necessary conditions of optimality for the Lagrange problem (P). Let $(x^0, u^0)$ be an optimal pair.
[F']: f satisfies assumption [F], f is continuously Fréchet differentiable with respect to $x^0$ and $\dot{x}^0$, respectively, $f_x^0 \in L^1(I,£(X))$, $f_{\dot{x}}^0 \in L^\infty(I,£(X))$, $f_x^0(t_i \pm 0) = f_x^0(t_i)$, $f_{\dot{x}}^0(t_i \pm 0) = f_{\dot{x}}^0(t_i)$ for $t_i \in \Theta$, where $f_x^0(t) = f_x(t,x^0(t),\dot{x}^0(t))$ and $f_{\dot{x}}^0(t) = f_{\dot{x}}(t,x^0(t),\dot{x}^0(t))$.
[L']: l is continuously Fréchet differentiable with respect to x, $\dot{x}$, and u, respectively, $l_x^0(\cdot) \in L^1(I,X^*)$, $l_{\dot{x}}^0(\cdot) \in W^{1,1}(I,X^*)$, $l_u^0(\cdot) \in L^1(I,Y^*)$, $l_{\dot{x}}^0(T) \in X^*$, $l_{\dot{x}}^0(t_i \pm 0) = l_{\dot{x}}^0(t_i)$ for $t_i \in \Theta$, where $l_x^0(\cdot) = l_x(\cdot,x^0(\cdot),\dot{x}^0(\cdot),u^0(\cdot))$, $l_{\dot{x}}^0(\cdot) = l_{\dot{x}}(\cdot,x^0(\cdot),\dot{x}^0(\cdot),u^0(\cdot))$, and $l_u^0(\cdot) = l_u(\cdot,x^0(\cdot),\dot{x}^0(\cdot),u^0(\cdot))$.
[J']: $J_i^0$ ($J_i^1$) is continuously Fréchet differentiable at $x^0$ ($\dot{x}^0$), and $J_{i\dot{x}}^{10*}(t_i)D(A^*) \subseteq D(A^*)$, where $J_{ix}^{00}(t_i) = J_{ix}^0(x^0(t_i))$ and $J_{i\dot{x}}^{10}(t_i) = J_{i\dot{x}}^1(\dot{x}^0(t_i))$ ($i = 1,2,\ldots,n$).
In order to derive an a priori estimate on the solution of the adjoint equation, we need the following generalized backward Gronwall lemma.
Lemma 4.1. Let $\varphi \in C(I,X^*)$ satisfy the following inequality:
\[
\|\varphi(t)\|_{X^*} \le a + b\int_t^T \|\varphi(s)\|_{X^*}\,ds + c\int_t^T \|\varphi_s\|_{B^0}\,ds \quad \forall t \in I,
\tag{4.1}
\]
where $a, b, c \ge 0$ are constants and $\|\varphi_s\|_{B^0} = \sup_{s\le\tau\le T}\|\varphi(\tau)\|_{X^*}$. Then
\[
\|\varphi(t)\|_{X^*} \le a\exp\bigl[(b+c)(T-t)\bigr].
\tag{4.2}
\]
Proof. Setting $\varphi(T-t) = \psi(t)$ for $t \in I$ and $\|\psi_t\|_B = \sup_{0\le\tau\le t}\|\psi(\tau)\|_{X^*}$, we have
\[
\|\psi(t)\|_{X^*} \le a + b\int_0^t \|\psi(s)\|_{X^*}\,ds + c\int_0^t \|\psi_s\|_B\,ds.
\tag{4.3}
\]
Using Lemma 2.4, we obtain
\[
\|\psi(t)\|_{X^*} \le a\exp\bigl[(b+c)t\bigr];
\tag{4.4}
\]
further,
\[
\|\varphi(t)\|_{X^*} \le a\exp\bigl[(b+c)(T-t)\bigr].
\tag{4.5}
\]
The proof is completed.

Let X be a reflexive Banach space, let $A^*$ be the adjoint operator of A, and let $\{S^*(t), t \ge 0\}$ be the adjoint semigroup of $\{S(t), t \ge 0\}$. It is a $C_0$-semigroup and its generator is just $A^*$ (see [14, Theorem 2.4.4]).
We consider the following adjoint equation:
\[
\begin{aligned}
\varphi''(t) &= -\bigl(A^*\varphi(t)\bigr)' - \bigl(f_{\dot{x}}^{0*}(t)\varphi(t)\bigr)' + f_x^{0*}(t)\varphi(t) + l_x^0(t) - \dot{l}_{\dot{x}}^0(t), && t \in [0,T) \setminus \Theta, \\
\varphi(T) &= 0, \quad \Delta_r \varphi(t_i) = J_{i\dot{x}}^{10*}(t_i)\varphi(t_i), && t_i \in \Theta, \\
\varphi'(T) &= -l_{\dot{x}}^0(T), \quad \Delta_r \varphi'(t_i) = G_i\bigl(\varphi(t_i),\varphi'(t_i)\bigr), && t_i \in \Theta,
\end{aligned}
\tag{4.6}
\]
where
\[
G_i\bigl(\varphi(t_i),\varphi'(t_i)\bigr) = \bigl[J_{ix}^{00*}(t_i)\bigl(A^* + f_{\dot{x}}^{0*}(t_i)\bigr) - \bigl(A^* + f_{\dot{x}}^{0*}(t_i)\bigr)J_{i\dot{x}}^{10*}(t_i)\bigr]\varphi(t_i) + J_{ix}^{00*}(t_i)\varphi'(t_i) + J_{ix}^{00*}(t_i)l_{\dot{x}}^0(t_i).
\tag{4.7}
\]
A function $\varphi \in PC^1_r(I,X^*) \cap PC_r(I,D(A^*))$ is said to be a $PC_r$-mild solution of (4.6) if $\varphi$ is given by
\[
\begin{aligned}
\varphi(t) ={}& \int_t^T S^*(\tau-t)\Bigl[\int_\tau^T\bigl(f_x^{0*}(s)\varphi(s) - l_x^0(s) + \dot{l}_{\dot{x}}^0(s)\bigr)\,ds + f_{\dot{x}}^{0*}(\tau)\varphi(\tau) + l_{\dot{x}}^0(T)\Bigr]\,d\tau \\
&+ \sum_{t_i>t} S^*(t_i-t)J_{i\dot{x}}^{10*}(t_i)\varphi(t_i) + \sum_{t_i>t}\int_t^{t_i} S^*(\tau-t)G_i\bigl(\varphi(t_i),\varphi'(t_i)\bigr)\,d\tau.
\end{aligned}
\tag{4.8}
\]
Lemma 4.2. Assume that X is a reflexive Banach space. Under the assumptions [F'], [L'], and [J'], the evolution equation (4.6) has a unique $PC_r$-mild solution $\varphi \in PC^1_r(I,X^*)$.
Proof. Consider the following equation:
\[
\begin{aligned}
&\varphi'(t) + \bigl(A^* + f_{\dot{x}}^{0*}(t)\bigr)\varphi(t) + \int_t^T\bigl(f_x^{0*}(s)\varphi(s) + l_x^0(s) - \dot{l}_{\dot{x}}^0(s)\bigr)\,ds = \sum_{t_i>t} G_i\bigl(\varphi(t_i),\varphi'(t_i)\bigr) - l_{\dot{x}}^0(T), \quad t \in I \setminus \Theta, \\
&\varphi(T) = 0, \qquad \Delta_r \varphi(t_i) = J_{i\dot{x}}^{10*}(t_i)\varphi(t_i), \quad t_i \in \Theta.
\end{aligned}
\tag{4.9}
\]
Equation (4.9) is a linear impulsive integro-differential equation. Setting $t = T-s$, $\psi(s) = \varphi(T-s)$, (4.9) can be rewritten as
\[
\begin{aligned}
&\psi'(s) = \bigl(A^* + f_{\dot{x}}^{0*}(T-s)\bigr)\psi(s) + F(s) + \sum_{s_i<s} g_i\bigl(\psi(s_i),\psi'(s_i)\bigr), \quad s \in [0,T) \setminus \Lambda, \\
&\psi(0) = 0, \qquad \Delta_l \psi(s_i) = J_{i\dot{x}}^{10*}(t_i)\psi(s_i), \quad s_i \in \Lambda = \bigl\{s_i = T - t_i \mid t_i \in \Theta\bigr\},
\end{aligned}
\tag{4.10}
\]
where
\[
\begin{aligned}
g_i\bigl(\psi(s_i),\psi'(s_i)\bigr) &= \bigl[\bigl(A^* + f_{\dot{x}}^{0*}(t_i)\bigr)J_{i\dot{x}}^{10*}(t_i) - J_{ix}^{00*}(t_i)\bigl(A^* + f_{\dot{x}}^{0*}(t_i)\bigr)\bigr]\psi(s_i) + J_{ix}^{00*}(t_i)\psi'(s_i) - J_{ix}^{00*}(t_i)l_{\dot{x}}^0(t_i), \\
F(s) &= \int_{T-s}^T\bigl(f_x^{0*}(\theta)\psi(T-\theta) + l_x^0(\theta) - \dot{l}_{\dot{x}}^0(\theta)\bigr)\,d\theta + l_{\dot{x}}^0(T).
\end{aligned}
\tag{4.11}
\]
Obviously, if $\varphi$ is the classical solution of (4.9), then it must be the $PC_r$-mild solution of (4.6). Now we show that (4.9) has a unique classical solution $\varphi \in PC^1(I,X^*) \cap PC(I,D(A^*))$.
For $s \in [0,s_n]$, we prove that the following equation:
\[
\psi'(s) = A^*\psi(s) + f_{\dot{x}}^{0*}(T-s)\psi(s) + F(s), \qquad \psi(0) = 0,
\tag{4.12}
\]
has a unique classical solution $\psi \in C^1([0,s_n],X^*) \cap C([0,s_n],D(A^*))$ given by
\[
\psi(s) = \int_0^s S^*(s-\tau)\bigl[f_{\dot{x}}^{0*}(T-\tau)\psi(\tau) + F(\tau)\bigr]\,d\tau.
\tag{4.13}
\]
By following the same procedure as in [16, Theorem 4.A], one can verify that (4.12) has a unique mild solution $\psi \in C([0,s_n],X^*)$ given by the expression (4.13).
By the definition of F, it is easy to see that $F \in L^1([0,s_n],X^*) \cap C((0,s_n),X^*)$. Using (4.13) and the basic properties of the $C_0$-semigroup, we obtain $\psi(s) \in D(A^*)$ for $s \in [0,s_n]$ and
\[
\psi'(s) = f_{\dot{x}}^{0*}(T-s)\psi(s) + F(s) + A^*\int_0^s S^*(s-\tau)\bigl[f_{\dot{x}}^{0*}(T-\tau)\psi(\tau) + F(\tau)\bigr]\,d\tau.
\tag{4.14}
\]
This implies $\psi \in C^1((0,s_n),X^*)$ and $\psi'(s_n-) = \psi'(s_n)$. Using [14, Theorem 5.2.13], (4.12) has a unique classical solution $\psi \in C^1((0,s_n),X^*) \cap C([0,s_n],D(A^*))$ given by the expression (4.13). In addition, the expressions (4.13) and (4.12) imply $\psi(0) = 0$, $\psi'(0) = l_{\dot{x}}^0(T)$, and $\psi(s_n-0)$, $\psi'(s_n-0)$ exist. Furthermore, $\psi \in C^1([0,s_n],X^*) \cap C([0,s_n],D(A^*))$.
By assumption [J'], we have
\[
\psi_n^0 = \psi(s_n) + J_{n\dot{x}}^{10*}(t_n)\psi(s_n) \in D(A^*), \qquad \psi_n^1 = \psi'(s_n) + g_n\bigl(\psi(s_n),\psi'(s_n)\bigr) \in X^*.
\tag{4.15}
\]
For $s \in (s_n, s_{n-1}]$, consider the following equation:
\[
\begin{aligned}
&\psi'(s) = \bigl(A^* + f_{\dot{x}}^{0*}(T-s)\bigr)\psi(s) + \int_{T-s}^{T-s_n}\bigl(f_x^{0*}(\theta)\psi(T-\theta) + l_x^0(\theta) - \dot{l}_{\dot{x}}^0(\theta)\bigr)\,d\theta + \psi_n^1, \\
&\psi(s_n+) = \psi_n^0,
\end{aligned}
\tag{4.16}
\]
that is, study the following equation:
\[
\psi'(s) = \bigl(A^* + f_{\dot{x}}^{0*}(T-s)\bigr)\psi(s) + F(s) + g_n\bigl(\psi(s_n),\psi'(s_n)\bigr), \qquad \psi(s_n+) = \psi_n^0.
\tag{4.17}
\]
By following the same procedure as on the time interval $[0,s_n]$, it has a unique classical solution given by
\[
\psi(s) = S^*(s-s_n)\psi_n^0 + \int_{s_n}^s S^*(s-\tau)\bigl[f_{\dot{x}}^{0*}(T-\tau)\psi(\tau) + F(\tau) + g_n\bigl(\psi(s_n),\psi'(s_n)\bigr)\bigr]\,d\tau.
\tag{4.18}
\]
In general, for $s \in (s_i, s_{i-1}]$ ($i = 1,2,\ldots,n$), consider the following equation:
\[
\begin{aligned}
&\psi'(s) = \bigl(A^* + f_{\dot{x}}^{0*}(T-s)\bigr)\psi(s) + F(s) + g_i\bigl(\psi(s_i),\psi'(s_i)\bigr), \\
&\psi(s_i+) = \psi(s_i) + J_{i\dot{x}}^{10*}(t_i)\psi(s_i) \in D(A^*).
\end{aligned}
\tag{4.19}
\]
It has a unique classical solution given by
\[
\psi(s) = S^*(s-s_i)\psi_i^0 + \int_{s_i}^s S^*(s-\tau)\bigl[f_{\dot{x}}^{0*}(T-\tau)\psi(\tau) + F(\tau) + g_i\bigl(\psi(s_i),\psi'(s_i)\bigr)\bigr]\,d\tau.
\tag{4.20}
\]
Repeating the procedure until the whole interval is covered, and combining all of the solutions on $[t_i,t_{i+1}]$ ($i = 0,1,\ldots,n$), we obtain the classical solution of (4.10) given by
\[
\begin{aligned}
\psi(s) ={}& \int_0^s S^*(s-\tau)\bigl[f_{\dot{x}}^{0*}(T-\tau)\psi(\tau) + F(\tau)\bigr]\,d\tau \\
&+ \sum_{0<s_i<s}\Bigl[S^*(s-s_i)J_{i\dot{x}}^{10*}(t_i)\psi(s_i) + \int_{s_i}^s S^*(s-\tau)g_i\bigl(\psi(s_i),\psi'(s_i)\bigr)\,d\tau\Bigr].
\end{aligned}
\tag{4.21}
\]
Further, (4.9) has a unique classical solution $\varphi \in PC^1(I,X^*) \cap PC(I,D(A^*))$ given by (4.8). □


Using assumption [F'], [3, Corollary 3.2], and [2, Theorem 2], $\{A^*(t) = A^* + f_{\dot{x}}^{0*}(t) \mid t \in I\}$ generates a strongly continuous evolution operator $U^*(t,s)$, $0 \le s \le t \le T$. For simplicity, we state the following result.
Remark 4.3. The $PC_r$-mild solution $\varphi$ of (4.6) can be rewritten as
\[
\begin{aligned}
\varphi(t) ={}& \int_t^T U^*(\tau,t)\Bigl[\int_\tau^T\bigl(f_x^{0*}(s)\varphi(s) + l_x^0(s) - \dot{l}_{\dot{x}}^0(s)\bigr)\,ds + l_{\dot{x}}^0(T)\Bigr]\,d\tau \\
&+ \sum_{t_i>t} U^*(t_i,t)J_{i\dot{x}}^{10*}(t_i)\varphi(t_i) + \sum_{t_i>t}\int_t^{t_i} U^*(\tau,t)G_i\bigl(\varphi(t_i),\varphi'(t_i)\bigr)\,d\tau.
\end{aligned}
\tag{4.22}
\]
Now we can give the necessary conditions of optimality for the Lagrange problem (P).
Theorem 4.4. Suppose that both X and Y are reflexive Banach spaces. Under the assumption of Theorem 3.2 and assumptions [B], [F'], [L'], and [J'], in order that the pair $\{x^0,u^0\}$ be optimal, it is necessary that there exists a function $\varphi \in PC^1_r(I,X^*) \cap PC_r(I,D(A^*))$ such that the following evolution equations and inequality hold:
\[
\begin{aligned}
\ddot{x}^0(t) &= A\dot{x}^0(t) + f(t,x^0(t),\dot{x}^0(t)) + B(t)u^0(t), && t \in (0,T] \setminus \Theta, \\
x^0(0) &= x_0, \quad \Delta_l x^0(t_i) = J_i^0(x^0(t_i)), && t_i \in \Theta, \\
\dot{x}^0(0) &= x_1, \quad \Delta_l \dot{x}^0(t_i) = J_i^1(\dot{x}^0(t_i)), && t_i \in \Theta;
\end{aligned}
\tag{4.23}
\]
\[
\begin{aligned}
\varphi''(t) &= -\bigl(A^*\varphi(t)\bigr)' + f_x^{0*}(t)\varphi(t) - \bigl(f_{\dot{x}}^{0*}(t)\varphi(t)\bigr)' + l_x^0(t) - \dot{l}_{\dot{x}}^0(t), && t \in [0,T) \setminus \Theta, \\
\varphi(T) &= 0, \quad \Delta_r \varphi(t_i) = J_{i\dot{x}}^{10*}(t_i)\varphi(t_i), && t_i \in \Theta, \\
\varphi'(T) &= -l_{\dot{x}}^0(T), \quad \Delta_r \varphi'(t_i) = G_i\bigl(\varphi(t_i),\varphi'(t_i)\bigr), && t_i \in \Theta;
\end{aligned}
\tag{4.24}
\]
\[
\int_0^T\bigl\langle l_u^0(t) + B^*(t)\varphi(t),\ u(t) - u^0(t)\bigr\rangle_{Y^*,Y}\,dt \ge 0 \quad \forall u \in U_{ad}.
\tag{4.25}
\]
Proof. Since $(x^0,u^0) \in PC^1_l(I,X) \times U_{ad}$ is an optimal pair, it must satisfy (4.23).
Since $U_{ad}$ is convex, it is clear that $u^\varepsilon = u^0 + \varepsilon(u-u^0) \in U_{ad}$ for $\varepsilon \in [0,1]$, $u \in U_{ad}$. Let $x^\varepsilon$ denote the $PC_l$-mild solution of (3.1) corresponding to the control $u^\varepsilon$. Using assumption [J'], J is Gateaux differentiable, and the G-derivative of J at $u^0$ in the direction $u-u^0$ can be given by
\[
\begin{aligned}
\lim_{\varepsilon\to 0}\frac{J(u^\varepsilon) - J(u^0)}{\varepsilon}
={}& \int_0^T\bigl\langle l_x^0(t), y(t)\bigr\rangle_{X^*,X}\,dt + \int_0^T\bigl\langle l_{\dot{x}}^0(t), \dot{y}(t)\bigr\rangle_{X^*,X}\,dt + \int_0^T\bigl\langle l_u^0(t), u(t)-u^0(t)\bigr\rangle_{Y^*,Y}\,dt \\
={}& \int_0^T\bigl\langle l_x^0(t) - \dot{l}_{\dot{x}}^0(t), y(t)\bigr\rangle_{X^*,X}\,dt + \int_0^T\bigl\langle l_u^0(t), u(t)-u^0(t)\bigr\rangle_{Y^*,Y}\,dt \\
&+ \bigl\langle l_{\dot{x}}^0(T), y(T)\bigr\rangle_{X^*,X} - \bigl\langle l_{\dot{x}}^0(0), y(0)\bigr\rangle_{X^*,X} - \sum_{i=1}^n\bigl\langle l_{\dot{x}}^0(t_i), \Delta_l y(t_i)\bigr\rangle_{X^*,X},
\end{aligned}
\tag{4.26}
\]
where the process $y \in PC^1_l(I,X)$ is the Gateaux derivative of the solution x at $u^0$ in the direction $u-u^0$, which satisfies the following equation:
\[
\begin{aligned}
\ddot{y}(t) &= \bigl(A + f_{\dot{x}}^0(t)\bigr)\dot{y}(t) + f_x^0(t)y(t) + B(t)\bigl(u(t)-u^0(t)\bigr), && t \in (0,T] \setminus \Theta, \\
y(0) &= 0, \quad \Delta_l y(t_i) = J_{ix}^{00}(t_i)y(t_i), && t_i \in \Theta, \\
\dot{y}(0) &= 0, \quad \Delta_l \dot{y}(t_i) = J_{i\dot{x}}^{10}(t_i)\dot{y}(t_i), && t_i \in \Theta.
\end{aligned}
\tag{4.27}
\]
This is usually known as the variational equation. By following the same procedure as in Theorem 3.2, one can easily establish that (4.27) has a unique $PC_l$-mild solution y given by
\[
\begin{aligned}
y(t) ={}& \int_0^t\!\!\int_0^{t-\tau} S(\nu)\bigl[f_x^0(\tau)y(\tau) + f_{\dot{x}}^0(\tau)\dot{y}(\tau) + B(\tau)\bigl(u(\tau)-u^0(\tau)\bigr)\bigr]\,d\nu\,d\tau \\
&+ \sum_{0<t_i<t}\Bigl[J_{ix}^{00}(t_i)y(t_i) + \int_{t_i}^t S(\nu-t_i)J_{i\dot{x}}^{10}(t_i)\dot{y}(t_i)\,d\nu\Bigr].
\end{aligned}
\tag{4.28}
\]
Since $u^0$ is the optimal control, we have the following inequality:
\[
\begin{aligned}
&\int_0^T\bigl\langle l_x^0(t) - \dot{l}_{\dot{x}}^0(t), y(t)\bigr\rangle_{X^*,X}\,dt + \int_0^T\bigl\langle l_u^0(t), u(t)-u^0(t)\bigr\rangle_{Y^*,Y}\,dt \\
&\quad + \bigl\langle l_{\dot{x}}^0(T), y(T)\bigr\rangle_{X^*,X} - \sum_{i=1}^n\bigl\langle l_{\dot{x}}^0(t_i), \Delta_l y(t_i)\bigr\rangle_{X^*,X} \ge 0.
\end{aligned}
\tag{4.29}
\]
Due to the reflexivity of the Banach space X, we have the Yosida approximation $\lambda_k R(\lambda_k,A^*) \to I^*$ as $\lambda_k \to \infty$, where $R(\lambda_k,A^*)$ is the resolvent of $A^*$ for $\lambda_k \in \rho(A^*)$ and $I^*$ stands for the identity operator in $X^*$. Consider the Yosida approximations of $f_x^{0*}$, $f_{\dot{x}}^{0*}$, $l_x^0$, $\dot{l}_{\dot{x}}^0$, $l_{\dot{x}}^0(T)$, $J_{ix}^{00*}(t_i)$, $J_{i\dot{x}}^{10*}(t_i)$ given by
\[
\begin{gathered}
f_x^{k*}(\cdot) = \lambda_k R(\lambda_k,A^*)f_x^{0*}(\cdot), \qquad l_x^k(\cdot) = \lambda_k R(\lambda_k,A^*)l_x^0(\cdot), \qquad \dot{l}_{\dot{x}}^k(\cdot) = \lambda_k R(\lambda_k,A^*)\dot{l}_{\dot{x}}^0(\cdot), \\
J_{ix}^{k*}(t_i) = \lambda_k R(\lambda_k,A^*)J_{ix}^{00*}(t_i), \qquad J_{i\dot{x}}^{k*}(t_i) = \lambda_k R(\lambda_k,A^*)J_{i\dot{x}}^{10*}(t_i), \qquad l_{\dot{x}}^k(T) = \lambda_k R(\lambda_k,A^*)l_{\dot{x}}^0(T),
\end{gathered}
\tag{4.30}
\]
which take values in $D(A^*)$.
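To make the Yosida approximation used here concrete (a finite-dimensional stand-in, not the paper's setting), the sketch below computes $\lambda_k R(\lambda_k,A) = \lambda_k(\lambda_k I - A)^{-1}$ for a matrix A and shows the convergence to the identity; the matrix and the values of $\lambda_k$ are illustrative assumptions.

```python
# Yosida approximation lambda * R(lambda, A) = lambda * (lambda*I - A)^{-1} converging to
# the identity as lambda -> infinity, for a matrix A (a toy stand-in for A*).
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, -1.0]])        # assumed generator (eigenvalues in the left half-plane)
I2 = np.eye(2)

def yosida(lam):
    # R(lam, A) = (lam*I - A)^{-1}, defined for lam in the resolvent set of A
    return lam * np.linalg.solve(lam * I2 - A, I2)

for lam in [1.0, 10.0, 100.0, 1000.0]:
    print(lam, np.linalg.norm(yosida(lam) - I2))  # error decays roughly like O(1/lam)
```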
Consider the following evolution equation:
\[
\begin{aligned}
\varphi_k''(t) &= -\bigl(A^*(t)\varphi_k(t)\bigr)' + f_x^{k*}(t)\varphi_k(t) + l_x^k(t) - \dot{l}_{\dot{x}}^k(t), && t \in [0,T) \setminus \Theta, \\
\varphi_k(T) &= 0, \quad \Delta_r \varphi_k(t_i) = J_{i\dot{x}}^{k*}(t_i)\varphi_k(t_i), && t_i \in \Theta, \\
\varphi_k'(T) &= -l_{\dot{x}}^k(T), \quad \Delta_r \varphi_k'(t_i) = G_i^k\bigl(\varphi_k(t_i),\varphi_k'(t_i)\bigr), && t_i \in \Theta,
\end{aligned}
\tag{4.31}
\]
where
\[
G_i^k\bigl(\varphi_k(t_i),\varphi_k'(t_i)\bigr) = \bigl[J_{ix}^{k*}(t_i)A^*(t_i) - A^*(t_i)J_{i\dot{x}}^{k*}(t_i)\bigr]\varphi_k(t_i) + J_{ix}^{k*}(t_i)\varphi_k'(t_i) + J_{ix}^{k*}(t_i)l_{\dot{x}}^k(t_i).
\tag{4.32}
\]
Similar to the proof of Lemma 4.2, one can show that (4.31) has a unique classical solution $\varphi_k$ given by
\[
\begin{aligned}
\varphi_k(t) ={}& \int_t^T U^*(\tau,t)\Bigl[\int_\tau^T\bigl(f_x^{k*}(s)\varphi_k(s) + l_x^k(s) - \dot{l}_{\dot{x}}^k(s)\bigr)\,ds + l_{\dot{x}}^k(T)\Bigr]\,d\tau \\
&+ \sum_{t_i>t}\Bigl[U^*(t_i,t)J_{i\dot{x}}^{k*}(t_i)\varphi_k(t_i) + \int_t^{t_i} U^*(\tau,t)G_i^k\bigl(\varphi_k(t_i),\varphi_k'(t_i)\bigr)\,d\tau\Bigr].
\end{aligned}
\tag{4.33}
\]
Next, we show that
\[
\varphi_k \longrightarrow \varphi \quad \text{in } PC^1_r(I,X^*) \text{ as } \lambda_k \longrightarrow \infty.
\tag{4.34}
\]
Employing the method of proof of Lemma 4.2, there exists a number $M_0 > 0$ such that
\[
\|\varphi\|_{PC^1(I,X^*)},\ \|\varphi_k\|_{PC^1(I,X^*)} \le M_0 \quad (k = 1,2,\ldots).
\tag{4.35}
\]
Setting
\[
\begin{aligned}
F_k(t) &= \int_t^T\bigl(f_x^{k*}(s)\varphi_k(s) + l_x^k(s) - \dot{l}_{\dot{x}}^k(s)\bigr)\,ds + l_{\dot{x}}^k(T) \quad (k = 0,1,\ldots), \\
a_k &= \bigl\|l_x^k - \dot{l}_{\dot{x}}^k - l_x^0 + \dot{l}_{\dot{x}}^0\bigr\|_{L^1(I,X^*)} + \bigl\|l_{\dot{x}}^k(T) - l_{\dot{x}}^0(T)\bigr\|_{X^*} + M_0\bigl\|f_x^{k*} - f_x^{0*}\bigr\|_{L^1(I,£(X^*))},
\end{aligned}
\tag{4.36}
\]
it follows that
\[
\bigl\|F_k(t) - F_0(t)\bigr\|_{X^*} \le a_k + \bigl\|f_x^{0*}\bigr\|_{L^1(I,£(X^*))}\bigl\|(\varphi_k)_t - \varphi_t\bigr\|_{B^0} + \bigl\|f_x^{0*}\bigr\|_{L^1(I,£(X^*))}\bigl\|\varphi_k(t) - \varphi(t)\bigr\|_{X^*}.
\tag{4.37}
\]
For $t \in [t_n,T]$, we have
\[
\bigl\|\varphi_k(t) - \varphi(t)\bigr\|_{X^*} \le \alpha T a_k + \alpha\theta\int_t^T\bigl\|\varphi_k(\tau) - \varphi(\tau)\bigr\|_{X^*}\,d\tau + \alpha\theta\int_t^T\bigl\|(\varphi_k)_\tau - \varphi_\tau\bigr\|_{B^0}\,d\tau,
\tag{4.38}
\]
where $\theta = \|f_x^{0*}\|_{L^1([0,T],£(X^*))}$ and $\alpha = \sup\{\|U^*(t,s)\|_{£(X^*)} \mid 0 \le s \le t \le T\}$. By Lemma 4.1, we obtain
\[
\bigl\|\varphi_k(t) - \varphi(t)\bigr\|_{X^*} \le a_k\alpha T e^{2\alpha T\theta}.
\tag{4.39}
\]
Further,
\[
\bigl\|\varphi_k'(t) - \varphi'(t)\bigr\|_{X^*} \le \lambda a_k e^{2\alpha T\theta},
\tag{4.40}
\]
where $\omega = \sup_{0\le t\le T}\|A^*(t)\|_{£(D(A^*),X^*)}$ and $\lambda = (1+T)(1+\alpha\omega+2\alpha\theta)$. Hence
\[
\bigl\|\varphi_k(t) - \varphi(t)\bigr\|_{X^*} + \bigl\|\varphi_k'(t) - \varphi'(t)\bigr\|_{X^*} \le 2\lambda a_k e^{2\alpha T\theta} \quad \text{for } t \in [t_n,T].
\tag{4.41}
\]
Using (4.8), (4.33), and (4.41), we have
\[
\begin{aligned}
\bigl\|\varphi_k(t_n-0) - \varphi(t_n-0)\bigr\|_{X^*} &\le h_k \equiv b_k + c_k + \lambda(2\delta+1)a_k e^{2\alpha T\theta}, \\
\bigl\|\varphi_k'(t_n-0) - \varphi'(t_n-0)\bigr\|_{X^*} &\le h_k,
\end{aligned}
\tag{4.42}
\]
where
\[
\begin{aligned}
b_k &= M_0(\omega+1)\sum_{i=1}^n\Bigl(2\bigl\|J_{ix}^{k*}(t_i) - J_{ix}^{00*}(t_i)\bigr\|_{£(X^*)} + \bigl\|J_{i\dot{x}}^{k*}(t_i) - J_{i\dot{x}}^{10*}(t_i)\bigr\|_{£(X^*)}\Bigr), \\
c_k &= \sum_{i=1}^n\bigl\|J_{ix}^{k*}(t_i)l_{\dot{x}}^k(t_n) - J_{ix}^{00*}(t_i)l_{\dot{x}}^0(t_n)\bigr\|_{X^*}, \\
\delta &= (\omega+1)\sum_{i=1}^n\Bigl(2\bigl\|J_{ix}^{00*}(t_i)\bigr\|_{£(X^*)} + \bigl\|J_{i\dot{x}}^{10*}(t_i)\bigr\|_{£(X^*)}\Bigr).
\end{aligned}
\tag{4.43}
\]
Hence, for $t \in (t_{n-1},t_n)$, we also obtain
\[
\bigl\|\varphi_k(t) - \varphi(t)\bigr\|_{X^*} + \bigl\|\varphi_k'(t) - \varphi'(t)\bigr\|_{X^*} \le 2\lambda\bigl(a_k + h_k\bigr)e^{2\alpha T\theta}.
\tag{4.44}
\]
By the same procedure, there exists $\gamma > 0$ such that
\[
\bigl\|\varphi_k(t) - \varphi(t)\bigr\|_{X^*} + \bigl\|\varphi_k'(t) - \varphi'(t)\bigr\|_{X^*} \le \gamma\bigl(a_k + b_k + c_k\bigr) \quad \text{for } t \in I.
\tag{4.45}
\]
This proves that
\[
\varphi_k \longrightarrow \varphi \quad \text{in } PC^1_r(I,X^*) \text{ as } \lambda_k \longrightarrow \infty.
\tag{4.46}
\]
Define
\[
\eta_k = \int_0^T\bigl\langle \varphi(t) - \varphi_k(t),\ B(t)\bigl(u(t)-u^0(t)\bigr)\bigr\rangle_{X^*,X}\,dt,
\tag{4.47}
\]
and observe that $\eta_k \to 0$ as $k \to \infty$. Thus
\[
\begin{aligned}
\int_0^T\bigl\langle \varphi(t), B(t)\bigl(u(t)-u^0(t)\bigr)\bigr\rangle_{X^*,X}\,dt
={}& \int_0^T\bigl\langle \varphi(t) - \varphi_k(t), B(t)\bigl(u(t)-u^0(t)\bigr)\bigr\rangle_{X^*,X}\,dt + \int_0^T\bigl\langle \varphi_k(t), B(t)\bigl(u(t)-u^0(t)\bigr)\bigr\rangle_{X^*,X}\,dt \\
={}& \eta_k + \int_0^T\bigl\langle l_x^k(t) - \dot{l}_{\dot{x}}^k(t), y(t)\bigr\rangle_{X^*,X}\,dt + \bigl\langle l_{\dot{x}}^k(T), y(T)\bigr\rangle_{X^*,X} - \sum_{i=1}^n\bigl\langle l_{\dot{x}}^k(t_i), \Delta_l y(t_i)\bigr\rangle_{X^*,X}
\end{aligned}
\tag{4.48}
\]
for $\lambda_k \in \rho(A^*)$, $\lambda_k > 0$. Taking the limit $k \to \infty$, we find that
\[
\begin{aligned}
\int_0^T\bigl\langle \varphi(t), B(t)\bigl(u(t)-u^0(t)\bigr)\bigr\rangle_{X^*,X}\,dt
={}& \int_0^T\bigl\langle l_x^0(t) - \dot{l}_{\dot{x}}^0(t), y(t)\bigr\rangle_{X^*,X}\,dt + \bigl\langle l_{\dot{x}}^0(T), y(T)\bigr\rangle_{X^*,X} \\
&- \sum_{i=1}^n\bigl\langle l_{\dot{x}}^0(t_i), \Delta_l y(t_i)\bigr\rangle_{X^*,X}.
\end{aligned}
\tag{4.49}
\]
Further,
\[
\int_0^T\bigl\langle l_u\bigl(t,x^0(t),\dot{x}^0(t),u^0(t)\bigr) + B^*(t)\varphi(t),\ u(t) - u^0(t)\bigr\rangle_{Y^*,Y}\,dt \ge 0 \quad \forall u \in U_{ad}.
\tag{4.50}
\]
Thus, we have proved all the necessary conditions of optimality given by (4.23)–(4.25). □

At the end of this section, an example is given to illustrate our theory. Consider the following problem:
\[
\begin{aligned}
&\frac{\partial^2}{\partial t^2}x(t,y) = \Delta\frac{\partial}{\partial t}x(t,y) + \sqrt{x^2(t,y)+1} + \sqrt{\Bigl(\frac{\partial}{\partial t}x(t,y)\Bigr)^2+1} + u(t,y), \quad y \in \Omega,\ t \in (0,1] \setminus \Bigl\{\tfrac13,\tfrac23\Bigr\}, \\
&x(0,y) = 0, \quad x\Bigl(\tfrac{i}{3}+0,y\Bigr) - x\Bigl(\tfrac{i}{3}-0,y\Bigr) = x\Bigl(\tfrac{i}{3},y\Bigr), \quad i = 1,2,\ y \in \Omega, \\
&\frac{\partial}{\partial t}x(t,y)\Big|_{t=0} = 0, \quad \frac{\partial}{\partial t}x(t,y)\Big|_{t=i/3+0} - \frac{\partial}{\partial t}x(t,y)\Big|_{t=i/3-0} = \frac{\partial}{\partial t}x(t,y)\Big|_{t=i/3}, \quad i = 1,2,\ y \in \Omega, \\
&x(t,y)\big|_{[0,1]\times\partial\Omega} = 0, \quad \frac{\partial}{\partial t}x(t,y)\Big|_{[0,1]\times\partial\Omega} = 0,
\end{aligned}
\tag{4.51}
\]
with the cost function
\[
J(u) = \int_0^1\!\!\int_\Omega\bigl|x(t,\xi)\bigr|^2\,d\xi\,dt + \int_0^1\!\!\int_\Omega\Bigl|\frac{\partial}{\partial t}x(t,\xi)\Bigr|^2\,d\xi\,dt + \int_0^1\!\!\int_\Omega\bigl|u(t,\xi)\bigr|^2\,d\xi\,dt,
\tag{4.52}
\]
where $\Omega \subset R^3$ is a bounded domain and $\partial\Omega \in C^3$.
For the problem (4.51), one can show the following theorem.
Theorem 4.5. In order that the pair $\{x^0,u^0\} \in PC^1_l([0,1],L^2(\Omega)) \times L^2([0,1],L^2(\Omega))$ be optimal, it is necessary that there exists a $\varphi \in PC^1_r([0,1],L^2(\Omega))$ such that the following evolution equations and inequality hold:
\[
\begin{aligned}
&\frac{\partial^2}{\partial t^2}x^0(t,y) = \Delta\frac{\partial}{\partial t}x^0(t,y) + \sqrt{\bigl(x^0(t,y)\bigr)^2+1} + \sqrt{\Bigl(\frac{\partial}{\partial t}x^0(t,y)\Bigr)^2+1} + u^0(t,y), \quad y \in \Omega,\ t \in (0,1] \setminus \Bigl\{\tfrac13,\tfrac23\Bigr\}, \\
&x^0(0,y) = 0, \quad x^0\Bigl(\tfrac{i}{3}+0,y\Bigr) - x^0\Bigl(\tfrac{i}{3}-0,y\Bigr) = x^0\Bigl(\tfrac{i}{3},y\Bigr), \quad i = 1,2,\ y \in \Omega, \\
&\frac{\partial}{\partial t}x^0(t,y)\Big|_{t=0} = 0, \quad \frac{\partial}{\partial t}x^0(t,y)\Big|_{t=i/3+0} - \frac{\partial}{\partial t}x^0(t,y)\Big|_{t=i/3-0} = \frac{\partial}{\partial t}x^0(t,y)\Big|_{t=i/3}, \quad i = 1,2,\ y \in \Omega, \\
&x^0(t,y)\big|_{[0,1]\times\partial\Omega} = 0, \quad \frac{\partial}{\partial t}x^0(t,y)\Big|_{[0,1]\times\partial\Omega} = 0;
\end{aligned}
\]
\[
\begin{aligned}
&\frac{\partial^2}{\partial t^2}\varphi(t,y) = -\frac{\partial}{\partial t}\biggl(\Delta\varphi(t,y) + \frac{(\partial/\partial t)x^0(t,y)\,\varphi(t,y)}{\sqrt{\bigl((\partial/\partial t)x^0(t,y)\bigr)^2+1}}\biggr) + \frac{x^0(t,y)\,\varphi(t,y)}{\sqrt{\bigl(x^0(t,y)\bigr)^2+1}} + 2x^0(t,y) - \frac{\partial^2}{\partial t^2}x^0(t,y), \\
&\qquad y \in \Omega,\ t \in [0,1) \setminus \Bigl\{\tfrac13,\tfrac23\Bigr\}, \\
&\varphi\Bigl(\tfrac{i}{3}-0,y\Bigr) - \varphi\Bigl(\tfrac{i}{3}+0,y\Bigr) = \varphi(t,y)\big|_{t=i/3}, \quad i = 1,2,\ y \in \Omega, \\
&\frac{\partial}{\partial t}\varphi(t,y)\Big|_{t=i/3-0} - \frac{\partial}{\partial t}\varphi(t,y)\Big|_{t=i/3+0} = \frac{\partial}{\partial t}\bigl(\varphi(t,y) + 2x^0(t,y)\bigr)\Big|_{t=i/3}, \quad i = 1,2,\ y \in \Omega, \\
&\varphi(1,y) = 0, \quad \frac{\partial}{\partial t}\varphi(t,y)\Big|_{t=1} = 2\frac{\partial}{\partial t}x(t,y)\Big|_{t=1}, \quad y \in \Omega, \\
&\varphi(t,y)\big|_{[0,1]\times\partial\Omega} = 0, \quad \frac{\partial}{\partial t}\varphi(t,y)\Big|_{[0,1]\times\partial\Omega} = 0;
\end{aligned}
\]
\[
\int_0^1\!\!\int_\Omega\bigl\langle 2u^0(t,\xi) + \varphi(t,\xi),\ u(t,\xi) - u^0(t,\xi)\bigr\rangle_{L^2(\Omega),L^2(\Omega)}\,d\xi\,dt \ge 0 \quad \forall u \in U_{ad}.
\tag{4.53}
\]
Acknowledgment
This work is supported by the National Science Foundation of China under Grant no. 10661004 and the Science and Technology Committee of Guizhou Province under Grant no. 20052001.
References
[1] L. S. Pontryagin, “The maximum principle in the theory of optimal processes,” in Proceedings of
the 1st International Congress of the IFAC on Automatic Control, Moscow, Russia, June-July 1960.
[2] N. U. Ahmed, “Optimal impulse control for impulsive systems in Banach spaces,” International
Journal of Differential Equations and Applications, vol. 1, no. 1, pp. 37–52, 2000.
[3] N. U. Ahmed, “Necessary conditions of optimality for impulsive systems on B anach spaces,”
Nonlinear Analysis, vol. 51, no. 3, pp. 409–424, 2002.
Y. Pe n g et al. 17
[4] A. G. Butkovskiĭ, "The maximum principle for optimum systems with distributed parameters," Avtomatika i Telemehanika, vol. 22, pp. 1288–1301, 1961 (Russian).
[5] A. I. Egorov, "The maximum principle in the theory of optimal regulation," in Studies in Integro-Differential Equations in Kirghizia, No. 1 (Russian), pp. 213–242, Izdat. Akad. Nauk Kirgiz. SSR, Frunze, Russia, 1961.
[6] H. O. Fattorini, Infinite Dimensional Optimization and Control Theory, vol. 62 of Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge, UK, 1999.
[7] X. Li and J. Yong, Optimal Control Theory for Infinite Dimensional Systems, Systems & Control: Foundations & Applications, Birkhäuser, Boston, Mass, USA, 1995.
[8] W. Wei and X. Xiang, “Optimal control for a class of strongly nonlinear impulsive equations in
Banach spaces,” Nonlinear Analysis, vol. 63, no. 5–7, pp. e53–e63, 2005.
[9] X. Xiang and N. U. Ahmed, “Necessary conditions of optimality for differential inclusions on
Banach space,” Nonlinear Analysis, vol. 30, no. 8, pp. 5437–5445, 1997.
[10] X. Xiang, W. Wei, and Y. Jiang, “Strongly nonlinear impulsive system and necessary conditions
of optimality,” Dynamics of Continuous, Discrete & Impulsive Systems A, vol. 12, no. 6, pp. 811–
824, 2005.
[11] Y. Peng and X. Xiang, "Second order nonlinear impulsive evolution equations with time-varying generating operators and optimal controls," to appear in Optimization.
[12] Y. Peng and X. Xiang, "Necessary conditions of optimality for second order nonlinear evolution equations on Banach spaces," in Proceedings of the 4th International Conference on Impulsive and Hybrid Dynamical Systems, pp. 433–437, Nanning, China, 2007.
[13] S. Hu and N. S. Papageorgiou, Handbook of Multivalued Analysis. Vol. I: Theory, vol. 419 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
[14] N. U. Ahmed, Semigroup Theory with Applications to Systems and Control, vol. 246 of Pitman
Research Notes in Mathematics Series, Longman Scientific & Technical, Harlow, UK; John Wiley
& Sons, New York, NY, USA, 1991.
[15] X. Xiang and H. Kuang, “Delay systems and optimal control,” Acta Mathematicae Applicatae
Sinica, vol. 16, no. 1, pp. 27–35, 2000.
[16] X. Xiang, Y. Peng, and W. Wei, “A general class of nonlinear impulsive integral differential
equations and optimal controls on Banach spaces,” Discrete and Continuous Dynamical Systems,
vol. 2005, supplement, pp. 911–919, 2005.
[17] T. Yang, Impulsive Control Theory, vol. 272 of Lecture Notes in Control and Information Sciences,
Springer, Berlin, Germany, 2001.
[18] E. J. Balder, "Necessary and sufficient conditions for $L^1$-strong-weak lower semicontinuity of integral functionals," Nonlinear Analysis, vol. 11, no. 12, pp. 1399–1404, 1987.
Y. Peng: Department of Mathematics, Guizhou University, Guiyang, Guizhou 550025, China
Email address:
X. Xiang: Department of Mathematics, Guizhou University, Guiyang, Guizhou 550025, China
Email address:
W. Wei: Department of Mathematics, Guizhou University, Guiyang, Guizhou 550025, China
Email address:
