
VNU Journal of Science: Mathematics – Physics, Vol. 30, No. 3 (2014) 49-69

Predator-prey System with the Effect of Environmental Fluctuation
Le Hong Lan*
Faculty of Basic Sciences, Hanoi University of Communications and Transport,
Lang Thuong, Dong Da, Hanoi, Vietnam
Received 18 July 2014
Revised 27 August 2014; Accepted 15 September 2014

Abstract: In this paper we study the trajectory behavior of Lotka-Volterra predator-prey systems with periodic coefficients under telegraph noise. We describe the ω-limit set of the solutions, give sufficient conditions for persistence, and prove the existence of a Markov periodic solution.

Keywords: Lotka-Volterra equation, predator-prey, telegraph noise, ω-limit set, Markov periodic solution.

1. Introduction*
The Kolmogorov equation

\[
\begin{cases}
\dot x(t) = x\, f\big(t, x(t), y(t)\big)\\[2pt]
\dot y(t) = y\, g\big(t, x(t), y(t)\big)
\end{cases}
\]

with the functions f(t, x(t), y(t)) and g(t, x(t), y(t)) periodic in t is a strong tool for describing the evolution of prey-predator communities that depend on the changing of seasons. There is a lot of work dealing with the asymptotic behavior of such systems, for example the existence of periodic solutions and persistence [1-4]. In particular, the classical model for a system consisting of two species in a prey-predator relation,

\[
\begin{cases}
\dot x(t) = x(t)\big[a(t) - b(t)x(t) - c(t)y(t)\big]\\[2pt]
\dot y(t) = y(t)\big[-d(t) + e(t)x(t) - f(t)y(t)\big]
\end{cases}
\tag{1.1}
\]

with periodic coefficients a, b, c, d, e, f, is well investigated in [5-10], where x(t) (resp. y(t)) is the quantity of the prey (resp. of the predator) at time t.

_______
* Tel.: 84-989060885
Email:


In almost all of these works, one supposes that the communities develop in an environment without random perturbation. However, this is clearly not the case in reality because, in general, the annual seasonal living conditions of the communities are not the same. Therefore, it is important to take into account not only the changing of seasons but also the fluctuation of stochastic factors, which may have important consequences for the dynamics of the communities.

For the stochastic Lotka-Volterra equation, a systematic review has been given in [11-13]. In our separate paper [14], we analyzed the Lotka-Volterra predator-prey system with constant coefficients under telegraph noise, i.e., environmental variability causes the parameters to switch between two systems. There we described some parts of the ω-limit set of the solutions and showed the existence of a stationary distribution.

In this paper, we consider predator-prey models under the influence of stochastic fluctuation of the environment as well as the periodic change of seasons. We describe completely the omega limit set of the positive solutions of Equation (1.1) with periodic coefficients under telegraph noise. Also, under certain conditions, the existence of a Markov periodic solution that attracts the other solutions of Equation (2.4) starting in ℝ₊ × ℝ₊ is proved.

The rest of the paper is divided into three sections. Section 2 details the model. Some properties of the solutions and the omega limit set are shown in Section 3. The last section contains some simulations and discussion.

2. Preliminary
Let (Ω, F, P) be a complete probability space and {ξ(t): t ≥ 0} be a continuous-time Markov chain defined on (Ω, F, P), whose state space is the two-element set M = {−, +} and whose generator is given by

\[
Q = \begin{pmatrix} q_{11} & q_{12}\\ q_{21} & q_{22}\end{pmatrix}
  = \begin{pmatrix} -\alpha & \alpha\\ \beta & -\beta\end{pmatrix}
\]

with α > 0 and β > 0. It follows that the stationary distribution ϖ = (p, q) of {ξ(t): t ≥ 0}, which satisfies the system of equations

\[
\begin{cases}\varpi Q = 0\\ p + q = 1,\end{cases}
\]

is given by

\[
p = \lim_{t\to+\infty} P\{\xi(t) = +\} = \frac{\beta}{\alpha+\beta}, \qquad
q = \lim_{t\to+\infty} P\{\xi(t) = -\} = \frac{\alpha}{\alpha+\beta}.
\tag{2.1}
\]

Such a two-state Markov chain is commonly referred to as telegraph noise because of the nature of its sample paths. The trajectories of {ξ_t} are piecewise-constant, cadlag functions. Suppose that

\[
0 = \tau_0 < \tau_1 < \tau_2 < \dots < \tau_n < \dots
\tag{2.2}
\]

are its jump times. Put

\[
\sigma_1 := \tau_1 - \tau_0,\quad \sigma_2 := \tau_2 - \tau_1,\ \dots,\ \sigma_n := \tau_n - \tau_{n-1}.
\tag{2.3}
\]

It is known that, conditionally on the sequence {ξ_{τ_k}}_{k=1}^{n}, the random variables {σ_k}_{k=1}^{n} are independent (see [15, 16]). Note that if ξ_0 is given then ξ_{τ_n} is determined, since the process {ξ_t} takes only two values. Hence, (σ_k)_{k=1}^{∞} is a sequence of conditionally independent random variables with values in [0, +∞). Moreover, if ξ_0 = + then σ_{2n+1} has the exponential density α 1_{[0,+∞)} e^{−αt} and σ_{2n} has the density β 1_{[0,+∞)} e^{−βt}. Conversely, if ξ_0 = − then σ_{2n} has the exponential density α 1_{[0,+∞)} e^{−αt} and σ_{2n+1} has the density β 1_{[0,+∞)} e^{−βt} (see [15]). Here

\[
1_{[0,+\infty)}(t) = \begin{cases} 1, & t \ge 0\\ 0, & t < 0.\end{cases}
\]

Denote ℑ⁰ₙ = σ(τ_k, k ≤ n) and ℑ^∞ₙ = σ(τ_k − τ_n, k > n). We see that ℑ⁰ₙ is independent of ℑ^∞ₙ for any n ∈ ℕ, provided that ξ_0 is given.

Let ξ_0 have the distribution P{ξ_0 = +} = p, P{ξ_0 = −} = q; then {ξ_t} is a stationary process. Therefore, there exists a group θ^t, t ∈ ℝ, of P-preserving measure transformations θ^t: Ω → Ω such that ξ_t(ω) = ξ_0(θ^t ω), ω ∈ Ω.
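To make the construction concrete, the following short Python sketch simulates a sample path of the telegraph noise by drawing the alternating exponential holding times σ_k and checks the stationary probabilities (2.1) empirically. The function name `telegraph_path` and the particular values of α and β are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def telegraph_path(alpha, beta, t_max, xi0=+1, rng=None):
    """Simulate one cadlag path of the two-state telegraph noise on [0, t_max].

    State +1 is left at rate alpha, state -1 at rate beta, so the holding
    times are the alternating exponential variables sigma_k of (2.3).
    Returns the jump times tau_k and the state held on [tau_k, tau_{k+1}).
    """
    rng = np.random.default_rng() if rng is None else rng
    times, states = [0.0], [xi0]
    t, state = 0.0, xi0
    while t < t_max:
        rate = alpha if state == +1 else beta
        t += rng.exponential(1.0 / rate)          # holding time sigma_k
        state = -state                            # jump to the other state
        times.append(t); states.append(state)
    return np.array(times), np.array(states)

if __name__ == "__main__":
    alpha, beta = 0.6, 0.4
    tau, xi = telegraph_path(alpha, beta, t_max=10_000.0)
    # empirical fraction of time spent in state "+" versus formula (2.1)
    durations = np.diff(tau)
    time_in_plus = durations[xi[:-1] == +1].sum()
    print("empirical P(xi=+):", time_in_plus / tau[-1])
    print("formula beta/(alpha+beta):", beta / (alpha + beta))
```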
We consider the periodic predator-prey equation under a random environment. Suppose that the quantity x of the prey and the quantity y of the predator are described by the Lotka-Volterra equation

\[
\begin{cases}
\dot x = x\big[a(\xi_t, t) - b(\xi_t, t)x - c(\xi_t, t)y\big]\\[2pt]
\dot y = y\big[-d(\xi_t, t) + e(\xi_t, t)x - f(\xi_t, t)y\big]
\end{cases}
\tag{2.4}
\]

where g: E × [0, +∞) → ℝ₊ for g = a, b, c, d, e, f, with E = {−, +}, is such that g(i, ·) is continuous and periodic with period T > 0 for any i ∈ E. Moreover, m ≤ g(i, t) ≤ M, in which m and M are two positive constants.
When the noise {ξ_t} intervenes in Equation (2.4), it makes the system switch between the deterministic periodic system

\[
\begin{cases}
\dot x_+(t) = x_+(t)\big[a(+,t) - b(+,t)x_+(t) - c(+,t)y_+(t)\big]\\[2pt]
\dot y_+(t) = y_+(t)\big[-d(+,t) + e(+,t)x_+(t) - f(+,t)y_+(t)\big]
\end{cases}
\tag{2.5}
\]

and the system

\[
\begin{cases}
\dot x_-(t) = x_-(t)\big[a(-,t) - b(-,t)x_-(t) - c(-,t)y_-(t)\big]\\[2pt]
\dot y_-(t) = y_-(t)\big[-d(-,t) + e(-,t)x_-(t) - f(-,t)y_-(t)\big]
\end{cases}
\tag{2.6}
\]

Thus, the relationship between these two systems determines the trajectory behavior of Equation (2.4).
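As an illustration of this switching mechanism, the sketch below integrates (2.4) by running the deterministic system (2.5) or (2.6) between consecutive jump times of the telegraph noise. It is a minimal Euler scheme under assumed coefficient functions; the names `coeffs`, `lv_field` and `simulate_switched_lv`, and the sample coefficient values, are assumptions of this sketch, not of the paper.

```python
import numpy as np

def lv_field(state, t, x, y, coeffs):
    """Right-hand side of (2.5)/(2.6): coeffs[state] maps t to (a,b,c,d,e,f)."""
    a, b, c, d, e, f = coeffs[state](t)
    return np.array([x * (a - b * x - c * y),
                     y * (-d + e * x - f * y)])

def simulate_switched_lv(coeffs, tau, xi, x0, y0, dt=1e-3):
    """Euler integration of (2.4) along one telegraph-noise path (tau, xi)."""
    ts, zs = [0.0], [np.array([x0, y0])]
    for k in range(len(tau) - 1):
        state = xi[k]                     # system in force on [tau_k, tau_{k+1})
        t = tau[k]
        while t < tau[k + 1]:
            h = min(dt, tau[k + 1] - t)
            zs.append(zs[-1] + h * lv_field(state, t, *zs[-1], coeffs))
            t += h
            ts.append(t)
    return np.array(ts), np.array(zs)

# usage sketch with hypothetical T-periodic coefficients (T = 2*pi)
coeffs = {
    +1: lambda t: (10 + np.sin(t), 2.0, 1.0, 1.0, 1.8, 3.0),
    -1: lambda t: (11 - np.sin(t), 1.5, 1.4, 2.0, 1.2, 2.7),
}
# tau, xi can be produced e.g. by telegraph_path(0.6, 0.4, 200.0) from the sketch above
```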
System (2.4) without the noise {ξ_t}, i.e., with g(ξ_t, t) = g(t) for every g = a, b, ..., f, is studied in [9], where the following result is shown.

Theorem 2.1. Consider the system

\[
\begin{cases}
\dot x(t) = x(t)\big[a(t) - b(t)x(t) - c(t)y(t)\big]\\[2pt]
\dot y(t) = y(t)\big[-d(t) + e(t)x(t) - f(t)y(t)\big]
\end{cases}
\tag{2.7}
\]

where a, b, ..., f are T-periodic functions.

a) If

\[
\inf\Big(\frac{a}{b}\Big) > \sup\Big(\frac{d}{e}\Big)
\tag{2.8}
\]

\[
\inf\Big(\frac{b}{e}\Big) > \sup\Big(\frac{c}{d}\Big)
\tag{2.9}
\]

then system (2.7) has a positive T-periodic solution (x*(t), y*(t)) satisfying

\[
\big(x(t) - x^*(t),\ y(t) - y^*(t)\big) \xrightarrow[t\to\infty]{} (0, 0).
\tag{2.10}
\]

b) If

\[
\inf\Big(\frac{d}{e}\Big) > \sup\Big(\frac{a}{b}\Big)
\tag{2.11}
\]

then the (unique) periodic solution u*(t) of the equation \dot u(t) = u(t)[a(t) − b(t)u(t)] is stable and

\[
\big(x(t) - u^*(t),\ y(t)\big) \xrightarrow[t\to+\infty]{} (0, 0)
\tag{2.12}
\]

for any positive solution (x(t), y(t)) of (2.7).

Figure 1. Coexistence of predator and prey.

Figure 2. Extinction of predators.



Lemma 2.2. Consider the system

\[
\begin{cases}
\dot x(t) = f(x, y, t)\\[2pt]
\dot y(t) = g(x, y, t)
\end{cases}
\]

where f, g: ℝ² × [0, +∞) → ℝ are T-periodic functions in t. Suppose that this system has a globally asymptotically stable T-periodic solution

\[
\big(x^*(t), y^*(t)\big) := \big(x^*(t, 0, z_0^*),\, y^*(t, 0, z_0^*)\big),
\]

where z_0^* := (x_0^*, y_0^*) is the initial point. Then, for every ε > 0 and every compact set K, we can find a T* = T*(ε, K) > 0 such that for all t ≥ T*, s ≥ 0, (x_0, y_0) ∈ K we have

\[
\big|x(t+s, s, x_0, y_0) - x^*(t+s)\big| + \big|y(t+s, s, x_0, y_0) - y^*(t+s)\big| \le \varepsilon.
\tag{2.13}
\]

Proof. Since f, g are T-periodic, we can suppose that 0 ≤ s ≤ T. Moreover, it is easy to show that if (x_0, y_0) ∈ K and 0 ≤ s ≤ T, there is a compact set K′ such that (x(T, s, x_0, y_0), y(T, s, x_0, y_0)) ∈ K′. Due to the periodicity of the parameters, it is therefore sufficient to verify (2.13) for s = 0. Since (x*(t), y*(t)) is stable, we can find a δ_ε > 0 such that if |x − x_0^*| + |y − y_0^*| ≤ δ_ε then

\[
\big|x(t, 0, x, y) - x^*(t)\big| + \big|y(t, 0, x, y) - y^*(t)\big| \le \varepsilon, \quad \forall t \ge 0.
\tag{2.14}
\]

On the other hand, since (x*(t), y*(t)) is globally asymptotically stable, for every (x_0, y_0) ∈ K there exists T_{(x_0,y_0)} = k_{(x_0,y_0)} T, k_{(x_0,y_0)} ∈ ℕ, such that

\[
\big|x\big(T_{(x_0,y_0)}, 0, x_0, y_0\big) - x^*\big(T_{(x_0,y_0)}\big)\big| + \big|y\big(T_{(x_0,y_0)}, 0, x_0, y_0\big) - y^*\big(T_{(x_0,y_0)}\big)\big| \le \delta_\varepsilon.
\]

By the continuous dependence of solutions on the initial data, there is a neighborhood of (x_0, y_0), denoted by V_{x_0,y_0}, such that

\[
\big|x\big(T_{(x_0,y_0)}, 0, x, y\big) - x^*\big(T_{(x_0,y_0)}\big)\big| + \big|y\big(T_{(x_0,y_0)}, 0, x, y\big) - y^*\big(T_{(x_0,y_0)}\big)\big| \le \delta_\varepsilon, \quad \forall (x, y) \in V_{x_0,y_0}.
\tag{2.15}
\]

As a result of (2.14), (2.15) and the T-periodicity of (x*(t), y*(t)),

\[
\big|x(t, 0, x, y) - x^*(t)\big| + \big|y(t, 0, x, y) - y^*(t)\big| \le \varepsilon, \quad \forall (x, y) \in V_{x_0,y_0},\ t \ge T_{(x_0,y_0)}.
\tag{2.16}
\]

The family {V_{x_0,y_0}: (x_0, y_0) ∈ K} is an open covering of K. Since K is compact, there is a finite family {V_{x_0^1,y_0^1}, ..., V_{x_0^n,y_0^n}} such that K ⊂ ∪_{i=1}^{n} V_{x_0^i,y_0^i}. By choosing T* = max_{1≤i≤n} T_{(x_0^i, y_0^i)}, for any point (x_0, y_0) ∈ K and for all t > T* we have

\[
\big|x(t, 0, x_0, y_0) - x^*(t)\big| + \big|y(t, 0, x_0, y_0) - y^*(t)\big| < \varepsilon.
\]

The proof is complete.


3. Dynamic behavior of the solution

Let (x_0, y_0) ∈ ℝ²₊. Denote by (x(t, 0, x_0, y_0), y(t, 0, x_0, y_0)) the solution of (2.4) satisfying the initial condition (x(0, 0, x_0, y_0), y(0, 0, x_0, y_0)) = (x_0, y_0). For simplicity, we write (x(t), y(t)) for (x(t, 0, x_0, y_0), y(t, 0, x_0, y_0)) if there is no confusion.

Proposition 3.1. System (2.4) is dissipative and the rectangle (0, M/m] × (0, M²/m² − 1] is forward invariant.

Proof. By the uniqueness of solutions, it is easy to show that both the nonnegative and the positive cones of ℝ² are positively invariant for (2.4). From the first equation of system (2.4) we see that

\[
\dot x = x\big[a(\xi_t, t) - b(\xi_t, t)x - c(\xi_t, t)y\big] < x\big[a(\xi_t, t) - b(\xi_t, t)x\big] < x(M - mx).
\]

By the comparison theorem, it follows that if x(0) ≥ 0 then x(t) ≤ M/m for all t > t_0, for some t_0 > 0. Similarly,

\[
\dot y = y\big[-d(\xi_t, t) + e(\xi_t, t)x - f(\xi_t, t)y\big] < y\big[-d(\xi_t, t) + e(\xi_t, t)M/m - f(\xi_t, t)y\big] < y\big(-m + M^2/m - my\big),
\]

which implies that y(t) ≤ M²/m² − 1 for all t > t_1, for some t_1 > t_0. From these estimates we also see that the rectangle (0, M/m] × (0, M²/m² − 1] is forward invariant. The proof is complete.

Proposition 3.2. There exists a δ_0 > 0 such that limsup_{t→+∞} x(t, 0, x_0, y_0) ≥ δ_0 for any (x_0, y_0) with x_0 > 0, with probability 1.

Proof. By system (2.4), there exist δ_0 > 0 and ε_0 > 0 such that

\[
-d(\xi_t, t) + e(\xi_t, t)x - f(\xi_t, t)y < -\varepsilon_0 \quad \forall\, 0 < x < \delta_0,\ 0 < y \le M^2/m^2 - 1,
\]

and

\[
a(\xi_t, t) - b(\xi_t, t)x - c(\xi_t, t)y > \varepsilon_0 \quad \text{for all } 0 \le x, y \le \delta_0.
\tag{3.1}
\]

Assume that limsup_{t→+∞} x(t, 0, x_0, y_0) < δ_0 with positive probability. Then there is a t_3 > 0 such that x(t) < δ_0 and y(t) ≤ M²/m² − 1 for all t ≥ t_3, which implies that \dot y(t) < −ε_0 y(t). Therefore, for some t_4 > t_3, y(t) < δ_0 for all t ≥ t_4. From (3.1) we then see that \dot x(t) > ε_0 x(t) for all t ≥ t_4, which implies that lim_{t→+∞} x(t) = ∞. This contradiction proves the proposition.


Proposition 3.3. There exists a positive number xmin satisfying: if
t > 0 such that x ( t ,0, x0 , y0 ) ≥ xmin for all t ≥ t .

( x0 , y0 ) ∈ » 2+

we can find


L.H. Lan / VNU Journal of Science: Mathematics – Physics, Vol. 30, No. 3 (2014) 49-69

55

Proof. With δ_0 as in Proposition 3.2, there exists t̄ > 0 such that x(t̄) > δ_0. Let 0 < ε_1 ≤ δ_0 be such that −δ_1 := −m + Mε_1 < 0. If x(t) ≥ ε_1 for all t > t̄ then the proposition is proved. Otherwise, x(t) < ε_1 for some t > t̄. Let h_1 = inf{s > t̄: x(s) < ε_1} and let h_2 = inf{s > h_1: x(s) > ε_1} be the next time x exceeds ε_1 (possibly +∞). For t ∈ (h_1, h_2) we have x(t) ≤ ε_1, hence

\[
\dot y = y\big[-d(\xi_t, t) + e(\xi_t, t)x - f(\xi_t, t)y\big] \le y(-m + M\varepsilon_1) = -\delta_1 y, \quad \forall t \in (h_1, h_2),
\]

which implies that

\[
y(t) \le y(h_1)\,e^{-\delta_1 (t - h_1)} \le y_{\max}\, e^{-\delta_1 (t - h_1)}, \quad \forall t \in (h_1, h_2).
\]

Hence,

\[
\dot x = x\big[a(\xi_t, t) - b(\xi_t, t)x - c(\xi_t, t)y\big] \ge x\big(m - Mx - M y_{\max} e^{-\delta_1(t - h_1)}\big), \quad \forall t \in (h_1, h_2).
\]

Put

\[
n(t) = \int_{h_1}^{t}\big(m - M y_{\max} e^{-\delta_1(s - h_1)}\big)\,ds; \qquad N(t) = \int_{h_1}^{t} e^{n(s)}\,ds.
\]

By the comparison theorem we get

\[
x(t) \ge \frac{\varepsilon_1\, e^{n(t)}}{1 + \varepsilon_1 M N(t)}, \quad \forall t \in (h_1, h_2).
\]

Let α = min_{t > h_1} \dfrac{\varepsilon_1 e^{n(t)}}{1 + \varepsilon_1 M N(t)} > 0. It is clear that α depends neither on (x(0), y(0)) nor on h_1. Letting x_min = min{α, ε_1}, we see that x(t) > x_min for all t > t̄. The proof is complete.
As is well known, the behavior of solutions of Lotka-Volterra systems near the boundary depends on the two marginal equations. In the case where the prey is absent, the quantity v(t) of the predator at time t satisfies the equation \dot v = −d(ξ_t, t)v − f(ξ_t, t)v², so v(t) decreases exponentially to 0. Similarly, without the predator, the quantity u(t) of the prey at time t satisfies the logistic equation

\[
\dot u = u\big[a(\xi_t, t) - b(\xi_t, t)u\big], \qquad u(0) > 0.
\tag{3.2}
\]

If u(t) is a solution of (3.2) then {ξ_t, u(t)} is a Markov process.

A random process {φ_t}, valued in a measurable space (S, 𝒮), is said to be periodic with period T if for any t_1, t_2, ..., t_n ∈ ℝ, the simultaneous distribution of (φ_{t_1+kT}, φ_{t_2+kT}, ..., φ_{t_n+kT}) does not depend on k ∈ ℕ.
We show that Equation (3.2) has a unique solution u*(t) such that (ξ_t, u*(t)) is a periodic process. Indeed, put

\[
u^*(t) = \frac{e^{A(t)}}{\displaystyle\int_{-\infty}^{t} b(\xi_s, s)\, e^{A(s)}\, ds}, \qquad
A(t) = \int_0^{t} a(\xi_s, s)\, ds.
\]

Firstly, we see that

\[
u^*(t+T, \omega)
= \frac{e^{\int_0^{t+T} a[\xi_s(\omega),\, s]\, ds}}
       {\displaystyle\int_{-\infty}^{t+T} b[\xi_s(\omega), s]\, e^{\int_0^{s} a[\xi_\tau(\omega),\, \tau]\, d\tau}\, ds}
= \frac{e^{\int_0^{t+T} a[\xi_{s-T}(\theta^T\omega),\, s-T]\, ds}}
       {\displaystyle\int_{-\infty}^{t+T} b[\xi_{s-T}(\theta^T\omega), s-T]\, e^{\int_0^{s} a[\xi_{\tau-T}(\theta^T\omega),\, \tau-T]\, d\tau}\, ds}
\]
\[
= \frac{e^{\int_{-T}^{t} a[\xi_s(\theta^T\omega),\, s]\, ds}}
       {\displaystyle\int_{-\infty}^{t} b[\xi_s(\theta^T\omega), s]\, e^{\int_{-T}^{s} a[\xi_\tau(\theta^T\omega),\, \tau]\, d\tau}\, ds}
= \frac{e^{\int_{0}^{t} a[\xi_s(\theta^T\omega),\, s]\, ds}}
       {\displaystyle\int_{-\infty}^{t} b[\xi_s(\theta^T\omega), s]\, e^{\int_{0}^{s} a[\xi_\tau(\theta^T\omega),\, \tau]\, d\tau}\, ds}
= u^*(t, \theta^T\omega),
\]

where in the third equality we have changed variables and used the T-periodicity of a(i, ·) and b(i, ·), and in the fourth we have cancelled the common factor e^{\int_{-T}^{0} a[\xi_s(\theta^T\omega),\, s]\, ds} from the numerator and the denominator. Hence, by virtue of the P-preserving property of θ, for any continuous function h and any t_1 < t_2 < ... < t_n, k ∈ ℕ, we have

\[
\mathbb{E}\, h\big(\xi_{t_1+kT}, u^*(t_1+kT), \xi_{t_2+kT}, u^*(t_2+kT), \dots, \xi_{t_n+kT}, u^*(t_n+kT)\big)
\]
\[
= \mathbb{E}\, h\big(\xi_{t_1}(\theta^{kT}\cdot), u^*(t_1, \theta^{kT}\cdot), \dots, \xi_{t_n}(\theta^{kT}\cdot), u^*(t_n, \theta^{kT}\cdot)\big)
= \mathbb{E}\, h\big(\xi_{t_1}(\cdot), u^*(t_1, \cdot), \dots, \xi_{t_n}(\cdot), u^*(t_n, \cdot)\big).
\]

This means that (ξ_t, u*(t)) is a periodic process with period T. The uniqueness follows from the following lemma.

Lemma 3.4. For any u_0 > 0, lim_{t→+∞} |u(t) − u*(t)| = 0 a.s., where u(t) is the solution of equation (3.2) satisfying u(0) = u_0.

Proof. Put z = 1/u − 1/u*. Then \dot z = −a(ξ_t, t)z, so |z(t)| ≤ |z(0)|e^{−mt} → 0 as t → +∞. Since u and u* are bounded, |u(t) − u*(t)| = |z(t)|\,u(t)\,u^*(t) → 0, which gives the result.
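Lemma 3.4 suggests a practical way to approximate the periodic solution u*(t): integrate (3.2) from any positive initial value along a simulated noise path and discard a transient. The sketch below does this with a simple Euler scheme; the function name `logistic_on_path`, the sample coefficient functions `a_fun`, `b_fun` and the step sizes are assumptions of this sketch, and `telegraph_path` refers to the sketch in Section 2.

```python
import numpy as np

def logistic_on_path(a_fun, b_fun, tau, xi, u0=1.0, dt=1e-3, t_burn=50.0):
    """Euler scheme for du = u*(a(xi_t,t) - b(xi_t,t)*u) dt along a noise path.

    By Lemma 3.4 every positive solution converges to u*, so after the
    burn-in time t_burn the returned samples approximate u*(t).
    """
    ts, us = [], []
    t, u, k = 0.0, u0, 0
    t_end = tau[-1]
    while t < t_end:
        while k + 1 < len(tau) and tau[k + 1] <= t:
            k += 1                                  # index of current noise interval
        state = xi[k]
        u += dt * u * (a_fun(state, t) - b_fun(state, t) * u)
        t += dt
        if t >= t_burn:
            ts.append(t); us.append(u)
    return np.array(ts), np.array(us)

# hypothetical 2*pi-periodic coefficients, bounded between positive constants
a_fun = lambda s, t: 10 + np.sin(t) if s == +1 else 11 - np.sin(t)
b_fun = lambda s, t: 2.0 if s == +1 else 1.5
```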

Lemma 3.5 (Law of large numbers for periodic processes). For any continuous, bounded function h(t, i, u), periodic in t with period T, we have

\[
\lim_{t\to+\infty} \frac{1}{t}\int_0^{t} h\big(s, \xi_s, u^*(s)\big)\, ds
= \mathbb{E}\left[\frac{1}{T}\int_0^{T} h\big(s, \xi_s, u^*(s)\big)\, ds\right].
\tag{3.3}
\]

Proof. Put

\[
X_n = \int_{nT}^{(n+1)T} h\big(s, \xi_s, u^*(s)\big)\, ds.
\]

Since {ξ_t, u*(t)} is periodic, {X_n} is a stationary process. By the law of large numbers,

\[
\lim_{n\to\infty} \frac{1}{n}\sum_{k=0}^{n} X_k = \mathbb{E}\,[X_0 \mid \mathcal{J}] \quad \text{a.s.},
\]

where 𝒥 is the σ-algebra of the invariant sets. However, (ξ_t) is ergodic and u*(t) has no non-trivial invariant set, so 𝒥 = {∅, Ω}. This implies that

\[
\lim_{t\to+\infty} \frac{1}{t}\int_0^{t} h\big(s, \xi_s, u^*(s)\big)\, ds
= \lim_{t\to+\infty} \frac{1}{T}\,\frac{1}{[t/T]}\sum_{k=0}^{[t/T]} X_k
= \frac{1}{T}\,\mathbb{E}[X_0]
= \mathbb{E}\left[\frac{1}{T}\int_0^{T} h\big(s, \xi_s, u^*(s)\big)\, ds\right],
\]

where [x] denotes the integer such that [x] ≤ x < [x] + 1. The lemma is proved.

We now study conditions ensuring the persistence of y(t) for Equation (2.4) with x(0) > 0 and y(0) > 0.

Proposition 3.6. Put

\[
\lambda := \mathbb{E}\left[\frac{1}{T}\int_0^{T}\big[-d(\xi_t, t) + e(\xi_t, t)\,u^*(t)\big]\, dt\right].
\tag{3.4}
\]

a) If λ > 0 then limsup_{t→+∞} y(t) > δ > 0 with probability 1.

b) If λ < 0 then lim_{t→+∞} y(t) = 0 with probability 1.

Proof. By the comparison theorem, if x(0) = u(0) we have u(t) ≥ x(t) for all t. Therefore, by virtue of Lemma 3.4,

\[
\liminf_{t\to+\infty} \frac{\ln u^*(t) - \ln x(t)}{t} \ge 0.
\]

a) From Equations (3.2) and (2.4) we have

\[
\frac{\ln u^*(t) - \ln u^*(0)}{t}
= \frac{1}{t}\int_0^{t} a(\xi_s, s)\, ds - \frac{1}{t}\int_0^{t} b(\xi_s, s)\,u^*(s)\, ds,
\tag{3.5}
\]

\[
\frac{\ln x(t) - \ln x(0)}{t}
= \frac{1}{t}\int_0^{t} a(\xi_s, s)\, ds - \frac{1}{t}\int_0^{t} b(\xi_s, s)\big[x(s) - u^*(s)\big]\, ds
- \frac{1}{t}\int_0^{t} b(\xi_s, s)\,u^*(s)\, ds - \frac{1}{t}\int_0^{t} c(\xi_s, s)\,y(s)\, ds.
\tag{3.6}
\]

On subtracting (3.6) from (3.5) we obtain

\[
0 \le \liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t} c(\xi_s, s)\,y(s)\, ds - \frac{1}{t}\int_0^{t} b(\xi_s, s)\big[u^*(s) - x(s)\big]\, ds\right]
\le \liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t} M\,y(s)\, ds - \frac{1}{t}\int_0^{t} m\big[u^*(s) - x(s)\big]\, ds\right].
\]

Hence,

\[
\liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t} \frac{M}{m}\,y(s)\, ds - \frac{1}{t}\int_0^{t}\big[u^*(s) - x(s)\big]\, ds\right] \ge 0.
\tag{3.7}
\]

On the other hand, \dot y(t)/y(t) = -d(\xi_t, t) + e(\xi_t, t)x(t) - f(\xi_t, t)y(t) implies

\[
\frac{\ln y(t) - \ln y(0)}{t}
= \frac{1}{t}\int_0^{t}\big[-d(\xi_s, s) + e(\xi_s, s)\,u^*(s)\big]\, ds
- \frac{1}{t}\int_0^{t} e(\xi_s, s)\big[u^*(s) - x(s)\big]\, ds - \frac{1}{t}\int_0^{t} f(\xi_s, s)\,y(s)\, ds,
\]

and therefore

\[
\frac{1}{t}\int_0^{t} e(\xi_s, s)\big[u^*(s) - x(s)\big]\, ds + \frac{1}{t}\int_0^{t} f(\xi_s, s)\,y(s)\, ds
= \frac{1}{t}\int_0^{t}\big[-d(\xi_s, s) + e(\xi_s, s)\,u^*(s)\big]\, ds - \frac{\ln y(t) - \ln y(0)}{t}.
\]

Moreover, y(t) is bounded above, so \liminf_{t\to+\infty}\big[-\frac{\ln y(t) - \ln y(0)}{t}\big] \ge 0, and by the law of large numbers (Lemma 3.5), \lim_{t\to+\infty}\frac{1}{t}\int_0^{t}\big[-d(\xi_s, s) + e(\xi_s, s)u^*(s)\big]\,ds = \lambda. Consequently,

\[
\liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t} e(\xi_s, s)\big[u^*(s) - x(s)\big]\, ds + \frac{1}{t}\int_0^{t} f(\xi_s, s)\,y(s)\, ds\right]
= \liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t}\big[-d(\xi_s, s) + e(\xi_s, s)u^*(s)\big]\, ds - \frac{\ln y(t) - \ln y(0)}{t}\right]
\]
\[
\ge \liminf_{t\to+\infty}\frac{1}{t}\int_0^{t}\big[-d(\xi_s, s) + e(\xi_s, s)u^*(s)\big]\, ds
+ \liminf_{t\to+\infty}\left[-\frac{\ln y(t) - \ln y(0)}{t}\right] \ge \lambda.
\]

Hence,

\[
\liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t}\big[u^*(s) - x(s)\big]\, ds + \frac{1}{t}\int_0^{t} y(s)\, ds\right]
\ge \liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t} \frac{e(\xi_s, s)}{M}\big[u^*(s) - x(s)\big]\, ds + \frac{1}{t}\int_0^{t} \frac{f(\xi_s, s)}{M}\,y(s)\, ds\right]
\ge \frac{\lambda}{M}.
\tag{3.8}
\]

Adding (3.7) and (3.8), we obtain

\[
\liminf_{t\to+\infty}\frac{1}{t}\int_0^{t}\Big(\frac{M}{m} + 1\Big)y(s)\, ds
\ge \liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t}\frac{M}{m}\,y(s)\, ds - \frac{1}{t}\int_0^{t}\big[u^*(s) - x(s)\big]\, ds\right]
+ \liminf_{t\to+\infty}\left[\frac{1}{t}\int_0^{t}\big[u^*(s) - x(s)\big]\, ds + \frac{1}{t}\int_0^{t} y(s)\, ds\right]
\ge \frac{\lambda}{M}.
\]

Then \liminf_{t\to+\infty}\frac{1}{t}\int_0^{t} y(s)\, ds \ge \frac{m}{M(M+m)}\lambda > 0 and \limsup_{t\to+\infty} y(t) > \delta > 0.

b) In case λ < 0, from the second equation of system (2.4) we have

\[
\limsup_{t\to+\infty}\frac{\ln y(t) - \ln y(0)}{t}
= \limsup_{t\to+\infty}\left[\frac{1}{t}\int_0^{t}\big[-d(\xi_s, s) + e(\xi_s, s)u^*(s)\big]\, ds
- \frac{1}{t}\int_0^{t} e(\xi_s, s)\big[u^*(s) - x(s)\big]\, ds - \frac{1}{t}\int_0^{t} f(\xi_s, s)\,y(s)\, ds\right]
\]
\[
\le \lambda - \liminf_{t\to+\infty}\frac{1}{t}\int_0^{t} e(\xi_s, s)\big[u^*(s) - x(s)\big]\, ds
- \liminf_{t\to+\infty}\frac{1}{t}\int_0^{t} f(\xi_s, s)\,y(s)\, ds \le \lambda < 0,
\]

which implies that \lim_{t\to+\infty} y(t) = 0. The proof is complete.

Remark 3.7. Condition (3.4) is easily checked by simulation, based on the law of large numbers. Moreover, since (ξ_t, u*(t)) solves equation (3.2), we have

\[
\lambda := \mathbb{E}\left[\frac{1}{T}\int_0^{T}\big[-d(\xi_t, t) + e(\xi_t, t)u^*(t)\big]\, dt\right]
= \mathbb{E}\left[\frac{1}{T}\int_0^{T}\Big[-d(\xi_t, t) + \frac{e(\xi_t, t)}{b(\xi_t, t)}\,b(\xi_t, t)u^*(t)\Big]\, dt\right]
\]
\[
\ge \mathbb{E}\left[\frac{1}{T}\int_0^{T}\Big[-d(\xi_t, t) + \inf\Big(\frac{e(\pm, t)}{b(\pm, t)}\Big)\,b(\xi_t, t)u^*(t)\Big]\, dt\right]
\]
\[
= \mathbb{E}\left[\frac{1}{T}\int_0^{T}\Big[-d(\xi_t, t) + \inf\Big(\frac{e(\pm, t)}{b(\pm, t)}\Big)\,a(\xi_t, t)\Big]\, dt\right]
- \inf\Big(\frac{e(\pm, t)}{b(\pm, t)}\Big)\,\mathbb{E}\left[\frac{1}{T}\int_0^{T}\big[a(\xi_t, t) - b(\xi_t, t)u^*(t)\big]\, dt\right].
\]

Note that

\[
\mathbb{E}\left[\frac{1}{T}\int_0^{T}\big[a(\xi_t, t) - b(\xi_t, t)u^*(t)\big]\, dt\right] = 0
\]

(since d\,\ln u^*(t)/dt = a(\xi_t, t) - b(\xi_t, t)u^*(t) and (ξ_t, u*(t)) is T-periodic), and that

\[
-d(\pm, t) + \inf\Big(\frac{e(\pm, t)}{b(\pm, t)}\Big)\,a(\pm, t) \ge 0, \quad \forall t > 0,
\]

provided that

\[
\inf\Big(\frac{a(\pm, t)}{b(\pm, t)}\Big) > \sup\Big(\frac{d(\pm, t)}{e(\pm, t)}\Big).
\tag{3.9}
\]

Then λ > 0 under condition (3.9), which is similar to (2.8).
From now on, we suppose that λ > 0 .
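A minimal Monte Carlo sketch of the simulation check mentioned in Remark 3.7 is given below: it approximates λ of (3.4) by the long-run time average (1/t)∫₀ᵗ[−d(ξ_s, s) + e(ξ_s, s)u*(s)]ds, which converges to λ by Lemma 3.5, with u* replaced by a long trajectory of (3.2) as in the sketch after Lemma 3.4. The function names (`estimate_lambda`, `d_fun`, `e_fun`), the sample coefficients and the numerical parameters are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def estimate_lambda(a_fun, b_fun, d_fun, e_fun, alpha, beta,
                    t_max=1_000.0, t_burn=100.0, dt=1e-3, seed=0):
    """Approximate lambda of (3.4) as a long-run time average along one path."""
    rng = np.random.default_rng(seed)
    t, state, next_jump = 0.0, +1, rng.exponential(1.0 / alpha)
    u = 1.0                      # any positive start; converges to u* (Lemma 3.4)
    acc, acc_time = 0.0, 0.0
    while t < t_max:
        if t >= next_jump:       # telegraph noise switches state
            state = -state
            rate = alpha if state == +1 else beta
            next_jump = t + rng.exponential(1.0 / rate)
        u += dt * u * (a_fun(state, t) - b_fun(state, t) * u)   # Eq. (3.2)
        if t >= t_burn:          # discard transient before averaging
            acc += dt * (-d_fun(state, t) + e_fun(state, t) * u)
            acc_time += dt
        t += dt
    return acc / acc_time

# usage with hypothetical bounded T-periodic coefficients
a_fun = lambda s, t: 10 + np.sin(t) if s == +1 else 11 - np.sin(t)
b_fun = lambda s, t: 2.0 if s == +1 else 1.5
d_fun = lambda s, t: 1.0 if s == +1 else 2.0
e_fun = lambda s, t: 1.8 if s == +1 else 1.2
print("lambda estimate:", estimate_lambda(a_fun, b_fun, d_fun, e_fun, 0.6, 0.4))
```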

Lemma 3.8. With probability 1, there are infinitely many s_n = s_n(ω) > 0 such that s_n > s_{n−1}, lim_{n→+∞} s_n = ∞ and x(s_n) ≥ δ, y(s_n) ≥ δ for all n ∈ ℕ.

Proof. By Proposition 3.3 we can find t̄ > 0 such that x(t) ≥ x_min for all t > t̄. On the other hand, by Proposition 3.6 a) there exist δ < x_min and a random sequence {s_n} ↑ ∞, s_n > t̄, such that y(s_n) > δ for all n ∈ ℕ. The proof is complete.
For the sake of simplicity, we suppose ξ_0 = + a.s. and set x_n := x(τ_n, x, y), y_n := y(τ_n, x, y), ℑ⁰ₙ = σ(τ_k, k ≤ n), ℑ^∞ₙ = σ(τ_k − τ_n, k > n). It is clear that (x_n, y_n) is ℑ⁰ₙ-measurable and that ℑ⁰ₙ is independent of ℑ^∞ₙ if ξ_0 is given.
Hypothesis 3.9. On the quadrant int ℝ²₊, system (2.5) has a stable positive T-periodic solution (x*₊(t), y*₊(t)) such that

\[
\big(x_+(t) - x_+^*(t),\ y_+(t) - y_+^*(t)\big) \xrightarrow[t\to+\infty]{} (0, 0)
\]

for every positive solution (x₊(t), y₊(t)) of (2.5).

Lemma 3.10. Suppose that Hypothesis 3.9 holds and λ > 0. Then we can find a ∆ > 0 such that, with probability 1, there are infinitely many n ∈ ℕ with ∆ ≤ x_n, y_n ≤ M*. Moreover, we can find a ∆̄ > 0 such that the events {x_{2k+1} > ∆̄, y_{2k+1} > ∆̄} as well as {x_{2k} > ∆̄, y_{2k} > ∆̄} occur infinitely often.

Proof. Let {ℑ_t} be the filtration generated by {ξ(t)}. It is obvious that {ξ(t), x(t), y(t)} is a strong Feller-Markov process with respect to the filtration {ℑ_t}. For a stopping time ς, the σ-algebra at ς is ℑ_ς = {A ∈ ℑ_∞: A ∩ {ς ≤ t} ∈ ℑ_t, ∀t ∈ ℝ₊}. Fix a T_1 > 0. By Lemma 3.8, we can define the almost surely finite stopping times

\[
\begin{aligned}
\eta_1 &= \inf\{t > 0:\ x(t) \ge \delta,\ y(t) \ge \delta\}\\
\eta_2 &= \inf\{t > \eta_1 + T_1:\ x(t) \ge \delta,\ y(t) \ge \delta\}\\
&\ \ \vdots\\
\eta_n &= \inf\{t > \eta_{n-1} + T_1:\ x(t) \ge \delta,\ y(t) \ge \delta\}.
\end{aligned}
\]

For a stopping time ς, we write τ(ς) for the first jump of ξ(t) after ς, i.e., τ(ς) = inf{t > ς: ξ(t) ≠ ξ(ς)}. Let σ(ς) = τ(ς) − ς and A_k = {σ(η_k) < T_1}, k ∈ ℕ. Obviously, A_k is in the σ-algebra generated by {ξ(η_k + s): s ≥ 0}, and A_k ∈ ℑ_{η_{k+1}} also. Therefore, in view of the strong Markov property of (ξ(t), x(t), y(t)) [see 15, Theorem 5, p. 59], we have

\[
P\big[\bar A_k \mid \xi(\eta_k) = \pm\big] = P\big[\sigma(0) > T_1 \mid \xi(0) = \pm\big].
\]

Hence,

\[
P(\bar A_k) = P\big[\sigma(\eta_k) > T_1 \mid \xi(\eta_k) = +\big]P\big[\xi(\eta_k) = +\big] + P\big[\sigma(\eta_k) > T_1 \mid \xi(\eta_k) = -\big]P\big[\xi(\eta_k) = -\big]
\]
\[
= P\big[\sigma(0) > T_1 \mid \xi(0) = +\big]P\big[\xi(\eta_k) = +\big] + P\big[\sigma(0) > T_1 \mid \xi(0) = -\big]P\big[\xi(\eta_k) = -\big] \le p^*,
\]

where p* = max P[σ(0) > T_1 | ξ_0 = ±] < 1. Moreover,

\[
\mathbb{E}\big[1_{\bar A_{k+1}} 1_{\bar A_k} \mid \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\big]
= \mathbb{E}\Big[\mathbb{E}\big[1_{\bar A_{k+1}} 1_{\bar A_k} \mid \mathfrak{F}_{\eta_{k+1}}\big] \,\Big|\, \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\Big]
\]
\[
= \mathbb{E}\Big[1_{\bar A_k}\,\mathbb{E}\big[1_{\bar A_{k+1}} \mid \mathfrak{F}_{\eta_{k+1}}\big] \,\Big|\, \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\Big]
= \mathbb{E}\Big[1_{\bar A_k}\,\mathbb{E}\big[1_{\bar A_{k+1}} \mid \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\big] \,\Big|\, \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\Big]
\]
\[
= \mathbb{E}\big[1_{\bar A_k} \mid \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\big]\,\mathbb{E}\big[1_{\bar A_{k+1}} \mid \xi(\eta_{k+1}), x(\eta_{k+1}), y(\eta_{k+1})\big],
\]

which implies that

\[
P\big[\bar A_{k+1} \cap \bar A_k \mid \xi(\eta_{k+1}) = \pm\big] = P\big[\bar A_{k+1} \mid \xi(\eta_{k+1}) = \pm\big]\,P\big[\bar A_k \mid \xi(\eta_{k+1}) = \pm\big].
\tag{3.10}
\]

Therefore, from (3.10) and the equation

\[
P\big(\bar A_{k+1} \cap \bar A_k\big) = P\big[\bar A_{k+1} \cap \bar A_k \mid \xi(\eta_{k+1}) = +\big]P\{\xi(\eta_{k+1}) = +\} + P\big[\bar A_{k+1} \cap \bar A_k \mid \xi(\eta_{k+1}) = -\big]P\{\xi(\eta_{k+1}) = -\},
\]

it follows that

\[
P\big(\bar A_{k+1} \cap \bar A_k\big) \le p^*\,P\big[\bar A_k \mid \xi(\eta_{k+1}) = +\big]P\{\xi(\eta_{k+1}) = +\} + p^*\,P\big[\bar A_k \mid \xi(\eta_{k+1}) = -\big]P\{\xi(\eta_{k+1}) = -\} \le (p^*)^2.
\]

Continuing this way, we conclude that

\[
P\Big(\bigcup_{i=k}^{n} A_i\Big) = 1 - P\Big(\bigcap_{i=k}^{n} \bar A_i\Big) \ge 1 - (p^*)^{\,n-k+1}.
\]

Consequently,

\[
P\Big(\bigcap_{k=1}^{+\infty}\bigcup_{i=k}^{+\infty} A_i\Big) = 1.
\]

Let ∆ = min{x_±(t+s, t, x_0, y_0), y_±(t+s, t, x_0, y_0): t ∈ [0, T_1], s ∈ [0, T], x_0, y_0 ∈ [δ, M]} > 0. If A_k occurs, then x_{τ(η_k)} > ∆ and y_{τ(η_k)} > ∆, which directly implies the first assertion. As a result, we are able to define the finite stopping times

\[
\begin{aligned}
\bar\eta_1 &= \inf\{n > 0:\ \Delta \le x_n, y_n \le M^*\}\\
\bar\eta_2 &= \inf\{n > \bar\eta_1:\ \Delta \le x_n, y_n \le M^*\}\\
&\ \ \vdots\\
\bar\eta_k &= \inf\{n > \bar\eta_{k-1}:\ \Delta \le x_n, y_n \le M^*\}.
\end{aligned}
\]

Put

\[
\bar\Delta = \min\big\{x_\pm(t+s, t, x_0, y_0),\ y_\pm(t+s, t, x_0, y_0):\ t \in [0, T_1],\ s \in [0, T],\ x_0, y_0 \in [\Delta, M^*]\big\} > 0.
\]

Note that if the event B_k = {σ_{\bar\eta_k + 1} < T_1} occurs then ∆̄ ≤ x_{\bar\eta_k}, y_{\bar\eta_k}, x_{\bar\eta_k+1}, y_{\bar\eta_k+1} ≤ M*. Using arguments similar to the previous part of this proof, we can show that B_k occurs infinitely often. Consequently, we obtain the second assertion of this lemma, due to the fact that if η̄_k is odd then η̄_k + 1 is even and conversely.
Next, we describe the ω-limit sets of system (2.4). Denote by Ω(x, y, ω) the ω-limit set of the solution (x(t, 0, x, y), y(t, 0, x, y))(ω) starting at (x, y). To simplify the notation, for t ≥ s ≥ 0 we write π⁺_{t,s}(x, y) := (x₊(t, s, x, y), y₊(t, s, x, y)) (resp. π⁻_{t,s}(x, y) := (x₋(t, s, x, y), y₋(t, s, x, y))) for the solution of system (2.5) (resp. (2.6)) starting at (x, y) ∈ ℝ²₊ at time s.

Suppose that the solution starting at γ*₊(0) = (x*₊(0), y*₊(0)) at time 0 is a periodic solution of system (2.5). We now describe the pathwise dynamic behavior of the solutions of system (2.4). Put

\[
\Gamma = \Big\{(x, y) = \pi^{(-1)^n}_{t_n, t_{n-1}} \cdots \pi^{-}_{t_3, t_2}\,\pi^{+}_{t_2, t_1}\big(\gamma_+^*(0)\big):\ 0 = t_1 \le t_2 \le \dots \le t_n;\ n \in \mathbb{N}\Big\},
\tag{3.11}
\]

where γ*₊(0) is as above. Let (x_0, y_0) ∈ int ℝ²₊.

Theorem 3.11. Suppose that on the quadrant int ℝ²₊ system (2.5) has a unique stable T-periodic solution (x*₊(t), y*₊(t)) and that λ > 0, with λ as in Proposition 3.6. Then:

a) With probability 1, the closure Γ̄ of Γ is a subset of the ω-limit set Ω(x_0, y_0, ω).

b) If there exists a t̄ ≥ 0 such that the point z̄ = (x̄, ȳ) = π⁺_{t̄,0}(γ*₊(0)) satisfies the condition

\[
\det\begin{pmatrix} h_1(+, \bar t, \bar z) & h_1(-, \bar t, \bar z)\\ h_2(+, \bar t, \bar z) & h_2(-, \bar t, \bar z)\end{pmatrix} \ne 0,
\tag{3.12}
\]

where, for z = (x, y),

\[
\begin{cases}
h_1(\xi, t, z) = a(\xi, t) - b(\xi, t)x - c(\xi, t)y\\[2pt]
h_2(\xi, t, z) = -d(\xi, t) + e(\xi, t)x - f(\xi, t)y,
\end{cases}
\]

then, with probability 1, the closure Γ̄ of Γ is the ω-limit set Ω(x_0, y_0, ω). Moreover, Γ̄ absorbs all positive solutions in the sense that for any initial value (x_0, y_0) ∈ int ℝ²₊, the random time

\[
\gamma(\omega) = \inf\big\{t > 0:\ \big(x(s, 0, x_0, y_0, \omega),\, y(s, 0, x_0, y_0, \omega)\big) \in \bar\Gamma,\ \forall s > t\big\}
\]

is finite outside a P-null set.

Proof. a) Denote H := [∆̄, M*] × [∆̄, M*]. We construct a sequence of stopping times

\[
\begin{aligned}
\eta_1 &= \inf\{2k:\ (x_{2k}, y_{2k}) \in H\}\\
\eta_2 &= \inf\{2k > \eta_1:\ (x_{2k}, y_{2k}) \in H\}\\
&\ \ \vdots\\
\eta_n &= \inf\{2k > \eta_{n-1}:\ (x_{2k}, y_{2k}) \in H\}.
\end{aligned}
\]

It is easy to see that {η_k = n} ∈ ℑ⁰ₙ for any k, n. Thus the event {η_k = n} is independent of ℑ^∞ₙ if ξ_0 is given. By Proposition 3.1 and Lemma 3.10, η_n < ∞ a.s. for all n. For simplicity, we put mod(t) = t − k_t T, where k_t is the integer such that k_t T ≤ t < (k_t + 1)T. As a convention, mod(t) ∈ (−δ, δ) means mod(t) ∈ [0, δ) ∪ (T − δ, T). By U_ε(x, y) we denote the neighborhood of the point (x, y) with radius ε > 0, and we write φ(t, s, x, y) = (x(t, s, x, y), y(t, s, x, y)).

Firstly, we prove that for any ε_1 > 0, δ_1 > 0, there are infinitely many odd stopping times such that (x_{2n+1}, y_{2n+1}) ∈ U_{ε_1}(γ*₊(0)) and mod(τ_{2n+1}) ∈ (−δ_1, δ_1). We have

\[
\pi^{+}_{t+\tau_{\eta_k+1},\, \tau_{\eta_k+1}}\big(x_{\eta_k+1}, y_{\eta_k+1}\big)
= \pi^{+}_{t+\mathrm{mod}(\tau_{\eta_k+1}),\, \mathrm{mod}(\tau_{\eta_k+1})}\big(x_{\eta_k+1}, y_{\eta_k+1}\big)
= \pi^{+}_{t-T+\mathrm{mod}(\tau_{\eta_k+1}),\, 0}\big(\tilde x, \tilde y\big),
\]

where (x̃, ỹ) = π⁺_{T, mod(τ_{η_k+1})}(x_{η_k+1}, y_{η_k+1}). Therefore, applying Lemma 2.2, for any neighborhood U_{ε_1}(γ*₊(0)) there exist T* > 0 and δ_2 > 0 such that

\[
\pi^{+}_{t+\tau_{\eta_k},\, \tau_{\eta_k}}\big(x_{\eta_k}, y_{\eta_k}\big) \in U_{\varepsilon_1}\big(\gamma_+^*(0)\big)
\quad \forall t > T^* \ \text{with}\ \mathrm{mod}(t + \tau_{\eta_k}) \in (-\delta_2, \delta_2).
\]

The latter condition is equivalent to t ∈ (−mod(τ_{η_k}) + KT − δ_2, −mod(τ_{η_k}) + KT + δ_2), K ≥ K*, in which K* ∈ ℕ is the smallest natural number satisfying −mod(τ_{η_k}) + K*T − δ_2 > T*. Note that −mod(τ_{η_k}) + K*T < T* + T =: T̄.

Now, let δ_3 = min{δ_1, δ_2} and, for k ∈ ℕ, put

\[
A_k = \big\{\omega:\ \sigma_{\eta_k+1} \in \big(-\mathrm{mod}(\tau_{\eta_k+1}) + K^*T - \delta_3,\ -\mathrm{mod}(\tau_{\eta_k+1}) + K^*T + \delta_3\big)\big\}.
\]

Note that if X has an exponential distribution then P{t < X < t + a} ≥ P{s < X < s + a} whenever t ≤ s. Using the strong Markov property of {ξ(t), x(t), y(t)} and noting that the value of ξ_{τ_{η_k}} is known, we have the estimate

\[
P(\bar A_k) = P\big\{\sigma_{\eta_k+1} \notin \big(-\mathrm{mod}(\tau_{\eta_k+1}) + K^*T - \delta_3,\ -\mathrm{mod}(\tau_{\eta_k+1}) + K^*T + \delta_3\big)\big\}
\]
\[
= \int_0^{+\infty} P\big\{\sigma_{\eta_k+1} \notin \big(-\mathrm{mod}(t) + K^*T - \delta_3,\ -\mathrm{mod}(t) + K^*T + \delta_3\big) \mid \tau_{\eta_k} = t\big\}\, P\{\tau_{\eta_k} \in dt\}
\]
\[
= \int_0^{+\infty} P\big\{\sigma_{\eta_k+1} \notin \big(-\mathrm{mod}(t) + K^*T - \delta_3,\ -\mathrm{mod}(t) + K^*T + \delta_3\big) \mid \tau_{\eta_k} = t,\ \xi_{\tau_{\eta_k}} = +\big\}\, P\{\tau_{\eta_k} \in dt\}
\]
\[
= \int_0^{+\infty} P\big\{\sigma_{\eta_k+1} \notin \big(-\mathrm{mod}(t) + K^*T - \delta_3,\ -\mathrm{mod}(t) + K^*T + \delta_3\big) \mid \xi_{\tau_{\eta_k}} = +\big\}\, P\{\tau_{\eta_k} \in dt\}
\]
\[
\le \int_0^{+\infty} P\big\{\sigma_{\eta_k+1} \notin \big(\bar T - \delta_3,\ \bar T + \delta_3\big) \mid \xi_{\tau_{\eta_k}} = +\big\}\, P\{\tau_{\eta_k} \in dt\}
= P\big\{\sigma_3 \notin \big(\bar T - \delta_3,\ \bar T + \delta_3\big)\big\} =: \varphi < 1.
\]

We now estimate P(\bar A_k ∩ \bar A_{k+1}). Since \bar A_k ∈ ℑ_{η_{k+2}}, applying the strong Markov property of (ξ(t), x(t), y(t)) we have

\[
P\big(\bar A_k \cap \bar A_{k+1}\big)
= \mathbb{E}\Big[\mathbb{E}\big[1_{\bar A_k} 1_{\bar A_{k+1}} \mid \mathfrak{F}_{\eta_{k+2}}\big]\Big]
= \mathbb{E}\Big[1_{\bar A_k}\, \mathbb{E}\big[1_{\bar A_{k+1}} \mid \mathfrak{F}_{\eta_{k+2}}\big]\Big]
= \mathbb{E}\Big[1_{\bar A_k}\, \mathbb{E}\big[1_{\bar A_{k+1}} \mid \xi_{\tau_{\eta_{k+2}}}\big]\Big]
\le \varphi\, \mathbb{E}\big(1_{\bar A_k}\big) \le \varphi^2.
\]

Continuing this way, we obtain

\[
P\Big\{\bigcap_{k=1}^{+\infty}\bigcup_{i=k}^{+\infty} A_i\Big\}
= P\big\{\omega:\ \sigma_{\eta_k+1} \in \big(-\mathrm{mod}(\tau_{\eta_k+1}) + K^*T - \delta_3,\ -\mathrm{mod}(\tau_{\eta_k+1}) + K^*T + \delta_3\big)\ \text{i.o. of } k\big\} = 1.
\]

The event A_k occurring infinitely often means that, with probability 1, for any δ_1 > 0 and any U_{ε_1}(γ*₊(0)) there are infinitely many n = n(ω) ∈ ℕ such that (x_{2n+1}, y_{2n+1}) ∈ U_{ε_1}(γ*₊(0)) and mod(τ_{2n+1}) ∈ (−δ_3, δ_3) ⊂ (−δ_1, δ_1). Thus γ*₊(0) ∈ Ω(x_0, y_0, ω).

Secondly, we prove that {π⁻_{t,0}(γ*₊(0)): t ≥ 0} ⊂ Ω(x_0, y_0, ω) a.s. To do this, we show that for γ̄ := π⁻_{t_1,0}(γ*₊(0)), any U_{ε_2}(γ̄) and any δ_4 > 0, there are infinitely many even stopping times such that (x_{2n}, y_{2n}) ∈ U_{ε_2}(γ̄) and mod(τ_{2n}) ∈ (mod(t_1 − δ_4), mod(t_1 + δ_4)). By continuity of solutions with respect to initial conditions, there are ε_3 > 0, δ_5 > 0, δ_6 > 0 small enough so that, for all (x, y) ∈ U_{ε_3}(γ*₊(0)), t ∈ (t_1 − δ_5, t_1 + δ_5) and mod(s) ∈ (−δ_6, δ_6),

\[
\big|\pi^{-}_{t,0}\big(\gamma_+^*(0)\big) - \pi^{-}_{t_1,0}\big(\gamma_+^*(0)\big)\big| < \frac{\varepsilon_2}{3},\qquad
\big|\pi^{-}_{t,0}(x, y) - \pi^{-}_{t,0}\big(\gamma_+^*(0)\big)\big| < \frac{\varepsilon_2}{3},\qquad
\big|\pi^{-}_{t+s,s}(x, y) - \pi^{-}_{t,0}(x, y)\big| < \frac{\varepsilon_2}{3}.
\]

Therefore,

\[
\big|\pi^{-}_{t+s,s}(x, y) - \pi^{-}_{t_1,0}\big(\gamma_+^*(0)\big)\big|
\le \big|\pi^{-}_{t+s,s}(x, y) - \pi^{-}_{t,0}(x, y)\big|
+ \big|\pi^{-}_{t,0}(x, y) - \pi^{-}_{t,0}\big(\gamma_+^*(0)\big)\big|
+ \big|\pi^{-}_{t,0}\big(\gamma_+^*(0)\big) - \pi^{-}_{t_1,0}\big(\gamma_+^*(0)\big)\big| < \varepsilon_2
\]

for all (x, y) ∈ U_{ε_3}(γ*₊(0)), t ∈ (t_1 − δ_5, t_1 + δ_5), mod(s) ∈ (−δ_6, δ_6). Put

\[
\begin{aligned}
\varsigma_1 &= \inf\big\{2k+1:\ (x_{2k+1}, y_{2k+1}) \in U_{\varepsilon_3}\big(\gamma_+^*(0)\big),\ \mathrm{mod}(\tau_{2k+1}) \in (-\delta_6, \delta_6)\big\}\\
\varsigma_2 &= \inf\big\{2k+1 > \varsigma_1:\ (x_{2k+1}, y_{2k+1}) \in U_{\varepsilon_3}\big(\gamma_+^*(0)\big),\ \mathrm{mod}(\tau_{2k+1}) \in (-\delta_6, \delta_6)\big\}\\
&\ \ \vdots\\
\varsigma_n &= \inf\big\{2k+1 > \varsigma_{n-1}:\ (x_{2k+1}, y_{2k+1}) \in U_{\varepsilon_3}\big(\gamma_+^*(0)\big),\ \mathrm{mod}(\tau_{2k+1}) \in (-\delta_6, \delta_6)\big\}.
\end{aligned}
\]

From the previous part of this proof, it follows that ς_k < +∞ and lim_{k→+∞} ς_k = +∞ a.s. Since {ς_k = n} ∈ ℑ⁰ₙ, {ς_k} is independent of ℑ^∞ₙ. Put t̂ = min{δ_4, δ_5}. By the same argument as above we obtain P{ω: σ_{ς_n+1} ∈ (t_1 − t̂, t_1 + t̂) i.o. of n} = 1. This relation says that (x_{ς_k}, y_{ς_k}) ∈ U_{ε_3}(γ*₊(0)) and σ_{ς_k+1} ∈ (t_1 − t̂, t_1 + t̂), which implies that (x_{ς_k+1}, y_{ς_k+1}) ∈ U_{ε_2}(γ̄) for infinitely many k ∈ ℕ and mod(τ_{ς_k+1}) ∈ (mod(t_1 − t̂), mod(t_1 + t̂)) ⊂ (mod(t_1 − δ_4), mod(t_1 + δ_4)). This means that γ̄ ∈ Ω(x_0, y_0, ω) a.s.

Lastly, in a similar way and by induction, we conclude that Γ is a subset of Ω(x_0, y_0, ω). Because Ω(x_0, y_0, ω) is a closed set, we have Γ̄ ⊂ Ω(x_0, y_0, ω) a.s.

b) We now prove the second assertion of the theorem. Let z̄ = (x̄, ȳ) satisfy condition (3.12). By the existence and continuous dependence of the solutions on the initial values, there exist two numbers a > 0 and b > 0 such that the function φ(s, t) = π⁺_{t̄+s+t, t̄+s}(π⁻_{t̄+s, t̄}(z̄)) is defined and continuously differentiable in (−a, a) × (−b, b). We note that

\[
\det\Big(\frac{\partial\varphi}{\partial s}, \frac{\partial\varphi}{\partial t}\Big)\Big|_{(0,0)}
= \det\begin{pmatrix} \bar x\, h_1(-, \bar t, \bar z) & \bar x\, h_1(+, \bar t, \bar z)\\ \bar y\, h_2(-, \bar t, \bar z) & \bar y\, h_2(+, \bar t, \bar z)\end{pmatrix}
= -\,\bar x\,\bar y\, \det\begin{pmatrix} h_1(+, \bar t, \bar z) & h_1(-, \bar t, \bar z)\\ h_2(+, \bar t, \bar z) & h_2(-, \bar t, \bar z)\end{pmatrix} \ne 0.
\]

Therefore, by the inverse function theorem, there exist 0 < a_1 < a, 0 < b_1 < b such that φ(s, t) is a diffeomorphism between V = (0, a_1) × (0, b_1) and U = φ(V). As a consequence, U is an open set. Moreover, for every (x, y) ∈ U there exists (s*, t*) ∈ V = (0, a_1) × (0, b_1) such that (x, y) = φ(s*, t*) = π⁺_{t̄+s*+t*, t̄+s*}(π⁻_{t̄+s*, t̄}(z̄)) ∈ Γ. Hence, U ⊂ Γ̄ ⊂ Ω(x_0, y_0, ω). Thus, there is a stopping time γ < +∞ a.s. such that (x(γ), y(γ)) ∈ U. Since Γ̄ is a forward invariant set and U ⊂ Γ̄, it follows that (x(t), y(t)) ∈ Γ̄ for all t > γ with probability 1. The fact that (x(t), y(t)) ∈ Γ̄ for all t > γ implies that Ω(x_0, y_0, ω) ⊂ Γ̄. Combining this with part a), we get Ω(x_0, y_0, ω) = Γ̄ a.s. The proof is complete.

4. Simulation and discussion
Note that λ can be estimated by using the law of large numbers and formula (3.4) for a concrete set of parameters. We illustrate the above model by the following numerical examples.

Example I. λ > 0 and coexistence occurs in both states (see Figure 3). It corresponds to α = 0.6, β = 0.4 and

a(+, t) = 10 + sin t;  b(+, t) = 2 + (1/2)cos t;  c(+, t) = 1;
d(+, t) = 1 − (1/5)cos(t − π/3);  e(+, t) = 1.8;  f(+, t) = 3.1 + (1/2)sin(t + π/6);
a(−, t) = 11.7 − sin(t + π);  b(−, t) = 1.5 + (1/4)cos t;
c(−, t) = 1.4 − (1/2)sin(t + π/6);  d(−, t) = 2.1 + (1/2)sin(t + π);
e(−, t) = 1.2 + (1/4)cos t;  f(−, t) = 2.7 − (1/2)cos(t + π/5),

with the initial condition (x(0), y(0)) = (2.5, 2.8) and number of switchings n = 300. In this example the period is T = 2π, and the solution of (2.4) switches between the two positive periodic orbits of the systems (2.5) and (2.6).

Figure 3. Orbit of the system (2.4) in example I.
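For reproducibility, a compact sketch of Example I is given below: it combines the telegraph-noise and Euler-integration ideas of the earlier sketches with the coefficients listed above. The function names, step size, horizon and the printed diagnostic are choices of this sketch, not of the paper.

```python
import numpy as np

# coefficients of Example I: map state (+1 / -1) and time t to (a, b, c, d, e, f)
def coeffs(state, t):
    if state == +1:
        return (10 + np.sin(t), 2 + 0.5 * np.cos(t), 1.0,
                1 - 0.2 * np.cos(t - np.pi / 3), 1.8,
                3.1 + 0.5 * np.sin(t + np.pi / 6))
    return (11.7 - np.sin(t + np.pi), 1.5 + 0.25 * np.cos(t),
            1.4 - 0.5 * np.sin(t + np.pi / 6), 2.1 + 0.5 * np.sin(t + np.pi),
            1.2 + 0.25 * np.cos(t), 2.7 - 0.5 * np.cos(t + np.pi / 5))

def run_example(alpha=0.6, beta=0.4, n_switch=300, x0=2.5, y0=2.8, dt=1e-3, seed=1):
    """Integrate (2.4) over n_switch jumps of the telegraph noise (Euler scheme)."""
    rng = np.random.default_rng(seed)
    state, t, x, y = +1, 0.0, x0, y0
    orbit = [(x, y)]
    for _ in range(n_switch):
        holding = rng.exponential(1.0 / (alpha if state == +1 else beta))
        t_next = t + holding
        while t < t_next:
            a, b, c, d, e, f = coeffs(state, t)
            x, y = x + dt * x * (a - b * x - c * y), y + dt * y * (-d + e * x - f * y)
            t += dt
        state = -state                      # noise jumps; the other system takes over
        orbit.append((x, y))
    return np.array(orbit)

orbit = run_example()
print("range of x at switching times:", orbit[:, 0].min(), orbit[:, 0].max())
```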

Example II. λ > 0; one state gives coexistence and the other gives extinction of the predator. The system (2.5) with coefficients

a(+, t) = 12 + sin πt;  b(+, t) = 2.8 + (1/2)cos πt;
c(+, t) = 2.4 + (1/4)sin(πt + π);  d(+, t) = 1.2 − (1/3)cos(πt − π/12);
e(+, t) = 2.4 − (1/2)sin πt;  f(+, t) = 2.4 + (1/3)sin(πt + π/6)

has a stable positive periodic solution, while the system (2.6) with coefficients

a(−, t) = 6.1 − sin(πt + π);  b(−, t) = 1.6 + (1/3)cos πt;
c(−, t) = 2.4 − (1/2)sin(πt + π/6);  d(−, t) = 6 + (1/2)sin(πt + π/2);
e(−, t) = 0.5 + (1/4)cos πt;  f(−, t) = 1.9 − (1/2)cos(πt + π/5)

has its predator component tending to 0. The number of switchings is n = 300, the transition intensities are α = 0.3, β = 0.7, and the initial condition is (x(0), y(0)) = (1.2, 3.4). Since λ > 0, the system (2.4) is persistent (see Figure 4).



Figure 4. Orbit of the system (2.4) in Example II.

This work provides some results about the asymptotic behavior of a system of two coupled deterministic predator-prey models switching at random. The value λ given by formula (3.4) cannot, in general, be computed explicitly; however, it is easy to approximate by simulation. When λ > 0, the dynamics of the predator-prey system lead to the existence of a periodic Markov process, which plays an important role in the study of the development of communities.

References
[1] K. Gopalsamy, Global asymptotic stability in a periodic Lotka-Volterra system, J. Austral. Math. Soc. Ser. B 27 (1985), pp. 66-72.
[2] A. Tineo, Permanence of a large class of periodic predator-prey systems, J. Math. Anal. Appl. 241 (2000), pp. 83-91.
[3] P. Yang, R. Xu, Global attractivity of the periodic Lotka-Volterra system, J. Math. Anal. Appl. 233 (1999), no. 1, pp. 221-232.
[4] J. Zhao, W. Chen, Global asymptotic stability of a periodic ecological model, Appl. Math. Comput. 147 (2004), pp. 881-892.
[5] Z. Amine, R. Ortega, A periodic prey-predator system, J. Math. Anal. Appl. 185 (1994), pp. 477-489.
[6] M. Bardi, Predator-prey models in periodically fluctuating environments, J. Math. Biology 12 (1981), pp. 127-140.
[7] J. M. Cushing, Periodic time-dependent predator-prey systems, SIAM J. Appl. Math. 32 (1977), no. 1, pp. 82-95.
[8] J. López-Gómez, R. Ortega, A. Tineo, The periodic predator-prey Lotka-Volterra model, Adv. Differential Equations 1 (1996), no. 3, pp. 403-423.
[9] A. Tineo, On the asymptotic behavior of some population models, J. Math. Anal. Appl. 167 (1992), pp. 516-529.
[10] F. Zanolin, T. Ding, H. Huang, A priori bounds and periodic solutions for a class of planar systems with applications to Lotka-Volterra equations, Discrete Contin. Dynam. Systems 1 (1995), no. 1, pp. 103-117.
[11] H. Kesten, Y. Ogura, Recurrence properties of Lotka-Volterra models with random fluctuations, J. Math. Soc. Japan 33 (1981), no. 2, pp. 335-366.
[12] M. Liu, K. Wang, Persistence, extinction and global asymptotical stability of a nonautonomous predator-prey model with random perturbation, Appl. Math. Model. 36 (2012), no. 11, pp. 5344-5353.
[13] S. S. De, Random predator-prey interactions in a varying environment: extinction or survival, Bull. Math. Biol. 46 (1984), no. 1, pp. 175-184.
[14] P. Auger, N. H. Du, N. T. Hieu, Evolution of Lotka-Volterra predator-prey systems under telegraph noise, Math. Biosci. Eng. 6 (2009), no. 4, pp. 683-700.
[15] I. I. Gihman, A. V. Skorohod, The Theory of Stochastic Processes, Springer-Verlag, Berlin-Heidelberg-New York, 1979.
[16] R. S. Liptser, A. N. Shiryaev, Statistics of Stochastic Processes, Nauka, Moscow, 1974 (in Russian).


