
JID:YJDEQ AID:7505 /FLA

[m1+; v 1.193; Prn:5/06/2014; 13:46] P.1 (1-24)

Available online at www.sciencedirect.com

ScienceDirect
J. Differential Equations ••• (••••) •••–•••
www.elsevier.com/locate/jde

Existence of stationary distributions for Kolmogorov
systems of competitive type under telegraph noise
N.H. Dang a,1 , N.H. Du b,2 , G. Yin a,∗,1
a Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
b Department of Mathematics, Mechanics and Informatics, Hanoi National University, 334 Nguyen Trai, Thanh Xuan,

Hanoi, Viet Nam
Received 24 July 2013; revised 27 March 2014

Abstract
This work focuses on population dynamics of two species described by Kolmogorov systems of competitive type under telegraph noise that is formulated as a continuous-time Markov chain with two states. Our
main effort is on establishing the existence of an invariant (or a stationary) probability measure. In addition,
the convergence in total variation of the instantaneous measure to the stationary measure is demonstrated
under suitable conditions. Moreover, the Ω-limit set of a model in which each species is dominant in a state
of the telegraph noise is examined in detail.
© 2014 Elsevier Inc. All rights reserved.

MSC: 34C12; 60H10; 92D25
Keywords: Kolmogorov systems of competitive type; Telegraph noise; Stationary distribution; Ω-limit set; Attractor;
Markovian switching


* Corresponding author.

E-mail addresses: (N.H. Dang), (N.H. Du),
(G. Yin).
1 This research was supported in part by the National Science Foundation DMS-1207667.
2 This research was supported in part by NAFOSTED No. 101.03.2014.58.
0022-0396/© 2014 Elsevier Inc. All rights reserved.



1. Introduction
Kolmogorov systems of differential equations have been used to model the evolution of many
biological and ecological systems. An example is the well-known competitive Lotka–Volterra
model, which represents the dynamics of the population sizes of different species in an ecosystem [2,5–7,10,18]. It has been well recognized that the traditional models are often not adequate
to describe the reality due to random environment and other random factors. Recently, resurgent
attention has been drawn to treat systems that involve both continuous dynamics and discrete
events. For example, using the common terminology of ecology, we considered a Lotka–Volterra
model in which the growth rates and the carrying capacities are subject to environmental noise;
see related models and formulations as well as various definitions of terms in [18] (see also [17],
also related works [13] and [14]). It was noted that the qualitative changes of the growth rates and
the carrying capacities form an essential aspect of the dynamics of the ecosystem. These changes
usually cannot be described by the traditional deterministic population models. For instance, the
growth rates of some species in the rainy season will be much different from those in the dry

season. Note that the carrying capacities often vary according to the changes in nutrition and/or
food resources. Likewise, the interspecific and intraspecific interactions differ across environments. Such environmental changes cannot be modeled as solutions of differential equations in the traditional setup. They are random discrete events that happen at random epochs. A convenient formulation is to use a continuous-time Markov chain taking values in a finite set. The resulting dynamical systems are the now-popular regime-switching differential equations.
In this work, we consider a two-dimensional system that is modulated by a Markov chain taking values in M = {1, 2}. Individual equations corresponding to the states 1 and 2 are different.
Thus, in lieu of one system of Kolmogorov equations, one needs to deal with systems of equations corresponding to each state in M. Our focus in this work is devoted to analyzing the ergodic behavior
of such systems. In the recent work [3], assuming that the random environment is represented
by a continuous-time, two-state Markov chain, we obtained certain limit results and depicted the
Ω-limit set for systems that have a globally stable positive equilibrium. In this paper, we concentrate on the problem: What are sufficient conditions that ensure the ergodicity of the Kolmogorov
systems? That is, what are conditions to ensure the existence of stationary distributions for such
systems. It is well known that the coupling owing to the Markov chain makes the underlying
systems more difficult to analyze. For example, in the study of stability, it has been known that a system resulting from two individually stable differential equations coupled by a Markov chain may be unstable. So our intuition may not always give the correct conclusion. By carefully analyzing
such systems, this paper provides sufficient conditions for existence of stationary distributions of
competitive type Kolmogorov systems.
The rest of the paper is arranged as follows. The formulation of the problem is given in Section 2. Then Section 3 takes up the issue of the existence of an invariant probability measure.
Section 4 continues the investigation by focusing on Ω-limit sets and properties of the invariant
measure. Section 5 deals with dynamics of the system when each species dominates in one state.
Finally, Section 6 concludes the paper with further remarks.
2. Problem formulation
Let (Ω, F, P) be a complete probability space and {ξ(t) : t ≥ 0} be a continuous-time Markov chain defined on (Ω, F, P), whose state space is the two-element set M = {1, 2} and whose generator is given by

Q = ( q11  q12 ; q21  q22 ) = ( −α  α ; β  −β ),  with α > 0 and β > 0.

Note that we use Ω to denote the sample space, denote an element of Ω by ω, and write "Ω-limit set" in lieu of the usual "ω-limit set" to avoid notational conflict. It follows that the stationary distribution p = (p1, p2) of {ξ(t) : t ≥ 0}, which satisfies the system of equations

pQ = 0,   p1 + p2 = 1,

is given by

p1 = lim_{t→∞} P{ξ(t) = 1} = β/(α + β),
p2 = lim_{t→∞} P{ξ(t) = 2} = α/(α + β).   (2.1)

Such a two-state Markov chain is commonly referred to as telegraph noise because of the graph
of its sample paths. Let Ft be the σ -algebra generated by ξ(·) up to time t and P-null sets. The

filtration {Ft }t≥0 satisfies the usual condition. That is, it is increasing and right continuous while
F0 contains all P-null sets. Then (Ω, F, {Ft }, P) is a complete filtered probability space.
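As a numerical sanity check of (2.1), one can verify directly that p = (β/(α + β), α/(α + β)) satisfies pQ = 0 and p1 + p2 = 1. A minimal sketch; the rates α = 2, β = 3 are arbitrary illustrative values, not from the paper:

```python
# Sanity check for (2.1): the stationary distribution of the two-state
# chain with generator Q = [[-alpha, alpha], [beta, -beta]].
# alpha, beta are illustrative values only.

alpha, beta = 2.0, 3.0
Q = [[-alpha, alpha], [beta, -beta]]

# Closed form from (2.1): p1 = beta/(alpha+beta), p2 = alpha/(alpha+beta).
p = [beta / (alpha + beta), alpha / (alpha + beta)]

# p is stationary iff pQ = 0 (row vector times generator) and p1 + p2 = 1.
pQ = [p[0] * Q[0][j] + p[1] * Q[1][j] for j in range(2)]

print(p)   # [0.6, 0.4]
print(pQ)  # ~ [0, 0] up to floating-point rounding
```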
In this paper, we focus on a Kolmogorov system under telegraph noise given by

ẋ(t) = x(t) a(ξ(t), x(t), y(t)),
ẏ(t) = y(t) b(ξ(t), x(t), y(t)),   (2.2)

where ai(x, y) and bi(x, y) are real-valued functions defined for i ∈ M and (x, y) ∈ R+², and are continuously differentiable in (x, y) ∈ R+² = {(x, y) : x ≥ 0, y ≥ 0}. Note that in the above and henceforth, we write ai(x, y) instead of a(i, x, y) to distinguish the discrete state i from the continuous state (x, y). Because of the telegraph noise ξ(t), the system switches randomly between the two deterministic Kolmogorov systems

ẋ(t) = x(t) a1(x(t), y(t)),
ẏ(t) = y(t) b1(x(t), y(t)),   (2.3)

ẋ(t) = x(t) a2(x(t), y(t)),
ẏ(t) = y(t) b2(x(t), y(t)).   (2.4)
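The switching mechanism behind (2.2)–(2.4) can be sketched numerically: sample the telegraph noise via exponential sojourn times and integrate whichever deterministic system the current state selects. The competitive Lotka–Volterra rates below are hypothetical, chosen only for illustration:

```python
import random

# Euler simulation of (2.2): the trajectory follows (2.3) in state 1 and
# (2.4) in state 2. All rate functions and parameters are illustrative.
random.seed(0)

alpha, beta = 2.0, 3.0
a = {1: lambda x, y: 3.0 - 2.0 * x - 1.0 * y,
     2: lambda x, y: 2.0 - 1.0 * x - 0.5 * y}
b = {1: lambda x, y: 2.5 - 1.0 * x - 2.0 * y,
     2: lambda x, y: 1.5 - 0.5 * x - 1.0 * y}

def simulate(x, y, t_end=50.0, h=1e-3):
    state, t = 1, 0.0
    t_next = random.expovariate(alpha)      # first sojourn in state 1
    while t < t_end:
        if t >= t_next:                     # telegraph noise jumps
            state = 3 - state
            t_next = t + random.expovariate(alpha if state == 1 else beta)
        # Euler step of the Kolmogorov system selected by the current state
        x += h * x * a[state](x, y)
        y += h * y * b[state](x, y)
        t += h
    return x, y

x, y = simulate(0.5, 0.5)
print(x, y)  # both coordinates remain positive and bounded
```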

Stemming from models in classical competitive ecosystems, throughout this paper, we impose
the following two assumptions.

Assumption 2.1. For each i ∈ M, ai(x, y) and bi(x, y) are continuously differentiable in (x, y) ∈ R+². Moreover,

1. ∂ai/∂x (x, 0) < 0 ∀x > 0 and i ∈ M.
2. ai(0, 0) > 0, lim sup_{x→∞} ai(x, 0) < 0.
3. ∂bi/∂y (0, y) < 0 ∀y > 0 and i ∈ M.
4. bi(0, 0) > 0, lim sup_{y→∞} bi(0, y) < 0.


Assumption 2.2. For any (x0 , y0 ) ∈ R2+ , there is a compact set D = D(x0 , y0 ) ⊂ R2+ containing
(x0 , y0 ) such that D is an invariant set under both systems (2.3) and (2.4).
These conditions are satisfied for all well-known competitive models in ecology. Under these assumptions, we can derive the existence and uniqueness of a global positive solution given the initial value (x(0), y(0)) = (x0, y0) ∈ R+^{2,∘}, where R+^{2,∘} = {(x, y) : x > 0, y > 0} denotes the interior of the set R+². Moreover, it is noted in Assumption 2.1 that we only impose conditions on the boundary, so that this assumption can be satisfied by an even wider range of models. That is, not only are the conditions satisfied for competitive models, but also for some cooperative ones.
Consider two equations on the boundary:

u̇(t) = u(t) a(ξ(t), u(t), 0),   u(0) ∈ (0, ∞),   (2.5)

v̇(t) = v(t) b(ξ(t), 0, v(t)),   v(0) ∈ (0, ∞).   (2.6)

For (2.5), it is easily seen that the pair of processes (ξ(t), u(t)) is Markovian, and the associated operator of the Markov process is given by

L gi(u) = Σ_{j=1}^{2} qij gj(u) + u ai(u, 0) (d/du) gi(u),   for i ∈ M,

for each gi(u) defined on (0, ∞) and continuously differentiable in u. By Assumption 2.1, there is a unique pair (u1, u2) satisfying a1(u1, 0) = 0 and a2(u2, 0) = 0. In the case u1 ≠ u2, without loss of generality, assume u1 < u2. Under Assumption 2.1, the process (ξ(t), u(t)) has a unique invariant probability measure concentrated on M × [u1, u2]. The stationary density (μ1, μ2) of (ξ(t), u(t)) can be obtained from the solution of the system of Fokker–Planck equations L* μi(u) = 0 for i ∈ M (L* denotes the adjoint of L):

−αμ1(u) + βμ2(u) − (d/du)[u a1(u, 0) μ1(u)] = 0,
 αμ1(u) − βμ2(u) − (d/du)[u a2(u, 0) μ2(u)] = 0.   (2.7)

Solving the system of equations above, we obtain

μ1(u) = θ F(u) / (u |a1(u, 0)|),   μ2(u) = θ F(u) / (u |a2(u, 0)|),

where

F(u) = exp( −∫_{ū}^{u} [ α/(τ a1(τ, 0)) + β/(τ a2(τ, 0)) ] dτ ),   u ∈ [u1, u2],  ū = (u1 + u2)/2,

θ = [ ∫_{u1}^{u2} ( p1 F(u)/(u |a1(u, 0)|) + p2 F(u)/(u |a2(u, 0)|) ) du ]^{−1}.



Likewise, under Assumption 2.1, for (2.6) there exist v1, v2 with v1 < v2 such that the stationary density (ν1, ν2) of (ξ(t), v(t)) is given by

ν1(v) = ζ G(v) / (v |b1(0, v)|),   ν2(v) = ζ G(v) / (v |b2(0, v)|),

where

G(v) = exp( −∫_{v̄}^{v} [ α/(τ b1(0, τ)) + β/(τ b2(0, τ)) ] dτ ),   v ∈ [v1, v2],  v̄ = (v1 + v2)/2,

ζ = [ ∫_{v1}^{v2} ( p1 G(v)/(v |b1(0, v)|) + p2 G(v)/(v |b2(0, v)|) ) dv ]^{−1}.

In the case u1 = u2 (resp. v1 = v2), (μ1(u), μ2(u)) = (δ(u − u1), δ(u − u1)) (resp. (ν1(v), ν2(v)) = (δ(v − v1), δ(v − v1))) is a generalized density, where δ(·) is the Dirac delta function.
Remark 2.1. Analyzing the dynamics on the boundary provides us with important properties
of positive solutions. To gain insight, let us first look at the deterministic system (2.3). On the
boundary, there are three equilibria (0, 0), (u1 , 0), and (0, v1 ). Under Assumption 2.1, the origin
is a source and other solutions cannot approach it. On the other hand, we note that the Jacobian matrix

| ∂(x a1(x, y))/∂x   ∂(x a1(x, y))/∂y |
| ∂(y b1(x, y))/∂x   ∂(y b1(x, y))/∂y |

evaluated at (0, v1) has eigenvalues v1 ∂b1/∂y (0, v1) and a1(0, v1). In view of Assumption 2.1, ∂b1/∂y (0, v1) < 0. Therefore, a sufficient condition for (0, v1) to repel positive solutions is that it is a saddle point, or equivalently a1(0, v1) > 0. In the same manner, we need b1(u1, 0) > 0 to guarantee that (u1, 0) does not attract positive solutions. Using basic results from dynamical systems theory, it is not hard to show that under these conditions, system (2.3) is permanent; that is, any positive solution will never approach the boundary. The idea used above
can be generalized to treat our random system (2.2) in which invariant measures take the role of
equilibria. Moreover, the values a1 (0, v1 ), b1 (u1 , 0) now need to be replaced with the expected
values of a(ξ(t), 0, v(t)) and b(ξ, u(t), 0) with respect to their corresponding invariant measures,
respectively. For this reason, we introduce λ1 and λ2 , which play a crucial role to determine the
dynamical behaviors of (2.2)

λ1 = ∫_{[v1,v2]} [ p1 a1(0, v)ν1(v) + p2 a2(0, v)ν2(v) ] dv,

λ2 = ∫_{[u1,u2]} [ p1 b1(u, 0)μ1(u) + p2 b2(u, 0)μ2(u) ] du.   (2.8)
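Since λ1 and λ2 are stationary averages of the boundary growth rates, they can also be obtained as long-run time averages along the boundary processes (this ergodic characterization is what Lemma 3.2 below exploits). A hedged Monte Carlo sketch of λ2, with hypothetical logistic coefficients whose zeros play the role of u1 = 1.5 and u2 = 2:

```python
import random

# Illustrative ergodic estimate of lambda_2 in (2.8): simulate the boundary
# process (2.5) and time-average b(ksi(s), u(s), 0). All coefficients are
# hypothetical, chosen only for the demo.
random.seed(1)

alpha, beta = 2.0, 3.0
a = {1: lambda u: 3.0 * (1.0 - u / 1.5),   # zero at u1 = 1.5
     2: lambda u: 2.0 * (1.0 - u / 2.0)}   # zero at u2 = 2.0
b = {1: lambda u: 2.5 - 1.0 * u,
     2: lambda u: 1.5 - 0.5 * u}

def lambda2_estimate(t_end=500.0, h=1e-3):
    u, state = 1.7, 1                      # start inside [u1, u2]
    t, acc = 0.0, 0.0
    t_next = random.expovariate(alpha)
    while t < t_end:
        if t >= t_next:
            state = 3 - state
            t_next = t + random.expovariate(alpha if state == 1 else beta)
        acc += h * b[state](u)             # time integral of b(ksi, u, 0)
        u += h * u * a[state](u)           # Euler step of (2.5)
        t += h
    return acc / t_end

est = lambda2_estimate()
print(est)
```

With these (made-up) coefficients both b1 and b2 are positive on [u1, u2], so the estimate comes out positive, consistent with the role of λ2 > 0 below.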

In our recent paper [3], imposing the condition λ1 , λ2 > 0, we have given the Ω-limit set of
positive solutions to (2.2) in some cases. However, the questions whether the positivity of λ1 and
λ2 implies the existence of a finite invariant measure for the process (ξ(t), x(t), y(t)) and what
is the behavior of the omega-limit set if neither (2.3) nor (2.4) has a positive equilibrium are still




open. The purpose of this paper is to address these questions. In Section 3, we prove the existence of an invariant probability measure, provided that λ1 > 0 and λ2 > 0, which is assumed
throughout this paper. Section 4 is an improvement of the results in [3]. In particular, we describe the Ω-limit set of system (2.2) requiring only that either system (2.3) or system (2.4) has a globally stable positive equilibrium. Stability in total variation of solutions to (2.2) is also obtained.
Furthermore, in Section 5, we consider (2.2) in the case where each species is dominant in one
state.
3. The existence of an invariant probability measure
The trajectories of ξ(t) are piecewise-constant, càdlàg (right continuous with left limits) functions. Let

0 = τ0 < τ1 < τ2 < ··· < τn < ···

be its jump times. Put

σ1 = τ1 − τ0,   σ2 = τ2 − τ1,   . . . ,   σn = τn − τn−1,   . . .

σ1 = τ1 is the first jump time from the initial state; σ2 is the time that the process ξ(t) sojourns in the state ξ(τ1), and so on. It is known that {σk}_{k=1}^{∞} are conditionally independent given the sequence {ξ(τk)}_{k=1}^{∞}. Note that if ξ(0) is given, then ξ(τn) is known for every n because the process ξ(t) takes only two values. Hence, {σk}_{k=1}^{∞} is a sequence of independent random variables taking values in [0, ∞). Moreover, if ξ(0) = 1, then σ_{2n+1} has the exponential density α1_{[0,∞)}(t) exp(−αt) and σ_{2n} has the density β1_{[0,∞)}(t) exp(−βt). Conversely, if ξ(0) = 2, then σ_{2n} has the exponential density α1_{[0,∞)}(t) exp(−αt) and σ_{2n+1} has the density β1_{[0,∞)}(t) exp(−βt) (see [4, vol. 2, p. 217]). Here 1_{[0,∞)}(t) = 1 for t ≥ 0 and = 0 for t < 0.
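The description above translates directly into a sampling scheme for the jump times: alternate exponential sojourns with rates α and β and take partial sums. A small illustration with arbitrary rates:

```python
import random

# Sample the sojourn times sigma_n of the telegraph noise starting from
# ksi(0) = 1: odd-indexed sojourns are Exp(alpha), even-indexed Exp(beta);
# the jump times tau_n are their partial sums. Rates are illustrative.
random.seed(2)

alpha, beta = 2.0, 3.0

def sample_path(n_jumps=20000):
    sojourns, jump_times, t = [], [], 0.0
    for n in range(n_jumps):
        rate = alpha if n % 2 == 0 else beta   # state 1 first, then 2, ...
        s = random.expovariate(rate)
        sojourns.append(s)
        t += s
        jump_times.append(t)
    return sojourns, jump_times

sojourns, jump_times = sample_path()
mean_state1 = sum(sojourns[0::2]) / len(sojourns[0::2])
mean_state2 = sum(sojourns[1::2]) / len(sojourns[1::2])
print(mean_state1, mean_state2)  # close to 1/alpha = 0.5 and 1/beta = 1/3
```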
For a positive initial value (x0 , y0 ), we denote by (x(t, ω, x0 , y0 ), y(t, ω, x0 , y0 )) the solution
to Eq. (2.2) at time t , starting in (x0 , y0 ) (or (x(t, x0 , y0 ), y(t, x0 , y0 )), (x(t), y(t)) whenever
there is no ambiguity).
Remark 3.1. We note that, as seen in [3, p. 394], under Assumptions 2.1 and 2.2, there are 0 < δ < M such that we can suppose without loss of generality that (x(t, x0, y0), y(t, x0, y0)) ∈ [0, M]² \ [0, δ]² ∀t ≥ 0. Note that δ and M are chosen such that ai(δ, 0), bi(0, δ) > 0 and ai(M, 0), bi(0, M) < 0 for all i ∈ M.
We now state the main theorem of this section.
Theorem 3.1. If λ1 and λ2 are positive, the Markov process (ξ(t), x(t), y(t)) has an invariant probability measure π* on the state space M × R+^{2,∘}.
To prove this theorem, we need to estimate the average time that (x(t), y(t)) spends in a compact subset of R+^{2,∘}. In the proof of [3, Theorem 2.1], we showed that (x(t), y(t)) cannot stay near the boundary for a long time. However, the method of that proof fails to estimate the sojourn time uniformly. In this paper, making use of a suitable stationary process, we can estimate the sojourn time uniformly with large probability. Hence, the existence of an invariant probability measure can be shown. To proceed, we need some auxiliary results. We



begin with the initial data P{ξ(0) = 1} = p1, P{ξ(0) = 2} = p2. It follows that for all subsequent time t ≥ 0, P{ξ(t) = 1} = p1 and P{ξ(t) = 2} = p2, which implies that ξ(t) is a stationary process. Therefore, there exists a semigroup of P-measure-preserving transformations θ^t satisfying ξ(t + s, ω) = ξ(t, θ^s ω) ∀t, s > 0. Let u(ω, t, u0) and v(ω, t, v0) be solutions of (2.5) and (2.6) with initial values u0 and v0, respectively. Fix u*, v* ∈ [δ, M] and denote u*(ω, t) = u(ω, t, u*) and v*(ω, t) = v(ω, t, v*). The following lemma holds.

Lemma 3.1. For any ε > 0, there exists a non-random positive number T1 = T1(ε) such that, with probability 1, |u(t, ω, u0) − u*(t, ω)| < ε and |v(t, ω, v0) − v*(t, ω)| < ε for all t > T1, provided u0, v0 ∈ [δ, M].
Proof. For simplicity, in this proof, we denote u(t) = u(ω, t, u0 ) and drop ω from u∗ (ω, t).
Without loss of generality, suppose that u0 < u∗ . Owing to the uniqueness of the solutions, we
have δ ≤ u(t) < u∗ (t) ≤ M ∀t ≥ 0.
Let m be a positive number such that ∂ai/∂x (x, 0) ≤ −m and ∂bi/∂y (0, y) ≤ −m for all δ ≤ x, y ≤ M and i ∈ M. It is clear that

d/dt [ln u*(t) − ln u(t)] = a(ξ(t), u*(t), 0) − a(ξ(t), u(t), 0) ≤ −m(u*(t) − u(t)).   (3.1)

The mean value theorem yields

(1/δ)(u*(t) − u(t)) ≥ ln u*(t) − ln u(t) ≥ (1/M)(u*(t) − u(t)).

Hence,

d/dt [ln u*(t) − ln u(t)] ≤ −m(u*(t) − u(t)) ≤ −mδ(ln u*(t) − ln u(t)).
In view of the comparison theorem, we obtain

ln u*(t) − ln u(t) ≤ (ln u* − ln u0) e^{−mδt},

which implies that

u*(t) − u(t) ≤ M(ln u* − ln u0) e^{−mδt} ≤ M(ln M − ln δ) e^{−mδt}.

Similarly,

v*(t) − v(t) ≤ M(ln M − ln δ) e^{−mδt}.

Letting t → ∞ yields the desired result. The proof is complete. ✷
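The exponential contraction just established can be observed numerically: two solutions of (2.5) driven by the same telegraph path approach each other at rate at least mδ, so the initial value in [δ, M] is forgotten. A sketch with hypothetical logistic coefficients:

```python
import random

# Numerical illustration of Lemma 3.1: two solutions of (2.5) driven by
# the SAME sample path of the telegraph noise contract exponentially.
# All coefficients are hypothetical.
random.seed(3)

alpha, beta = 2.0, 3.0
a = {1: lambda u: 3.0 * (1.0 - u / 1.5),
     2: lambda u: 2.0 * (1.0 - u / 2.0)}

def gaps(u0, w0, t_end=20.0, h=1e-3):
    u, w, state = u0, w0, 1
    t, t_next = 0.0, random.expovariate(alpha)
    initial_gap = abs(u - w)
    while t < t_end:
        if t >= t_next:                 # same switching times for both
            state = 3 - state
            t_next = t + random.expovariate(alpha if state == 1 else beta)
        u += h * u * a[state](u)
        w += h * w * a[state](w)
        t += h
    return initial_gap, abs(u - w)

g0, gT = gaps(0.5, 2.5)
print(g0, gT)  # the final gap is orders of magnitude below the initial one
```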
Lemma 3.2. For any ε > 0, there exists a T2 = T2 (ε) > 0 and a subset A ∈ F∞ with P(A) > 1 −ε
such that ∀t > T2 and ω ∈ A,


| (1/t) ∫_0^t b(ξ(s), u*(s), 0) ds − λ2 | < ε,

| (1/t) ∫_0^t a(ξ(s), 0, v*(s)) ds − λ1 | < ε.   (3.2)

Proof. Since (ξ(t), v(t)) has a unique invariant distribution (whose density is (ν1(v), ν2(v))), it follows from the Birkhoff ergodic theorem that lim_{t→∞} (1/t) ∫_0^t a(ξ(s), 0, v(s)) ds = λ1 almost surely, given that v(0) admits (ν1(v), ν2(v)) as its density function. Moreover, it is not difficult to show that for any given initial value v(0) > 0 and any (ĩ, ṽ) ∈ M × (v1, v2), we have P{(ξ(t), v(t)) = (ĩ, ṽ) for some t ≥ 0} = 1. As a result, we obtain the strong law of large numbers; that is, for any initial value v(0) > 0, P{lim_{t→∞} (1/t) ∫_0^t a(ξ(s), 0, v(s)) ds = λ1} = 1 (see [1, p. 169] or [11] for more details). In particular, P{lim_{t→∞} (1/t) ∫_0^t a(ξ(s), 0, v*(s)) ds = λ1} = 1. Similarly, P{lim_{t→∞} (1/t) ∫_0^t b(ξ(s), u*(s), 0) ds = λ2} = 1. The rest of this proof is now straightforward. ✷
As a result of Lemmas 3.1 and 3.2, we obtain the following proposition.
Proposition 3.1. We can find a T3 = T3(ε) ≥ T2(ε) such that ∀t > T3 and δ ≤ u0, v0 ≤ M,

| (1/t) ∫_0^t b(ξ(s, ω), u(s, ω, u0), 0) ds − λ2 | < ε,

| (1/t) ∫_0^t a(ξ(s, ω), 0, v(s, ω, v0)) ds − λ1 | < ε,   (3.3)

for almost all ω ∈ A, where A is the set mentioned in Lemma 3.2.
Let L = max{ |∂ai/∂x (x, 0)|, |∂bi/∂y (0, y)| : 0 ≤ x, y ≤ M, i ∈ M } > 0. We have the following lemma.

Lemma 3.3. For ε > 0, we can find a γ = γ(ε) ∈ (0, δ] (not depending on (x0, y0) ∈ [0, M]² \ [0, δ]²) such that:

• If y(s, ω, x0, y0) < γ ∀0 ≤ s ≤ t, then

y(s, ω, x0, y0) + |u(s, ω, x0) − x(s, ω, x0, y0)| < ε/L   ∀0 ≤ s ≤ t.

• Similarly, if x(s, ω, x0, y0) < γ ∀0 ≤ s ≤ t, then

x(s, ω, x0, y0) + |v(s, ω, y0) − y(s, ω, x0, y0)| < ε/L   ∀0 ≤ s ≤ t.



Proof. For any sufficiently small ε1 > 0, let u_{ε1}(t, ω, u0) be the solution to

u̇_{ε1}(t) = u_{ε1}(t)[ a(ξ(t), u_{ε1}(t), 0) − ε1 ]

starting from u0. We prove that for almost all ω ∈ Ω, 0 ≤ u(t, ω, u0) − u_{ε1}(t, ω, u0) ≤ Mε1/(mδ) ∀u0 ∈ [δ, M], t ≥ 0.
First, using the comparison theorem, it is clear that 0 ≤ u(t, ω, u0) − u_{ε1}(t, ω, u0) for all t ≥ 0. Moreover,

d/dt [ln u(t, ω, u0) − ln u_{ε1}(t, ω, u0)] = a(ξ(t, ω), u(t, ω, u0), 0) − a(ξ(t, ω), u_{ε1}(t, ω, u0), 0) + ε1
≤ −m(u(t, ω, u0) − u_{ε1}(t, ω, u0)) + ε1
≤ −mδ(ln u(t, ω, u0) − ln u_{ε1}(t, ω, u0)) + ε1.   (3.4)

Hence, in view of the comparison theorem,

u(t, ω, u0) − u_{ε1}(t, ω, u0) ≤ M[ln u(t, ω, u0) − ln u_{ε1}(t, ω, u0)] ≤ (Mε1/(mδ))(1 − e^{−mδt}) ≤ Mε1/(mδ)   ∀t ≥ 0.

By the continuity of ai(x, y), we can find a γ = γ(ε1) such that whenever 0 ≤ y(s, ω) ≤ γ and 0 ≤ x(s, ω) ≤ M,

a(ξ(s, ω), x(s, ω), 0) − ε1 ≤ ẋ(s, ω)/x(s, ω) = a(ξ(s, ω), x(s, ω), y(s, ω)) ≤ a(ξ(s, ω), x(s, ω), 0) + ε1.

By the comparison theorem, x(s, ω, x0) ≥ u_{ε1}(s, ω, x0) ∀0 ≤ s ≤ t, which yields u(s, ω, x0) − x(s, ω, x0) ≤ Mε1/(mδ) ∀0 ≤ s ≤ t. Analogously, x(s, ω, x0) − u(s, ω, x0) ≤ Mε1/(mδ) ∀0 ≤ s ≤ t. Choosing suitable ε1 = ε1(ε) and γ such that γ + Mε1/(mδ) < ε/L, we obtain the claim. The proof of Lemma 3.3 is complete. ✷
Now we are in a position to prove Theorem 3.1.

Proof of Theorem 3.1. To simplify the notation, denote

z0 = (x0, y0)   and   z(t, ω, z0) = (x(t, ω, x0, y0), y(t, ω, x0, y0)).   (3.5)

Since ξ(t + s, ω) = ξ(t, θ^s ω) ∀t, s > 0, we have

z(t + s, ω, z0) = z(t, θ^s ω, z(s, ω, z0))   ∀t, s > 0.   (3.6)



Fix a T > T3 such that (1/T)(ln M − ln δ) < ε. Put χn(ω) = 1_A(θ^{nT}ω), where A is as in Lemma 3.2 and 1_A(·) is the indicator function. {χn} is obviously a stationary process since θ^T is a measure-preserving transformation (see [9, Section 16.4]). We now prove that θ^T is ergodic. Since F∞ ⊂ F and A ∈ F∞, where F∞ = σ({Ft : t ≥ 0}), χn is F∞-measurable. It thus suffices to prove that θ^T is ergodic with respect to (Ω, F∞, P), that is, there is no set B ∈ F∞ satisfying 0 < P(B ∩ θ^{−T}B) = P(B) < 1. Suppose that there is a set B ∈ F∞ satisfying this almost-invariance property. Since F∞ = σ({Ft : t ≥ 0}), for any ε′ > 0, we can find a set B′ ∈ F_{t′} for some t′ such that P(B △ B′) < ε′, where B △ B′ = (B \ B′) ∪ (B′ \ B).
Let M_{t′} be the space of functions from [0, t′] to M and B_{t′} the cylindrical σ-algebra on M_{t′}. Since F_{t′} is the σ-algebra generated by ξ(t), t ∈ [0, t′], and the P-null sets, we can choose B′ to be of the form B′ = {ξ_{t′}(·) ∈ C′} for some C′ ∈ B_{t′}, where ξ_{t′}(h + ·) denotes the trajectory of ξ(·) in [h, h + t′] for each h ≥ 0, that is, ξ_{t′}(h + t) = ξ(h + t) ∀t ∈ [0, t′]. Let n0 be so large that n0T > t′ and that |P{ξ(n0T − t′) = i | ξ(0) = j} − pi| < ε′ ∀i, j ∈ M. We have P(B′ ∩ θ^{−n0T}B′) = P{ξ_{t′}(·) ∈ C′, ξ_{t′}(n0T + ·) ∈ C′}. Using the Markov property, we deduce that

P{ξ_{t′}(·) ∈ C′, ξ_{t′}(n0T + ·) ∈ C′}
= P{ξ_{t′}(·) ∈ C′} Σ_{i,j∈M} P{ξ_{t′}(n0T + ·) ∈ C′ | ξ(n0T) = i} P{ξ(n0T) = i | ξ(t′) = j} P{ξ(t′) = j | ξ_{t′} ∈ C′}
≤ P{ξ_{t′}(·) ∈ C′} Σ_{i∈M} P{ξ_{t′}(n0T + ·) ∈ C′ | ξ(n0T) = i}(pi + ε′)
≤ P{ξ_{t′}(·) ∈ C′} Σ_{i∈M} P{ξ_{t′}(·) ∈ C′ | ξ(0) = i} pi + 2ε′
= (P{ξ_{t′}(·) ∈ C′})² + 2ε′   (since P{ξ(0) = i} = pi)
≤ (P(B) + ε′)² + 2ε′.

Since θ^t preserves the measure and P(B △ B′) < ε′, it is easy to see that

P(B ∩ θ^{−n0T}B) ≤ P(B′ ∩ θ^{−n0T}B′) + 2ε′ ≤ (P(B) + ε′)² + 4ε′ < P(B)

for sufficiently small ε′. On the other hand, it follows from the property P(B ∩ θ^{−T}B) = P(B) that P(B ∩ θ^{−nT}B) = P(B) ∀n ∈ N. This contradiction means that the transformation θ^T is ergodic. In view of the Birkhoff ergodic theorem (see [9, Theorem 16.14]),


lim_{k→∞} (1/k) Σ_{n=0}^{k−1} χn = P(A)   a.s.   (3.7)



Let

χn1(ω) = 1 if χn(ω) = 1 and y(t, ω, z0) < γ ∀nT ≤ t ≤ (n + 1)T; χn1(ω) = 0 otherwise,
χn2(ω) = 1 if χn(ω) = 1 and x(t, ω, z0) < γ ∀nT ≤ t ≤ (n + 1)T; χn2(ω) = 0 otherwise,
χn3(ω) = 1 if χn(ω) = 1 and there exists nT ≤ t ≤ (n + 1)T with x(t, ω, z0), y(t, ω, z0) > γ; χn3(ω) = 0 otherwise.
It follows from Remark 3.1 that χn = χn1 + χn2 + χn3. For convenience, put χn4 = 1 − χn. By (3.6), if χn1 = 1 then y(t + nT, ω, z0) < γ ∀0 ≤ t ≤ T (or y(t, θ^{nT}ω, z(nT, ω, z0)) < γ ∀0 ≤ t ≤ T), which is combined with Remark 3.1 to obtain δ ≤ x(nT, ω, z0) ≤ M. Moreover, χn1 = 1 implies χn(ω) = 1, i.e., θ^{nT}ω ∈ A. Thus, it follows from Proposition 3.1 and Lemma 3.3 that

| (1/T) ∫_0^T b(ξ(t, θ^{nT}ω), u(t, θ^{nT}ω, x(nT, ω, z0)), 0) dt − λ2 | < ε,   (3.8)

and that

(1/T) ∫_0^T y(t, θ^{nT}ω, z(nT, ω, z0)) dt + (1/T) ∫_0^T |u(t, θ^{nT}ω, x(nT, ω, z0)) − x(t, θ^{nT}ω, z(nT, ω, z0))| dt < ε/L.   (3.9)

Using (3.6) again and the inequality |bi(x, y) − bi(u, 0)| ≤ L(|x − u| + y) for all x, y, u ∈ (0, M], we have

(1/T)[ln y((n + 1)T, ω, z0) − ln y(nT, ω, z0)]
= (1/T) ∫_{nT}^{(n+1)T} b(ξ(t, ω), z(t, ω, z0)) dt = (1/T) ∫_0^T b(ξ(t + nT, ω), z(t + nT, ω, z0)) dt
= (1/T) ∫_0^T b(ξ(t, θ^{nT}ω), z(t, θ^{nT}ω, z(nT, ω, z0))) dt
= (1/T) ∫_0^T b(ξ(t, θ^{nT}ω), u(t, θ^{nT}ω, x(nT, ω, z0)), 0) dt
  + (1/T) ∫_0^T [ b(ξ(t, θ^{nT}ω), z(t, θ^{nT}ω, z(nT, ω, z0))) − b(ξ(t, θ^{nT}ω), u(t, θ^{nT}ω, x(nT, ω, z0)), 0) ] dt
≥ (1/T) ∫_0^T b(ξ(t, θ^{nT}ω), u(t, θ^{nT}ω, x(nT, ω, z0)), 0) dt − (L/T) ∫_0^T y(t, θ^{nT}ω, z(nT, ω, z0)) dt
  − (L/T) ∫_0^T |u(t, θ^{nT}ω, x(nT, ω, z0)) − x(t, θ^{nT}ω, z(nT, ω, z0))| dt.   (3.10)

Applying inequalities (3.8) and (3.9) to (3.10), we claim that if χn1(ω) = 1 then

(1/T)[ln y(nT + T, ω, z0) − ln y(nT, ω, z0)] ≥ λ2 − 2ε.   (3.11)

In case χn2 (ω) = 1, we have x(t, ω, x0 , y0 ) < γ ∀t ∈ [nT , (n + 1)T ]. Therefore, by Remark 3.1, we derive δ ≤ y(nT , ω, x0 , y0 ), y(nT + T , ω, x0 , y0 ) ≤ M. As a result,
1
1
ln y(nT + T , ω, x0 , y0 ) − ln y(nT , ω, x0 , y0 ) ≥ (ln δ − ln M) > −ε.
T
T

(3.12)

Let H = max{ |ai(x, y)|, |bi(x, y)| : 0 ≤ x, y ≤ M, i ∈ M }. If χn3(ω) = 1 or χn4(ω) = 1, then

(1/T)[ln y(nT + T, ω, z0) − ln y(nT, ω, z0)] = (1/T) ∫_{nT}^{nT+T} b(ξ(t, ω), z(t, ω, z0)) dt ≥ −H.   (3.13)

Applying (3.11), (3.12), and (3.13) to the identity

(1/T)[ln y(nT + T, ω, x0, y0) − ln y(nT, ω, x0, y0)] = (1/T) Σ_{i=1}^{4} [ln y(nT + T, ω, x0, y0) − ln y(nT, ω, x0, y0)] χni,

we have

(1/T)[ln y(nT + T, ω, z0) − ln y(nT, ω, z0)] ≥ (λ2 − 2ε)χn1(ω) − εχn2(ω) − H(χn3(ω) + χn4(ω)).   (3.14)

Summing over n from 0 to k − 1 and dividing by k, we obtain

(1/(kT))[ln y(kT, ω, z0) − ln y0] ≥ (1/k)[ (λ2 − 2ε) Σ_{n=0}^{k−1} χn1(ω) − ε Σ_{n=0}^{k−1} χn2(ω) − H Σ_{n=0}^{k−1} (χn3(ω) + χn4(ω)) ].   (3.15)

Consequently,

lim sup_{k→∞} (1/k)[ (λ2 − 2ε) Σ_{n=0}^{k−1} χn1(ω) − ε Σ_{n=0}^{k−1} χn2(ω) − H Σ_{n=0}^{k−1} (χn3(ω) + χn4(ω)) ] ≤ 0   a.s.   (3.16)

Similarly, we can obtain

lim sup_{k→∞} (1/k)[ (λ1 − 2ε) Σ_{n=0}^{k−1} χn2(ω) − ε Σ_{n=0}^{k−1} χn1(ω) − H Σ_{n=0}^{k−1} (χn3(ω) + χn4(ω)) ] ≤ 0   a.s.   (3.17)

Let λ = min{λ1, λ2}. Adding (3.16) and (3.17) side by side yields

lim sup_{k→∞} (1/k)[ (λ − 3ε) Σ_{n=0}^{k−1} (χn1(ω) + χn2(ω)) − H Σ_{n=0}^{k−1} (χn3(ω) + χn4(ω)) ] ≤ 0   a.s.   (3.18)

Moreover, using (3.7), we have

−lim_{k→∞} (1/k) Σ_{n=0}^{k−1} (χn1(ω) + χn2(ω) + χn3(ω)) = −lim_{k→∞} (1/k) Σ_{n=0}^{k−1} χn(ω) = −P(A) ≤ −(1 − ε),   (3.19)

and

lim_{k→∞} (1/k) Σ_{n=0}^{k−1} χn4(ω) = 1 − P(A) ≤ ε.   (3.20)

Multiplying both sides of (3.19) by (λ − 3ε), multiplying both sides of (3.20) by H, and then adding to (3.18), we have

lim sup_{k→∞} (1/k)[ −(H + λ − 3ε) Σ_{n=0}^{k−1} χn3(ω) ] ≤ −(λ − 3ε)(1 − ε) + Hε   a.s.,

or, for ε sufficiently small,

lim inf_{k→∞} (1/k) Σ_{n=0}^{k−1} χn3(ω) ≥ [ (λ − 3ε)(1 − ε) − Hε ] / (H + λ − 3ε) := m > 0   a.s.   (3.21)



We can find a γ′ = γ′(γ, T) > 0 such that z(t, ω, z) ∈ [γ′, M]² ∀t ∈ [0, T], z ∈ (0, M]², provided that there is an s ∈ [0, T] such that z(s, ω, z) ∈ [γ, M]². As a result, if χn3 = 1, then

∫_{nT}^{nT+T} 1_{{z(t, ω, z0) ∈ [γ′, M]²}} dt = T.   (3.22)

It follows from (3.21) and (3.22) that

lim inf_{t→∞} (1/t) ∫_0^t 1_{{z(s, ω, z0) ∈ [γ′, M]²}} ds ≥ m > 0   a.s.

In view of Fatou's lemma,

lim inf_{t→∞} (1/t) ∫_0^t P{z(s, ω, z0) ∈ [γ′, M]²} ds ≥ m > 0.   (3.23)

Since the process (ξ(t), z(t)) = (ξ(t), x(t), y(t)) has the Feller property, (3.23) guarantees the existence of an invariant probability measure on M × R+^{2,∘}; see [12] or [16]. ✷
4. Ω-limit set and stability
We denote by π¹_t(u, v) = (x1(t, u, v), y1(t, u, v)) (resp. π²_t(u, v) = (x2(t, u, v), y2(t, u, v))) the solution of Eq. (2.3) (resp. (2.4)) with initial value (u, v). The Ω-limit set of the trajectory starting from an initial value (x0, y0) is defined by

Ω(x0, y0, ω) = ⋂_{T>0} cl{ (x(t, ω, x0, y0), y(t, ω, x0, y0)) : t > T },

where cl denotes closure.


We use the notation “Ω-limit set” in lieu of the usual one “ω-limit set” in dynamic systems to
avoid notational conflict. We still use ω as an element of the probability space. For simplicity,
we set
Xn = x(τn, x0, y0),   Yn = y(τn, x0, y0),
F0n = σ(τk : k ≤ n),   Fn∞ = σ(τk − τn : k > n).

It is clear that (Xn , Yn ) is F0n -measurable and if ξ(0) is given then F0n is independent of Fn∞ .
The following lemma can be proved using arguments similar to that of [3, Theorem 2.2] by noting
that the Lebesgue measure on R+ is absolutely continuous w.r.t. any exponential distribution.
Lemma 4.1. Let {ηn}_{n=1}^{∞} be a sequence of strictly increasing finite stopping times with respect to the filtration {F0n}. Suppose that A is a Borel subset of R+ with positive Lebesgue measure. Then the events An = {σ_{ηn+1} ∈ A} occur infinitely often a.s., that is,



P( ⋂_{k=1}^{∞} ⋃_{i=k}^{∞} Ai ) = 1.

Lemma 4.2. If λ1 and λ2 are positive, we can find an h¯ > 0 such that with probability 1 the
events {h¯ ≤ X2k , Y2k ≤ M} as well as {h¯ ≤ X2k+1 , Y2k+1 ≤ M} occur infinitely often.
Proof. It is clear that (ξ(t), x(t), y(t)) is a Feller–Markov process with respect to the filtration {Ft}; hence it is also a strong Markov process. For a stopping time ζ, the σ-algebra at ζ is Fζ = {A ∈ F∞ : A ∩ {ζ ≤ t} ∈ Ft ∀t ∈ R+}. Fix T4 > 0. By [3, Theorem 2.1], we can define the almost surely finite stopping times

η1 = inf{t > 0 : x(t) ≥ δ, y(t) ≥ δ},
η2 = inf{t > η1 + T4 : x(t) ≥ δ, y(t) ≥ δ},
. . .
ηn = inf{t > ηn−1 + T4 : x(t) ≥ δ, y(t) ≥ δ}.
For a stopping time ζ, we write τ(ζ) for the first jump time of ξ(t) after ζ, i.e., τ(ζ) = inf{t > ζ : ξ(t) ≠ ξ(ζ)}. Let σ̂(ζ) = τ(ζ) − ζ and Ak = {σ̂(ηk) < T4}, k ∈ N. Then Ak+1 is in the σ-algebra generated by {ξ(ηk+1 + s) : s ≥ 0} while Ak ∈ F_{ηk+1}. Therefore, in view of [4, Theorem 5, p. 59] and an analogue of [4, Theorem 1.d), p. 36] for strong Markov processes, we obtain for Ack := Ω \ Ak, k ∈ N, that

P(Ack | ξ(ηk) = i) = P(σ̂(0) > T4 | ξ(0) = i) ≤ p := max_{i∈M} P(σ̂(0) > T4 | ξ(0) = i) < 1   (4.1)

and that

P(Ack+1 ∩ Ack | ξ(ηk+1) = i) = P(Ack+1 | ξ(ηk+1) = i) P(Ack | ξ(ηk+1) = i).   (4.2)

We deduce from (4.1) and (4.2) that P(Ack) ≤ p and P(Ack ∩ Ack+1) ≤ p² ∀k ∈ N. Put h̄ = min{xi(t, u, v), yi(t, u, v) : t ∈ [0, T4], ĥ ≤ u, v ≤ M, i ∈ M}. Obviously, 0 < h̄ ≤ ĥ. Employing Lemma 4.1, we can prove that the events {ĥ ≤ Xn, Yn ≤ M, h̄ ≤ Xn+1, Yn+1 ≤ M} ⊃ {ĥ ≤ Xn, Yn ≤ M, σn+1 ≤ T4} occur infinitely often. The assertion of this lemma follows. ✷
In our previous work, to describe the Ω-limit set of solutions to (2.2) using [3, Theorem 2.2], we needed to suppose that one of the two systems (2.3) and (2.4) has a globally stable positive equilibrium while the other is globally stable, bistable, or extinctive (see [3, Assumptions 2.3–2.5]). With Lemma 4.2, we can obtain the same conclusion under only the assumption that either (2.3) or (2.4) has a globally stable positive equilibrium.
Theorem 4.1. Suppose that Assumptions 2.1 and 2.2 are satisfied, that λ1 > 0, λ2 > 0, and that system (2.3) has a globally stable positive equilibrium (x_1^*, y_1^*). Let

S = {(x, y) = π^{(n)}_{tn} ◦ · · · ◦ π^{(1)}_{t1}(x_1^*, y_1^*) : 0 < t1, t2, . . . , tn; n ∈ N},   (4.3)

where (k) = 1 if k is even, otherwise (k) = 2. Then

a) With S̄ denoting the closure of S, S̄ is a subset of the Ω-limit set Ω(x0, y0, ω) with probability 1.

b) If there exists a t0 > 0 such that the point (x^0, y^0) = π^2_{t0}(x_1^*, y_1^*) satisfies the condition

det[ a1(x^0, y^0)  a2(x^0, y^0) ; b1(x^0, y^0)  b2(x^0, y^0) ] ≠ 0,   (4.4)

then S̄ absorbs all positive solutions in the sense that for any initial value (x0, y0) ∈ R+^{2,◦}, the value γ̃(ω) = inf{t > 0 : (x(s, ω, x0, y0), y(s, ω, x0, y0)) ∈ S̄ ∀s > t} is finite outside a P-null set. Consequently, S̄ is the Ω-limit set Ω(x0, y0, ω) for any (x0, y0) in R+^{2,◦} with probability 1.
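The set S in (4.3) is generated by alternately composing the two deterministic flows, starting from the positive equilibrium of (2.3). The sketch below is entirely our own illustration: the Lotka–Volterra coefficients are hypothetical, chosen so that regime 1 has a positive equilibrium as Theorem 4.1 requires, and the flows π^1, π^2 are approximated by a hand-rolled RK4 integrator.

```python
# Hypothetical coefficients (a, b, c, d, e, f) for two competitive
# Lotka-Volterra regimes; regime 1 has weak competition (b*f > c*e),
# so it has a globally stable positive equilibrium as Theorem 4.1 requires.
COEF = {1: (4.0, 2.0, 1.0, 4.0, 1.0, 2.0), 2: (3.0, 3.0, 1.0, 4.0, 2.0, 1.0)}

def field(state, z):
    a, b, c, d, e, f = COEF[state]
    x, y = z
    return (x * (a - b * x - c * y), y * (d - e * x - f * y))

def flow(state, t, z, h=1e-3):
    """Approximate pi^state_t(z) with classical RK4 steps of size ~h."""
    steps = max(1, round(t / h))
    h = t / steps
    for _ in range(steps):
        k1 = field(state, z)
        k2 = field(state, tuple(p + 0.5 * h * k for p, k in zip(z, k1)))
        k3 = field(state, tuple(p + 0.5 * h * k for p, k in zip(z, k2)))
        k4 = field(state, tuple(p + h * k for p, k in zip(z, k3)))
        z = tuple(p + h / 6 * (u + 2 * v + 2 * w + s)
                  for p, u, v, w, s in zip(z, k1, k2, k3, k4))
    return z

# Positive equilibrium of regime 1: solve b*x + c*y = a, e*x + f*y = d.
a, b, c, d, e, f = COEF[1]
det = b * f - c * e
z_star = ((a * f - c * d) / det, (b * d - a * e) / det)  # = (4/3, 4/3)

# Global stability: flowing regime 1 from elsewhere approaches z_star ...
z_t = flow(1, 10.0, (2.0, 1.0))
# ... and one point of S is obtained by composing the flows from z_star.
p = flow(1, 0.7, flow(2, 0.5, z_star))
print(z_star, z_t, p)
```

Composing more alternating flows π^1, π^2 with varying times sweeps out the set S whose closure, by Theorem 4.1, is the Ω-limit set.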

Moreover, we can characterize the invariant probability measure by the following theorem.

Theorem 4.2. Suppose that Assumptions 2.1 and 2.2 are satisfied, that λ1 > 0, λ2 > 0, and that system (2.3) has a globally stable positive equilibrium (x_1^*, y_1^*). Assume further that there exists a t0 > 0 such that the point (x^0, y^0) = π^2_{t0}(x_1^*, y_1^*) satisfies (4.4). Then the stationary distribution π^* has a density f^* with respect to the Lebesgue measure m on M × R+^{2,◦}, and for any initial distribution, the distribution of (ξ(t), x(t), y(t)) converges to π^* in total variation.
Before giving the proof, we recall some concepts from [11] to be used to prove the convergence in total variation. Let X be a locally compact, separable metric space and B(X) the Borel σ-algebra on X. Let Φ = {Φt : t ≥ 0} be a homogeneous Markov process with state space (X, B(X)) and transition semigroup P(t, x, ·). We can consider the process Φ on a probability space (Ω, F, {Px}x∈X) where the measure Px satisfies Px(Φt ∈ A) = P(t, x, A) for all x ∈ X, t ≥ 0, A ∈ B(X). Suppose further that Φ is a Borel right process. Note that if Φ is a Feller process then it is a Borel right process; see [15] for the definition of a Borel right process and the aforesaid implication. For B ∈ B(X), define

τB = inf{t ≥ 0 : Φt ∈ B},   ηB = ∫_0^∞ 1_{Φt ∈ B} dt.
Φ is said to be Harris recurrent if either of the following equivalent conditions is satisfied.

(H1) There is a nontrivial σ-finite measure ϕ1 such that Px(τB < ∞) = 1 for all x ∈ X whenever ϕ1(B) > 0.
(H2) There is a nontrivial σ-finite measure ϕ2 such that Px(ηB = ∞) = 1 for all x ∈ X whenever ϕ2(B) > 0.

Φ is positive Harris recurrent if it is Harris recurrent and has an invariant probability measure.
For a probability measure a on R+, we define a sampled Markov transition function Ka of Φ by

Ka(x, B) = ∫_0^∞ P(t, x, B) a(dt).

Ka is said to possess a nowhere-trivial continuous component if there is a kernel T : (X, B(X)) → R+ satisfying
• For each B ∈ B(X), the function T(·, B) is lower semi-continuous.
• For each x ∈ X, T(x, ·) is a nontrivial measure satisfying Ka(x, B) ≥ T(x, B) for all B ∈ B(X).

Φ is called a T-process if for some probability measure a, the corresponding transition function Ka admits a nowhere-trivial continuous component.
Proof of Theorem 4.2. It suffices to prove the convergence in total variation since the other assertions of this theorem follow from Theorem 3.1 and [3, Theorem 3.1]. To proceed, we will prove that our process is a positive Harris recurrent T-process. Let P(t, i, z, E) be the transition probability function of (ξ(t), x(t), y(t)), where t ∈ R+, (i, z) ∈ M × R+^{2,◦}, and E belongs to the Borel σ-algebra B(M × R+^{2,◦}). Denote z_1^* = (x_1^*, y_1^*) and z_0 = (x^0, y^0). By the existence and continuous dependence of solutions on initial conditions, there exists a small positive number c such that ϕ(s, t) = π^1_s ◦ π^2_t(z_0) is defined and continuously differentiable in (−3c, 3c)^2. Since

det(∂ϕ/∂s, ∂ϕ/∂t)|_(0,0) = x^0 y^0 det[ a1(x^0, y^0)  a2(x^0, y^0) ; b1(x^0, y^0)  b2(x^0, y^0) ] ≠ 0,

we can suppose that det(∂ϕ/∂s, ∂ϕ/∂t) ≠ 0 for all (s, t) ∈ (−3c, 3c)^2. Let K = t0 + 2c. For z ∈ R+^{2,◦}, we consider the function ψ_z(s1, s2) = π^1_{K−s1−s2} ◦ π^2_{s2} ◦ π^1_{s1}(z) in the domain W := {(s1, s2) : 0 < s1, t0 − c < s2, s1 + s2 < K}. In particular, we have ψ_{z_1^*}(s1, s2) = π^1_{K−s1−s2} ◦ π^2_{s2}(z_1^*) = ϕ(K − s1 − s2, s2 − t0). Therefore, det(∂ψ_{z_1^*}/∂s1, ∂ψ_{z_1^*}/∂s2)|_(c,t0) ≠ 0. Thus, there exists an ε > 0 such that det(∂ψ_z/∂s1, ∂ψ_z/∂s2)|_(c,t0) ≠ 0 for all z ∈ U_ε^*, the ε-neighborhood of z_1^*. For each z ∈ U_ε^*, there exists a neighborhood W_z of (c, t0) such that ψ_z is a diffeomorphism between W_z and ψ_z(W_z). Since ψ_z is also continuously differentiable with respect to z, by modifying slightly the inverse function theorem, when ε is sufficiently small, we can choose W_z ⊂ W such that W̃ = ψ_z(W_z) is the same for all z ∈ U_ε^* and that

d1 := inf{|J_z(w)| : z ∈ U_ε^*, w ∈ W̃} > 0,

where J_z(w) is the determinant of the Jacobian matrix of ψ_z^{−1} at w. Let f(s, t) be the density function of (σ1, σ2) given that ξ(0) = 1. Recall that given ξ(0) = 1, σ1 and σ2 are independent exponential random variables, so f(s, t) is a smooth function and d2 := inf_{(s,t)∈W} f(s, t) > 0. Let d3 := P{σ3 > K | ξ(0) = 1} > 0. It is easy to see that for any z ∈ U_ε^* and any Borel set B ⊂ W̃,

P(K, 1, z, {1} × B) ≥ P((σ1, σ2) ∈ ψ_z^{−1}(B), σ3 > K | ξ(0) = 1)
≥ d3 ∫_{W_z} f(s, t) 1_{ψ_z^{−1}(B)}(s, t) ds dt
≥ d2 d3 ∫_{W_z} 1_{ψ_z^{−1}(B)}(s, t) ds dt
≥ d2 d3 ∫_{W̃} 1_B(w1, w2) |J_z(w1, w2)| dw1 dw2
≥ d1 d2 d3 m̃(B) = d4 m({1} × B),   (4.5)

where m̃(·) is the Lebesgue measure on R+^{2,◦} and d4 is some positive constant. In the proof of [3, Theorem 2.2], we claimed that for any initial value (i, z0) = (i, x0, y0) ∈ M × R+^{2,◦}, there is, with probability 1, a sequence sn(ω) ↑ ∞ such that ξ(sn) = 1 and (x(sn), y(sn)) ∈ U_ε^*. Combining this property with (4.5), it follows from the strong Markov property of (ξ(t), x(t), y(t)) that if B ⊂ W̃ and m({1} × B) > 0, then (ξ(t), x(t), y(t)) enters {1} × B at some moment t = t(ω) almost surely. This means that condition (H1) is satisfied with the measure ϕ1(E) = m(E ∩ ({1} × W̃)) for E ∈ B(M × R+^{2,◦}). Thus, (ξ(t), x(t), y(t)) is positive Harris recurrent.

Since z_1^* is a globally asymptotically stable equilibrium of (2.3), for each k ∈ N, there is some nk ∈ N such that π^1_{nk K}(z) ∈ U_ε^* and π^1_{nk K − s} ◦ π^2_s(z) ∈ U_ε^* for every z ∈ [k^{−1}, k]^2 and 0 ≤ s ≤ K. We can choose {nk} to be an increasing sequence. Putting pk = min{P{σ1 > nk K | ξ(0) = 1}, P{σ1 < K, σ1 + σ2 ≥ nk K | ξ(0) = 2}}, we have

P(nk K, i, z, {1} × U_ε^*) ≥ pk   ∀i ∈ M, z ∈ [k^{−1}, k]^2.   (4.6)

Exploiting the Chapman–Kolmogorov equation, it follows from (4.5) and (4.6) that

P((nk + 1)K, i, z, {1} × B) ≥ pk d4 m({1} × B)   ∀i ∈ M, for any Borel set B ⊂ W̃.   (4.7)

Let a be the probability measure on R+ given by a({nK}) = 2^{−n}, n ∈ N, and consider the corresponding sampled Markov transition function Ka(i, z, E) = Σ_{n=1}^∞ 2^{−n} P(nK, i, z, E). Define the kernel T : (M × R+^{2,◦}, B(M × R+^{2,◦})) → R+ by

T(i, z, E) = 2^{−n_{k+1}−1} p_{k+1} d4 m(E ∩ ({1} × W̃))   if z ∈ [(k + 1)^{−1}, k + 1]^2 \ [k^{−1}, k]^2, k ∈ N.

Hence, it follows from (4.7) that Ka(i, z, E) ≥ T(i, z, E) for all E ∈ B(M × R+^{2,◦}). Moreover, for each E ∈ B(M × R+^{2,◦}), T(i, z, E) is a lower semi-continuous function. As a result, (ξ(t), x(t), y(t)) is a T-process. Applying [11, Theorems 3.2(ii) and 8.1(ii)], we conclude that ‖P(t, i, z, ·) − π^*(·)‖ → 0 for all (i, z) ∈ M × R+^{2,◦}, where ‖·‖ is the total variation norm. (Note that for a positive Harris recurrent process, the invariant transition function Π(x, ·) in [11, Theorem 8.1] turns out to be a unique invariant probability measure. For our process, it is π^*.) ✷

5. Dynamics of the system when each species dominates one state
This section considers the case where one of the two species dominates in one (discrete) state (e.g., in the dry season) but becomes weaker and loses its dominance when the environment changes (e.g., in the rainy season). In the mathematical model, we suppose that the first species tends to vanish in one state while the second species eventually dies out in the other state. The interesting question is: what is the behavior of the system if the states switch randomly from one to the other? We describe this situation by the following assumption.
Assumption 5.1. (0, v1) (resp. (u2, 0)) is a saddle point of system (2.3) (resp. (2.4)) while (u1, 0) (resp. (0, v2)) is stable. Moreover, all positive solutions to (2.3) (resp. (2.4)) converge to the stable equilibrium (u1, 0) (resp. (0, v2)).
Remark 5.1. (0, v1 ) (resp. (u2 , 0)) is a saddle point of system (2.3) (resp. (2.4)) if and only if
a1 (0, v1 ) > 0 (resp. b2 (u2 , 0) > 0).
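For the Lotka–Volterra form (5.5), the criterion of Remark 5.1 can be checked by direct computation, because the Jacobian at a boundary equilibrium (0, v) is triangular and its eigenvalues are the diagonal entries. The sketch below is our own check, using the coefficients of Example 5.4 purely as sample numbers; the remark states the criterion for (0, v1) of (2.3), and the same computation applies to the analogous boundary equilibria of either subsystem.

```python
# Boundary-equilibrium saddle test for the Lotka-Volterra subsystems of (5.5).
# Coefficients (a, b, c, d, e, f) of the two states of Example 5.4, used
# purely as sample numbers for this illustration.
COEF = {1: (6, 3, 2, 12, 4, 3), 2: (12, 3.6, 2.2, 9.6, 4, 2)}

def saddle_test(state):
    a, b, c, d, e, f = COEF[state]
    v = d / f                 # boundary equilibrium (0, v) on the y-axis
    growth_x = a - c * v      # per-capita growth of the absent species x at (0, v)
    # Jacobian of (x(a-bx-cy), y(d-ex-fy)) at (0, v) is triangular, so the
    # eigenvalues are the diagonal entries a - c*v and d - 2*f*v = -d.
    lam1, lam2 = a - c * v, d - 2 * f * v
    return growth_x, lam1 * lam2 < 0   # saddle <=> eigenvalues of opposite sign

for state in (1, 2):
    growth, is_saddle = saddle_test(state)
    print(state, growth, is_saddle)
    assert is_saddle == (growth > 0)   # the equivalence of Remark 5.1
```

With these numbers, state 1 gives a per-capita growth of −2 < 0 at (0, v1) (no saddle: that equilibrium is stable there), while state 2 gives 1.44 > 0 at (0, v2) (a saddle), so the equivalence holds in both directions.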
By the center manifold theorem and the attractivity of (u1, 0) and (0, v2), there exists a point (x1, y1) such that the solution of (2.3) starting at (x1, y1) can be extended to the whole real line and

lim_{t→∞} π^1_t(x1, y1) = (u1, 0)   and   lim_{t→−∞} π^1_t(x1, y1) = (0, v1).   (5.1)

Similarly, there exists a point (x2, y2) such that

lim_{t→∞} π^2_t(x2, y2) = (0, v2)   and   lim_{t→−∞} π^2_t(x2, y2) = (u2, 0).   (5.2)

Denote by Γ1 and Γ2 their orbits, respectively.
Let a, b (a > 0, b < 0) be the eigenvalues of the Jacobian matrix of (xa1(x, y), yb1(x, y))^T at the saddle point (0, v1). It follows from the Hartman–Grobman theorem and a suitable linear transformation of coordinates that, for some sufficiently small δ1 > 0, there exists a homeomorphism φ from Bδ1 = {(ς, ϑ) : ς^2 + ϑ^2 < δ1} onto an open neighborhood of (0, v1) such that

1. φ(0, 0) = (0, v1); φ(0, ϑ) lies on the y-axis for all |ϑ| < δ1; φ(ς, 0) ∈ Γ1 for all 0 < ς < δ1.
2. φ(ς, ϑ) ∈ R+^{2,◦} for all ς > 0 with (ς, ϑ) ∈ Bδ1.
3. If π(t, ς, ϑ) is a local solution to the system ς̇ = aς, ϑ̇ = bϑ (ς ≥ 0) satisfying π(t, ς, ϑ) ∈ Bδ1, then φ(π(t, ς, ϑ)) is a solution to (2.3).

Since lim_{t→−∞} π^1_t(x^1, y^1) = (0, v1) for all (x^1, y^1) ∈ Γ1 and φ(ς, 0) ∈ Γ1 for all 0 < ς < δ1, for any neighborhood V of a point (x^1, y^1) ∈ Γ1 there are 0 < ε < ς0 < δ1 such that π^1_t(x, y) eventually passes through V if (x, y) ∈ φ(Bε(ς0, 0)), where Bε(ς0, 0) = {(ς, ϑ) : ‖(ς, ϑ) − (ς0, 0)‖ < ε}. It is clear that for any ς0 ∈ (0, δ1) and 0 < ε < δ1 − ς0, the solution (ς(t), ϑ(t)) of the linearized system starting at (ς, ϑ) ∈ (0, ς0) × (−ε, ε) must visit Bε(ς0, 0) at some t > 0. Hence, for (x, y) ∈ Vς0,ε := φ((0, ς0) × (−ε, ε)), we have {π^1_t(x, y) : t ≥ 0} ∩ φ(Bε(ς0, 0)) ≠ ∅. Moreover, since π^1_t(0, v2) → (0, v1) as t → ∞, we can exploit the continuous dependence of solutions on initial values to show that {π^1_t(x, y) : t ≥ 0} ∩ Vς0,ε ≠ ∅ when (x, y) ∈ R+^{2,◦} is sufficiently close to (0, v2). Combining the above observations yields that there is a δ2 > 0 such that {π^1_t(x, y) : t ≥ 0} ∩ V ≠ ∅ if 0 < x < δ2 and |y − v2| ≤ δ2. This fact is illustrated in Fig. 1. Furthermore, if 0 < δ3 ≤ x ≤ δ2 and |y − v2| ≤ δ2, we can estimate an upper bound for the time at which π^1_t(x, y) enters V, using the compactness of {(x, y) : δ3 ≤ x ≤ δ2, |y − v2| ≤ δ2} and the continuous dependence of solutions on initial values again. To sum up, we obtain the following lemma.


Fig. 1. Saddle equilibrium for (2.3) and its linearized form.

Lemma 5.1. For any neighborhood V of (x^1, y^1) ∈ Γ1, there exists a δ2 > 0 such that

• For any (x, y) satisfying 0 < x < δ2 and v2 − δ2 < y < v2 + δ2, we have {π^1_t(x, y) : t ≥ 0} ∩ V ≠ ∅.
• For any 0 < δ3 < δ2, there is a t3 > 0 such that π^1_t(x, y) ∈ V for some t ∈ [0, t3], provided that δ3 < x < δ2 and v2 − δ2 < y < v2 + δ2.
For simplicity, in this section, we suppose that ξ(0) = 1.
Lemma 5.2. For any δ2 > 0, there is a 0 < δ3 < δ2 and infinitely many integers k = k(ω) such
that δ3 ≤ X2k ≤ δ2 and v2 − δ2 < Y2k < v2 + δ2 almost surely.
Proof. In view of Lemma 4.2, with probability 1 there are infinitely many positive integers k such that h̄ ≤ X2k+1, Y2k+1 ≤ M. Since (0, v2) is a stable equilibrium of the system (2.4) and it attracts all positive solutions, we can find a t4 > 0 such that ‖π^2_t(x, y) − (0, v2)‖ < δ2 for all t ≥ t4 and all (x, y) with h̄ ≤ x, y ≤ M. Obviously, δ3 := inf{x_2(t, x, y) : h̄ ≤ x, y ≤ M, t4 ≤ t ≤ 2t4} > 0. It follows from Lemmas 4.1 and 4.2 that the events {h̄ ≤ X2k+1, Y2k+1 ≤ M, t4 ≤ σ2k+2 < 2t4} occur infinitely often with probability 1. Note that whenever these events occur, we have δ3 ≤ X2k+2 ≤ δ2 and v2 − δ2 ≤ Y2k+2 ≤ v2 + δ2. The lemma is therefore proved. ✷
Lemma 5.3. Γ1, Γ2 ⊂ Ω(x0, y0, ω) a.s.

Proof. Let (x^1, y^1) ∈ Γ1 be arbitrary and let δ2, δ3 be determined as in Lemma 5.1 and Lemma 5.2, respectively. Define almost surely finite stopping times

ζ1 = inf{2k : δ3 ≤ X2k ≤ δ2, v2 − δ2 ≤ Y2k ≤ v2 + δ2},
ζ2 = inf{2k > ζ1 : δ3 ≤ X2k ≤ δ2, v2 − δ2 ≤ Y2k ≤ v2 + δ2},
. . .
ζn = inf{2k > ζ(n−1) : δ3 ≤ X2k ≤ δ2, v2 − δ2 ≤ Y2k ≤ v2 + δ2}.

Let t3 be defined as in Lemma 5.1 and put Ck = {σ_{ζk+1} > t3}. Applying Lemma 4.1 again, we conclude that there are infinitely many k ∈ N such that σ_{ζk+1} > t3. It follows from Lemma 5.1 that if Ck occurs, there is an sk ∈ [τ_{ζk}, τ_{ζk+1}] such that (x(sk), y(sk)) = π^1_{sk−τ_{ζk}}(X_{ζk}, Y_{ζk}) ∈ V. Since lim_{k→∞} τ_{ζk} = ∞, we have sk → ∞ as k → ∞. As a result, (x^1, y^1) ∈ Ω(x0, y0, ω) a.s.; the argument for Γ2 is similar. ✷
Moreover, we can prove a stronger result that enables us to describe the Ω-limit set completely.

Lemma 5.4. For any open neighborhood V of (x^1, y^1) ∈ Γ1, there are infinitely many k = k(ω) such that (X2k+1, Y2k+1) ∈ V.


Proof. It is analogous to that of Lemma 4.2. ✷
Similar to the notation in (4.3), we define

S1 = {(x, y) = π^{(n)}_{tn} ◦ · · · ◦ π^{(2)}_{t2} ◦ π^{(1)}_{t1}(u, v) : (u, v) ∈ Γ1, 0 ≤ t1, t2, . . . , tn; n ∈ N},   (5.3)

where (k) = 1 if k is even, otherwise (k) = 2.
Theorem 5.1. Suppose that λ1, λ2 > 0 and Assumption 5.1 holds. Then S1 is contained in the Ω-limit set Ω(x0, y0, ω) a.s. Further, if there is a point (x′, y′) ∈ Γ1 such that

det[ a1(x′, y′)  a2(x′, y′) ; b1(x′, y′)  b2(x′, y′) ] ≠ 0,   (5.4)

then S1 absorbs all positive solutions in the sense that for any initial value (x0, y0) ∈ R+^{2,◦}, the value γ̃(ω) = inf{t > 0 : (x(s, x0, y0, ω), y(s, x0, y0, ω)) ∈ S̄1 ∀s > t} is finite outside a P-null set. Consequently, the closure S̄1 of S1 is the Ω-limit set Ω(x0, y0, ω) for any (x0, y0) in R+^{2,◦} a.s.

Proof. It is proved using Lemma 5.4 and the arguments in the proof of [3, Theorem 2.2]. ✷

Remark 5.2. If we define

S2 = {(x, y) = π^{(n+1)}_{tn} ◦ · · · ◦ π^{(3)}_{t2} ◦ π^{(2)}_{t1}(u, v) : (u, v) ∈ Γ2, 0 ≤ t1, t2, . . . , tn; n ∈ N},

then we have the same conclusion that the closure S̄2 of S2 is the Ω-limit set Ω(x0, y0, ω) for any (x0, y0) in R+^{2,◦} with probability 1. As a result, S̄1 = S̄2.
We now consider the existence and properties of an invariant probability measure. To do so, we need an additional assumption on the Lie algebra generated by the vector fields.

Definition 5.2. A point z = (x, y) ∈ R+^{2,◦} is said to satisfy Condition H if the vectors w1(z) − w2(z), [w1, w2](z), [w1, [w1, w2]](z), . . . span R^2, where wi(x, y) = (xai(x, y), ybi(x, y)) and [·, ·] is the Lie bracket.
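Condition H can be tested concretely: for the Lotka–Volterra fields of (5.5) the Jacobians are available in closed form, so the brackets can be evaluated exactly. The sketch below is our own illustration (the coefficients of Example 5.4 are used as sample numbers); at the chosen point the two vectors w1(z) − w2(z) and [w1, w2](z) already suffice.

```python
# Vector fields w_i(z) = (x a_i(x,y), y b_i(x,y)) for the Lotka-Volterra
# regimes of Example 5.4 (used here only as sample coefficients).
COEF = {1: (6, 3, 2, 12, 4, 3), 2: (12, 3.6, 2.2, 9.6, 4, 2)}

def w(i, z):
    a, b, c, d, e, f = COEF[i]
    x, y = z
    return (x * (a - b * x - c * y), y * (d - e * x - f * y))

def jac(i, z):
    """Exact Jacobian of w_i at z (computed by hand for the LV field)."""
    a, b, c, d, e, f = COEF[i]
    x, y = z
    return ((a - 2 * b * x - c * y, -c * x),
            (-e * y, d - e * x - 2 * f * y))

def matvec(J, v):
    return (J[0][0] * v[0] + J[0][1] * v[1], J[1][0] * v[0] + J[1][1] * v[1])

def bracket(i, j, z):
    """Lie bracket [w_i, w_j](z) = Dw_j(z) w_i(z) - Dw_i(z) w_j(z)."""
    return tuple(p - q for p, q in
                 zip(matvec(jac(j, z), w(i, z)), matvec(jac(i, z), w(j, z))))

z = (1.0, 1.0)
u = tuple(p - q for p, q in zip(w(1, z), w(2, z)))  # w1(z) - w2(z)
v = bracket(1, 2, z)
det = u[0] * v[1] - u[1] * v[0]
print(u, v, det)  # a nonzero det means the first two vectors already span R^2
```

Here the check stops at the first bracket; in general one keeps adjoining higher brackets until the spanning test succeeds or the list is exhausted.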



Theorem 5.3. Suppose that λ1 > 0, λ2 > 0 and Assumption 5.1 holds. Suppose further that there is a point (x′, y′) ∈ Γ1 (resp. Γ2) such that (5.4) holds, and that there exists a point (x″, y″) ∈ S1 (resp. S2) satisfying Condition H. Then the invariant probability measure π^* of (ξ(t), x(t), y(t)) has a density f^* with respect to the Lebesgue measure on M × R+^{2,◦} and is concentrated on M × S̄1 (resp. M × S̄2). Moreover, for any initial value, the distribution of (ξ(t), x(t), y(t)) converges in total variation to π^*.

Proof. Suppose that there is a point (x′, y′) ∈ Γ1 satisfying (5.4) and that there exists a point (x″, y″) ∈ S1 satisfying Condition H. Theorems 3.1 and 5.1 give the existence of an invariant probability measure concentrated on M × S̄1. In view of the two claims given below, the absolute continuity of π^* with respect to Lebesgue measure is proved using arguments in the proof of [3, Proposition 3.1]. Finally, similar to the proof of Theorem 4.2, the convergence in total variation of the distribution of (ξ(t), x(t), y(t)) is proved. ✷
Claim 1. Any point of M × S1 is approachable from all other points. In other words, for any (i, z1), (j, z2) ∈ M × S1 and any neighborhood {j} × V_{z2} of (j, z2), there exist s1, s2, . . . , sn ≥ 0 such that π^{ln}_{sn} ◦ · · · ◦ π^{l2}_{s2} ◦ π^{l1}_{s1}(z1) ∈ V_{z2}, where lk ∈ M, l1 = i, ln = j.

Proof. By the definition of S1, we only need to prove that if (j, z2) ∈ M × Γ1, then (j, z2) is approachable from any (i, z1) ∈ M × S1. Indeed, for an open neighborhood V_{z2} of z2 ∈ Γ1, it follows from Lemma 5.1 that there is a δ2 > 0 such that if z ∈ (0, δ2) × (v2 − δ2, v2 + δ2), there is a t^1(z) with π^1_{t^1(z)}(z) ∈ V_{z2}. Moreover, since (0, v2) attracts all positive solutions to Eq. (2.4), for all z ∈ R+^{2,◦} there is a t^2(z) > 0 such that π^2_{t^2(z)}(z) ∈ (0, δ2) × (v2 − δ2, v2 + δ2). As a result, π^1_{s^1_2} ◦ π^2_{s^1_1}(z1) ∈ V_{z2} and π^1_{s^2_3} ◦ π^2_{s^2_2} ◦ π^1_{s^2_1}(z1) ∈ V_{z2}, where s^1_1 = t^2(z1), s^1_2 = t^1(π^2_{s^1_1}(z1)) and s^2_1 > 0, s^2_2 = t^2(π^1_{s^2_1}(z1)), s^2_3 = t^1(π^2_{s^2_2} ◦ π^1_{s^2_1}(z1)), which means that (1, z2) is approachable. Note that if z ∈ V_{z2}, then π^2_s(z) ∈ V_{z2} when s is sufficiently small. Consequently, (2, z2) is also approachable. ✷
Claim 2. There exist positive numbers K, ŝ1, . . . , ŝm with Σ_{i=1}^m ŝi < K such that either

ϕ^1(s1, . . . , sm) = π^{ρ(m+1)}_{K−Σ_{i=1}^m si} ◦ π^{ρ(m)}_{sm} ◦ · · · ◦ π^{ρ(1)}_{s1}(x″, y″)

or

ϕ^2(s1, . . . , sm) = π^{ρ(m+2)}_{K−Σ_{i=1}^m si} ◦ π^{ρ(m+1)}_{sm} ◦ · · · ◦ π^{ρ(2)}_{s1}(x″, y″)

has a Jacobian matrix of rank 2 at (ŝ1, . . . , ŝm).

Proof. Note that Condition H is normally called the hypoellipticity condition in Hörmander's theory, and Claim 2 is a classical result from geometric control theory, which can be found in [8, Chapter 3]. ✷
Example 5.4. We illustrate our results by plotting a sample orbit of the system

ẋ(t) = x(t)[a(ξ(t)) − b(ξ(t))x(t) − c(ξ(t))y(t)],
ẏ(t) = y(t)[d(ξ(t)) − e(ξ(t))x(t) − f(ξ(t))y(t)],   (5.5)

where a1 = 6, b1 = 3, c1 = 2, d1 = 12, e1 = 4, f1 = 3, a2 = 12, b2 = 3.6, c2 = 2.2, d2 = 9.6, e2 = 4, f2 = 2, x(0) = 2, y(0) = 1, α = 3, β = 3. By calculation, λ1 ≈ 0.13, λ2 ≈ 0.963. The results of Theorems 5.1 and 5.3 hold for this case since their hypotheses can be verified easily. The sample orbit eventually provides us with the image of the Ω-limit set; see Fig. 2.

Fig. 2. Phase portraits of system (5.5) under the first set of parameters.

Fig. 3. Phase portraits of system (5.5) under another set of parameters.
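A sample path like the one in Fig. 2 can be generated by a piecewise-deterministic simulation: integrate one Lotka–Volterra regime until an exponentially distributed switching time elapses, then swap regimes. A minimal sketch (our own code, not the authors'; the parameters are those of Example 5.4, while the RK4 step size and time horizon are arbitrary choices):

```python
import random

random.seed(1)

# Coefficients (a, b, c, d, e, f) of (5.5) in states 1 and 2, and the
# switching intensities alpha, beta, exactly as in Example 5.4.
COEF = {1: (6, 3, 2, 12, 4, 3), 2: (12, 3.6, 2.2, 9.6, 4, 2)}
ALPHA, BETA = 3.0, 3.0

def field(state, z):
    a, b, c, d, e, f = COEF[state]
    x, y = z
    return (x * (a - b * x - c * y), y * (d - e * x - f * y))

def rk4_step(state, z, h):
    k1 = field(state, z)
    k2 = field(state, tuple(p + 0.5 * h * k for p, k in zip(z, k1)))
    k3 = field(state, tuple(p + 0.5 * h * k for p, k in zip(z, k2)))
    k4 = field(state, tuple(p + h * k for p, k in zip(z, k3)))
    return tuple(p + h / 6 * (u + 2 * v + 2 * w + s)
                 for p, u, v, w, s in zip(z, k1, k2, k3, k4))

def sample_orbit(z0=(2.0, 1.0), state=1, t_max=100.0, h=1e-3):
    """Piecewise-deterministic simulation of (5.5) under telegraph noise."""
    orbit, z, t = [], z0, 0.0
    while t < t_max:
        rate = ALPHA if state == 1 else BETA
        stay = random.expovariate(rate)          # exponential holding time
        for _ in range(int(min(stay, t_max - t) / h)):
            z = rk4_step(state, z, h)
            orbit.append(z)
        t += stay
        state = 3 - state                        # telegraph switch 1 <-> 2
    return orbit

orbit = sample_orbit()
xs = [p[0] for p in orbit]
ys = [p[1] for p in orbit]
print(min(xs), max(xs), min(ys), max(ys))  # orbit stays in a bounded positive region
```

Plotting the (x, y) pairs in `orbit` reproduces the qualitative picture of Fig. 2: the path is pushed back and forth between the attractors of the two regimes and traces out the Ω-limit set described by Theorem 5.1.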
Example 5.5. In this example, we still consider (5.5) but with different parameters. We use a1 = 4, b1 = 1, c1 = 2, d1 = 3, e1 = 1, f1 = 3, a2 = 3, b2 = 3, c2 = 1, d2 = 4, e2 = 2, f2 = 1, x(0) = 1.5, y(0) = 2, α = 1, β = 1. Then λ1 and λ2 are calculated to be λ1 ≈ 0.181 and λ2 ≈ 0.178, respectively. Fig. 3 provides a sample orbit of the solution.
6. Conclusion
Under simple conditions, we have established the existence of stationary distributions for Kolmogorov systems of competitive type under telegraph noise. Such ergodicity results will be of essential utility in many applications of two-dimensional systems under random environments, especially for ecological systems. Further asymptotic results have also been obtained for the associated Ω-limit sets. Although only a two-state Markov chain (the telegraph noise) is considered, it appears that these results can be generalized to systems perturbed by a Markov chain with more than two states without much difficulty. Our results are based on the sufficient condition that λ1 > 0 and λ2 > 0. For future study, interesting and important questions to be addressed include: What are necessary conditions for the existence of the stationary distribution? What can one say about the associated Ω-limit sets? Moreover, it seems possible to obtain similar results when the transition intensities α and β are state-dependent; however, besides modifications, some new techniques may need to be introduced to treat that problem. These questions and comments deserve more careful thought and consideration.
References

[1] J. Azéma, M. Kaplan-Duflo, D. Revuz, Mesure invariante sur les classes récurrentes des processus de Markov (French), Z. Wahrscheinlichkeitstheor. Verw. Geb. 8 (1967) 157–181.
[2] A.D. Bazykin, Nonlinear Dynamics of Interacting Populations, World Scientific, Singapore, 1998.
[3] N.H. Du, N.H. Dang, Dynamics of Kolmogorov systems of competitive type under the telegraph noise, J. Differential Equations 250 (2011) 386–409.
[4] I.I. Gihman, A.V. Skorohod, The Theory of Stochastic Processes II, Springer-Verlag, Berlin, 1979.
[5] T.W. Hwang, Global analysis of the predator–prey system with Beddington–DeAngelis functional response, J. Math. Anal. Appl. 281 (2003) 395–401.
[6] T.W. Hwang, Uniqueness of limit cycles of the predator–prey system with Beddington–DeAngelis functional response, J. Math. Anal. Appl. 290 (2004) 113–122.
[7] J. Hofbauer, K. Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, Cambridge, 1998.
[8] V. Jurdjevic, Geometric Control Theory, Cambridge Stud. Adv. Math., vol. 52, Cambridge University Press, 1997.
[9] L.B. Koralov, Y.G. Sinai, Theory of Probability and Random Processes, 2nd ed., Springer, Berlin, 2007.
[10] J.D. Murray, Mathematical Biology, Springer-Verlag, Berlin, 2002.
[11] S.P. Meyn, R.L. Tweedie, Stability of Markovian processes II: Continuous-time processes and sampled chains, Adv. in Appl. Probab. 25 (1993) 487–517.
[12] S.P. Meyn, R.L. Tweedie, Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes, Adv. in Appl. Probab. 25 (1993) 518–548.
[13] K. Pichór, R. Rudnicki, Continuous Markov semigroups and stability of transport equations, J. Math. Anal. Appl. 249 (2000) 668–685.
[14] R. Rudnicki, K. Pichór, M. Tyran-Kamińska, Markov semigroups and their applications, in: P. Garbaczewski, R. Olkiewicz (Eds.), Dynamics of Dissipation, Lecture Notes in Phys., vol. 587, Springer, Berlin, 2002, pp. 215–238.
[15] M. Sharpe, General Theory of Markov Processes, Academic Press, New York, 1988.
[16] L. Stettner, On the existence and uniqueness of invariant measure for continuous time Markov processes, LCDS Report No. 86-16, Brown University, Providence, April 1986.
[17] G. Yin, C. Zhu, Hybrid Switching Diffusions: Properties and Applications, Springer, New York, 2010.
[18] C. Zhu, G. Yin, On competitive Lotka–Volterra model in random environments, J. Math. Anal. Appl. 357 (2009) 154–170.


