TCOM 501:
Networking Theory & Fundamentals
Lecture 3
January 29, 2003
Prof. Yannis A. Korilis
3-2
Topics
- Markov Chains
- Discrete-Time Markov Chains
- Calculating the Stationary Distribution
- Global Balance Equations
- Detailed Balance Equations
- Birth-Death Processes
- Generalized Markov Chains
- Continuous-Time Markov Chains
3-3
Markov Chain
- Stochastic process that takes values in a countable set
  - Example: {0, 1, 2, …, m} or {0, 1, 2, …}
  - Elements represent possible "states"
- Chain "jumps" from state to state
- Memoryless (Markov) property: given the present state, future jumps of the chain are independent of the past history
- Markov chains: discrete- or continuous-time
3-4
Discrete-Time Markov Chain
- Discrete-time stochastic process {X_n : n = 0, 1, 2, …}
- Takes values in {0, 1, 2, …}
- Memoryless property:
  $$P\{X_{n+1}=j \mid X_n=i, X_{n-1}=i_{n-1}, \ldots, X_0=i_0\} = P\{X_{n+1}=j \mid X_n=i\}$$
- Transition probabilities:
  $$P_{ij} = P\{X_{n+1}=j \mid X_n=i\}, \qquad P_{ij} \ge 0, \quad \sum_{j=0}^{\infty} P_{ij} = 1$$
- Transition probability matrix $P = [P_{ij}]$
3-5
Chapman-Kolmogorov Equations
- n-step transition probabilities:
  $$P_{ij}^n = P\{X_{n+m}=j \mid X_m=i\}, \qquad n, m \ge 0, \quad i, j \ge 0$$
- Chapman-Kolmogorov equations:
  $$P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^n P_{kj}^m, \qquad n, m \ge 0, \quad i, j \ge 0$$
- $P_{ij}^n$ is element (i, j) of the matrix $P^n$
- Allows recursive computation of state probabilities (see the sketch below)
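The Chapman-Kolmogorov recursion is just matrix multiplication, so the n-step probabilities can be computed directly. A minimal sketch in Python/NumPy; the 3-state matrix P below is a hypothetical example, not from the lecture:

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n, m = 5, 3
Pn = np.linalg.matrix_power(P, n)   # element (i, j) is P_ij^n
Pm = np.linalg.matrix_power(P, m)

# Chapman-Kolmogorov: P^(n+m) = P^n P^m
assert np.allclose(np.linalg.matrix_power(P, n + m), Pn @ Pm)
print(Pn)
```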
3-6
State Probabilities – Stationary Distribution
- State probabilities (time-dependent):
  $$\pi_j^n = P\{X_n = j\}, \qquad \pi^n = (\pi_0^n, \pi_1^n, \ldots)$$
- By total probability:
  $$P\{X_n=j\} = \sum_{i=0}^{\infty} P\{X_{n-1}=i\}\, P\{X_n=j \mid X_{n-1}=i\} \;\Rightarrow\; \pi_j^n = \sum_{i=0}^{\infty} \pi_i^{n-1} P_{ij}$$
- In matrix form:
  $$\pi^n = \pi^{n-1} P = \pi^{n-2} P^2 = \cdots = \pi^0 P^n$$
- If the time-dependent distribution converges to a limit,
  $$\pi = \lim_{n\to\infty} \pi^n \;\Rightarrow\; \pi = \pi P$$
  then π is called the stationary distribution
- Existence depends on the structure of the Markov chain
3-7
Classification of Markov Chains
Aperiodic:
- State i is periodic if
  $$\exists\, d > 1: \; P_{ii}^n > 0 \Rightarrow n = \alpha d$$
- Aperiodic Markov chain: none of the states is periodic

Irreducible:
- States i and j communicate if
  $$\exists\, n, m: \; P_{ij}^n > 0, \; P_{ji}^m > 0$$
- Irreducible Markov chain: all states communicate

[Figure: two five-state example chains (states 0-4) illustrating the periodic/aperiodic and irreducible cases]
3-8
Limit Theorems
Theorem 1: For an irreducible aperiodic Markov chain, for every state j the limit
$$\pi_j = \lim_{n\to\infty} P\{X_n = j \mid X_0 = i\}, \qquad i = 0, 1, 2, \ldots$$
exists and is independent of the initial state i.

- N_j(k): number of visits to state j up to time k
- π_j is the long-run frequency with which the process visits state j:
  $$P\left\{ \lim_{k\to\infty} \frac{N_j(k)}{k} = \pi_j \,\middle|\, X_0 = i \right\} = 1$$
  (illustrated by the simulation sketch below)
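Theorem 1 can be seen in action by simulating a chain and comparing the empirical visit frequencies N_j(k)/k with a row of P^n for large n. A minimal sketch, reusing the hypothetical 3-state matrix from the earlier example:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

k = 200_000
visits = np.zeros(3)
state = 0
for _ in range(k):
    state = rng.choice(3, p=P[state])   # jump according to row of P
    visits[state] += 1

print("N_j(k)/k :", visits / k)                        # empirical frequencies
print("limit row:", np.linalg.matrix_power(P, 50)[0])  # pi, independent of i
```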
3-9
Existence of Stationary Distribution
Theorem 2: For an irreducible aperiodic Markov chain there are two possibilities for the scalars
$$\pi_j = \lim_{n\to\infty} P\{X_n = j \mid X_0 = i\} = \lim_{n\to\infty} P_{ij}^n$$
1. π_j = 0 for all states j: no stationary distribution exists
2. π_j > 0 for all states j: π is the unique stationary distribution

Remark: If the number of states is finite, case 2 is the only possibility.
3-10
Ergodic Markov Chains
- Markov chain with a stationary distribution:
  $$\pi_j > 0, \qquad j = 0, 1, 2, \ldots$$
- States are positive recurrent: the process returns to state j "infinitely often"
- A positive recurrent and aperiodic Markov chain is called ergodic
- Ergodic chains have a unique stationary distribution:
  $$\pi_j = \lim_{n\to\infty} P_{ij}^n$$
- Ergodicity ⇒ time averages = stochastic averages
3-11
Calculation of Stationary Distribution
A. Finite number of states
- Solve explicitly the system of equations
  $$\pi_j = \sum_{i=0}^{m} \pi_i P_{ij}, \quad j = 0, 1, \ldots, m, \qquad \sum_{i=0}^{m} \pi_i = 1$$
- Or compute π numerically from P^n, which converges to a matrix whose rows all equal π
- Suitable for a small number of states (a linear-solve sketch follows this list)

B. Infinite number of states
- The previous methods cannot be applied to a problem of infinite dimension
- Instead, guess a solution to the recurrence
  $$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j = 0, 1, \ldots, \qquad \sum_{i=0}^{\infty} \pi_i = 1$$
  and verify it
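For case A, a standard trick is to replace one (redundant) balance equation with the normalization constraint and solve the resulting linear system. A minimal sketch, assuming NumPy; the test matrix is the umbrella chain from the following slides with p = 0.1:

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum_i pi_i = 1 for a finite chain."""
    m = P.shape[0]
    A = P.T - np.eye(m)   # (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace the last (redundant) equation by sum_i pi_i = 1
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Umbrella chain from the next slides, with p = 0.1
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.9, 0.1],
              [0.9, 0.1, 0.0]])
print(stationary(P))   # approx [0.310, 0.345, 0.345]
```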
3-12
Example: Finite Markov Chain
Absent-minded professor uses two umbrellas when commuting between home and office. If it rains and an umbrella is available at her location, she takes it. If it does not rain, she always forgets to take an umbrella. Let p be the probability of rain each time she commutes. What is the probability that she gets wet on any given day?

- Markov chain formulation: state i is the number of umbrellas available at her current location
- Transition matrix:
  $$P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1-p & p \\ 1-p & p & 0 \end{pmatrix}$$

[Figure: transition diagram on states 0, 2, 1: from 0 to 2 with probability 1; from 2 to 1 with probability p and to 0 with probability 1-p; from 1 to 2 with probability p, with self-loop probability 1-p]
3-13
Example: Finite Markov Chain
- Stationary distribution: solve π = πP together with Σ_i π_i = 1:
  $$\begin{cases}
  \pi_0 = (1-p)\,\pi_2 \\
  \pi_1 = (1-p)\,\pi_1 + p\,\pi_2 \\
  \pi_2 = \pi_0 + p\,\pi_1 \\
  \pi_0 + \pi_1 + \pi_2 = 1
  \end{cases}
  \;\Leftrightarrow\;
  \pi_0 = \frac{1-p}{3-p}, \quad \pi_1 = \frac{1}{3-p}, \quad \pi_2 = \frac{1}{3-p}$$
- She gets wet when it rains while no umbrella is available at her current location:
  $$P\{\text{gets wet}\} = p\,\pi_0 = \frac{p(1-p)}{3-p}$$
3-14
Example: Finite Markov Chain
- Taking p = 0.1:
  $$\pi = \left(\frac{1-p}{3-p},\; \frac{1}{3-p},\; \frac{1}{3-p}\right) = (0.310,\; 0.345,\; 0.345)$$
- Numerically determine the limit of P^n:
  $$P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0.9 & 0.1 \\ 0.9 & 0.1 & 0 \end{pmatrix}, \qquad
  \lim_{n\to\infty} P^n \approx \begin{pmatrix} 0.310 & 0.345 & 0.345 \\ 0.310 & 0.345 & 0.345 \\ 0.310 & 0.345 & 0.345 \end{pmatrix} \quad (n \approx 150)$$
- Effectiveness depends on the structure of P (reproduced in the sketch below)
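This numerical limit is easy to reproduce; a minimal sketch, assuming NumPy:

```python
import numpy as np

p = 0.1
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 1 - p, p],
              [1 - p, p, 0.0]])

# Every row of P^n converges to pi = ((1-p)/(3-p), 1/(3-p), 1/(3-p))
print(np.linalg.matrix_power(P, 150))
print((1 - p) / (3 - p), 1 / (3 - p), 1 / (3 - p))
```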
3-15
Global Balance Equations
- Markov chain with an infinite number of states
- Global Balance Equations (GBE):
  $$\pi_j \sum_{i=0}^{\infty} P_{ji} = \sum_{i=0}^{\infty} \pi_i P_{ij}
  \;\Leftrightarrow\;
  \pi_j \sum_{i \ne j} P_{ji} = \sum_{i \ne j} \pi_i P_{ij}, \qquad j \ge 0$$
- $\pi_j P_{ji}$ is the frequency of transitions from j to i:
  frequency of transitions out of j = frequency of transitions into j
- Intuition: j is visited infinitely often; for each transition out of j there must be a subsequent transition into j with probability 1
3-16
Global Balance Equations
- Alternative form of the GBE: for any set of states $S \subseteq \{0, 1, 2, \ldots\}$,
  $$\sum_{j \in S} \pi_j \sum_{i \notin S} P_{ji} = \sum_{i \notin S} \pi_i \sum_{j \in S} P_{ij}$$
- If a probability distribution satisfies the GBE, then it is the unique stationary distribution of the Markov chain
- Finding the stationary distribution:
  - Guess the distribution from properties of the system
  - Verify that it satisfies the GBE (a numeric check follows below)
- The special structure of the Markov chain often simplifies this task
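As a sanity check of the alternative form, one can verify the cut equations for every nonempty proper subset S of a small chain. A minimal sketch using the umbrella chain (p = 0.1) and its stationary distribution:

```python
import numpy as np
from itertools import combinations

p = 0.1
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 1 - p, p],
              [1 - p, p, 0.0]])
pi = np.array([(1 - p) / (3 - p), 1 / (3 - p), 1 / (3 - p)])

states = range(3)
for r in (1, 2):
    for S in combinations(states, r):
        outside = [i for i in states if i not in S]
        flow_out = sum(pi[j] * P[j, i] for j in S for i in outside)
        flow_in  = sum(pi[i] * P[i, j] for i in outside for j in S)
        assert np.isclose(flow_out, flow_in)
print("GBE hold across every cut S")
```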
3-17
Global Balance Equations – Proof
Since π is stationary and each row of P sums to one,
$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij} \quad \text{and} \quad \sum_{i=0}^{\infty} P_{ji} = 1
\;\Rightarrow\;
\pi_j \sum_{i=0}^{\infty} P_{ji} = \sum_{i=0}^{\infty} \pi_i P_{ij}
\;\Leftrightarrow\;
\pi_j \sum_{i \ne j} P_{ji} = \sum_{i \ne j} \pi_i P_{ij}$$
For the alternative form, sum the first equality over $j \in S$ and split each inner sum over $i \in S$ and $i \notin S$:
$$\sum_{j \in S} \pi_j \sum_{i=0}^{\infty} P_{ji} = \sum_{j \in S} \sum_{i=0}^{\infty} \pi_i P_{ij} \;\Rightarrow\;
\sum_{j \in S} \pi_j \left( \sum_{i \in S} P_{ji} + \sum_{i \notin S} P_{ji} \right)
= \sum_{j \in S} \left( \sum_{i \in S} \pi_i P_{ij} + \sum_{i \notin S} \pi_i P_{ij} \right)$$
The terms with $i \in S$ are equal on both sides (relabel the summation indices i and j), so they cancel:
$$\sum_{j \in S} \pi_j \sum_{i \notin S} P_{ji} = \sum_{i \notin S} \pi_i \sum_{j \in S} P_{ij}$$
3-18
Birth-Death Process
- One-dimensional Markov chain with transitions only between neighboring states:
  $$P_{ij} = 0 \quad \text{if } |i-j| > 1$$

[Figure: birth-death chain on states 0, 1, 2, …, n, n+1, … with self-loop probabilities P_00, …, P_nn and neighbor transitions P_01, P_10, …, P_{n,n+1}, P_{n+1,n}; a cut separates S = {0, 1, …, n} from S^c]

- Detailed Balance Equations (DBE):
  $$\pi_n P_{n,n+1} = \pi_{n+1} P_{n+1,n}, \qquad n = 0, 1, \ldots$$
- Proof: the GBE with S = {0, 1, …, n} give
  $$\sum_{j=0}^{n} \sum_{i=n+1}^{\infty} \pi_j P_{ji} = \sum_{j=0}^{n} \sum_{i=n+1}^{\infty} \pi_i P_{ij}
  \;\Rightarrow\; \pi_n P_{n,n+1} = \pi_{n+1} P_{n+1,n}$$
3-19
Example: Discrete-Time Queue
- In a time slot, one arrival occurs with probability p, or no arrival with probability 1-p
- In a time slot, the customer in service departs with probability q, or stays with probability 1-q
- Arrivals and service times are independent
- State: number of customers in the system

[Figure: chain on states 0, 1, 2, …, n, n+1, …: up-transitions with probability p(1-q) (probability p from state 0), down-transitions with probability q(1-p), self-loops with probability 1-p at state 0 and (1-p)(1-q)+pq at states n ≥ 1]
3-20
Example: Discrete-Time Queue
(Chain diagram as on the previous slide.)

- Applying the DBE between neighboring states:
  $$\pi_n\, p(1-q) = \pi_{n+1}\, q(1-p) \;\Rightarrow\; \pi_{n+1} = \frac{p(1-q)}{q(1-p)}\,\pi_n, \qquad n \ge 1$$
  $$\pi_0\, p = \pi_1\, q(1-p) \;\Rightarrow\; \pi_1 = \frac{p/q}{1-p}\,\pi_0$$
- Define:
  $$\rho \equiv \frac{p}{q}, \qquad \alpha \equiv \frac{p(1-q)}{q(1-p)}$$
- Then:
  $$\pi_{n+1} = \alpha\,\pi_n \;(n \ge 1), \quad \pi_1 = \frac{\rho}{1-p}\,\pi_0
  \;\Rightarrow\; \pi_n = \frac{\rho}{1-p}\,\alpha^{n-1}\,\pi_0, \qquad n \ge 1$$
3-21
Example: Discrete-Time Queue
- We have determined the distribution as a function of π_0. How do we calculate the normalization constant π_0?
- Probability conservation law:
  $$\sum_{n=0}^{\infty} \pi_n = 1 \;\Rightarrow\; \pi_0\left[1 + \sum_{n=1}^{\infty} \frac{\rho}{1-p}\,\alpha^{n-1}\right] = 1 \;\Rightarrow\; \pi_0\left[1 + \frac{\rho}{(1-p)(1-\alpha)}\right] = 1$$
- Noting that
  $$1-\alpha = \frac{q(1-p) - p(1-q)}{q(1-p)} = \frac{q-p}{q(1-p)} \;\Rightarrow\; \frac{\rho}{(1-p)(1-\alpha)} = \frac{p}{q-p}$$
  we obtain
  $$\pi_0 = 1 - \rho, \qquad \pi_n = \rho\,(1-\alpha)\,\alpha^{n-1}, \quad n \ge 1$$
  (a numerical cross-check follows below)
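The closed form can be cross-checked by truncating the infinite chain at N states and solving the balance equations numerically. A minimal sketch, assuming NumPy; p = 0.1 and q = 0.3 are illustrative values with p < q, and the reflecting boundary at state N-1 is a truncation artifact:

```python
import numpy as np

p, q = 0.1, 0.3                       # illustrative values, p < q
rho   = p / q
alpha = p * (1 - q) / (q * (1 - p))

N = 60                                # truncation level
P = np.zeros((N, N))
P[0, 0], P[0, 1] = 1 - p, p
for n in range(1, N - 1):
    P[n, n - 1] = q * (1 - p)
    P[n, n]     = (1 - p) * (1 - q) + p * q
    P[n, n + 1] = p * (1 - q)
P[N - 1, N - 2] = q * (1 - p)         # reflecting boundary (truncation artifact)
P[N - 1, N - 1] = 1 - q * (1 - p)

A = P.T - np.eye(N)
A[-1, :] = 1.0                        # normalization replaces one equation
b = np.zeros(N); b[-1] = 1.0
pi = np.linalg.solve(A, b)

print(pi[0], 1 - rho)                          # pi_0 = 1 - rho
print(pi[3], rho * (1 - alpha) * alpha**2)     # pi_n = rho (1-alpha) alpha^(n-1)
```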
3-22
Detailed Balance Equations
- General case:
  $$\pi_j P_{ji} = \pi_i P_{ij}, \qquad i, j = 0, 1, \ldots$$
- The DBE imply the GBE
- They need not hold for a given Markov chain
- When they do hold, they greatly simplify the calculation of the stationary distribution

Methodology:
- Assume the DBE hold; one has to guess their form
- Solve the system defined by the DBE together with $\sum_i \pi_i = 1$
- If the system is inconsistent, the DBE do not hold
- If the system has a solution {π_i : i = 0, 1, …}, then this is the unique stationary distribution
  (a numeric check of the DBE follows below)
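The check itself is easy to automate: compute π, then test all pairwise DBE. A minimal sketch; the 3-cycle below is a hypothetical chain whose stationary distribution is uniform, yet probability flows around the cycle in one direction, so the DBE fail while the GBE hold:

```python
import numpy as np

def detailed_balance_holds(P, pi, tol=1e-12):
    """Check pi_j P_ji == pi_i P_ij for every pair of states."""
    m = len(pi)
    return all(abs(pi[j] * P[j, i] - pi[i] * P[i, j]) < tol
               for i in range(m) for j in range(m))

# Hypothetical 3-cycle with a preferred direction of rotation
P = np.array([[0.0, 0.8, 0.2],
              [0.2, 0.0, 0.8],
              [0.8, 0.2, 0.0]])
pi = np.array([1/3, 1/3, 1/3])        # satisfies pi = pi P (GBE)
print(detailed_balance_holds(P, pi))  # False: the DBE need not hold
```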
3-23
Generalized Markov Chains
- Markov chain on a set of states {0, 1, …} that, whenever it enters state i:
  - The next state entered is j with probability P_ij
  - Given that the next state will be j, the time spent at state i until the transition occurs is a random variable with distribution F_ij
- {Z(t) : t ≥ 0}, describing the state of the chain at time t, is a generalized Markov chain, or semi-Markov process
- It does not have the Markov property: the future depends on
  - The present state, and
  - The length of time the process has spent in this state
3-24
Generalized Markov Chains
- T_i: time the process spends at state i before making a transition (the holding time)
- Probability distribution of the holding time T_i:
  $$H_i(t) = P\{T_i \le t\} = \sum_{j=0}^{\infty} P\{T_i \le t \mid \text{next state } j\}\, P_{ij} = \sum_{j=0}^{\infty} F_{ij}(t)\, P_{ij}$$
  $$E[T_i] = \int_0^{\infty} t\, dH_i(t)$$
- T_ii: time between successive transitions to state i
- X_n: the n-th state visited; {X_n : n = 0, 1, …}
  - is a Markov chain: the embedded Markov chain
  - has transition probabilities P_ij
- A semi-Markov process is irreducible if its embedded Markov chain is irreducible
3-25
Limit Theorems
Theorem 3: For an irreducible semi-Markov process with E[T_ii] < ∞, for any state j the limit
$$p_j = \lim_{t\to\infty} P\{Z(t) = j \mid Z(0) = i\}, \qquad i = 0, 1, 2, \ldots$$
exists, is independent of the initial state, and equals
$$p_j = \frac{E[T_j]}{E[T_{jj}]}$$

- T_j(t): time spent at state j up to time t
- p_j equals the long-run proportion of time spent at state j:
  $$P\left\{ \lim_{t\to\infty} \frac{T_j(t)}{t} = p_j \,\middle|\, Z(0) = i \right\} = 1$$
  (illustrated by the simulation sketch below)
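Theorem 3 can be illustrated by simulating a semi-Markov process and comparing the time fractions with p_j = E[T_j]/E[T_jj]. A minimal sketch with a hypothetical two-state process: the embedded chain alternates between the states, and holding times are exponential with (assumed) means 2 and 5, so E[T_00] = E[T_11] = 7 and the predicted fractions are 2/7 and 5/7:

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.0, 1.0],          # embedded chain: alternate between states
              [1.0, 0.0]])
mean_hold = np.array([2.0, 5.0])   # E[T_0], E[T_1] (hypothetical)

total_time = np.zeros(2)
state = 0
for _ in range(100_000):
    total_time[state] += rng.exponential(mean_hold[state])  # holding time
    state = rng.choice(2, p=P[state])                       # embedded transition

# p_j = E[T_j] / E[T_jj]; here E[T_00] = E[T_11] = 2 + 5 = 7
print(total_time / total_time.sum())   # approx (2/7, 5/7)
```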