
APPLYING MARKOV CHAINS TO PREDICT MARKET TRENDS
Nguyễn Thị Hương
Trường Đại học Hà Nội
Abstract - The concept of a Markov chain was first introduced by the Russian mathematician A. Markov (1856-1922) in 1906. Today Markov chains are applied widely across the sciences, in fields such as statistics, queueing theory and logistics. They are often used to model service systems in connection with Poisson processes, and they appear in signal recognition and speech recognition research. In biology, Markov chains are used in coding and in studying the relationships between gene regions and predicting genes. In simulation, Markov chain Monte Carlo is regarded as a highly effective method: by generating chains of values from random numbers it can accurately reflect complex probability distributions, which helps make Bayesian methods practical. In economics in particular there are many applications based on Markov chains.
This paper presents the concepts and properties of Markov chains and discusses some of their applications in business and commerce, such as predicting market development trends.
Keywords - continuous time, countable space, discrete time, market, stochastic process, transition matrix.
Abstract - Markov chains are an important mathematical tool in stochastic
processes. The underlying idea is the Markov Property, in other words, that some
predictions about stochastic processes can be simplified by viewing the future as
independent of the past, given the present state of the process. This is used to simplify
predictions about the future state of a stochastic process.
This paper begins with a brief introduction, followed by the analysis, and ends with a short conclusion. The analysis introduces the concept of a Markov chain, explains the different types of Markov chains and presents examples of their applications in finance.
Keywords - continuous time, countable space, discrete time, market, stochastic process, transition matrix.


MARKOV CHAINS AND THEIR APPLICATIONS TO PREDICT MARKET TRENDS
I. INTRODUCTION
1) Definition



A Markov process is a stochastic process that satisfies the Markov property. In simpler terms, it is a process for which one can make predictions about the future based solely on its present state, just as well as if one knew the process's full history. In other words, conditional on the present state of the system, its future and past states are independent.
A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time with either a countable or a continuous state space.
2) Types of Markov chains
The system's state space and time parameter index need to be specified. The
following table gives an overview of the different instances of Markov processes for
different levels of state space generality and for discrete time and continuous time.
                    Countable state space              Continuous or general state space

Discrete time       Markov chain on a countable        Markov chain on a measurable state
                    or finite state space              space (for example, a Harris chain)

Continuous time     Continuous-time Markov process     Any continuous stochastic process
                    or Markov jump process             with the Markov property

3) Transitions
The changes of state of the system are called transitions. The probabilities
associated with various state changes are called transition probabilities. The process is
characterized by a state space, a transition matrix describing the probabilities of
particular transitions, and an initial state (or initial distribution) across the state space.
By convention, we assume all possible states and transitions have been included in the
definition of the process, so there is always a next state, and the process does not
terminate.
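
To make these three ingredients concrete, here is a minimal Python sketch (illustrative only; the sunny/rainy numbers anticipate the weather example in Section II):

```python
import numpy as np

states = ["Sunny", "Rainy"]          # the state space
P = np.array([[0.7, 0.3],            # transition matrix: P[i, j] is the
              [0.2, 0.8]])           # probability of moving from state i to j
initial = np.array([1.0, 0.0])       # initial distribution across the states

# Every row of a transition matrix sums to 1, so there is always
# a next state and the process never terminates.
assert np.allclose(P.sum(axis=1), 1.0)
```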
A discrete-time random process involves a system which is in a certain state at
each step, with the state changing randomly between steps. The steps are often thought
of as moments in time, but they can equally well refer to physical distance or any other
discrete measurement. Formally, the steps are the integers or natural numbers, and the
random process is a mapping of these to states. The Markov property states that
the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with
certainty the state of a Markov chain at a given point in the future. However, the
statistical properties of the system's future can be predicted. In many applications, it is
these statistical properties that are important.
II. ANALYSIS
1) Formal definition
In mathematical terms, the definition can be expressed as follows:
A discrete-time Markov chain is a sequence of random variables $X_1, X_2, X_3, \ldots$ with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:
$$P(X_{n+1} = x \mid X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = P(X_{n+1} = x \mid X_n = x_n),$$
if both conditional probabilities are well defined, that is, if
$$P(X_1 = x_1, \ldots, X_n = x_n) > 0.$$
The possible values of $X_i$ form a countable set $S$ called the state space of the chain.
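
A short simulation makes the property tangible. In the sketch below (assuming numpy; the two-state matrix is illustrative), each next state is drawn from the row of the transition matrix indexed by the current state alone, never from the earlier history:

```python
import numpy as np

P = np.array([[0.7, 0.3],    # illustrative two-state transition matrix
              [0.2, 0.8]])

def sample_path(P, x0, n, rng):
    """Simulate X_0, X_1, ..., X_n; each step depends only on the
    current state x (the Markov property), not on the path so far."""
    path, x = [x0], x0
    for _ in range(n):
        x = rng.choice(len(P), p=P[x])   # next state drawn from row P[x]
        path.append(int(x))
    return path

print(sample_path(P, x0=0, n=10, rng=np.random.default_rng(0)))
```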
Markov chains are used to compute the probabilities of events occurring by viewing them as states transitioning into other states, or transitioning into the same state as before. We can take the weather as an example. If we pick probabilities arbitrarily, a prediction might run as follows: if it is a sunny day, there is a 30% probability that the next day will be rainy; if it is a rainy day, there is a 20% probability that the day after will be sunny. Consequently, a sunny day is followed by another sunny day with 70% probability, and a rainy day by another rainy day with 80% probability. This can be summarized in a transition diagram that describes all possible transitions between states.
To approach this mathematically, one views today as the current state, $S_0$, which is a $1 \times m$ vector whose elements describe the current state of the process. In our weather example we define the state space $S = [\text{Sunny}\ \ \text{Rainy}]$, whose elements are all the possible states that the process can attain. If, for example, today is a sunny day, then $S_0 = [1\ \ 0]$, because there is a 100% chance of a sunny day and zero chance of a rainy day. To get to the next state, the transition probability matrix is required, which is simply the state transition probabilities summarized in a matrix. In this case it is as follows:
$$P = \begin{bmatrix} 0.7 & 0.3 \\ 0.2 & 0.8 \end{bmatrix}, \qquad \text{or more generally in this case} \qquad P = \begin{bmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{bmatrix},$$
where rows and columns are ordered $[S\ \ R]$ (Sunny, Rainy).
To get the next state, $S_1$, we simply compute the matrix product $S_1 = S_0 P$. Since each successive state is obtained as $S_n = S_{n-1} P$, the general formula for the probability of the process ending up in a given state after $n$ steps is $S_n = S_0 P^n$. This makes it straightforward to calculate probabilities far into the future. For example, if today is a sunny day, then the state vector 120 days from now is $S_{120} = S_0 P^{120} \approx [0.4\ \ 0.6]$.
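
These numbers are easy to verify by computing $S_n = S_0 P^n$ directly; a minimal sketch, assuming numpy:

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[0.7, 0.3],    # rows and columns ordered [Sunny, Rainy]
              [0.2, 0.8]])
S0 = np.array([1.0, 0.0])    # today is sunny with certainty

print(S0 @ P)                      # tomorrow:       [0.7  0.3]
print(S0 @ matrix_power(P, 120))   # 120 days ahead: approximately [0.4  0.6]
```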
2) Application areas of Markov chains
Since Markov chains can be designed to model many real-world processes, they
are used in a wide variety of situations. These fields range from the mapping of animal
life populations to search-engine algorithms, music composition and speech recognition.
In economics and finance, they are often used to predict macroeconomic situations like
market crashes and cycles between recession and expansion. Other areas of application
include predicting asset and option prices, and calculating credit risks. When considering a continuous-time financial market, Markov chains are used to model the randomness. The price of an asset, for example, is set by a random factor, a stochastic discount factor, which is defined using a Markov chain.
3) Application of Markov chains to predict market trends

Markov chains and their respective diagrams can be used to model the
probabilities of certain financial market climates and thus predicting the likelihood of
future market conditions. These conditions, also known as trends, are:


Bull markets: periods of time where prices generally are rising, due to the
actors having optimistic hopes of the future.



Bear markets: periods of time where prices generally are declining, due to the
actors having a pessimistic view of the future.



Stagnant markets: periods of time where the market is characterized by neither a
decline nor rise in general prices.



In fair markets, it is assumed that the market information is distributed equally
among its actors and that prices fluctuate randomly. This means that every actor
has equal access to information such that no actor has an upper hand due to
66


inside-information. Through technical analysis of historical data, certain patterns
can be found as well as their estimated probabilities. For example, consider a
hypothetical market with Markov properties where historical data has given us
the following patterns:


Stagnant
market
0.5
0.25
000
000
050 0.025
000
0.15
000
000
050
000

Bull
market
0.9

0.25
000
000
050
000

0.05
000
000
050
000

Bear
market
0.8

0.07
550
000
After a week characterized of a bull
000 market trend there is a 90% chance that
another bullish week will follow. Additionally,
500 there is a 7.5% chance that the bull week
00 or a 2.5% chance that it will be a stagnant
instead will be followed by a bearish one,
one. After a bearish week there’s an 80% chance that the upcoming week also will be
bearish, and so on. By compiling these probabilities into a table, we get the following
transition matrix M:
            Bull     Bear     Stagnant
Bull        0.9      0.075    0.025
Bear        0.15     0.8      0.05
Stagnant    0.25     0.25     0.5

(The row gives the current week's state; the column gives the next week's state.)

Transition matrix
$$M = \begin{bmatrix} 0.9 & 0.075 & 0.025 \\ 0.15 & 0.8 & 0.05 \\ 0.25 & 0.25 & 0.5 \end{bmatrix}$$
We then create a $1 \times 3$ vector $C$ which records which of the three states the current week is in, where column 1 represents a bull week, column 2 a bear week and column 3 a stagnant week. In this example we set the current week as bearish, giving the vector $C = [0\ \ 1\ \ 0]$.
Given the state of the current week, we can then calculate the probabilities of a bull, bear or stagnant week for any number n of weeks into the future. This is done by multiplying the vector $C$ by the corresponding power of the matrix $M$, giving the following:

One week from now: $C M = [0\ \ 1\ \ 0]\,M = [0.15\ \ 0.8\ \ 0.05]$

Five weeks from now: $C M^{5} = [0\ \ 1\ \ 0]\,M^{5} \approx [0.48\ \ 0.45\ \ 0.07]$

52 weeks from now: $C M^{52} = [0\ \ 1\ \ 0]\,M^{52} \approx [0.63\ \ 0.31\ \ 0.06]$

99 weeks from now: $C M^{99} = [0\ \ 1\ \ 0]\,M^{99} \approx [0.63\ \ 0.31\ \ 0.06]$

From this we can conclude that as n → ∞, the probabilities converge to a steady state: in the long run roughly 63% of all weeks will be bullish, 31% bearish and 6% stagnant (the exact stationary distribution is $[0.625\ \ 0.3125\ \ 0.0625]$).
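
This convergence is easy to reproduce numerically. A minimal sketch, assuming numpy, that prints the n-week forecasts and their limit:

```python
import numpy as np
from numpy.linalg import matrix_power

M = np.array([[0.9,  0.075, 0.025],   # rows/columns ordered [Bull, Bear, Stagnant]
              [0.15, 0.8,   0.05],
              [0.25, 0.25,  0.5]])
C = np.array([0.0, 1.0, 0.0])         # the current week is bearish

for n in (1, 5, 52, 99):
    print(n, np.round(C @ matrix_power(M, n), 4))
# The forecasts settle at the stationary distribution
# [0.625, 0.3125, 0.0625], whatever the starting state.
```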
What we also see is that the steady-state probabilities of this Markov chain do not depend on the initial state. The results can be used in various ways; examples include calculating the average time it takes for a bearish period to end, or the risk that a bullish market turns bearish or stagnant, as the sketch after this paragraph illustrates.
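
For example, a bearish week is followed by another bearish week with probability 0.8, so the length of a bearish run is geometrically distributed with mean 1/(1 - 0.8) = 5 weeks, and a bull market leaves the bull state with probability 0.075 + 0.025 = 0.1 per week. A back-of-the-envelope sketch of these two quantities:

```python
# Mean length of a bearish run: a bear week continues with probability
# M[Bear, Bear] = 0.8, so run lengths are geometric with mean 1/(1 - 0.8).
mean_bear_run = 1.0 / (1.0 - 0.8)

# Weekly risk that a bull market turns: probability of leaving the bull
# state, 0.075 (to bear) + 0.025 (to stagnant).
bull_exit_risk = 0.075 + 0.025

print(mean_bear_run, bull_exit_risk)   # 5.0  0.1
```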
III. CONCLUSION
Markov chains are used in a broad variety of academic fields, ranging from
biology to economics. When predicting the value of an asset, Markov chains can be
used to model the randomness. The price is set by a random factor which can be
determined by a Markov chain.
By analyzing the historical data of a market, it is possible to distinguish certain
patterns in its past movements. From these patterns, Markov diagrams can then be
formed and used to predict future market trends as well as the risks associated with
them.
