
Applications of Constrained Non-parametric Smoothing
Methods in Computing Financial Risk

Chung To (Charles) Wong, BCom (Melb), GradDip (RMIT)

Submitted for the degree of Doctor of Philosophy

School of Mathematical Sciences,
Queensland University of Technology,
Brisbane

December, 2008



Abstract
The aim of this thesis is to improve the estimation of risk measures by incorporating extra information, in the form of constraints, into completely non-parametric
smoothing techniques. A similar approach has been applied in empirical likelihood
analysis. The method of constraints incorporates bootstrap resampling techniques,
in particular the biased bootstrap. This thesis brings together formal estimation methods, empirical information use, and computationally intensive methods.
In this thesis, the constraint approach is applied to non-parametric smoothing
estimators to improve the estimation or modelling of risk measures. We consider
estimation of Value-at-Risk, of intraday volatility for market risk, and of recovery
rate densities for credit risk management.
Firstly, we study Value-at-Risk (VaR) and Expected Shortfall (ES) estimation.
VaR and ES estimation are strongly related to quantile estimation. Hence, tail
estimation is of interest in its own right. We employ constrained and unconstrained
kernel density estimators to estimate tail distributions, and we estimate quantiles
from the fitted tail distribution. The constrained kernel density estimator is an
application of the biased bootstrap technique proposed by Hall & Presnell (1998).
The estimator that we use to form the constraint is the Harrell-Davis
(H-D) quantile estimator. We calibrate the performance of the constrained and
unconstrained kernel density estimators by estimating tail densities based on samples
from Normal and Student-t distributions. We find a significant improvement in
fitting heavy-tailed distributions using the constrained kernel estimator, when used in
conjunction with the H-D quantile estimator. We also present an empirical study
demonstrating VaR and ES calculation.
A credit event in financial markets is defined as the event that a party fails to
pay an obligation to another, and credit risk is defined as the measure of uncertainty
of such events. Recovery rate, in the credit risk context, is the rate of recuperation
when a credit event occurs. It is defined as Recovery rate = 1 − LGD, where LGD
is the rate of loss given default. From this point of view, the recovery rate is a key
element both for credit risk management and for pricing credit derivatives. Only
credit risk management is considered in this thesis. To avoid strong assumptions
about the form of the recovery rate density in current approaches, we propose a
non-parametric technique incorporating a mode constraint, with the adjusted Beta
kernel employed to estimate the recovery rate density function. Encouraging results for
the constrained Beta kernel estimator are illustrated by a large number of simulations,
since genuine data are highly confidential and difficult to obtain.
Modelling high frequency data is a popular topic in contemporary finance. The
intraday volatility patterns of standard indices and market-traded assets have been
well documented in the literature. They show that the volatility patterns reflect
the different characteristics of different stock markets, such as the double U-shaped
volatility pattern reported for the Hang Seng Index (HSI). We aim to capture this
intraday volatility pattern using a non-parametric regression model. In particular,
we propose a constrained function approximation technique to formally test the
structure of the pattern and to approximate the location of the anti-mode of the
U-shape. We illustrate this methodology on the HSI as an empirical example.
Keywords: Constraint Method; Expected Shortfall; Non-parametric approach; Recovery rate density; Intraday Volatility; Risk Management; Value-at-Risk


Contents

1 Introduction . . . . . . . . . . 1
  1.1 Risk . . . . . . . . . . 1
  1.2 Risk management . . . . . . . . . . 1
    1.2.1 Value-at-Risk . . . . . . . . . . 2
    1.2.2 Recovery rate . . . . . . . . . . 4
    1.2.3 Intraday Volatility . . . . . . . . . . 5
  1.3 Constraint methods for risk management . . . . . . . . . . 5
    1.3.1 Constraint . . . . . . . . . . 5
    1.3.2 Background . . . . . . . . . . 6
  1.4 Aim of this thesis . . . . . . . . . . 7
  1.5 Structure of this thesis . . . . . . . . . . 7

2 Estimation of Value-at-Risk and Expected Shortfall . . . . . . . . . . 9
  2.1 Introduction . . . . . . . . . . 9
    2.1.1 Background . . . . . . . . . . 9
    2.1.2 Description of the Problem . . . . . . . . . . 11
  2.2 Methodology . . . . . . . . . . 13
    2.2.1 Constrained Kernel Estimator . . . . . . . . . . 13
  2.3 Value-at-Risk . . . . . . . . . . 19
  2.4 Simulation Study . . . . . . . . . . 21
    2.4.1 Densities Investigated . . . . . . . . . . 22
    2.4.2 Choice of Kernel Function and Bandwidth . . . . . . . . . . 22
    2.4.3 Measure of Discrepancy: Mean Integrated Squared Error . . . . . . . . . . 26
    2.4.4 Simulation Result for the Quantile Estimators . . . . . . . . . . 29
    2.4.5 Convergence of the Constrained Kernel estimator . . . . . . . . . . 40
    2.4.6 Test of quantile estimation using CKE . . . . . . . . . . 41
  2.5 Expected Shortfall . . . . . . . . . . 47
    2.5.1 Simulation Results for the ES Estimators . . . . . . . . . . 50
  2.6 Empirical Study . . . . . . . . . . 53
    2.6.1 Dataset . . . . . . . . . . 54
    2.6.2 Risk factor . . . . . . . . . . 54
    2.6.3 Value-at-Risk . . . . . . . . . . 57
    2.6.4 Empirical VaR and ES Estimation . . . . . . . . . . 58
    2.6.5 Confidence intervals for VaR . . . . . . . . . . 61
    2.6.6 Confidence intervals for ES . . . . . . . . . . 64
  2.7 Backtesting . . . . . . . . . . 66
    2.7.1 Dataset . . . . . . . . . . 66
    2.7.2 Backtesting: Result . . . . . . . . . . 67
  2.8 Conclusion . . . . . . . . . . 75

3 Estimation of recovery rate density . . . . . . . . . . 77
  3.1 Introduction . . . . . . . . . . 77
    3.1.1 Credit Risk and Recovery Rate . . . . . . . . . . 77
    3.1.2 Background . . . . . . . . . . 78
    3.1.3 Aim and structure of this chapter . . . . . . . . . . 80
  3.2 Methodology . . . . . . . . . . 81
    3.2.1 Overview of methodology . . . . . . . . . . 81
    3.2.2 The Beta kernel estimator . . . . . . . . . . 81
    3.2.3 The constrained Beta kernel estimator . . . . . . . . . . 82
    3.2.4 Objective function . . . . . . . . . . 82
    3.2.5 Constraint . . . . . . . . . . 83
    3.2.6 First derivative of the Beta kernel estimator . . . . . . . . . . 86
    3.2.7 Optimisation . . . . . . . . . . 89
  3.3 Bandwidth selection . . . . . . . . . . 89
  3.4 Simulation Study . . . . . . . . . . 90
    3.4.1 Mode Estimation . . . . . . . . . . 91
    3.4.2 Density Estimation . . . . . . . . . . 105
  3.5 Conclusion . . . . . . . . . . 119

4 Modelling intraday volatility patterns . . . . . . . . . . 120
  4.1 Introduction . . . . . . . . . . 120
    4.1.1 High frequency volatility . . . . . . . . . . 120
    4.1.2 Background . . . . . . . . . . 121
    4.1.3 Aim and structure of this chapter . . . . . . . . . . 122
  4.2 Regression estimator . . . . . . . . . . 122
    4.2.1 Nadaraya-Watson estimator . . . . . . . . . . 123
    4.2.2 Distance measure . . . . . . . . . . 123
    4.2.3 The U-shape constraint . . . . . . . . . . 124
    4.2.4 First derivative of linear estimator . . . . . . . . . . 124
    4.2.5 Optimisation for constrained Nadaraya-Watson estimator . . . . . . . . . . 125
    4.2.6 Bandwidth selection . . . . . . . . . . 125
  4.3 Constrained function approximation (CFA) . . . . . . . . . . 128
    4.3.1 Computational aspects of the CFA . . . . . . . . . . 129
    4.3.2 The Initialisation procedure . . . . . . . . . . 130
    4.3.3 The Adding procedure . . . . . . . . . . 132
    4.3.4 The Optimisation procedure . . . . . . . . . . 134
    4.3.5 Model diagnostics . . . . . . . . . . 136
    4.3.6 Anti-mode estimation . . . . . . . . . . 141
  4.4 Simulation . . . . . . . . . . 141
    4.4.1 Simulation results . . . . . . . . . . 145
  4.5 Empirical Study . . . . . . . . . . 153
    4.5.1 Dataset . . . . . . . . . . 153
    4.5.2 Empirical result . . . . . . . . . . 155
  4.6 Conclusion . . . . . . . . . . 167

5 Conclusion . . . . . . . . . . 168

A Empirical Study: All Ordinaries Index . . . . . . . . . . 172
  A.0.1 Dataset . . . . . . . . . . 172
  A.0.2 Risk factor . . . . . . . . . . 173
  A.0.3 Value-at-Risk . . . . . . . . . . 174
  A.0.4 Empirical VaR and ES Estimation . . . . . . . . . . 175
  A.0.5 Confidence intervals for VaR . . . . . . . . . . 179
  A.0.6 Confidence intervals for ES . . . . . . . . . . 181
  A.1 Backtesting . . . . . . . . . . 181
    A.1.1 Dataset . . . . . . . . . . 181
    A.1.2 Backtesting: Result . . . . . . . . . . 183

B Counter Example . . . . . . . . . . 192


Abbreviation

Chapter 2
• CKE: Constrained Kernel Estimator
• EQ: Empirical Quantile Estimator
• ES: Expected Shortfall
• EVT: Extreme Value Theory
• GPD: Generalised Pareto Distribution
• HSI: Hang Seng Index
• ISE: Integrated Squared Error
• K-L: Kaigh-Lachenbruch Estimator
• MSP bandwidth: Maximal Smoothing Principle bandwidth
• NKE: Naïve Kernel Estimator
• NR bandwidth: Normal Reference bandwidth
• POT: peaks-over-threshold
• SJ: Sheather and Jones's bandwidth
• VaR: Value-at-Risk

Chapter 3
• B-kernel: Beta kernel
• CB-kernel: Constrained Beta kernel
• EDM: Empirical Density Mode
• ERM: Empirical Relationship Mode
• GM: Grenander Mode
• HSM: Half Sample Mode
• MSE: Mean Square Error
• ROT: Rule of Thumb
• SP: Semi-Parametric
• SPM: Standard Parametric Mode
• T-Gauss: Transformed Gaussian

Chapter 4
• AAE: Average Absolute Error
• CFA: Constrained Function Approximation
• LLS: Linear Least Squares
• LR: Left to Right
• NW: Nadaraya-Watson
• PI bandwidth: Plug-in bandwidth
• RD: Random
• SSE: Sum of the Squared Errors



Statement of Originality
This is to certify that the work contained in this thesis has never previously been
submitted for a degree or diploma in any university and that, to the best of my
knowledge and belief, the thesis contains no material previously published or written
by another person except where due reference is made in the thesis itself.

Date

Chung To (Charles) Wong



Acknowledgment
The author thanks Professor Rodney Wolff (Queensland University of Technology,
Australia) for supervising the project throughout three years, and Dr. Steven Li (University of South Australia) for discussions on Monte Carlo simulation.



Chapter 1
Introduction
1.1 Risk

Risk is defined as a potential loss in an uncertain event. The concept of risk plays
an important role in financial markets, as risk management is an essential task for
a financial institution. According to the Basel Accord Committee (1996), banks
must satisfy the minimum capital requirements of the Basel Accord. The Basel
Accord is a standard of bank regulatory policy administered by the Basel Committee
on Banking Supervision. The purpose of the regulation is to require banks to
retain a certain amount of capital to compensate for the corresponding risk exposure.

1.2 Risk management

There are two fundamental measures of risk, namely volatility measures and Value-at-Risk (VaR).
Firstly, volatility measures consider the variation in risk factors. Volatility is
usually measured as the standard deviation of the return distribution. A large volatility
suggests that the corresponding asset is subject to large risk. Intraday volatility
patterns within different stock markets are explored in the literature.


Secondly, the VaR measure, which is a cross-sectional measure in practice, considers the extreme loss at a given probability over a given time horizon. Statistically, it is
given by the quantile of the hypothetical return distribution. The definition of VaR
is given in Jorion (2001) and Duffie & Pan (1997).
Therefore, consistently accurate estimation of risk measures is an important
task for risk management.
This thesis comprises three case studies. We consider estimation of
Value-at-Risk, of intraday volatility for market risk, and of the recovery rate density for credit risk management. Our approach to risk measurement is briefly
introduced in the following sections. Also, each case study contains its own
detailed literature review.

1.2.1 Value-at-Risk

Value-at-Risk (VaR) is a fundamental market risk measure. The VaR of a portfolio
(say) measures the possible loss at a given level of probability: usually extreme levels
of probability, such as 2.5% or 1%, are of interest in financial markets. In other
words, estimation of VaR is, in principle, based on a small number of observations
lying in the tail of a portfolio’s return distribution. This makes it difficult to achieve
consistently accurate estimation of the desired extreme quantiles (or percentiles), or
of the tail distribution itself.

Current Approaches to Computing Value-at-Risk
Empirical Quantile. This approach is based on a fixed period of historical data.
First of all, changes in well-defined risk factors are calculated from these data,
and then an hypothetical portfolio value is computed. This generates the
hypothetical return directly. Repeated application of this, by incorporating
variation (such as ‘tweaking’ the data by jittering, or using different historical
subsets), can then create an hypothetical empirical distribution of returns. VaR
at level α is then given by the (n × α)th ordered hypothetical return, which is
the empirical percentile at level α, where n is the sample size.

Problem with Empirical Quantile. The empirical quantile is highly sensitive to the
sample size, because only a handful of extreme observations determine it. For
instance, the empirical quantile at the 0.01 level with 1000 observations depends
only on the smallest 10 observations.

Monte Carlo Simulation. This approach assumes an hypothetical distribution for the
risk factor, and then generates a large number of random variables from this
distribution. Using these random risk factors, hypothetical returns are computed.
A distribution of hypothetical returns is thereby obtained, and VaR at
level α is given by the (n × α)th ordered hypothetical return, where n is the
sample size.

Normal-based Methods. This approach is based on the assumption that the risk
factor is Normally distributed with mean zero¹ and with the standard deviation
of the risk factors. VaR is then simply the inverse cumulative distribution function
evaluated at level α.

Problems with Monte Carlo Simulation and Normal-based Methods. The Monte
Carlo simulation and Normal-based methods assume that the risk factors follow
some assumed distribution and a Normal distribution, respectively.
Such assumptions restrict the flexibility and shape of the densities of the risk
factors. Further, empirical risk factor densities typically have heavier tails than
the Normal distribution.
¹ The mean is ignored since it is very small relative to σ∆t.
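To make these three approaches concrete, a minimal sketch follows (in Python, with a simulated return series; the sample, the Student-t choice for the Monte Carlo step, and all parameter values are illustrative assumptions rather than part of the thesis):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000) * 0.01   # hypothetical daily returns
alpha = 0.01                                        # VaR probability level
n = len(returns)

# 1. Empirical quantile: the (n x alpha)-th ordered hypothetical return.
var_empirical = np.sort(returns)[int(n * alpha) - 1]

# 2. Monte Carlo: draw risk factors from an assumed distribution (Student-t here),
#    form hypothetical returns, and read off the same ordered statistic.
simulated = rng.standard_t(df=4, size=100_000) * returns.std()
var_monte_carlo = np.quantile(simulated, alpha)

# 3. Normal-based: mean taken as zero; VaR is the inverse Normal CDF at alpha,
#    scaled by the standard deviation of the risk factor.
var_normal = norm.ppf(alpha) * returns.std()

print(var_empirical, var_monte_carlo, var_normal)
```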


1.2.2 Recovery rate

Recovery rate is the rate of recuperation when a default event² occurs. It is a
measure of the extent to which the size of loss is minimised when a default event
occurs: a high recovery rate can reduce loss. For this reason, recovery rate is a key
factor in the estimation of Credit Value-at-Risk (CVaR), which is the potential loss
in a credit default event. However, in some cases, the recovery rate is not taken into
account by CVaR; indeed, sometimes the recovery rate is assumed to be 0%, which
means that nothing is retrieved when a default event occurs. This induces an
overestimated CVaR. There is clearly a need for better estimation of recovery rates
to improve calculation of CVaR.

Current Practical Approach to Recovery Rate Densities

Recovery rate, in the credit risk context, is the rate of retrieval when a default
event occurs. It is defined as Recovery rate = 1 − LGD, where LGD is the rate
of loss given default. The current practical approach to estimating the recovery
rate is based on parametric methods. Specifically, the recovery rate distribution
is usually assumed to be a Beta distribution, and its parameters are calibrated by
the method-of-moments (by equating the respective sample moments to those of the
Beta distribution, and solving the resultant set of equations). Also, the recovery
rate is usually categorised by seniority and by industry, because recovery rates are
related to the value of collateral and the liquidity of the company.
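As a concrete illustration of this method-of-moments calibration, a minimal sketch follows (Python; the recovery rates are simulated for illustration, since genuine data are confidential, and the function name is ours rather than the thesis's):

```python
import numpy as np

def beta_method_of_moments(recovery_rates):
    """Calibrate Beta(a, b) by equating the sample mean and variance
    to the first two moments of the Beta distribution."""
    m = np.mean(recovery_rates)
    v = np.var(recovery_rates, ddof=1)
    common = m * (1.0 - m) / v - 1.0    # equals a + b, from the variance equation
    return m * common, (1.0 - m) * common

# Illustrative simulated recovery rates on (0, 1).
rng = np.random.default_rng(1)
sample = rng.beta(2.0, 5.0, size=500)
print(beta_method_of_moments(sample))   # should recover roughly (2, 5)
```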
Problems with an assumed distribution for the recovery rate. There is no statistical or theoretical evidence to show that the recovery rate density necessarily
belongs to the Beta family of distributions. Also, the literature reports a bimodal density
of recovery rates, which is not supported by the Beta distribution.
² A default event is a credit event in which a party fails to pay an obligation to another.


1.2.3 Intraday Volatility

Volatility is one of the primary measures of risk. It measures the variation in returns
over a given period. As technology improves, access to high frequency data becomes
more convenient and easier. This enhances the potential for high frequency data
modelling. Stock markets and stock indices are typical sources of high
frequency data, attracting many researchers to investigate daily periodic patterns.

Capturing the intraday volatility pattern

Nowadays, numerous ways exist to access intraday data. The data can be used
to measure volatility within a day, and the literature explores the non-linearity in
intraday stock index volatility in financial markets. It is known that the share return
volatility pattern changes throughout the day, with large volatility at the start of
the day, slumping at around lunch time, then building up in a step-wise pattern
throughout the afternoon to the end of the day.
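As a hedged illustration of how such a pattern might be smoothed from intraday data (the data below are simulated; the constrained estimators actually used in the thesis are developed in Chapter 4), a Nadaraya-Watson kernel regression of squared five-minute returns on time of day could be sketched as follows:

```python
import numpy as np

def nadaraya_watson(x, y, grid, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h."""
    u = (grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u ** 2)                      # kernel weights K((grid - x_i) / h)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# Simulated intraday data: time of day (hours) and noisy squared 5-minute returns
# whose expected level follows a rough U-shape (high at the open and close, low at lunch).
rng = np.random.default_rng(5)
t = np.sort(rng.uniform(9.5, 16.0, size=2000))
level = 0.5 + ((t - 12.75) / 3.25) ** 2            # illustrative U-shaped volatility level
y = level * rng.chisquare(1, size=t.size)          # noisy squared returns

grid = np.linspace(9.5, 16.0, 100)
pattern = nadaraya_watson(t, y, grid, h=0.25)      # smoothed intraday volatility pattern
```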

1.3 Constraint methods for risk management

1.3.1 Constraint

Applying a constraint can be understood as placing a restriction upon a model
in order to ‘encourage’ a particular behaviour. Such constraints are usually based
on some property of the underlying distribution, and are accessed by means of a
statistic. For instance, a specific quantile of a return distribution can be used to
improve the tail estimation for VaR, or the mode of a density can be used to improve
density estimation of the recovery rate in credit risk modelling. Our approach will be
to estimate the form of the constraint from the sample, and then to use the
corresponding statistic in the estimation technique.


1.3.2 Background

The constraint approach is widely applied in non-parametric frameworks. Empirical likelihood (Owen 2001) is a non-parametric approach incorporating
constraints. Chen (1997) proposes estimating the density using an empirical
likelihood-based kernel estimator when extra information is available. In this thesis,
we assume that the expectation of a transformed random variable is zero, given
by E{g(X)} = 0, where g(X) is a known function, as in Chen (1997). This
constraint can improve the estimation of a density function by a significant reduction
of the Mean Integrated Squared Error³.
Hall & Presnell (1998), Hall & Presnell (1999a) and Hall & Presnell (1999b)
propose biased bootstrap methods. This method incorporates extra information
into the standard bootstrap resampling method of Efron & Tibshirani (1993). The
biased bootstrap method assigns a sampling weight to each observation x_j, given by

P(X* = x_j | χ) = p_j,

where X* is the resampled observation, χ is the original sample, and p_j is the sampling weight. In these papers, they also mention that one application of the biased
bootstrap is a kernel estimator⁴ under constraints. Hall, Huang, Gifford & Gijbels
(2001) and Hall & Huang (2001) successfully apply the biased bootstrap method to
kernel regression for estimating the hazard rate under assumptions of monotonicity. Hall & Turlach (1999) apply the biased bootstrap method to curve estimation
by minimising the empirical mean integrated squared error and by assuming unimodality. Cheng, Gasser & Hall (1999) apply the biased bootstrap method to curve
estimation under both assumptions of unimodality and monotonicity. Also, a unimodality assumption is applied to density estimation in Hall & Huang (2002), who
provide some theoretical properties of the choice of the distance measure under the unimodality assumption for density estimation.

³ Mean Integrated Squared Error = E ∫ (f − f̂)² dx = ∫ {f − E(f̂)}² dx + ∫ [E(f̂²) − {E(f̂)}²] dx = Integrated Squared Bias + Integrated Variance, assuming integration and expectation can be interchanged.
⁴ Kernel estimator here refers to the kernel density estimator and the kernel regression model.
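To make the idea concrete, a minimal sketch follows (Python; illustrative only) of biased-bootstrap-style weights chosen close to the uniform weights 1/n subject to a moment constraint of the form E{g(X)} = 0, and of their use in a weighted kernel density estimator. A simple squared distance stands in here for the distance measures discussed in later chapters, and all names and parameter values are our own assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def biased_bootstrap_weights(x, g):
    """Weights p_j near uniform (squared distance, for simplicity) satisfying
    sum(p) = 1, p >= 0 and the moment constraint sum(p_j * g(x_j)) = 0."""
    n = len(x)
    p0 = np.full(n, 1.0 / n)
    constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                   {"type": "eq", "fun": lambda p: p @ g(x)}]
    result = minimize(lambda p: np.sum((p - 1.0 / n) ** 2), p0,
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x

def weighted_kde(x, p, h, grid):
    """Weighted (biased-bootstrap) Gaussian kernel density estimate on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return (p[None, :] * np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)).sum(axis=1) / h

rng = np.random.default_rng(2)
x = rng.normal(size=100)
p = biased_bootstrap_weights(x, g=lambda z: z)     # constraint: weighted mean equals zero
grid = np.linspace(-4.0, 4.0, 201)
density = weighted_kde(x, p, h=0.4, grid=grid)
```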

1.4 Aim of this thesis

Estimation of risk measures is divided into two principal approaches: parametric and
non-parametric methods. Parametric methods are based on strong assumptions
about distributions, and non-parametric methods are distribution-free. As non-parametric methods usually accept only that there is a fixed but unknown underlying
distribution, characteristics of the distribution are, in some sense, ignored.
Our main interest in this thesis is to improve non-parametric methods for modelling and estimation in a selection of problems in finance by incorporating key
distributional characteristics into non-parametric estimation.

1.5 Structure of this thesis

In Chapter 2, we introduce a technique to improve tail estimation. The weighted
kernel density estimator is combined with a quantile estimator to highlight the
character of the tail density. Then the Value-at-Risk and the Expected Shortfall can be
computed directly from this estimator. Both simulation and empirical results are
presented in this chapter, with comparisons to existing approaches.
In Chapter 3, we apply the constraint method to the Beta kernel density estimator. As a number of papers give empirical evidence about the recovery rate density⁵, this
empirical evidence is imposed as a constraint on the Beta kernel estimator. Then
the performance of the density estimation is shown by simulation.

⁵ The recovery rate density is a bimodal density function.
In Chapter 4, we further apply the constraint method to Nadaraya-Watson
regression for intraday volatility. We impose the intraday volatility pattern, such
as the double U-shaped pattern, as a constraint in the Nadaraya-Watson regression
estimator. Also, we develop a non-parametric technique to approximate the shape
of the regression function, from which the shape of the function can be analysed.
Furthermore, these techniques are applied to the Hang Seng Index as an empirical
study.
Chapter 5 concludes the thesis.



Chapter 2

Estimation of Value-at-Risk and Expected Shortfall

2.1 Introduction

2.1.1 Background

Value-at-Risk (VaR) is widely used as a risk measurement tool in financial markets,
and it has become very important in the financial world more broadly. VaR
measures potential loss at a given probability. Throughout this chapter, we define
loss to be negative, so that VaR is located in the left tail. In a statistical sense,
estimating VaR is equivalent to estimating a quantile or percentile of a distribution.
Jorion (2001) and Duffie & Pan (1997) give an overview of some current approaches
to calculating VaR.
For this reason, many researchers, such as Harrell & Davis (1982), Huang &
Brill (1999) and Huang (2001), consider estimating quantiles. Furthermore, the
performance of quantile estimators is an interesting topic in its own right. Parrish
(1990) and Dielman, Lowry & Pfaffenberger (1994) compare the performance of
several quantile estimators in order to find out which performs the best against


specified criteria. They conclude that the Harrell and Davis (H-D) estimator¹ is the
best for a wide range of quantile estimation scenarios, as demonstrated in simulation
results from both symmetric and skewed distributions. Since all these estimators
are distribution-free, they are part of a non-parametric framework.
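For orientation ahead of Section 2.2, a minimal sketch of the H-D estimator follows (Python; the data are simulated and the function name is ours). It is a weighted average of the order statistics, with weights obtained as differences of the Beta((n+1)p, (n+1)(1−p)) distribution function evaluated at the points i/n:

```python
import numpy as np
from scipy.stats import beta

def harrell_davis(x, p):
    """Harrell-Davis quantile estimate: a Beta-weighted average of the order statistics."""
    x = np.sort(np.asarray(x))
    n = len(x)
    a, b = (n + 1) * p, (n + 1) * (1 - p)
    cdf = beta.cdf(np.arange(n + 1) / n, a, b)    # Beta CDF at 0, 1/n, ..., 1
    weights = np.diff(cdf)                        # W_i = I_{i/n}(a, b) - I_{(i-1)/n}(a, b)
    return weights @ x

rng = np.random.default_rng(3)
print(harrell_davis(rng.standard_t(df=4, size=500), p=0.01))   # e.g. the 1% quantile
```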
An alternative risk measurement method is to consider the tail density; the resulting
measure is called the Expected Shortfall (ES). It is defined as the expected loss given that
the loss exceeds the VaR. In other words, ES is the conditional expectation of the
returns given that the returns are beyond the corresponding VaR. Therefore, the
ES is also known as the Conditional VaR. Artzner, Delbaen, Eber & Heath (1999)
propose four axioms that any risk measure (ρ) should satisfy, namely
• Translation Invariance: If X is the risk factor and a is real, then ρ(X + a) = ρ(X) − a.
• Subadditivity: If X1, X2 are the risk factors, then ρ(X1 + X2) ≤ ρ(X1) + ρ(X2).
• Monotonicity: If X1 ≤ X2 are the risk factors, then ρ(X2) ≤ ρ(X1).
• Positive homogeneity: If X is the risk factor and a is a positive real number, then ρ(aX) = aρ(X).
They show that the VaR is not a coherent risk measure, that is, that it does not
satisfy their proposed axioms. Acerbi & Tasche (2002) show that the ES is a
coherent risk measure in terms of these four axioms. The mathematical properties
of ES are studied in Bertsimas, Lauprete & Samarov (2002), Rockafellar & Uryasev
(2000) and Tasche (2000).

¹ More detail about the H-D estimator will be discussed in Section 2.2.
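A minimal empirical sketch of this relationship (losses negative, left tail; the return series is simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=5000) * 0.01
alpha = 0.025

var_alpha = np.quantile(returns, alpha)           # VaR: the alpha-quantile (negative)
es_alpha = returns[returns <= var_alpha].mean()   # ES: mean return beyond (below) the VaR
print(var_alpha, es_alpha)
```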
VaR estimation is interesting, but so is estimation of entire tail distributions to
compute ES, as the ES is the first moment of the density over the region below
the corresponding VaR. An accurate estimator of the tail of a density function
carries accurate information both about extreme quantiles or percentiles, and about
conditional expectation. From this point of view, much research has been done
using parametric methods, in which a specified distribution is fitted to the observed
returns by calibrating the parameters. This method is, of course, very sensitive to
the distributional assumption. For instance, based on various simulation results
in Chang, Hung & Wu (2003) comparing empirical coverage, the average length of
confidence intervals and the variance of the length of confidence intervals, the H-D
estimator is preferred over parametric models in estimating the VaR.


2.1.2 Description of the Problem

There are several popular ways to estimate VaR based on a series of financial returns.
One of them is to use the parametric approach. The parametric approach assumes
that returns follow a specified distribution (e.g., a Normal distribution). Although this
approach may be able to exploit properties of the specified distribution, it involves
calibrating parameters. Also, the shape of the density lacks flexibility due to having
a small number of parameters for many popular models, such as the Normal and
Student-t distributions. One consideration here is that evidence of risky behaviour
in the data set (outliers) may adversely affect parameter estimation, even when a
robust method is used.
There are two popular methods to estimate model parameters: Method-of-Moments and Maximum Likelihood. In the Method-of-Moments approach, the

