
By Robert Fiedler

The market turmoil that began in mid-2007
re-emphasised the importance of liquidity to
the functioning of financial markets and the
banking sector. In advance of the turmoil,
asset markets were buoyant and funding was
readily available at low cost. The reversal in
market conditions illustrated how quickly
liquidity can evaporate and that illiquidity
can last for an extended period of time.
Financial regulators across the globe are
urging institutions to address this dimension
of financial risk more comprehensively.

‘Robert Fiedler is one of a handful of
thought leaders in the field of liquidity risk
management at financial institutions. He is
also one of the most experienced observers
and contributors. No one is better qualified
than Robert to address the topics in this book.’
Leonard Matz, Liquidity Risk Advisers

Liquidity Modelling

Liquidity risk is hard to understand. It needs
to be broken down into its components and
drivers in order to manage and model it
successfully.


In this comprehensive guide to modelling
liquidity risk, Robert Fiedler’s practical
approach equips the reader with the tools to
understand the components of illiquidity risk,
how they interact and, as a result, to build a
quantitative model to display, measure and
limit risk.
Liquidity Modelling is required reading
for financial market practitioners
who are dealing with liquidity risk
and who want to understand it.

PEFC Certified
This book has been
produced entirely from
sustainable papers that
are accredited as PEFC
compliant.
www.pefc.org







“fiedler_reprint” — 2012/8/10 — 13:44 — page i — #1




















































Published by Risk Books, a Division of Incisive Media Investments Ltd
Incisive Media
32–34 Broadwick Street
London W1A 2HG
Tel: +44(0) 20 7316 9000
E-mail:
Sites: www.riskbooks.com
www.incisivemedia.com
© 2011 Incisive Media
ISBN 978-1-906348-46-5
Reprinted with corrections 2012
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Publisher: Nick Carver
Commissioning Editor: Sarah Hastings
Managing Editor: Lewis O’Sullivan
Designer: Lisa Ling
Copy-edited and typeset by T&T Productions Ltd, London
Printed and bound in the UK by Berforts Group
Conditions of sale
All rights reserved. No part of this publication may be reproduced in any material form whether
by photocopying or storing in any medium by electronic means whether or not transiently
or incidentally to some other use for this publication without the prior written consent of
the copyright owner except in accordance with the provisions of the Copyright, Designs and
Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency

Limited of Saffron House, 6–10 Kirby Street, London EC1N 8TS, UK.
Warning: the doing of any unauthorised act in relation to this work may result in both civil
and criminal liability.
Every effort has been made to ensure the accuracy of the text at the time of publication; this
includes efforts to contact each author to ensure that their details at publication are correct.
However, no responsibility for loss occasioned to any person acting or refraining from acting
as a result of the material contained in this publication will be accepted by the copyright
owner, the editor, the authors or Incisive Media.
Many of the product names contained in this publication are registered trade marks, and Risk
Books has made every effort to print them with the capitalisation and punctuation used by the
trademark owner. For reasons of textual clarity, it is not our house style to use symbols such
as TM, ®, etc. However, the absence of such symbols should not be taken to indicate absence
of trademark protection; anyone wishing to use product names in the public domain should
first clear such use with the product owner.
While best efforts have been made in the preparation of this book, neither the publisher, the
editor nor any affiliated organisation accepts responsibility for any errors or omissions it
may contain, or for any losses howsoever arising from reliance upon its information by any
party.


















I dedicate this book to my father, Edvard Fiedler
1924–2005


































Contents

About the Author
Preface

1 Introduction
2 Setting the Scene: Why Liquidity Is Important in a Bank
3 What Is Liquidity Risk?
4 Illiquidity Risk: The Foundations of Modelling
5 Capturing Uncertainties
6 A Template for an Illiquidity Risk Solution
7 The Counterbalancing Capacity
8 Intra-Day Liquidity Risk
9 Liquidity Transfer Pricing and Limits
10 The Basel III Banking Regulation

Index


































About the Author

Robert Fiedler owns and runs Liquidity Risk Corp, which consults on methodology and
processes and also builds prototypes and IT solutions for liquidity risk.

In the first half of his career, Robert spent over a decade in the treasury/dealing rooms
of major international banks as a money market liquidity manager, trading interest rate
products and derivatives. Later he switched to risk management and developed Deutsche
Bank Group's liquidity risk methodology, on which he successfully built a global system
(LiMA) that measures and limits the bank's funding liquidity. Moving into software
development, Robert became country coordinator for Germany and executive director of
Asset Liability Management (ALM) and Liquidity Risk Solutions at Algorithmics Inc,
Toronto. Subsequently, he joined the board of Fernbach Software, Luxembourg, where he
oversaw the development of ALM, performance measurement, IFRS and liquidity risk
software. Throughout this time he continued to develop liquidity methodologies and to
teach his research results. Jointly with the University of St Gallen, Switzerland, he
developed a stochastic model that optimises the risk and return of investing non-maturing
assets and liabilities.

















Preface
This is a second impression of the first edition, which has been
revised to amend errors in mathematical notation and data tables.
PREFACE TO THE FIRST EDITION
After starting in a bank's money market and payment department in the 1990s, I drifted
to asset/liability management, then to trading and finally to market risk. In the summer
of 1997 I was asked to develop "a short methodology" for a system that would limit the
consumption of the bank's cash liquidity by the different business lines. Searching for
more information within the bank and in the literature of the time, my first finding was
that there was no consensus on the usage of the expression "liquidity": sometimes it
described the saleability of instruments, while elsewhere it described the bank's daily
cash position, the supply of central bank funds in a market, the bank's ability to acquire
cash and, indeed, various other things. My first action was to invent and market clumsy
naming conventions like "Expected Future Liquidity" (we would now call this the
"forward liquidity exposure" (FLE)), which enabled the different liquidity (risk) types
to be distinguished and thus allowed practitioners to agree on what type of liquidity was
relevant for the purpose.
The next step was for me to separate the liquidity forecast at the start of the day from
the forecast at the end of the day (which includes all the liquidity management actions
the bank carries out during its daily business). As the results were disappointing (the
bank would most likely simply square possible deficits or surpluses of the morning
forecast), I described a thought experiment in which the bank tries to generate as much
liquidity as it can, whether it needs the liquidity in the given situation or not, known
as the "counterbalancing capacity" (CBC). Obviously, the FLE reflects what can happen to
the bank, whereas the CBC describes what the bank could do against a potential shortfall.
This work took place around 1998, and although the idea is now widely accepted (eg, in
the high-quality-liquid-assets concept of Basel III), it took many years to convince the
risk management world that the liquidity risk buffer should not be capital (as is the
case for market, credit and operational risk) but the CBC.

At that time, value-at-risk (VaR) techniques had been commonly accepted for market risk,
and comparable credit VaR concepts were emerging. It was widely expected that a similar
derivation from the original VaR concept, liquidity-at-risk (LaR), would be the next
logical step. After taking up the basic LaR ideas (we would now call it "cashflow-at-risk")
and transforming them into the "expected LaR" concept, I had to realise that they do not
give any of the desired answers, such as "how big are the expected losses?" (as market VaR
does), but instead reflect merely the model risk of the cash-forecasting procedure. The
logical consequence seemed to be to reflect the value risk aspect and develop a
"value-liquidity-at-risk" concept but, doing this, I felt that the approach was going in
the wrong direction (not least because the acronyms were getting too complicated) and
tried to exploit other aspects of the VaR concept.
Obviously, the liquidity forecast is predominantly driven by future cashflows, which bear
uncertainty. In market risk approaches such as Monte Carlo techniques, the cashflows are
categorised (eg, into "fixed", "floating" and "optional") and then simulated accordingly.
A similar, more rigorous classification seemed to be the right way forward for liquidity
risk simulations. In hindsight it was the wrong way: more complex simulations (eg, for
credit risk events) cannot be worked out at the level of cashflows, but instead need to
be done at the level of transactions.
Another problem became apparent at that time: in market and credit risk, portfolios that
comprise existing transactions of the bank are investigated, but new, as-yet non-existent
transactions are ignored. In liquidity risk, however, we are not only interested in what
happens to the part of the balance sheet that already existed when the forecast was made;
we need to model hypothetical transactions as well. When hypothetical transactions were
introduced (with the CBC, which is created by hypothetical cashflows), they were not to
the liking of the risk community, which at that time was heavily focused on market and
credit risk. They were largely rejected as "unforecastable", or even speculative; the
situation only changed when asset/liability managers became interested in the underlying
balance-sheet simulation.
Another legacy from the risk management practices at this time
was the idea that the basis of all calculations should be one “most
likely” scenario. On the other hand, it was evident that, at least in

theory, doom could strike at any time and make all forecasts that had been made on a
going-concern basis obsolete. The solution was to introduce more pessimistic VaR measures
like conditional VaR, as well as to consider extreme scenarios (sometimes called stress
tests). Although this procedure is not wrong in principle, it obscured for some time the
realisation that all scenarios, including the "most likely" ones, are only theoretical,
fragile constructions.
For a long time it was impossible to convince risk managers and senior executives to make
risk forecasts scenario dependent. If multiple scenarios were considered, the outcomes of
the most likely scenarios were weighted with a high probability, whereas the extreme
scenarios were weighted with extremely low probabilities. Unsurprisingly, the overall
weighted scenario result stayed within boundaries the bank could digest, but only because
the weighting of the extreme scenarios was low enough. Failure was not an option.
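The dilution effect described above can be illustrated with invented figures: blending a fatal scenario in with a tiny weight makes the overall number look digestible, even though that scenario alone would sink the bank. The scenario names, outcomes and weights are hypothetical.

```python
# Hypothetical scenario outcomes (liquidity gap in EUR millions, negative = shortfall)
# and the probabilities assigned to them. All figures are invented for illustration.
scenarios = {
    "most likely": (-10.0, 0.90),
    "mild stress": (-50.0, 0.09),
    "extreme stress": (-2000.0, 0.01),
}

# Probability-weighted result: this is the single number boards were shown.
weighted = sum(outcome * weight for outcome, weight in scenarios.values())

print(f"probability-weighted gap: {weighted:.1f}")  # -33.5, which looks digestible
print(f"extreme scenario alone:   {scenarios['extreme stress'][0]:.1f}")
```

The weighted figure of -33.5 hides the -2,000 outcome almost entirely, which is exactly the blind spot the text describes.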
Another obstacle to development was the predominant view that liquidity risk is always
initiated by market, credit and operational risk realisations and is thus simply a
consequence of other risks. At that time this prevailing view led to a focus on questions
like "How can the results of credit risk modelling be transformed into liquidity
results?". After 2008 it was clear that liquidity risk could strike without an actual loss
occurring, triggered only by the fear that such a loss could potentially occur: why did
Société Générale survive and Fortis go under?
After the Y2K hysterics1 had calmed down, almost nothing happened in liquidity risk
development. Most banks were still concerned with operational risk concepts, because
these are part of the Basel II framework, and on the practical side the banks were too
busy implementing Basel II itself. When, on September 11, 2001, the (financial) world was
reminded of its vulnerability, there was a slight hiccup, but after the central banks had
successfully calmed the payment markets it was business as usual again.
In the following years liquidity risk was not a hot issue. Insiders
expected that capital and unresolved credit issues (but not liquidity
risk) would be on the agenda of the expected Basel III regulation.
Then, in 2007 the financial crisis hit the banks.
For most banks it was too late to do something about their liquidity
measurement systems before the peak of the crisis in late 2008. After
that, almost all banks started liquidity risk initiatives, but, as always,

those that would need these initiatives most were too busy with “fire
fighting”, and the others were waiting for the new Basel III rules to
be finalised. The summer of 2011 saw the banking market in panic
mode again and fighting just to survive.
OUTLOOK
The liquidity framework of Basel III has changed a lot since its initial proposal. Specialists at the regulatory bodies have started to
get to grips with liquidity risk concepts. Many banks have started
on actual projects to comply with the regulations. Risk managers
within the banks are realigning to liquidity risk, dragging academics
with them. Software vendors and consulting companies have not
yet taken many concrete steps, but they have identified the Basel III
regulation and economic liquidity risk as a target market.
The issue is hot. Let us all hope that the financial markets will not collapse and will
thus give us the time to develop these fascinating concepts further.
ACKNOWLEDGEMENTS
I would like to thank the fellow enthusiasts with whom I had inspiring conversations, namely Werner d’Haese and Jean-Marcel Phefunchal. I acknowledge the contribution of Darren Brooke and thank
him for his persuasion and guidance in penetrating my somewhat
nebulous ideas until we were able to transform these concepts into
a piece of working software. I also thank Matthias Küstner for his
relentless but still constructive feedback on every concept uttered
in the manuscript. He contributed many thoughts, especially on the
last four chapters, and without him this book would have probably
not been finished.
I acknowledge with gratitude the editorial assistance of Sarah Hastings and Lewis
O'Sullivan of Risk Books. Lastly, I must acknowledge my wife, Claudia, for her endless
patience during the writing of this book.
1 See />

1

Introduction
WHAT IS THIS BOOK ABOUT?
Liquidity risk is hard to understand. It needs to be broken down
into its components and drivers in order to manage and model it
successfully.

It is important to distinguish the specific risk that a bank cannot execute its
contractually agreed payments from the general concept of liquidity risk, which also
comprises the liquidity of financial products, markets, exchanges, etc.

This book is about the risk of banks losing their ability to stay liquid, that is, of
becoming illiquid, which we will call illiquidity risk. The term "illiquidity risk"
describes a status which can be understood straightforwardly ("liquid") and is then
endangered ("risk"); it does not require an explicit definition of the perils (a
subsiding funding base, declining saleability of assets, defaults on loans, etc) which
would be necessary if we were to talk about liquidity risk instead.
ILLIQUIDITY RISK: A RISK TYPE OF ITS OWN?
Many financial professionals regard illiquidity risk not as a risk in
itself but only as a “consequential risk” which comes into play after
traditional risks have materialised in a loss: if this loss is too big (and
becomes public), potential depositors start to doubt the bank’s financial soundness and ask for higher funding spreads or even refrain
from making a deposit, while existing depositors might withdraw
their funds. Subsequently, the bank’s profitability suffers from the
increased refinancing cost and in the worst case the bank ends up
insolvent or illiquid.
“First you lose the money, and then you go bust”: this view is not
wrong but incomplete. Illiquidity risk is more than a consequence of
losses. Banks can run into liquidity problems simply by having an
unfavourable relationship between assets and liabilities over time.

Let us look at banking history: in the summer of 2007 the prices of
so-called sub-prime securities fell dramatically. As a consequence,
funding spreads peaked for banks with large exposure in sub-prime
assets, and banks like IKB and Sachsen LB in Germany became
unable to acquire sufficient funds and finally had to be taken over
by larger banks. When funding spreads for banks with weak credit
ratings rallied, more and more banks tightened their policies on lending to other banks. In in the spring of 2008, however, bankers talked
wishfully at liquidity conferences about markets returning to “normality” with low funding spreads and fathomless liquidity. When,
contrary to expectation, funding spreads rose during the summer
break, most European governments felt forced to placate clients by
guaranteeing their retail deposits with banks. In autumn 2008 some
European banks could only survive with state guarantees or were
taken over by other banks.
At the beginning of the sub-prime crisis, the liquidity risks developed as the textbooks
on consequential risk describe: the realised or potential losses from the sub-prime
securities led to hindered funding and higher costs. In the summer of 2008 the crisis
peaked, almost every bank was put under general suspicion and unsecured lending between
banks almost dried up. But why did some banks go under, and why and how did others
survive? The answer is not straightforward, as there are factors which are extrinsic from
the bank's point of view, like the grading it gets from rating agencies or the
willingness and ability of its government to rescue it. The intrinsic factors, however,
are less opaque. First, banks that had funded their assets with liabilities of equivalent
duration were far less obliged to acquire refinancing in the markets, independently of
the quality of their assets. Second, banks with sufficiently large portfolios of
unencumbered liquid assets were able to generate sufficient secured funding independently
of their credit standing. Third, only banks whose short-term liabilities did not match
their long-term assets (which themselves could not be used for secured funding) had to
submit to a potential creditor's decision whether or not to give them money.
Illiquidity risk is a risk type of its own. It can result from other risks
or be intertwined with them. Market risk, for example, can materialise as a loss stemming from the illiquidity of assets (insufficient
saleability at the “right” price), but a bank’s inability to sufficiently

fund itself can lead to fire sales and thus to the materialisation of
market risk.
MEASURING ILLIQUIDITY RISK: WHAT IS THE PROBLEM?
Banks’ risk management practices have undergone fundamental
changes in the two decades since the 1990s. The continuous development of advanced statistical techniques has improved the quantification of risks. Market and credit risks are defined as potentially
detrimental effects resulting from the uncertainty in determining
the value of portfolios. Inappropriate handling of a bank’s business might worsen the situation (operational risk). Consequently,
these risks are expressed as distributions of losses (the potential
value minus the current value) which are derived by historical statistical inference. Finally, the complexity is reduced by using specific
moments of these distributions, known as value-at-risk (VaR).
Risk professionals attempted to apply these techniques to illiquidity risk and to model
it in such a way that the result is a single number, liquidity-at-risk (LaR).
Unfortunately, these concepts turned out to be unusable in the 2008 crisis and have since
almost disappeared from public discussion among risk professionals. There are two main
reasons for the inadequacy of this approach:
1. illiquidity risk can only inadequately be expressed as value
risk;
2. historical statistical inference is problematic because illiquidity
risk emerges only in situations when behaviours and markets
which have been stable for long periods suddenly leap.
The historical risk models failed during the global financial crisis.
Why? Because these models were designed to answer the question
“what could go wrong in the future?”

These models therefore focus on the market variables that have
fluctuated in the past and tend to filter out variables which have so
far been inconspicuous. In particular, if financial markets have been
working properly for a long time, we cannot know which variables
will remain steady, even in situations of stress, and which might,
possibly together with others, change dramatically. Unfortunately,
we do not have the cognitive faculty to discriminate which historically stable variables will become unstable and which will remain
stable.

To ensure the inclusion of risks which come to the fore as a result of debatable
assumptions, it would be better to ask "which market conditions must prevail to allow a
transaction to mature as scheduled?", or, more broadly in the context of this book, "what
assumptions must be made to allow a bank to develop its balance sheet as planned?"
In the past too many banks made the fatal assumption that everything would stay as it
was: markets liquid, assets and liabilities being traded at their "fair" value, and the
bank never needing to violate its own business model and unwind financial transactions
in order to free cash. All would go smoothly, the sophisticated banks would generate
fantastic returns by inventing complex derivatives and even the less sophisticated banks
would make good money, so why bother?
However, it did not go at all smoothly in the summer of 2007, so
now we have to “bother” and examine why.
COMPARING THE MEASUREMENT OF LIQUIDITY AND OTHER RISKS
We shall measure illiquidity risk a little differently from how we
measure “normal” risks: market, credit or operational risks are loss
risks (or better, value risks) that deal with expected losses which are
captured by directly estimating a detrimental event’s probability
of occurrence, or sometimes by assessing the worst event which is
likely to happen with a certain probability.
For this purpose, the value of a portfolio is constructed as a function of market and
counterparty variables. Any realisation of such a variable results in a corresponding
value that can be seen as a loss or profit compared with the current value. As not all
possible realisations will occur with identical probability, probability distributions
need to be estimated, usually by projecting historically observed distributions into the
future (and often "calibrating" them to fit risk neutrality). "Inserting" these
distributions into the value function gives the sought value distribution. From this we
can derive typical results, such as:
• the probability of losing more than €1 million is X;
• the probability of this client defaulting is Y;
• given an error probability of 0.5%, the maximum loss in this portfolio will not
exceed €Z.
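Value-risk statements of this kind can be sketched with a small Monte Carlo exercise. The portfolio and its normally distributed P&L are assumptions chosen purely for illustration, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical P&L distribution for a portfolio, in EUR: 100,000 simulated
# outcomes with zero mean and a standard deviation of EUR 1 million.
pnl = rng.normal(loc=0.0, scale=1_000_000, size=100_000)

# "The probability of losing more than EUR 1 million is X":
# the fraction of simulated outcomes below -1,000,000.
p_loss_gt_1m = np.mean(pnl < -1_000_000)

# "Given an error probability of 0.5%, the maximum loss will not exceed EUR Z":
# the 99.5% VaR, ie, the loss level exceeded in only 0.5% of scenarios.
var_995 = -np.quantile(pnl, 0.005)

print(f"P(loss > EUR 1m) ~ {p_loss_gt_1m:.3f}")
print(f"99.5% VaR ~ EUR {var_995:,.0f}")
```

For this normal distribution the first figure sits near 16% and the VaR near EUR 2.6 million; the point is only the shape of the statements, not the numbers.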
In illiquidity risk, however, we will not be able to give comparable
answers like “The probability of the bank becoming insolvent is X”.
The reasons are manifold, and include the following:
• providing such a probabilistic answer would require the estimation of an overwhelming
number of stochastic variables together with their interactions;
• many variables, eg, a counterparty's willingness to supply additional funding, cannot
be deduced from statistics and expressed in probability distributions;
• in illiquidity risk we are particularly interested in changes of market regimes and
would therefore need to model erratic changes in the probability distributions explicitly
(unlike in market and credit risk, where we normally assume the distributions to be
invariant over time);
• the impact of illiquidity risk is almost binary: a cash shortfall, as the
crystallisation of illiquidity risk, is not detrimental as long as the bank can
compensate for it with additional borrowing; if, however, the shortfall reaches the
tipping point and becomes too large to be offset, the damage is huge. By contrast, even a
small loss stemming from the crystallisation of market risk is detrimental.
For market and credit risk the detrimental events themselves are
less interesting, as an event’s impact is measured in terms of potential losses (or at least missed profit opportunity). For operational
risk, however, it is less straightforward to express harmful developments in losses. Problems regarding a bank’s reputation, for example, are clearly “damaging” in themselves, but the effects might be
very difficult, if not impossible, to quantify, even after they have
occurred.
For illiquidity risk, liquidity events can even result in profits,
although they are undoubtedly of a “generic” disadvantageous
nature. It is undeniably a detrimental effect if, for example, a counterparty misses a payment to the bank; but if the bank has at that
time an excess of liquidity, the missed payment reduces the bank’s
risk of earning only a lower than normal interest rate. If, however,
the bank is short of liquidity, the non-payment could trigger a serious liquidity
problem, potentially including negative profit and loss (P&L) effects and even
illiquidity.
WHAT IS COVERED BY THIS BOOK?
Liquidity, or liquidity risk, measurement and management is a wide field, which so far is
only scarcely covered by published concepts. The most forthright approach has been to
apply statistical methods (VaR), stemming from the concepts developed since 2000 for
measuring value risks like market and credit risk, to cover liquidity risk. Although many
banks hoped for the development of such an LaR concept covering their complete liquidity
risk measurement, it turned out that these concepts can only be applied to
liquidity-induced value risks that stem from increasing funding costs or from decreasing
saleability of financial assets.
Here, instead, our focus is on illiquidity risk: the risk of a bank
becoming unable to satisfy its contractual obligations. The forward
liquidity exposure, which is the forecasted balance the bank has with
its central bank (“nostro”), is used as an indicator of its future cash
liquidity situation. The concept is kept flexible enough to produce
different possible versions of the future (scenarios) instead of trying
to approximate a conjectured single version of the truth. As capital
cannot be used as a buffer for illiquidity risk, the bank’s ability to
generate new inflows (the “counterbalancing capacity”) is estimated
using a specific strategy for offsetting potential liquidity squeezes.
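The two central quantities can be sketched numerically. The opening balance, net cashflows and CBC figures below are invented for illustration only; they are not the book's data or algorithm.

```python
# Forward liquidity exposure (FLE): the forecast nostro balance, obtained by
# accumulating expected net cashflows onto the opening central-bank balance.
# Counterbalancing capacity (CBC): extra cash the bank could generate per date.
# All figures (EUR millions) are invented for illustration.

opening_nostro = 50.0
net_cashflows = [10.0, -80.0, -30.0, 120.0]  # expected net flows, day by day
cbc = [40.0, 40.0, 40.0, 70.0]               # generatable cash, day by day

fle = []
balance = opening_nostro
for cf in net_cashflows:
    balance += cf
    fle.append(balance)

# In this sketch the bank stays liquid as long as FLE + CBC >= 0 on every date;
# a date where even the CBC cannot cover the forecast shortfall is flagged.
shortfall_dates = [t for t, (f, c) in enumerate(zip(fle, cbc)) if f + c < 0]

print("FLE:", fle)                                  # [60.0, -20.0, -50.0, 70.0]
print("uncovered shortfalls on days:", shortfall_dates)  # [2]
```

Day 2 illustrates the point made above: the negative FLE on day 1 is harmless because the CBC covers it, while the deeper dip on day 2 exceeds what the bank could generate.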
In Chapter 2 we explain how a bank's economic success is measured and steered and why
liquidity is an essential prerequisite for the functioning of this approach. The role of
capital for other risk types is explained, and we establish why capital cannot be used as
a buffer for liquidity risk and why the counterbalancing capacity needs to be introduced.
Chapter 3 defines what is understood as liquidity risk in this book and separates the
different types of liquidity risk from each other. The suitability of established
statistical measurement techniques for different liquidity risk types is analysed, with
the conclusion that none really fits illiquidity risk.
The foundations of modelling liquidity risk are laid out in Chapter 4. The elements in a
bank's existing balance sheet that influence the liquidity forecast are more formally
introduced: the nostro,

transactions and their cashflows as forecasts for future payments, as well as asset flows
(which can constitute collateral) and option flows (which might institute new financial
transactions). The forward liquidity exposure's uncertainties are separated into those
that depend on the variation of the cashflows of existing transactions and those which
stem from transactions that do not yet exist. Illiquidity is measured with a basic
inequality that disregards uncertainty; this is later enhanced step by step to cover
uncertainties.
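As a rough sketch of what such an inequality looks like, in notation of our own (the book's exact symbols may differ): the forecast nostro balance must remain non-negative at every horizon, and later refinements add scenario dependence and the counterbalancing capacity.

```latex
% Deterministic illiquidity condition: the forward liquidity exposure
% (the forecast nostro balance) must be non-negative at every horizon t.
\mathrm{FLE}(t) \;\ge\; 0 \qquad \text{for all } t > 0
% Enhanced, scenario-dependent version including the counterbalancing
% capacity (CBC), for each scenario s:
\mathrm{FLE}_s(t) + \mathrm{CBC}_s(t) \;\ge\; 0 \qquad \text{for all } t > 0
```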
In Chapter 5, uncertainties are modelled in more detail. We explain how cashflows of
existing transactions are first considered deterministic but are then adjusted due to
influencing market factors (variable cashflows) or due to the counterparty's exercise of
options (contingent cashflows). The inappropriateness of forward rates as predictors of
future interest rates is discussed, and we outline a possible solution which quantifies
the resulting uncertainty (forecast-at-risk). We then explain the kind of optionality by
which the generation of hypothetical transactions is driven and the choices that the bank
faces in influencing its future balance sheet, going beyond the established concepts of
financial options to consider breach options and rejectable options.
In Chapter 6 the different components of the analysis and the underlying concepts are synthesised in order to describe how a bank can develop a process that models illiquidity risk and build a solution which systematically gathers the necessary information to measure and manage illiquidity risk.
The main element of this solution is the simulation of scenarios. Exposure scenarios simulate situations that the bank could encounter, without assuming that it would modify the situation, whereas strategy scenarios describe measures which the bank actively tries to execute in order to avert perils. We show how the creation of scenarios can be systematised to simplify and structure the variety of scenario assumptions. All transactions that are assumed to behave similarly in a scenario are aggregated into a liquidity unit, which allows us to handle the aggregated transactions jointly instead of individually. Liquidity units give a bird's eye view of the balance sheet and isolate the drivers of liquidity.
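The liquidity-unit idea can be illustrated with a toy aggregation; the unit names, balances and run-off factors below are invented for illustration and are not figures from the text:

```python
from collections import defaultdict

# Illustrative sketch: transactions carrying similar behaviour are
# grouped into "liquidity units"; a scenario then assumes one joint
# behaviour (here a simple outflow factor) per unit instead of per
# transaction.
transactions = [
    {"unit": "retail_deposits",   "balance": 800.0},
    {"unit": "retail_deposits",   "balance": 200.0},
    {"unit": "wholesale_funding", "balance": 500.0},
]
stress_runoff = {"retail_deposits": 0.10, "wholesale_funding": 0.75}

units = defaultdict(float)
for t in transactions:
    units[t["unit"]] += t["balance"]          # aggregate into units

outflow = {u: bal * stress_runoff[u] for u, bal in units.items()}
print(outflow)   # {'retail_deposits': 100.0, 'wholesale_funding': 375.0}
```

The scenario assumption is applied once per unit, which is what makes the bird's eye view tractable.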
After some technical indications of how to methodically set up
portfolios and hierarchies, we extend the concept of cashflows and
inventories to asset and option flows and inventories, which enables
us to describe the mechanics of future balance-sheet progressions
consistently. To finalise this modelling we produce a taxonomy
which methodically explains how the exercise of various option
types can result in transactions and cash, asset and option flows.
In Chapter 7 we give a detailed plan of how the counterbalancing capacity can be implemented in practice. The abstract problem of
how the bank can mitigate illiquidity risks by engendering hypothetical transactions that create additional cash liquidity is examined and
broken down into its components. For practical reasons the problem
is then restricted to hypothetical repos or sales of unencumbered liquid assets. First, the bank's ownership and possession of a security needs to be constructed from the existing transactions in that security and projected into the future. Then the securities are segmented into liquifiability classes, special liquidity units that comprise all securities that behave similarly in a scenario. The liquification algorithm creates cash inflows in a thought experiment that moves forward in time step by step: a portion of the securities in each liquifiability class is sold and the reduced inventories are repoed until the next business day, when the next portion is hypothetically sold, and so on. Finally, we explain
how the counterbalancing capacity (CBC) fits technically into the
forward liquidity exposure (FLE), which assesses the problem in a
wider perspective.
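The step-by-step thought experiment can be sketched as follows. The class definitions, haircuts and daily saleable fractions are invented for illustration, and the algorithm in the text is considerably richer than this:

```python
def liquify(classes, horizon_days):
    """Sketch of a day-by-day liquification.

    Each liquifiability class holds an inventory (market value).  On
    every day a fraction of the remaining inventory is assumed sold
    (generating cash), and the rest is assumed repoed overnight at a
    haircut (generating cash today that is repaid the next day).
    Returns the net extra cash generated per day; the repayment of
    the final day's repo falls outside the horizon and is ignored.
    """
    daily_cash = []
    prev_repo = 0.0                     # overnight repo to repay today
    for _day in range(horizon_days):
        cash = -prev_repo               # repay yesterday's repo
        prev_repo = 0.0
        for c in classes:
            sale = c["inventory"] * c["sell_fraction"]
            c["inventory"] -= sale
            cash += sale                             # sale proceeds
            repo = c["inventory"] * (1 - c["haircut"])
            cash += repo                             # new overnight repo cash
            prev_repo += repo
        daily_cash.append(round(cash, 2))
    return daily_cash

classes = [
    {"inventory": 100.0, "sell_fraction": 0.5, "haircut": 0.02},
    {"inventory": 50.0,  "sell_fraction": 0.2, "haircut": 0.10},
]
print(liquify(classes, 3))
```

The first day produces a large inflow (sales plus the first repo), after which each day's new repo roughly offsets the repayment of the previous one, leaving the sale proceeds as the net gain.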
In Chapter 8 the liquidity risk model built so far is enhanced by refining the time granularity from daily to continuous, which allows the payments of today to be integrated into the FLE. The liquidity risks related to the payment process that have so far been ignored are classified and investigated, leading to a broader and deeper analysis of the liquidity risk measurement process. The idealised payment process that was previously sufficient is represented in greater complexity, and information risks are considered. Risks that stem from the payment services the bank carries out for others, as well as the idiosyncratic risks of the payment processes (or venues), are examined and we analyse their possible mitigation.
In Chapter 9 we give a brief overview of practices used by
banks to price internal deals (funds transfer pricing). The most advanced method, individual transfer pricing of single transactions, is described, and we sketch how components of uncertainty (premiums for different risk types) can be included. The cost effects that are
specific to liquidity risk are modelled and integrated, allowing the transfer-pricing process to accommodate the upcoming requirements that stem from the rules of Basel III.
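Individual transfer pricing with risk premiums can be pictured as follows; the curve points and premium figures are invented example numbers, and the decomposition is an illustration rather than the book's pricing scheme:

```python
# Illustrative single-transaction transfer price: a matched-maturity
# base funding rate plus additive premiums for liquidity-specific
# risk components.
BASE_CURVE = {1: 0.010, 3: 0.015, 5: 0.020}    # tenor (years) -> rate

def transfer_price(tenor_years, liquidity_premium=0.0030,
                   contingency_premium=0.0010):
    base = BASE_CURVE[tenor_years]             # matched-maturity funding cost
    return base + liquidity_premium + contingency_premium

print(f"{transfer_price(5):.4%}")   # 5y base rate plus 40bp of premiums
```

Keeping each premium as a separate addend makes it straightforward to attribute the all-in internal rate to its risk drivers.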
In Chapter 10 the liquidity regulations of Basel III at the time of writing are outlined and we investigate their impact on banks' forthcoming business models. We evaluate how a bank can improve detrimental short-term and longer-term liquidity ratios, and the consequences for its balance sheet and its business model. The resulting cost effects are scrutinised, with reference to the previous chapter on transfer pricing.
