
Palgrave Handbook of Econometrics




Palgrave Handbook of
Econometrics
Volume 2: Applied Econometrics

Edited By

Terence C. Mills
and

Kerry Patterson


Editorial and selection matter © Terence C. Mills and Kerry Patterson 2009
Individual chapters © Contributors 2009
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No paragraph of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorised act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The author has asserted his right to be identified as the author of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published in 2009 by
PALGRAVE MACMILLAN
PALGRAVE MACMILLAN in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
PALGRAVE MACMILLAN in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
PALGRAVE MACMILLAN is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN: 978–1–4039–1799–7 hardback
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.
10 9 8 7 6 5 4 3 2 1
18 17 16 15 14 13 12 11 10 09
Printed and bound in Great Britain by
CPI Antony Rowe, Chippenham and Eastbourne


Contents

Notes on Contributors    viii

Editors' Introduction    xi

Part I   The Methodology and Philosophy of Applied Econometrics
1   The Methodology of Empirical Econometric Modeling: Applied Econometrics Through the Looking Glass
    David F. Hendry, Nuffield College, Oxford University    3
2   How Much Structure in Empirical Models?
    Fabio Canova, Universitat Pompeu Fabra    68
3   Introductory Remarks on Metastatistics for the Practically Minded Non-Bayesian Regression Runner
    John DiNardo, University of Michigan    98

Part II   Forecasting
4   Forecast Combination and Encompassing
    Michael P. Clements, Warwick University, and David I. Harvey, School of Economics, University of Nottingham    169
5   Recent Developments in Density Forecasting
    Stephen G. Hall, University of Leicester, and James Mitchell, National Institute of Economic and Social Research    199

Part III   Time Series Applications
6   Investigating Economic Trends and Cycles
    D.S.G. Pollock, University of Leicester    243
7   Economic Cycles: Asymmetries, Persistence, and Synchronization
    Joe Cardinale, Air Products and Chemicals, Inc., and Larry W. Taylor, College of Business and Economics, Lehigh University    308
8   The Long Swings Puzzle: What the Data Tell When Allowed to Speak Freely
    Katarina Juselius, University of Copenhagen    349
9   Structural Time Series Models for Business Cycle Analysis
    Tommaso Proietti, University of Rome ‘Tor Vergata’    385
10  Fractional Integration and Cointegration: An Overview and an Empirical Application
    Luis A. Gil-Alana and Javier Hualde, Universidad de Navarra    434

Part IV   Cross-section and Panel Data Applications
11  Discrete Choice Modeling
    William Greene, Stern School of Business, New York University    473
12  Panel Data Methods and Applications to Health Economics
    Andrew M. Jones, University of York    557
13  Panel Methods to Test for Unit Roots and Cointegration
    Anindya Banerjee, University of Birmingham, and Martin Wagner, Institute for Advanced Studies, Vienna    632

Part V   Microeconometrics
14  Microeconometrics: Current Methods and Some Recent Developments
    A. Colin Cameron, University of California, Davis    729
15  Computational Considerations in Empirical Microeconometrics: Selected Examples
    David T. Jacho-Chávez and Pravin K. Trivedi, Indiana University    775

Part VI   Applications of Econometrics to Economic Policy
16  The Econometrics of Monetary Policy: An Overview
    Carlo Favero, IGIER-Bocconi University    821
17  Macroeconometric Modeling for Policy
    Gunnar Bårdsen, Norwegian University of Science and Technology, and Ragnar Nymoen, University of Oslo    851
18  Monetary Policy, Beliefs, Unemployment and Inflation: Evidence from the UK
    S.G.B. Henry, National Institute of Economic and Social Research    917

Part VII   Applications to Financial Econometrics
19  Estimation of Continuous-Time Stochastic Volatility Models
    George Dotsis, Essex Business School, University of Essex, Raphael N. Markellos, Athens University of Economics and Business, and Terence C. Mills, Loughborough University    951
20  Testing the Martingale Hypothesis
    J. Carlos Escanciano, Indiana University, and Ignacio N. Lobato, Instituto Tecnológico Autónomo de México    972
21  Autoregressive Conditional Duration Models
    Ruey S. Tsay, Booth School of Business, University of Chicago    1004
22  The Econometrics of Exchange Rates
    Efthymios G. Pavlidis, Ivan Paya, and David A. Peel, Lancaster University Management School    1025

Part VIII   Growth Development Econometrics
23  The Econometrics of Convergence
    Steven N. Durlauf, University of Wisconsin-Madison, Paul A. Johnson, Vassar College, New York State, and Jonathan R.W. Temple, Bristol University    1087
24  The Methods of Growth Econometrics
    Steven N. Durlauf, University of Wisconsin-Madison, Paul A. Johnson, Vassar College, New York State, and Jonathan R.W. Temple, Bristol University    1119
25  The Econometrics of Finance and Growth
    Thorsten Beck, European Banking Center, Tilburg University, and CEPR    1180

Part IX   Spatial Econometrics
26  Spatial Hedonic Models
    Luc Anselin, School of Geographical Sciences and Urban Planning, and Nancy Lozano-Gracia, GeoDa Center for Geospatial Analysis and Computation, Arizona State University    1213
27  Spatial Analysis of Economic Convergence
    Sergio J. Rey, Arizona State University, and Julie Le Gallo, Université de Franche-Comté    1251

Part X   Applied Econometrics and Computing
28  Testing Econometric Software
    B.D. McCullough, Drexel University    1293
29  Trends in Applied Econometrics Software Development 1985–2008: An Analysis of Journal of Applied Econometrics Research Articles, Software Reviews, Data and Code
    Marius Ooms, VU University Amsterdam    1321

Author Index    1349

Subject Index    1374


Notes on Contributors
Luc Anselin is Foundation Professor of Geographical Sciences and Director
of the School of Geographical Sciences and Urban Planning at Arizona State
University, USA.

Anindya Banerjee is Professor of Econometrics at the University of Birmingham, UK.

Gunnar Bårdsen is Professor of Economics at the Norwegian University of Science
and Technology, Norway.
Thorsten Beck is Professor of Economics and Chair at the European Banking
Center, Tilburg University and Research Fellow, CEPR.
Colin Cameron is Professor of Economics at the University of California,
Davis, USA.
Fabio Canova is ICREA Research Professor in Social Science at Universitat Pompeu
Fabra, Barcelona, Spain.
Joe Cardinale is a Manager, Economics at Air Products and Chemicals, Inc., USA.
Michael P. Clements is Professor of Economics at Warwick University, UK.
John DiNardo is Professor of Economics and Public Policy at the University of

Michigan, Ann Arbor, USA.
George Dotsis is Lecturer in Finance at the Essex Business School, University of
Essex, UK.
Steven N. Durlauf is Professor of Economics at the University of Wisconsin-Madison, USA.
Juan Carlos Escanciano is Assistant Professor of Economics at Indiana University,
Bloomington, USA.
Carlo A. Favero is Professor of Economics at IGIER-Bocconi University, Italy.
Julie Le Gallo is Professor of Economics and Econometrics at the Université de
Franche-Comté, France.
Luis A. Gil-Alana is Professor of Econometrics at the University of Navarra, Spain.
William Greene is Professor of Economics at the Stern School of Business,
New York, USA.
Stephen G. Hall is Professor of Economics at University of Leicester, UK.
David I. Harvey is Reader in Econometrics at the School of Economics, University
of Nottingham, UK.
David F. Hendry is Professor of Economics and Fellow, Nuffield College, Oxford
University, UK.


Brian Henry is Visiting Fellow at the National Institute of Economic and Social
Research, NIESR, UK.
Javier Hualde is Ramon y Cajal Research Fellow in Economics at the Public
University of Navarra, Spain.
David Jacho-Chávez is Assistant Professor of Economics at Indiana University,

Bloomington, USA.
Paul A. Johnson is Professor of Economics at Vassar College, New York State, USA.
Andrew M. Jones is Professor of Economics at the University of York, UK.
Katarina Juselius is Professor of Empirical Time Series Econometrics at the
University of Copenhagen, Denmark.
Ignacio N. Lobato is Professor of Econometrics at the Instituto Tecnológico
Autónomo de México, Mexico.
Nancy Lozano-Gracia is Postdoctoral Research Associate in the GeoDa Center for
Geospatial Analysis and Computation at Arizona State University, USA.
Raphael N. Markellos is Assistant Professor of Quantitative Finance at the Athens
University of Economics and Business (AUEB), Greece.
Bruce D. McCullough is Professor of Decision Sciences and Economics at Drexel
University, Philadelphia, USA.
Terence C. Mills is Professor of Applied Statistics and Econometrics at
Loughborough University, UK.
James Mitchell is Research Fellow at the National Institute of Economic and Social
Research, UK.
Ragnar Nymoen is Professor of Economics at University of Oslo, Norway.
Marius Ooms is Associate Professor of Econometrics at the VU University,
Amsterdam, The Netherlands.
Kerry Patterson is Professor of Econometrics at the University of Reading, UK.
Efthymios G. Pavlidis is Lecturer in Economics at the Lancaster University
Management School, Lancaster University, UK.
Ivan Paya is Senior Lecturer in Economics at the Lancaster University Management
School, Lancaster University, UK.
David A. Peel is Professor in Economics at the Lancaster University Management
School, Lancaster University, UK.
D. Stephen G. Pollock is Professor of Economics at the University of Leicester, UK.
Tommaso Proietti is Professor of Economic Statistics at the University of Rome
‘Tor Vergata’, Italy.

Sergio Rey is Professor of Geographical Sciences at Arizona State University, USA.
Larry W. Taylor is Professor of Economics at the College of Business and
Economics, Lehigh University, Pennsylvania, USA.



Jonathan R.W. Temple is Professor of Economics at Bristol University, UK.
Pravin Trivedi is Professor of Economics at Indiana University, Bloomington, USA.
Ruey S. Tsay is Professor of Econometrics and Statistics at the University of Chicago
Booth School of Business, USA.
Martin Wagner is Senior Economist at the Institute for Advanced Studies in
Vienna, Austria.


Editors’ Introduction
Terence C. Mills and Kerry Patterson

The Palgrave Handbook of Econometrics was conceived to provide an understanding of major developments in econometrics, both in theory and in application.
Over the last twenty-five years or so, econometrics has grown in a way that few
could have contemplated, and it became clear to us, as to others, that no single person could have command either of the range of technical knowledge that
underpins theoretical econometric developments or the extent of the application
of econometrics. In short, econometrics is not, as it used to be considered, a set of
techniques that is applied to a previously well-defined problem in economics; it is
not a matter of finding the “best” estimator from a field of candidates, applying
that estimator and reporting the results. The development of economics is now
inextricably entwined with the development of econometrics.
The first Nobel Prize in Economics was awarded to Ragnar Frisch and Jan Tinbergen, both of whom made significant contributions to what we now recognize as
applied econometrics. More recently, Nobel Prizes in Economics have been awarded
to Clive Granger, Robert Engle, James Heckman and Daniel McFadden, who have

all made major contributions to applied econometrics. It is thus clear that the discipline has recognized the influential role of econometrics, both theoretical and
applied, in advancing economic knowledge.
The aim of this volume is to make major developments in applied econometrics accessible to those outside their particular field of specialization. The response
to Volume 1 was universally encouraging and it has become clear that we were
fortunate to be able to provide a source of reference for others for many years to
come. We hope that this high standard is continued and achieved here. Typically,
applied econometrics, unlike theoretical econometrics, has always been rather
poorly served for textbooks, making it difficult for both undergraduate and postgraduate students to get a real “feel” for how econometrics is actually done. To
some degree, the econometric textbook market has responded, so that now the
leading textbooks include many examples; even so, these examples typically are
of an illustrative nature, focusing on simple points, simply exposited, rather than
on the complexity that is revealed in practice. Thus our hope is that this volume will provide a genuine entry into the detailed considerations that have to be


made when combining economics and econometrics in order to carry out serious
empirical research.
As in the case of Volume 1, the chapters here have been specially commissioned
from acknowledged experts in their fields; further, each of the chapters has been
reviewed by the editors, one or more of the associate editors and a referee. Thus,
the process is akin to submission to a journal; however, whilst ensuring the highest
standards in the evaluation process, the chapters have been conceived of as part
of a whole rather than as a set of unrelated contributions. It has not, however,
been our intention to provide just a series of surveys or overviews of some areas of
applied econometrics, although the survey element is directly or indirectly served
in part here. By its very nature, this volume is about econometrics as it is applied
and, to succeed in its aim, the contributions, conceived as a whole, have to meet

this goal.
We have organized the chapters of this volume of the Handbook into ten parts.
The parts are certainly not watertight, but serve as a useful initial organization of
the central themes. Part I contains three chapters under the general heading of
“The Methodology and Philosophy of Applied Econometrics.” The lead chapter is
by David Hendry, who has been making path-breaking contributions in theoretical
and applied econometrics for some forty years or so. It is difficult to conceive how
econometrics would have developed without David’s many contributions. This
chapter first places the role of applied econometrics in an historical context and
then develops a theory of applied econometrics. As might be expected, the key
issues are confronted head-on.
In introducing the first volume we noted that the “growth in econometrics is to
be welcomed, for it indicates the vitality and importance of the subject. Indeed,
this growth and, arguably, the dominance over the last ten or twenty years of
econometric developments in taking economics forward, is a notable change from
the situation faced by the subject some twenty-five years or so ago.” Yet in Chapter
1, Hendry notes that, next to data measurement, collection and preparation, on the
one hand, and teaching, on the other, “Applied Econometrics” does not have a high
credibility in the profession. Indeed, whilst courses in theoretical econometrics or
econometric techniques are de rigueur for a good undergraduate or Masters degree,
courses in applied econometrics have no such general status.
The intricacies, possibly even alchemy (Hendry, 1980), surrounding the mix of
techniques and data seem to defy systematization; perhaps they should be kept
out of the gaze of querulous students, who may – indeed should – be satisfied with
illustrative examples! Often to an undergraduate or Masters student undertaking a
project, applied econometrics is the application of econometrics to data, no more,
no less, with some relief if the results are at all plausible. Yet, in contrast, leading
journals, for example, the Journal of Econometrics, the Journal of Applied Econometrics and the Journal of Business and Economic Statistics, and leading topic journals,
such as the Journal of Monetary Economics, all publish applied econometric articles
having substance and longevity in their impact and which serve to change the

direction of the development of econometric theory (for a famous example, see
Nelson and Plosser, 1982). To some, applying econometrics seems unsystematic



and so empirical results are open to question; however, as Hendry shows, it is
possible to formalize a theory of applied econometrics which provides a coherent basis for empirical work. Chapter 1 is a masterful and accessible synthesis and
extension of Hendry’s previous ideas and is likely to become compulsory reading
for courses in econometrics, both theory and applied; moreover, it is completed by
two applications using the Autometrics software (Doornik, 2007). The first application extends the work of Magnus and Morgan (1999) on US food expenditure,
which was itself an update of a key study by Tobin (1950) estimating a demand
function for food. This application shows the Autometrics algorithm at work in a
simple context. The second application extends the context to a multiple equation
setting relating industrial output, the number of bankruptcies and patents, and real
equity prices. These examples illustrate the previously outlined theory of applied
econometrics combined with the power of the Autometrics software.
In Chapter 2, Fabio Canova addresses the question of how much structure there
should be in empirical models. This has long been a key issue in econometrics, and
some old questions, particularly those of identification and the meaning of structure, resurface here in a modern context. The last twenty years or so have seen
two key developments in macroeconometrics. One has been the development of
dynamic stochastic general equilibrium (DSGE) models. Initially, such models were
calibrated rather than estimated, with the emphasis on “strong” theory in their
specification; however, as Canova documents, more recently likelihood-based estimation has become the dominant practice. The other key development has been
that of extending the (simple) vector autoregression (VAR) to the structural VAR
(SVAR) model. Although both approaches involve some structure, DSGE models,
under the presumption that the model is correct, rely more on an underlying theory than do SVARs. So which should be used to analyze a particular set of problems?
As Canova notes: “When addressing an empirical problem with a finite amount of

data, one has . . . to take a stand on how much theory one wants to use to structure
the available data prior to estimation.” Canova takes the reader through the advantages and shortcomings of these methodologies; he provides guidance on what to
do, and what not to do, and outlines a methodology that combines elements of
both approaches.
In Chapter 3, John DiNardo addresses some philosophical issues that are at
the heart of statistics and econometrics, but which rarely surface in econometric textbooks. As econometricians, we are, for example, used to working within
a probabilistic framework, but we rarely consider issues related to what probability actually is. To some degree, we have been prepared to accept the axiomatic
or measure-theoretic approach to probability, due to Kolmogorov, and this has
provided us with a consistent framework that most are happy to work within.
Nevertheless, there is one well-known exception to this unanimity: when it comes
to the assignment and interpretation of probability measures and, in particular, the
interpretation of some key conditional probabilities; this is whether one adopts a
Bayesian or non-Bayesian perspective. In part, the debate that DiNardo discusses
relates to the role of the Bayesian approach, but it is more than this; it concerns
metastatistics and philosophy, because, in a sense, it relates to a discussion of the



theory of theories. This chapter is deliberately thought-provoking and certainly
controversial – two characteristics that we wish to encourage in a Handbook that
aims to be more than just an overview. For balance, the reader can consult Volume
1 of the Handbook, which contains two chapters devoted to the Bayesian analysis of
econometric models (see Poirier and Tobias, 2006, and Strachan et al., 2006). The
reader is likely to find familiar concepts here, such as probability and testing, but
only as part of a development that takes them into potentially unfamiliar areas.
DiNardo’s discussion of these issues is wide-ranging, with illustrations taken from
gambling and practical examples taken as much from science, especially medicine,
as economics. One example from the latter is the much-researched question of the
causal effect of union status on wages: put simply, do unions raise wages and, if so,

by how much? This example serves as an effective setting in which to raise issues
and to show that differences in approach can lead to differences in results.
For some, the proof of the pudding in econometrics is the ability to forecast
accurately, and to address some key issues concerning this aspect of econometrics Part II contains two chapters on forecasting. The first, Chapter 4, by Michael
Clements and David Harvey, recognizes that quite often several forecasts are available and, rather than considering a selection strategy that removes all but the best
on some criterion, it is often more fruitful to consider different ways of combining
forecasts, as suggested in the seminal paper by Bates and Granger (1969). In an
intuitive sense, one forecast may be better than another, but there could still be
some information in the less accurate forecast that is not contained in the more
accurate forecast. This is a principle that is finding wider application; for example,
in some circumstances, as in unit root testing, there is more than one test available
and, indeed, there may be one uniformly powerful test, yet there is still potential
merit in combining tests.
In the forecasting context, Clements and Harvey argue that the focus for multiple forecasts should not be on testing the null of equal accuracy, but on testing
for encompassing. Thus it is not a question of choosing forecast A over forecast B,
but of whether the combination of forecasts A and B is better than either individual forecast. Of course, this may be of little comfort from a structuralist point of
view if, for example, the two forecasts come from different underlying models; but
it is preferable when the loss function rewards good fit in some sense. Bates and
Granger (1969) suggested a simple linear combination of two unbiased forecasts,
with weights depending on the relative accuracy of the individual forecasts, and
derived the classic result that, even if the forecasts are equally accurate in a mean
squared error loss sense, then there will still be a gain in using the linear combination unless the forecasts are perfectly correlated, at least theoretically. Clements and
Harvey develop from this base model, covering such issues as biased forecasts, nonlinear combinations, and density or distribution forecasts. The concept of forecast
encompassing, which is not unique in practice, is then considered in detail, including complications arising from integrated variables, non-normal errors, serially
correlated forecast errors, ARCH errors, the uncertainty implied by model estimation, and the difficulty of achieving tests with the correct actual size. A number of
recent developments are examined, including the concept of conditional forecast



evaluation (Giacomini and White, 2006), evaluating quantile forecasts, and relaxing the forecast loss function away from the traditional symmetric squared error.
In short, this chapter provides a clear, critical and accessible evaluation of a rapidly
developing area of the econometrics literature.
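
To make the Bates and Granger (1969) result concrete, the short sketch below (ours, not drawn from the chapter; all numerical values are purely illustrative) computes the variance-minimizing weight for a linear combination of two unbiased forecasts and confirms that the combined error variance never exceeds that of the better individual forecast unless the forecast errors are perfectly correlated.

```python
import numpy as np

def optimal_weight(s1, s2, rho):
    """Weight on forecast 1 in the combination w*f1 + (1 - w)*f2 that minimises
    the combined error variance (unbiased forecasts, error s.d.s s1 and s2,
    error correlation rho), as in Bates and Granger (1969)."""
    den = s1**2 + s2**2 - 2 * rho * s1 * s2
    if den == 0.0:                       # identical, perfectly correlated errors
        return 0.5
    return (s2**2 - rho * s1 * s2) / den

def combined_variance(s1, s2, rho, w):
    """Error variance of the linear combination with weight w on forecast 1."""
    return w**2 * s1**2 + (1 - w)**2 * s2**2 + 2 * w * (1 - w) * rho * s1 * s2

# Equally accurate forecasts: combining helps unless the errors are perfectly correlated.
s1 = s2 = 1.0
for rho in (0.0, 0.5, 0.9, 1.0):
    w = optimal_weight(s1, s2, rho)
    print(f"rho={rho:.1f}  w*={w:.2f}  combined variance={combined_variance(s1, s2, rho, w):.3f}"
          f"  best individual variance={min(s1, s2)**2:.3f}")
```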
Chapter 5 is by Stephen Hall and James Mitchell, who focus on density forecasting. There has been a great deal of policy interest in forecasting key macroeconomic
variables such as output growth and inflation, some of which has been institutionally enshrined by granting central banks independence in inflation targeting. In
turn, there has been a movement away from simply reporting point forecasts of
inflation and GDP growth in favor of a fan chart representation of the distribution
of forecasts. A density forecast gives much more information than a simple point
forecast, which is included as just one realization on the outcome axis. As a corollary, forecast evaluation should also include techniques that evaluate the accuracy,
in some well-defined sense, of the density forecast. However, given that generally
we will only be able to observe one outcome (or event) per period, some thought
needs to be given to how the distributional aspect of the forecast is evaluated. Hall
and Mitchell discuss a number of possibilities and also consider methods of evaluating competing density forecasts. A further aspect of density forecasting is the
ability of the underlying model to generate time variation in the forecast densities. If the underlying model is a VAR, or can be approximated by a VAR, then,
subject to some qualifications, the only aspect of the forecast density which is able
to exhibit time variation is the mean; consequently, models have been developed
that allow more general time variation in the density through, for example, ARCH
and GARCH errors and time-varying parameters. This chapter also links in with the
previous chapter by considering combinations of density forecasts. There are two
central possibilities: the linear opinion pool is a weighted linear combination of
the component densities, whereas the logarithmic opinion pool is a multiplicative
combination. Hall and Mitchell consider the problem of determining the weights
in such combinations and suggest that predictive accuracy improves when the
weights reflect shifts in volatility, a characteristic of note for the last decade or so
in a number of economies.
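
The two pooling schemes can be illustrated in a few lines (our sketch; the two normal component densities and the weights are arbitrary choices, not taken from the chapter): the linear opinion pool averages the component densities, while the logarithmic opinion pool multiplies weighted powers of them and renormalizes.

```python
import numpy as np
from scipy.stats import norm

# Two competing density forecasts for the same variable (illustrative choices).
components = [norm(loc=1.0, scale=1.0), norm(loc=2.5, scale=0.5)]
weights = np.array([0.6, 0.4])

y = np.linspace(-3, 6, 2001)
pdfs = np.array([c.pdf(y) for c in components])

# Linear opinion pool: weighted average of the component densities.
linear_pool = weights @ pdfs

# Logarithmic opinion pool: weighted geometric mean, renormalised to integrate to one.
log_pool = np.exp(weights @ np.log(pdfs))
log_pool /= np.trapz(log_pool, y)

print("linear pool integrates to", round(np.trapz(linear_pool, y), 3))
print("log pool integrates to   ", round(np.trapz(log_pool, y), 3))
```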
Part III contains four chapters under the general heading of “Time Series Applications.” A key area in which the concept of a time series is relevant is in
characterizing and determining trends and cycles. Chapter 6, by Stephen Pollock,
is a tour de force on modeling trends and cycles, and on the possibilities and

pitfalls inherent in the different approaches. In the simplest of models, cyclical
fluctuations are purely sinusoidal and the trend is exponential; although simple,
this is a good base from which to understand the nature of developments that
relax these specifications. Such developments include the view that economic time
series evolve through the accumulation of stochastic shocks, as in an integrated
Wiener process. The special and familiar cases of the Beveridge–Nelson decomposition, the Hodrick–Prescott filter, the Butterworth filter and the unifying place of
Wiener–Kolmogorov filtering are all covered with admirable clarity. Other considerations include the complications caused by the limited data that is often available
in economic applications, contrary to the convenient assumptions of theory. In an



appealing turn of phrase, Pollock refers to obtaining a decomposition of components based on the periodogram “where components often reside within strictly
limited frequency bands which are separated by dead spaces where the spectral
ordinates are virtually zeros.” The existence of these “spectral dead spaces” is key
to a practical decomposition of an economic time series, however achieved. In
practice, trend fitting requires judgment and a clear sense of what it is that the
trend is capturing. Other critical issues covered in this chapter include the importance of structural breaks, a topic that has been influential elsewhere (for example,
in questioning the results of unit root testing: Perron, 1989); and to aid the reader,
practical examples are included throughout the exposition.
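
One of the filters mentioned above, the Hodrick–Prescott filter, is easy to state and compute directly from its penalized least-squares definition. The sketch below is our own illustration on simulated quarterly data, with the conventional smoothing parameter of 1,600; it is not taken from Pollock's chapter.

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend: minimises sum (y_t - tau_t)^2 + lam * sum (d2 tau_t)^2,
    whose solution is tau = (I + lam * D'D)^{-1} y, with D the second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = y.size
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)

# Simulated quarterly series: smooth trend plus a sinusoidal cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(160)
y = 0.02 * t + 0.5 * np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(t.size)

trend = hp_trend(y, lam=1600.0)
cycle = y - trend
print("standard deviation of extracted cycle:", round(cycle.std(), 3))
```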
Chapter 7, by Joe Cardinale and Larry Taylor, continues the time series theme of
analyzing economic cycles whilst focusing on asymmetries, persistence and synchronization. This is a particularly timely and somewhat prophetic chapter given
that we are currently experiencing what is perhaps the deepest recession in recent
economic history. How can we analyze the critical question “When will it end?”
This chapter provides the analytical and econometric framework to answer such a
question. The central point is that cycles are much more interesting than just marking their peaks and troughs would suggest. Whilst “marking time” is important, it
is just the first part of the analysis, and should itself be subjected to methods for distinguishing phases (for example, expansions and contractions of the output cycle).
Once phases have been distinguished, their duration and characteristics become

of interest; for example, do long expansions have a greater chance of ending than
short expansions? Critical to the analysis is the hazard function: “the conditional
probability that a phase will terminate in period t, given that it has lasted t or more
periods.” Cardinale and Taylor consider different models and methods of estimating the hazard function and testing hypotheses related to particular versions of it.
They also consider tests of duration dependence, the amplitudes of cycles, and the
synchronization of cycles for different but related variables; for example, output
and unemployment. Their theoretical analysis is complemented with a detailed
consideration of US unemployment.
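
The hazard function just quoted has a simple empirical counterpart: at each duration t, divide the number of phases ending at t by the number that have lasted t or more periods. The sketch below is ours, with made-up expansion durations used purely for illustration.

```python
import numpy as np

def empirical_hazard(durations):
    """Discrete hazard: P(phase terminates at duration t | it has lasted t or more periods)."""
    durations = np.asarray(durations)
    hazards = {}
    for t in range(1, durations.max() + 1):
        at_risk = np.sum(durations >= t)   # phases lasting t or more periods
        ending = np.sum(durations == t)    # phases terminating at exactly t
        hazards[t] = ending / at_risk if at_risk else np.nan
    return hazards

# Hypothetical expansion durations (in quarters), purely for illustration.
expansions = [4, 6, 6, 8, 10, 12, 12, 15, 20, 24]
for t, h in empirical_hazard(expansions).items():
    if h > 0:
        print(f"duration {t:2d}: hazard = {h:.2f}")
```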
No handbook of econometrics could be without a contribution indicating the
importance of cointegration analysis for non-stationary data. In Chapter 8, Katarina Juselius considers one of the most enduring puzzles in empirical economics,
namely, if purchasing power parity (PPP) is the underlying equilibrium state that
determines the relationship between real exchange rates, why is there “pronounced
persistence” away from this equilibrium state? This has been a common finding of
empirical studies using data from a wide range of countries and different sample
periods. Juselius shows how a careful analysis can uncover important structures in
the data; however, these structures are only revealed by taking into account the
different empirical orders of integration of the component variables, the identification of stationary relationships between non-stationary variables, the dynamic
adjustment of the system to disequilibrium states, the appropriate deterministic
components, and the statistical properties of the model. As Juselius notes, and
in contrast to common approaches, the order of integration is regarded here as
an empirical approximation rather than a structural parameter. This opens up a



distinction between, for example, a variable being empirically I(d) rather than
structurally I(d); a leading example here is the I(2) case which, unlike the I(1)
case, has attracted some “suspicion” when applied in an absolute sense to empirical series. The challenging empirical case considered by Juselius is the relationship

between German and US prices and nominal exchange rates within a sample that
includes the period of German reunification. The methodology lies firmly within
the framework of general-to-specific modeling, in which a general unrestricted
model is tested down (see also Hendry, Chapter 1) to gain as much information
without empirical distortion. A key distinction in the methodological and empirical analysis is between pushing and pulling forces: in the current context, prices
push whereas the exchange rate pulls. PPP implies that there should be just a single stochastic trend in the data, but the empirical analysis suggests that there are
two, with the additional source of permanent shocks being related to speculative
behaviour in the foreign exchange market.
In an analysis of trends and cycles, economists often characterize the state of
the economy in terms of indirect or latent variables, such as the output gap, core
inflation and the non-accelerating inflation rate of unemployment (NAIRU). These are variables
that cannot be measured directly, but are nevertheless critical to policy analysis.
For example, the need to take action to curb inflationary pressures is informed by
the expansionary potential in the economy; whether or not a public sector budget deficit is a matter for concern is judged by reference to the cyclically adjusted
deficit. These concepts are at the heart of Chapter 9 by Tommaso Proietti, entitled
“Structural Time Series Models for Business Cycle Analysis,” which links with the
earlier chapters by Pollock and Cardinale and Taylor. Proietti focuses on the measurement of the output gap, which he illustrates throughout using US GDP. In the
simplest case, what is needed is a framework for decomposing a time series into a
trend and cycle and Proietti critically reviews a number of methods to achieve such
a decomposition, including the random walk plus noise (RWpN) model, the local
linear trend model (LLTM), methods based on filtering out frequencies associated
with the cycle, multivariate models that bring together related macroeconomic
variables, and the production function approach. The estimation and analysis of a
number of models enables the reader to see how the theoretical analysis is applied
and what kind of questions can be answered. Included here are a bivariate model
of output and inflation for the US and a model of mixed data frequency, with quarterly observations for GDP and monthly observations for industrial production, the
unemployment rate and CPI inflation. The basic underlying concepts, such as the
output gap and core inflation, are latent variables and, hence, not directly observable: to complete the chapter, Proietti also considers how to judge the validity of
the corresponding empirical measures of these concepts.
To complete the part of the Handbook on Time Series Applications, in Chapter

10 Luis Gil-Alana and Javier Hualde provide an overview of fractional integration
and cointegration, with an empirical application in the context of the PPP debate.
A time series is said to be integrated of order d, where d is an integer, if d is the minimum number of differences necessary to produce a stationary time series. This is
a particular form of non-stationarity and one which dominated the econometrics



literature in the 1980s and early 1990s, especially following the unit root literature. However, the integer restriction on d is not necessary to the definition of an
integrated series (see, in particular, Granger and Joyeux, 1980), so that d can be a
fraction – hence the term “fractionally integrated” for such series. Once the integer
restriction is relaxed for a single series, it is then natural to relax it for the multivariate case, which leads to the idea of fractional cointegration. Gil-Alana and Hualde
give an overview of the meaning of fractional integration and fractional cointegration, methods of estimation for these generalized cases, which can be approached
in either the time or frequency domains, the underlying rationale for the existence
of fractionally integrated series (for example, through the aggregation of microrelationships), and a summary of the empirical evidence for fractionally integrated
univariate series and fractionally cointegrated systems of series. The various issues
and possible solutions are illustrated in the context of an analysis of PPP for four
bivariate series. It is clear that the extension of integration and cointegration to
their corresponding fractional cases is not only an important generalization of the
theory, but one which finds a great deal of empirical support.
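
To fix ideas, the fractional difference operator (1 − L)^d can be expanded in powers of the lag operator with coefficients obeying the recursion π_0 = 1, π_k = π_{k−1}(k − 1 − d)/k. The sketch below (ours, not from the chapter) applies a truncated version of this expansion to simulate, and then difference back, a fractionally integrated series with d = 0.4.

```python
import numpy as np

def frac_diff(y, d, n_lags=100):
    """Apply the truncated fractional difference (1 - L)^d to a series y."""
    # Coefficients of (1 - L)^d: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
    pi = np.empty(n_lags + 1)
    pi[0] = 1.0
    for k in range(1, n_lags + 1):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    for t in range(y.size):
        kmax = min(t, n_lags)
        out[t] = pi[:kmax + 1] @ y[t::-1][:kmax + 1]
    return out

# Simulate an I(0.4) series by "integrating" white noise fractionally, then difference it back.
rng = np.random.default_rng(1)
eps = rng.standard_normal(500)
x = frac_diff(eps, d=-0.4)    # (1 - L)^{-0.4} applied to noise gives a long-memory series
back = frac_diff(x, d=0.4)    # differencing with d = 0.4 should roughly recover the noise
print("standard deviation of recovered noise:", round(back.std(), 2))
```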
One of the most significant developments in econometrics over the last twenty
years or so has been the increase in the number of econometric applications involving cross-section and panel data (see also Ooms, Chapter 29). Hence Part IV is
devoted to this development. One of the key areas of application is to choice situations which have a discrete number of options; examples include the “whether
to purchase” decision, which has wide application across consumer goods, and the
“whether to participate” decision, as in whether to enter the labor force, to retire, or
to join a club. Discrete choice models are the subject of Chapter 11 by Bill Greene,
who provides a critical, but accessible, review of a vast literature. The binary choice
model is a key building block here and so provides a prototypical model with which

to examine such topics as specification, estimation and inference; it also allows the
ready extension to more complex models such as bivariate and multivariate binary
choice models and multinomial choice models. Models involving count data are
also considered as they relate to the discrete choice framework. A starting point
for the underlying economic theory is the extension of the classical theory of consumer behavior, involving utility maximization subject to a budget constraint, to
the random utility model. The basic model is developed from this point and a host
of issues are considered that arise in practical studies, including estimation and
inference, specification tests, measuring fit, complications from endogenous righthand-side variables, random parameters, the use of panel data, and the extension
of the familiar fixed and random effects. To provide a motivating context, Greene
considers an empirical application involving a bivariate binary choice model. This
is where two binary choice decisions are linked; in this case, in the first decision
the individual decides whether to visit a physician, which is a binary choice, and
the second involves whether to visit the hospital, again a binary choice: together
they constitute a bivariate (and ordered) choice. An extension of this model is to
consider the number of times that an individual visits the doctor or a hospital. This
gives rise to a counts model (the number of visits to the doctor and the number of
visits to the hospital) with its own particular specification. Whilst a natural place to



start is the Poisson model, this, as Greene shows, is insufficient as a general framework; the extension is provided and illustrated with panel data from the German
health care system. A second application illustrates a mixed logit and error components framework for modeling modes of transport choice (air, train, bus, car).
Overall, this chapter provides an indication, through the variety of its applications,
as to why discrete choice models have become such a significant part of applied
econometrics.
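
As a minimal illustration of the binary choice building block that underlies these models (our sketch on simulated data, not Greene's health care or transport applications), a binary logit can be estimated by maximizing the log-likelihood implied by the random utility formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: latent utility difference x'beta + error; choose y = 1 if it is positive.
rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p)

def neg_loglik(beta):
    """Negative binary logit log-likelihood."""
    xb = X @ beta
    return -np.sum(y * xb - np.log1p(np.exp(xb)))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("estimated coefficients:", np.round(res.x, 2))  # should be close to beta_true
```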
The theme of panel data methods and applications is continued in Chapter 12
by Andrew Jones. The application of econometrics to health economics has been

an important area of development over the last decade or so. However, this has not
just been a case of applying existing techniques: rather, econometrics has been able
to advance the subject itself, asking questions that had not previously been asked
– and providing answers. This chapter will be of interest not only to health economics specialists, but also to those seeking to understand how treatment effects in
particular are estimated and to those investigating the extent of the development
and application of panel data methods (it is complemented by Colin Cameron
in Chapter 14). At the center of health economics is the question “What are the
impacts of specific health policies?” Given that we do not observe experimental
data, what can we learn from non-experimental data? Consider the problem of
evaluating a particular treatment; for an individual, the treatment effect is the difference in outcome between the treated and the control, but since an individual is
either treated or not at a particular time, the treatment effect cannot be observed.
“Treatment” is here a general term that covers not only single medical treatments
but also broad policies, and herein lies its generality, since a treatment could equally
be a policy to reduce unemployment or to increase the proportion of teenagers
receiving higher education. In a masterful understanding of a complex and expanding literature, Jones takes the reader through the theoretical and practical solutions
to the problems associated with estimating and evaluating treatment effects, covering, inter alia, identification strategies, dynamic models, estimation methods,
different kinds of data, and multiple equation models; throughout the chapter
the methods and discussion are motivated by practical examples illustrating the
breadth of applications.
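
The fundamental evaluation problem described above can be seen in a few lines of simulation (ours, with invented numbers): each unit has two potential outcomes but only one is observed, and under random assignment the difference in group means recovers the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Potential outcomes: Y0 without treatment, Y1 with treatment (true ATE = 2).
y0 = rng.normal(loc=5.0, scale=1.0, size=n)
y1 = y0 + 2.0 + rng.normal(scale=0.5, size=n)

# Random assignment: only one potential outcome per unit is ever observed.
treated = rng.binomial(1, 0.5, size=n).astype(bool)
observed = np.where(treated, y1, y0)

ate_estimate = observed[treated].mean() - observed[~treated].mean()
print("difference-in-means estimate of the ATE:", round(ate_estimate, 2))
```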
A key development in econometrics over the last thirty years or so has been the
attention given to the properties of the data, as these enlighten the question of
whether the underlying probability structure is stationary or not. In a terminological shorthand, we refer to data that is either stationary or non-stationary. Initially,
this was a question addressed to individual series (see Nelson and Plosser, 1982);
subsequently, the focus expanded, through the work of Engle and Granger (1987)
and Johansen (1988), to a multivariate approach to non-stationarity. The next
step in the development was to consider a panel of multivariate series. In Chapter
13, Anindya Banerjee and Martin Wagner bring us up to date by considering panel
methods to test for unit roots and cointegration. The reader will find in this chapter
a theoretical overview and critical assessment of a vast and growing body of methods, combined with practical recommendations based on the insights obtained
from a wide base of substantive applications. In part, as is evident in other areas




of econometric techniques and applications, theory has responded to the much
richer sources of data that have become available, not only at a micro or individual level, as indicated in Chapter 12, combined with increases in computing
power. As Banerjee and Wagner note, we now have long time series on macroeconomic and industry-level data. Compared to just twenty years ago, there is thus a
wealth of data on micro, industry and macro-panels. A panel dataset embodies two
dimensions: the cross-section dimension and the time-series dimension, so that,
in a macro-context, for example, we can consider the question of convergence not
just of a single variable (say, of a real exchange rate to a comparator, be that a
PPP hypothetical or an alternative actual rate), but of a group of variables, which
is representative of the multidimensional nature of growth and cycles. A starting
point for such an analysis is to assess the unit root properties of panel data but,
as in the univariate case, issues such as dependency, the specification of deterministic terms, and the presence of structural breaks are key practical matters that, if
incorrectly handled, can lead to misleading conclusions. Usually, the question of
unit roots is a precursor to cointegration analysis, and Banerjee and Wagner guide
the reader through the central methods, most of which have been developed in
the last decade. Empirical illustrations, based on exchange rate pass-through in
the euro-area and the environmental Kuznets curve, complement the theoretical
analysis.
Whilst the emphasis in Chapter 13 is on panels of macroeconomic or industry-level data, in Chapter 14, Colin Cameron, in the first of two chapters in Part
V, provides a survey of microeconometric methods, with an emphasis on recent
developments. The data underlying such developments are at the level of the
individual, households and firms. A prototypical question in microeconometrics
relates to the identification, estimation and evaluation of marginal effects using
individual-level data; for example, the effect on earnings of an additional year of
education. This example is often used to motivate some basic estimation methods, such as least squares, maximum likelihood and instrumental variables, in
undergraduate and graduate texts in econometrics, so it is instructive to see how
recent developments have extended these methods. Developments of the basic

methods include generalized method of moments (GMM), empirical likelihood,
simulation-based methods, quantile regression and nonparametric and semiparametric estimation, whilst developments in inference include robustifying standard
tests and bootstrap methods. Apart from estimation and inference, Cameron considers a number of other issues that occur frequently in microeconometric studies:
in particular, issues related to causation, as in estimating and evaluating treatment
effects; heterogeneity, for example due to regressors or unobservables; and the
nature of microeconometric data, such as survey data and the sampling scheme,
with problems such as missing data and measurement error.
The development of econometrics in the last decade or so in particular has been
symbiotic with the development of advances in computing, particularly that of personal computers. In Chapter 15, David Jacho-Chávez and Pravin Trivedi focus on
the relationship between empirical microeconometrics and computational considerations, which they call, rather evocatively, a “matrimony” between computing



and applied econometrics. No longer is it the case that the mainstay of empirical
analysis is a set of macroeconomic time series, often quite limited in sample period.
Earlier chapters in this part of the volume emphasize that the data sources now
available are much richer than this, both in variety and length of sample period.
As Jacho-Chávez and Trivedi note, the electronic recording and collection of data
has led to substantial growth in the availability of census and survey data. However,
the nature of the data leads to problems that require theoretical solutions: for example, problems of sample selection, measurement errors and missing or incomplete
data. On the computing side, the scale of the datasets and estimation based upon
them implies that there must be reliability in the high-dimensional optimization
routines required by the estimation methods and an ability to handle large-scale
Monte Carlo simulations. The increase in computing power has meant that techniques that were not previously feasible, such as simulation assisted estimation
and resampling, are now practical and in widespread use. Moreover, nonparametric and semiparametric methods that involve the estimation of distributions rather
than simple parameters, as in regression models, have been developed through
drawing on the improved power of computers. Throughout the chapter, JachoChávez and Trivedi motivate their discussion by the use of examples of practical

interest, including modeling hedonic prices of housing attributes, female labor
force participation, Medicare expenditure, and number of doctor visits. Interestingly, they conclude that there are important problems, particularly those related
to assessing public policy, such as identification and implementation in the context of structural, dynamic and high-dimensional models, which remain to be
solved.
In Part VI, the theme of the importance of economic policy is continued, but
with the emphasis now on monetary policy and macroeconomic policy, which
remain of continued importance. Starting in the 1970s and continuing into the
1990s, the development of macroeconometric models for policy purposes was a
highly regarded area; during that period computing power was developing primarily through mainframe computers, allowing not so much the estimation as the
simulation of macroeconomic models of a dimension that had not been previously
contemplated. Government treasuries, central banks and some non-governmental
agencies developed their own empirical macro-models comprising hundreds of
equations. Yet, these models failed to live up to their promise, either wholly or in
part. For some periods there was an empirical failure, the models simply not being
good enough; but, more radically, the theoretical basis of the models was often
quite weak, at least relative to the theory of the optimizing and rational agent and
ideas of intertemporal general equilibrium.
In Chapter 16, Carlo Favero expands upon this theme, especially as it relates to
the econometrics of monetary policy and the force of the critiques by Lucas (1976)
and Sims (1980). A key distinction in the dissection of the modeling corpse is
between structural identification and statistical identification. The former relates to
the relationship between the structural parameters and the statistical parameters in
the reduced form, while the latter relates to the properties of the statistical or empirical model which represents the data. Typically, structural identification is achieved



by parametric restrictions seeking to classify some variables as “exogenous,” a task
that some have regarded as misguided (or indeed even “impossible”). Further, a
failure to assess the validity of the reduction process in going from the (unknown)

data-generating process to a statistical representation, notwithstanding criticisms
related to structural identification, stored up nascent empirical failure awaiting the
macroeconometric model. Developments in cointegration theory and practice have
“tightened” up the specification of empirical macromodels, and DSGE models, preferred theoretically by some, have provided an alternative “modellus operandi.”
Subsequently, the quasi-independence of some central banks has heightened the
practical importance of questions such as “How should a central bank respond to
shocks in macroeconomic variables?” (Favero, Chapter 16). In practice, although
DSGE models are favored for policy analysis, in their empirical form the VAR
reappears, but with their own set of issues. Favero considers such practical developments as calibration and model evaluation, the identification of shocks, impulse
responses, structural stability of the parameters, VAR misspecification and factor
augmented VARs. A summary and analysis of Sims’ (2002) small macroeconomic
model (Appendix A) helps the reader to understand the relationship between an
optimizing specification and the resultant VAR model.
In Chapter 17, Gunnar Bårdsen and Ragnar Nymoen provide a paradigm for
the construction of a dynamic macroeconometric model, which is then illustrated with a small econometric model of the Norwegian economy that is used
for policy analysis. Bårdsen and Nymoen note the two central critiques of “failed”
macroeconometric models: the Lucas (1976) critique and the Clements and Hendry
(1999) analysis of forecast failure involving “location” shifts (rather than behavioral parameter shifts). But these critiques have led to different responses; first, the
move to explicit optimizing models (see Chapter 16); and, alternatively, to greater
attention to the effects of regime shifts, viewing the Lucas critique as a possibility
theorem rather than a truism (Ericsson and Irons, 1995). Whilst it is de rigueur
to accept that theory is important, Bårdsen and Nymoen consider whether “theory” provides the (completely) correct specification or whether it simply provides a
guideline for the specification of an empirical model. In their approach, the underlying economic model is nonlinear and specified in continuous time; hence, the
first practical steps are linearization and discretization, which result in an equilibrium correction model (EqCM). Rather than remove the data trends, for example
by applying the HP filter, the common trends are accounted for through a cointegration analysis. The approach is illustrated step by step by building a small-scale
econometric model of the Norwegian economy, which incorporates the ability to
analyze monetary policy; for example, an increase in the market rate, which shows
the channels of the operation of monetary policy. Further empirical analysis of the
New Keynesian Phillips curve provides an opportunity to illustrate their approach
in another context. In summary, Bårdsen and Nymoen note that cointegration

analysis takes into account non-stationarities that arise through unit roots, so that
forecast failures are unlikely to be attributable to misspecification for that reason.
In contrast to the econometric models of the 1970s, the real challenges arise from
non-stationarities in functional relationships due to structural breaks; however,



there are ways to “robustify” the empirical model and forecasts from it so as to
mitigate such possibilities, although challenges remain in an area that continues
to be of central importance in economic policy.
One of the key developments in monetary policy in the UK and elsewhere in
the last decade or so has been the move to give central banks a semi-autonomous
status. In part, this was thought to avoid the endogenous “stop–go” cycle driven by
political considerations. It also carried with it the implication that it was monetary
policy, rather than fiscal policy, which would become the major macroeconomic
policy tool, notwithstanding the now apparent practical limitations of such a
move. In Chapter 18, Brian Henry provides an overview of the institutional and
theoretical developments in the UK in particular, but with implications for other
countries that have taken a similar route. The key question that is addressed in
this chapter is whether regime changes, such as those associated with labor market
reforms, inflation targeting and instrument independence for the Bank of England, have been the key factors in dampening the economic cycle and improving
inflation, unemployment and output growth, or whether the explanation is more
one of beneficial international events (the “good luck” hypothesis) and monetary
policy mistakes. Henry concludes, perhaps controversially, that the reforms to the
labor market and to the operation of the central bank are unlikely to have been the
fundamental reasons for the improvement in economic performance. He provides
an econometric basis for these conclusions, which incorporates a role for international factors such as real oil prices and measures of international competitiveness.

Once these factors are taken into account, the “regime change” explanation loses
force.
The growth of financial econometrics in the last two decades was noted in the
first volume of this Handbook. Indeed, this development was recognized in the
award of the 2003 Nobel Prize in Economics (jointly with Sir Clive Granger) to
Robert Engle for “methods of analyzing economic time series with time-varying
volatility (ARCH).” Part VII of this volume reflects this development and is thus
devoted to applications in the area of financial econometrics.
In Chapter 19, George Dotsis, Raphael Markellos and Terence Mills consider
continuous-time stochastic volatility models. What is stochastic volatility? To
answer that question, we start from what it is not. Consider a simple model of an
asset price, Y(t), such as geometric Brownian motion, which in continuous time
takes the form of the stochastic differential equation dY(t) = μY(t)dt + σY(t)dW(t),
where W(t) is a standard Brownian motion (BM) input; then σ (or σ²) is the volatility parameter that scales the stochastic BM contribution to the diffusion of Y(t).
In this case the volatility parameter is constant, although the differential equation
is stochastic. However, as Dotsis et al. note, a more appropriate specification for
the accepted characteristics of financial markets is a model in which volatility
also evolves stochastically over time. For example, if we introduce the variance

function v(t), then the simple model becomes dY(t) = μY(t)dt + √v(t) Y(t)dW(t),
and this embodies stochastic volatility. Quite naturally, one can then couple this
equation with one that models the diffusion over time of the variance function.
ARCH/GARCH models are one way to model time-varying volatility, but there are



a number of other attractive specifications; for example, jump diffusions, affine

diffusions, affine jump diffusions and non-affine diffusions. In motivating alternative specifications, Dotsis et al. note some key empirical characteristics in financial
markets that underlie the rationale for stochastic volatility models, namely fat
tails, volatility clustering, leverage effects, information arrivals, volatility dynamics and implied volatility. The chapter then continues by covering such issues as
specification, estimation and inference in stochastic volatility models. A comparative evaluation of five models applied to the S&P 500, for daily data over the
period 1990–2007, is provided to enable the reader to see some of the models “in
action.”
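
The contrast between constant and stochastic volatility can be seen in a short Euler–Maruyama simulation. The sketch below is ours: the chapter considers several specifications for the variance dynamics, and the mean-reverting square-root process and parameter values used here are simply one illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 1.0, 1.0 / 252          # one year of daily steps
n = int(T / dt)
mu, y0 = 0.05, 100.0

# Illustrative square-root variance dynamics: dv = kappa*(theta - v)dt + xi*sqrt(v)dW_v.
kappa, theta, xi, v0 = 3.0, 0.04, 0.4, 0.04

y, v = np.empty(n + 1), np.empty(n + 1)
y[0], v[0] = y0, v0
for t in range(n):
    dw_y, dw_v = rng.standard_normal(2) * np.sqrt(dt)
    v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt + xi * np.sqrt(v[t]) * dw_v, 0.0)
    # Asset price: dY = mu*Y dt + sqrt(v)*Y dW, so the volatility itself evolves over time.
    y[t + 1] = y[t] + mu * y[t] * dt + np.sqrt(v[t]) * y[t] * dw_y

returns = np.diff(np.log(y))
print("annualised return volatility:", round(returns.std() * np.sqrt(252), 3))
```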
One of the most significant ideas in the area of financial econometrics is that the
underlying stochastic process for an asset price is a martingale. Consider a stochastic process X = (Xt , Xt−1 , . . .), which is a sequence of random variables; then the
martingale property is that the expectation (at time t − 1) of Xt , conditional on the
information set It−1 = (Xt−1 , Xt−2 , . . .), is Xt−1 ; that is, E(Xt |It−1 ) = Xt−1 (almost
surely), in which case, X is said to be a martingale (the definition is sometimes
phrased in terms of the σ-field generated by It−1, or indeed some other “filtration”). Next, define the related process Y = (ΔXt, ΔXt−1, . . .); then Y is said to be a
martingale difference sequence (MDS). The martingale property for X translates to
the property for Y that E(Yt |It−1 ) = 0 (see, for example, Mikosch, 1998, sec. 1.5).
This martingale property is attractive from an economic perspective because of its
link to efficient markets and rational expectations; for example, in terms of X, the
martingale property says that the best predictor, in a minimum mean squared error
(MSE) sense, of Xt is Xt−1 .
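
A quick numerical check of this property (our illustration): for a simulated random walk X, which is a martingale, the differences Y_t = ΔX_t form an MDS, so their sample autocovariances should be close to zero at every lag.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(5000))   # a random walk is a martingale
y = np.diff(x)                             # its differences form an MDS

yc = y - y.mean()
for j in range(1, 6):
    # Sample autocovariance at lag j: should be near zero for a martingale difference sequence.
    autocov = np.mean(yc[j:] * yc[:-j])
    print(f"lag {j}: sample autocovariance = {autocov:+.4f}")
```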
In Chapter 20, J. Carlos Escanciano and Ignacio Lobato consider tests of the
martingale difference hypothesis (MDH). The MDH generalizes the MDS condition
to E(Yt |It−1 ) = μ, where μ is not necessarily zero; it implies that past and current
information (as defined in It ) are of no value, in an MSE sense, in forecasting future
values of Yt . Tests of the MDH can be seen as being translated to the equivalent
form given by E[(Yt − μ)w(It−1)] = 0, where w(It−1) is a weighting function. A useful
means of organizing the extant tests of the MDH is in terms of the type of functions
w(.) that are used. For example, if w(It−1 ) = Yt−j , j ≥ 1, then the resulting MDH
test is of E[(Yt − μ)Yt−j ] = 0, which is just the covariance between Yt and Yt−j .
This is just one of a number of tests, but it serves to highlight some generic issues.
In principle, the condition should hold for all j ≥ 1 but, practically, j has to be
truncated to some finite value. Moreover, this is just one choice of w(It−1 ), whereas

the MDH condition is not so restricted. Escanciano and Lobato consider issues such
as the nature of the conditioning set (finite or infinite), robustifying standard test
statistics (for example, the Ljung–Box and Box–Pierce statistics), and developing
tests in both the time and frequency domains; whilst standard tests are usually
of linear dependence, for example autocorrelation based tests, it is important to
consider tests based on nonlinear dependence. To put the various tests into context,
the chapter includes an application to four daily and weekly exchange rates against
the US dollar. The background to this is that the jury is out in terms of a judgment
on the validity of the MDH for such data; some studies have found against the

