Predictive Methodology and Application in Economics and Finance

Conference Presenters and Abstracts

______

Ted Anderson ()

Department of Statistics, Stanford University

Reduced Rank Regression for Blocks of Simultaneous Equations

______

Richard Carson ()

Department of Economics, University of California, San Diego

Air Travel Passenger Demand Forecasting

Abstract
Transportation modelers have traditionally used a four-stage model of travel demand: (1) generation of the total number of trips, (2) distribution of these trips between origins and destinations (O-D), (3) the choice of the mode of travel for each trip, and (4) the choice of route for each trip. Linkage between these steps has largely been ad hoc, and “equilibrium” solutions have generally been achieved by “recalibration” of aberrant estimates. This paper proposes a unified estimation framework in the context of airline passenger demand and draws heavily on the recent industrial organization literature (e.g., Berry, Levinsohn, and Pakes, 1995) that deals with sorting out the endogeneity of price and product attributes. Stages (1) and (3) of the traditional four-stage model are effectively combined into a single stage which estimates the propensity for air travel, taking the relevant population as known. Stage (4) is moved to the right-hand side by assuming that passengers can only choose between itineraries offered by airlines and care only about the attributes of those itineraries, such as airline and travel time. The framework put forth recognizes: (a) that a key component of the trip generation process from a particular origin is identified only in a panel data context, (b) that the attractiveness of flight options from an origin should influence the number of trips in a well-defined utility sense that ties the first two stages together, (c) that it is possible, with a richer specification of the cost component, to estimate a (latent) airline-specific component, and (d) that introducing a richer set of “attractor” variables and lagged O-D proportions as predictors can start to help explain the evolution of the (usually assumed to be static) origin-destination matrix over time. The paper concludes by discussing how the framework put forth can be empirically implemented using the long-standing U.S. Department of Transportation quarterly sample of 10% of all airline tickets, augmented with available data sources.
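
One standard way to make linkage (b) concrete (a conventional logsum construction, not necessarily the exact specification the paper adopts) is a multinomial logit over itineraries whose inclusive value enters trip generation: with x_j the attributes of itinerary j (e.g., carrier, travel time, fare) and J_od the set of itineraries offered on market (o, d),

    P_{od}(j) = \frac{\exp(x_j'\beta)}{\sum_{k \in J_{od}} \exp(x_k'\beta)}, \qquad
    IV_{od} = \log \sum_{k \in J_{od}} \exp(x_k'\beta), \qquad
    E[\text{trips}_{od,t}] = \exp\bigl(z_{od,t}'\gamma + \lambda\, IV_{od,t}\bigr),

where z_{od,t} collects population and “attractor” variables. The inclusive value IV_{od,t} is the utility-based channel through which the attractiveness of flight options feeds back into the number of trips generated.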

______

Xiaohong Chen ()

Department of Economics, New York University

[joint with Yanqin Fan, Department of Economics, Vanderbilt University, Box 1819 Station B, Nashville, TN 37235-1819, ]

Estimation of A New Class of Semiparametric Copula-based Multivariate Dynamic Models

Abstract

Economic and financial multivariate time series are typically nonlinear, non-normally distributed, and have nonlinear co-movements beyond the first and second conditional moments. Granger (2002) points out that classical linear univariate modelling and linear modelling of multivariate co-movements (both based on the Gaussian distribution assumption) clearly fail to explain the stylized facts observed in economic and financial time series, and that it is highly undesirable to perform economic policy evaluations, financial forecasts, and risk management calculations based on classical conditional (or unconditional) Gaussian modelling. Knowledge of the multivariate conditional distribution (especially fat-tailedness, asymmetry, and positive or negative dependence) is essential in many important financial applications, including portfolio selection, option pricing, asset pricing models, and Value-at-Risk (market risk, credit risk, liquidity risk) calculations and forecasting. Thus the entire conditional distribution of multivariate nonlinear economic and financial time series should be studied; see Granger (2002). One obvious solution is to estimate the multivariate probability density fully nonparametrically. However, it is known that the accuracy and the convergence rate of nonparametric density estimates deteriorate quickly as the number of series (the dimension) increases. This "curse of dimensionality" is even worse for economic and financial multiple time series, as they often move together, leading to a sparse data problem in which there is plenty of data in some regions but little data in other regions of the support of the distribution. Also, economic and financial multivariate time series typically have time-varying conditional first and second moments, which makes it hard to justify statistically the nonparametric estimation of the conditional density of the observed series. Moreover, fully nonparametric modelling will generally lead to less accurate forecasts, risk management calculations and policy evaluations.
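
The rate calculation behind the curse-of-dimensionality remark is standard (a textbook fact, not derived in the abstract): for a d-dimensional density with s bounded derivatives, the best attainable mean-squared-error rate of a nonparametric estimator (Stone, 1980) is

    n^{-2s/(2s+d)},

so with s = 2 the rate falls from n^{-4/5} for a single series to n^{-4/9} for five series; matching the one-dimensional accuracy with d = 5 would require roughly n^{(2s+d)/(2s+1)} = n^{9/5} observations.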

In this paper, we introduce a new, broad class of semiparametric copula-based multivariate dynamic (hereafter SCOMDY) models, which allows for semiparametric estimation of the multivariate conditional density. The SCOMDY class specifies the multivariate conditional mean and conditional variance parametrically, but specifies the distribution of the (standardized) innovations semiparametrically as a parametric copula evaluated at nonparametric univariate marginals. By Sklar's (1959) theorem, any multivariate distribution with continuous marginals can be uniquely decomposed into a copula function and its univariate marginals, where a copula function is simply a multivariate distribution function with uniform marginals. In our SCOMDY specification, the copula function captures the concurrent dependence between the components of the multivariate innovation, while the marginal distributions characterize the behavior of the individual components of the innovation. Our semiparametric specification has several appealing features. First, the distribution of the multivariate innovation depends on nonparametric functions of only one dimension and hence achieves dimension reduction. This is particularly useful in high dimensions and in cases where individual time series tend to move together and hence data are scarce in certain regions of the support. In the econometrics and statistics literature, great attention has been devoted to the development of semiparametric regression models that achieve dimension reduction; well-known examples include the partial linear model of Robinson (1988), the single index model of Ichimura (1993), and the additive model of Andrews and Whang (1990), to name only a few. In view of the importance of modelling the entire multivariate distribution in economic and financial applications, there is a great need for dimension reduction techniques in modelling the entire distribution. Second, the flexible specification of the innovation distribution via the separate specification of the copula and the univariate marginal distributions is particularly attractive in financial applications, as financial time series are known to exhibit stylized facts regarding their comovements (positive or negative dependence, etc.) and their marginal behaviors (skewness, kurtosis, etc.). These stylized facts can easily be incorporated into the semiparametric specification of the distribution of the innovations, as there exists a wide range of parametric copulas capturing different types of dependence structures; see Joe (1997) and Nelsen (1999). Third, the conditional mean and conditional variance can take any parametric specification, such as multivariate ARCH, GARCH, stochastic volatility, Markov switching, and combinations of these with observed common factors, detrending, deseasonalizing, etc., while the copula function can also take any parametric form, such as time-varying, Markov switching, deseasonalizing, etc.
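
Schematically (in our notation; the abstract does not fix a particular parametrization), the SCOMDY specification described above can be written as

    Y_t = \mu_t(\theta_1) + \Sigma_t^{1/2}(\theta_2)\,\varepsilon_t, \qquad
    \varepsilon_t \overset{i.i.d.}{\sim} F, \qquad
    F(\varepsilon_1,\ldots,\varepsilon_d) = C\bigl(F_1(\varepsilon_1),\ldots,F_d(\varepsilon_d);\alpha\bigr),

where \mu_t(\theta_1) and \Sigma_t(\theta_2) are the parametrically specified conditional mean and conditional variance, C(\cdot\,;\alpha) is a parametric copula, and F_1,\ldots,F_d are the unspecified (nonparametric) univariate marginal distributions of the standardized innovations.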

Recently, copulas have found great success in modelling the (nonlinear) dependence structure of different financial time series and in risk management. See Embrechts et al. (1999) and Bouyé et al. (2000) for reviews; Hull and White (1998), Cherubini and Luciano (2001) and Embrechts et al. (2001) for portfolio Value-at-Risk applications; Rosenberg (1999) and Cherubini and Luciano (2002) for multivariate option pricing via copulas; Li (2000) and Frey and McNeil (2001) for modelling correlated default and credit risk via copulas; Costinot et al. (2000) and Hu (2002) for contagion via copulas; and Patton (2002a, b, c) and Rockinger and Jondeau (2001) for copula-based modelling of the time-varying conditional dependence between different financial series. In the probability literature, the copula approach has mainly been used to generate (or simulate) various multivariate distributions with given marginals. In the statistics literature, the copula method has been widely used in survival analysis to model nonlinear correlations; see, e.g., Joe (1997), Nelsen (1999), Clayton (1978) and Oakes (1982). The copula method has also been applied in the microeconometrics literature; see, e.g., Lee (1982, 1983) and Heckman and Honore (1989).

Although semiparametric copula-based multivariate models have been widely applied, their econometric and statistical properties have not been established. In this paper, we first study the identification, ergodicity and other probabilistic properties of the SCOMDY class of models. We then propose a simple two-step estimation procedure and establish the properties of the estimators under correct or incorrect specification of the copula. Our results make original contributions to the existing theoretical and empirical literature on semiparametric copula-based multivariate modelling and its applications.
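
To fix ideas, here is a minimal Python sketch of a two-step estimator in the spirit described above (bivariate case, Gaussian copula, and a deliberately trivial stand-in for the parametric mean/variance filter); it illustrates the general filter-then-pseudo-likelihood idea, not the authors' exact procedure:

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm, multivariate_normal

    def simple_filter(y):
        # Stand-in for a parametric conditional mean/variance model (e.g. VAR-GARCH);
        # here each series is merely demeaned and rescaled, purely for illustration.
        return (y - y.mean(axis=0)) / y.std(axis=0)

    def two_step_gaussian_copula(y):
        # Step 1: parametric filter -> standardized innovations.
        eps = simple_filter(y)
        T, _ = eps.shape
        # Nonparametric marginals: rescaled empirical CDFs (pseudo-observations).
        u = (np.argsort(np.argsort(eps, axis=0), axis=0) + 1) / (T + 1.0)
        z = norm.ppf(u)
        # Step 2: pseudo-MLE of the (bivariate Gaussian) copula parameter rho.
        def neg_loglik(rho):
            R = np.array([[1.0, rho], [rho, 1.0]])
            ll = multivariate_normal(cov=R).logpdf(z) - norm.logpdf(z).sum(axis=1)
            return -ll.sum()
        return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x

    # Example: recovers a dependence parameter close to 0.6.
    y = np.random.default_rng(0).multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
    print(two_step_gaussian_copula(y))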

______

Valentina Corradi ()

Department of Economics, University of Exeter

[joint with Norman R. Swanson]

Predictive Density Evaluation in the Presence of Generic Misspecification

Abstract

This paper outlines a procedure for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models. The procedure is closely related to Andrews' (1997) conditional Kolmogorov test and to White's (2000) reality check approach. Our approach is compared with methods from the extant literature, including methods based on the probability integral transform and the Kullback-Leibler Information Criterion. An appropriate bootstrap procedure for obtaining critical values in the context of predictions constructed using rolling and recursive estimation schemes is developed. A Monte Carlo experiment comparing the performance of rolling, recursive and fixed sample schemes for the bootstrapping methods developed in this paper shows that the coverage probabilities of the bootstrap are quite good, particularly when compared with the analogous full sample block bootstrap. Finally, an empirical illustration is provided, in which the predictive density accuracy test is used to evaluate a small group of competing inflation models.
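
As a rough illustration of the kind of rolling-scheme, bootstrap-based density-forecast comparison at issue here (a simplified log-score contrast with a circular block bootstrap; the hypothetical models, block length and statistic are ours and are not the paper's actual test):

    import numpy as np
    from scipy.stats import norm

    def rolling_log_scores(y, window, log_density):
        # Out-of-sample log predictive densities under a rolling estimation scheme;
        # log_density(train, y_next) returns the model's log density at y_next.
        return np.array([log_density(y[t - window:t], y[t]) for t in range(window, len(y))])

    def block_bootstrap_pvalue(d, block_len=10, n_boot=499, seed=0):
        # Circular block bootstrap p-value for H0: mean log-score difference <= 0.
        rng, n = np.random.default_rng(seed), len(d)
        stat = np.sqrt(n) * d.mean()
        count = 0
        for _ in range(n_boot):
            starts = rng.integers(0, n, size=n // block_len + 1)
            idx = np.concatenate([np.arange(s, s + block_len) % n for s in starts])[:n]
            if np.sqrt(n) * (d[idx].mean() - d.mean()) >= stat:
                count += 1
        return count / n_boot

    # Example: model A re-estimates mean/variance on each window; model B is a fixed N(0,1).
    y = np.random.default_rng(1).normal(size=400)
    d = (rolling_log_scores(y, 100, lambda tr, x: norm.logpdf(x, tr.mean(), tr.std()))
         - rolling_log_scores(y, 100, lambda tr, x: norm.logpdf(x, 0.0, 1.0)))
    print(block_bootstrap_pvalue(d))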

______

Frank Diebold ()

Departments of Economics, Finance and Statistics, University of Pennsylvania

[joint with Torben Andersen, Department of Finance, J.L. Kellogg Graduate School of Management, Northwestern University, 2001 Sheridan Road, Evanston, IL 60208-2006, ,

Tim Bollerslev, Department of Economics, Social Science Building, Duke University, Durham, NC 27708-0097, , and

Ginger Wu, Department of Economics, University of Pennsylvania, 3718 Locust Walk, Philadelphia, PA 19104-6297, ]

Realized Beta

Abstract:

A large literature over several decades reveals both extensive concern with the question of time-varying systematic risk and an emerging consensus that systematic risk is in fact time-varying, leading to the conditional CAPM and its associated time-varying betas. Set against that background, we assess the dynamics in realized betas vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. We use powerful new econometric theory that facilitates model-free yet highly efficient inference, allowing for nonlinear long-memory common features, and the results are striking. Although realized variances and covariances are very highly persistent, realized betas, which are simple nonlinear functions of those realized variances and covariances, are much less persistent, and arguably constant. We conclude by drawing implications for asset pricing and portfolio management.
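
For concreteness, the standard realized-beta construction (our notation; the paper's estimator may differ in details) takes r_{i,t,j} to be the j-th of M intraday returns of asset i on day t and r_{m,t,j} the corresponding market return, and sets

    \hat{\beta}_{i,t} \;=\; \frac{\sum_{j=1}^{M} r_{i,t,j}\, r_{m,t,j}}{\sum_{j=1}^{M} r_{m,t,j}^{2}},

i.e. the ratio of the daily realized covariance with the market to the daily realized market variance, which is exactly the simple nonlinear function of realized variances and covariances referred to above.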

______

Jean-Marie Dufour ()

Department of Economics, Université de Montréal

[joint with Tarek Jouini, Université de Montréal]

Finite-sample simulation-based inference in VAR models with applications to order selection and causality testing

Abstract:

Statistical inference in vector autoregressive (VAR) models is typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with realistic sample sizes, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to completely control the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to a VAR model of the U.S. economy.
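
As a schematic illustration of the simulation-based idea, the Python sketch below performs a "local" Monte Carlo test of a VAR(1) null against a VAR(2), with the nuisance parameters fixed at their estimates and Gaussian errors; the maximized Monte Carlo test of Dufour (2002) would additionally maximize this p-value over the nuisance-parameter space, and the statistics used in the paper may differ.

    import numpy as np

    def fit_var(y, p):
        # Equation-by-equation OLS for a VAR(p); returns coefficients and residuals.
        T = len(y)
        X = np.hstack([np.ones((T - p, 1))] + [y[p - i - 1:T - i - 1] for i in range(p)])
        B = np.linalg.lstsq(X, y[p:], rcond=None)[0]
        return B, y[p:] - X @ B

    def lr_stat(y, p0, p1):
        # LR-type statistic for H0: VAR(p0) against VAR(p1), on a common effective sample.
        _, e0 = fit_var(y, p0)
        _, e1 = fit_var(y, p1)
        T = len(y) - p1
        return T * (np.log(np.linalg.det(e0[-T:].T @ e0[-T:] / T))
                    - np.log(np.linalg.det(e1.T @ e1 / T)))

    def local_mc_pvalue(y, p0=1, p1=2, n_sim=99, seed=0):
        # Simulate the statistic under the estimated VAR(p0) null with Gaussian errors.
        rng = np.random.default_rng(seed)
        B, e = fit_var(y, p0)
        sigma = e.T @ e / len(e)
        stat, sims = lr_stat(y, p0, p1), []
        for _ in range(n_sim):
            ys = list(y[:p0])
            shocks = rng.multivariate_normal(np.zeros(y.shape[1]), sigma, size=len(y) - p0)
            for t in range(len(y) - p0):
                x = np.concatenate([[1.0], np.concatenate(ys[-p0:][::-1])])
                ys.append(x @ B + shocks[t])
            sims.append(lr_stat(np.array(ys), p0, p1))
        return (1 + sum(s >= stat for s in sims)) / (n_sim + 1)

    # Example on data simulated from a bivariate VAR(1):
    rng = np.random.default_rng(1)
    y = np.zeros((300, 2))
    for t in range(1, 300):
        y[t] = 0.5 * y[t - 1] + rng.normal(size=2)
    print(local_mc_pvalue(y))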

______

Eric Ghysels ()

Departments of Economics and Finance, University of North Carolina, Chapel Hill

[joint with Elena Andreou, Department of Economics, University of Cyprus, P.O. Box 537, CY-1678 Nicosia, Cyprus, ]

Monitoring and Forecasting Disruptions in Financial Markets

Abstract

Disruptions of financial markets are defined as change-points in the conditional distribution of asset returns that result in financial losses beyond those that can be anticipated by current risk management measures such as Expected Shortfall or Value at Risk. The conditional distribution is monitored to establish stability in financial markets by sequentially testing for disruptions. Recent examples of sequential monitoring of structural change in economics are Chu et al. (1996) and Leisch et al. (2000), which, however, focus on linear regression models. Our analysis considers strongly dependent financial time series and distributional change-point tests. Forecasting the probability of disruptions is pursued along two dimensions: the first involves the Black-Scholes (BS) formula and the second a multivariate conditional information (CI) approach.

In the univariate framework, the BS formula in conjunction with de-volatilized returns forecasts the probability of a disruption within a given horizon (e.g. 10 days). The following statistical results are used for the BS inputs: empirical process convergence results relating to transformations of the weighted sequential ranks and to the (two-parameter) sequential empirical distribution function (EDF) of normalized returns yield Brownian motion and Brownian bridge approximations, respectively (Bhattacharya and Frierson, 1981; Horvath et al., 2001). The power of this procedure is assessed (via simulations and empirical applications) with respect to different weighting schemes, warning lines, and optimal stopping rules (in particular, speed of detection), as well as tests of the validity of sequential probability forecasts (Seillier-Moiseiwitsch and Dawid, 1993). In addition, local power asymptotic results show that it is advantageous to capitalize on high-frequency (say, hourly) returns and volatility filters (Andreou and Ghysels, 2002, 2003b). This has the advantage of multiplying the sample size by the intra-day frequency (as opposed to the daily sample) and thus increasing the accuracy of the forecast, as well as uncovering additional intra-day information that serves statistically as an early warning sign for disruptions and that would otherwise be lost through aggregation or constitute forgone opportunities for hedging strategies. Absolute returns (Ding et al., 1993; Ding and Granger, 1996; Granger and Sin, 2000; Granger and Starica, 2001) and power variation filters of volatility (Barndorff-Nielsen and Shephard, 2003) have power in detecting change-points in financial markets.

In the multivariate framework, the CI method is based on variables that can predict the conditional distribution of stock returns and can be used as leading indicators of a disruption. Conditioning variables include volume, flight of foreign exchange, banking sector indicators, etc. (see also Chen et al., 2001). The international comovements of financial markets are also captured and sequentially monitored in this framework (Andreou and Ghysels, 2003a). Leading indicators and warning lines with different probabilities are used to evaluate the probability of a disruption in the multivariate framework. The BS formula complements the multivariate forecasting framework by using residual-based EDF results.
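
One way to read the univariate BS-based forecast (our reading of the abstract, under the working assumption that cumulative de-volatilized returns behave approximately like a standard Brownian motion W over the monitoring horizon) is as a first-passage probability of the reflection-principle type that also underlies barrier-option pricing: the probability that the de-volatilized cumulative return breaches a lower barrier -c within h days is

    P\Bigl(\min_{0 \le s \le h} W_s \le -c\Bigr) \;=\; 2\,\Phi\!\left(-\frac{c}{\sqrt{h}}\right),

where \Phi is the standard normal CDF; with h = 10 days this yields a probability forecast of a disruption-sized move within the stated horizon.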