Performance Evaluation and Attribution

Theory and Practical Application

The University of Vienna

October 2002

Professor Russ Wermers July 31, 2002

© 2002 R. Wermers. Subject to revisions.

Course Objectives:

1. To provide an in-depth introduction to different approaches to performance evaluation and attribution, with an emphasis on currently used state-of-the-art methods

2. To provide insight into the relation between asset pricing research and performance evaluation and attribution research

3. To emphasize equity portfolio manager ranking and selection methods, with coverage of global equity portfolio techniques, as well as to introduce fixed-income portfolio and alternative investment portfolio performance evaluation techniques

4. To provide research skills, at the doctoral level, that are useful for accessing the current literature in performance evaluation and attribution

5. To provide practical skills that can be applied to actual security portfolios of interest

Course Content:

· Basics of Performance Evaluation: How to choose the performance measurement model. How to choose the proper benchmark. How to interpret measured performance with an arbitrary benchmark or model. How to interpret measured performance in light of the current controversy over asset pricing models. Biases that are introduced to a dataset when we “snoop” it beforehand, and how to adjust for these biases.

· Technical Issues in Computing Performance: Potential biases with different return averaging methods, and how to avoid these biases. The effect of the bid-ask spread on return computations. The effect of non-synchronous trading on return computations. Potential biases with long-horizon abnormal return computations, and how to adjust for these biases.

· Basic Performance Evaluation Models: The Jensen alpha. The Sharpe ratio. Tracking error. The Treynor ratio. The Treynor-Black appraisal ratio. An APT-based performance measure. The Treynor-Mazuy model of timing. The pros and cons of each of these methods. Lab exercise.

· Advanced (Recent) Performance Evaluation Models: Unconditional models of performance: the Carhart model and the Daniel, Grinblatt, Titman, and Wermers model. Conditional models of performance: the Ferson and Schadt model and the Christopherson, Ferson, and Glassman model. Other recent models. The pros and cons of each of these methods, and how to interpret the results. Lab exercise.

· Measuring Performance When You Have No Idea of the Proper Benchmark or Model: Using out-of-sample returns or portfolio weights as the benchmark. Other bootstrapping techniques.

· Measuring Market Timing: Problems with models that do not explicitly recognize market timing. Models that separate market timing from selectivity.

· Style-Based Return Attribution and Decomposing Mutual Fund Returns (The Wermers method): Style investing, and the attribution of style-based returns. Stock-picking talents in excess of style-based returns. Transaction-cost estimation.

· Evaluating the Performance of Fixed-Income Portfolios and Derivatives Portfolios: Factor models for fixed-income performance evaluation. Attribution methods used by professional evaluation services, including a discussion of Yieldbook, a widely used Salomon Brothers fixed-income evaluation product. Latest theories on performance evaluation of derivatives portfolios.

· Evaluating Global Portfolios with Currency Risk: A general approach.

Course Prerequisites:

Course participants should have a basic knowledge of portfolio theory and statistics, as well as an introductory-level background in regression analysis and matrix algebra. Since the course uses Excel-based applications to illustrate several techniques, participants should also be familiar with Excel.

Explanatory Note:

Papers in boldface are central to a given discussion and are included in the course packet. Other (non-boldface) papers are listed as well, but are not included in the packet; they are provided as extended reading for interested participants.
Detailed Course Plan:

1 Welcome and Introduction to the Course

· Overview of course

· Why is performance evaluation and attribution important?

· What are the basic issues of performance evaluation and attribution (luck vs. skill, benchmark selection, model selection, market efficiency, etc.)?

Readings:

1. Bernstein, Peter, 1995, “Measuring the Performance of Performance Measurement,” in Performance Evaluation, Benchmarks, and Attribution Analysis, Association for Investment Management and Research (AIMR).

2. Bailey, Jeffery, 1995, “Manager Universes: The Solution or the Problem?” in Performance Evaluation, Benchmarks, and Attribution Analysis, Association for Investment Management and Research (AIMR).

2 Asset Pricing Theory and Empirics

· The current state of asset pricing theory and empirical work in the U.S. and elsewhere, and how this impacts performance evaluation (a quick overview), for both equity and fixed-income markets

Readings:

1.  “Dimensional Fund Advisors (1993)” (Addendum I only), Harvard Case.

2.  Fama, Eugene, and Kenneth French, 1993, “Common Risk Factors In The Returns On Stocks and Bonds,” Journal of Financial Economics, 33, pp. 3-56.

3.  Elton, Edwin, Martin Gruber, Deepak Agrawal, and Christopher Mann, 2001, “Explaining the Rate Spread on Corporate Bonds,” Journal of Finance, 56, pp. 247-277.

4.  Elton, Edwin, Martin Gruber, and Christopher Blake, 1995, “Fundamental Economic Variables, Expected Returns, and Bond Fund Performance,” Journal of Finance, 50, pp. 1229-1256.

5.  Schiereck, Dirk, Werner De Bondt, and Martin Weber, 1999, “Contrarian and Momentum Strategies in Germany,” Financial Analysts Journal, pp. 104-116.

6.  Griffin, John, “Are the Fama and French Factors Global or Country-Specific?” Review of Financial Studies, forthcoming.

7.  Heston, Steven L. and K. Geert Rouwenhorst, 1994, “Does Industrial Structure Explain The Benefits of International Diversification?” Journal of Financial Economics, 36, pp. 3-27.

8.  Cavaglia, Stefano, Christopher Brightman, and Michael Aked, 2000, “The Increasing Importance of Industry Factors,” Financial Analysts Journal, September/October.

9.  Roll, Richard, 1992, “Industrial Structure And The Comparative Behavior Of International Stock Market Indexes,” Journal of Finance, 47, pp. 3-42.

3 Basic Performance Evaluation Models

· Non-regression-based approaches: Tracking error, Sharpe ratio

· Regression-based approaches: Treynor ratio, Treynor-Black Information Ratio

· Regression models: Jensen model, Carhart model, Treynor-Mazuy model, etc.

· Computer Exercise: Computing and comparing some basic performance evaluation models
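
For orientation ahead of the exercise, the following minimal Python sketch shows one way the measures listed above might be computed from per-period fund, benchmark, and risk-free return series. It is illustrative only (the exercise itself is Excel-based), and all function and variable names are hypothetical.

import numpy as np
import statsmodels.api as sm

def basic_performance_measures(fund, benchmark, riskfree, periods_per_year=12):
    """Illustrative computation of basic performance measures from
    simple per-period returns (equal-length NumPy arrays)."""
    rp = fund - riskfree          # fund excess returns
    rb = benchmark - riskfree     # benchmark excess returns
    active = fund - benchmark     # benchmark-relative (active) returns

    # Non-regression-based measures (annualized)
    sharpe = np.sqrt(periods_per_year) * rp.mean() / rp.std(ddof=1)
    tracking_error = np.sqrt(periods_per_year) * active.std(ddof=1)
    info_ratio = periods_per_year * active.mean() / tracking_error

    # Jensen regression: rp_t = alpha + beta * rb_t + e_t
    jensen = sm.OLS(rp, sm.add_constant(rb)).fit()
    alpha, beta = jensen.params
    treynor = periods_per_year * rp.mean() / beta
    # Treynor-Black appraisal ratio: alpha relative to residual risk
    appraisal = np.sqrt(periods_per_year) * alpha / jensen.resid.std(ddof=1)

    # Treynor-Mazuy timing regression: rp_t = a + b*rb_t + c*rb_t**2 + e_t
    timing = sm.OLS(rp, sm.add_constant(np.column_stack([rb, rb ** 2]))).fit()

    return {"sharpe": sharpe,
            "tracking_error": tracking_error,
            "information_ratio": info_ratio,
            "jensen_alpha_annual": periods_per_year * alpha,
            "beta": beta,
            "treynor": treynor,
            "appraisal_ratio": appraisal,
            "treynor_mazuy_gamma": timing.params[2]}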

Readings:

1.  Grinblatt, Mark and Sheridan Titman, 1995, “Performance Evaluation,” in Handbooks in Operations Research and Management Science, Chapter 19 (Finance), edited by R.A. Jarrow, V. Maksimovic, and W.T. Ziemba.

2.  Dietz, Peter and Jeannette Kirschman, 1990, “Evaluating Portfolio Performance,” in Managing Investment Portfolios (Chapter 14), AIMR publication.

4 Tracking-Error and Information Ratio Approaches to Performance Evaluation

· What is wrong with tracking-error approaches to performance evaluation?
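
For reference in this discussion, tracking error and the information ratio are typically defined, with \(R_{p,t}\) the portfolio return and \(R_{b,t}\) the benchmark return in period \(t\), as

\[
\mathrm{TE} = \sqrt{\operatorname{Var}\!\left(R_{p,t}-R_{b,t}\right)},
\qquad
\mathrm{IR} = \frac{\mathrm{E}\!\left[R_{p,t}-R_{b,t}\right]}{\mathrm{TE}}.
\]

Roll (1992) shows that a manager who maximizes expected active return subject to a tracking-error constraint will, in general, hold a portfolio that is not mean-variance efficient unless the benchmark itself is efficient; this is the central problem examined in this session.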

Readings:

1.  Roll, Richard, 1992, “A Mean/Variance Analysis of Tracking Error,” Journal of Portfolio Management, 18, pp. 13-23.

2.  Blitz, David C. and Jouke Hottinga, 2001, “Tracking Error Allocation,” Journal of Portfolio Management, 27, pp. 19-26.

3.  Goodwin, Thomas, 1998, “The Information Ratio,” Financial Analysts Journal, July/August 1998, pp. 35-43.

4.  Pope, Peter F. and Pradeep K. Yadav, 1994, “Discovering Errors In Tracking Error,” Journal of Portfolio Management, 20, pp. 27-32.

5.  Ammann, Manuel, and Heinz Zimmermann, 2001, “Tracking Error and Tactical Asset Allocation,” Financial Analysts Journal, March/April, pp. 32-43.

5 Measuring Market-Timing Ability—Various Approaches

Readings:

1.  Treynor, Jack and K. Mazuy, 1966, “Can mutual funds outguess the market?” Harvard Business Review, 44, pp. 131-36.

2.  Daniel, Naveen, 2002, “Do Specification Errors Affect Inferences on Portfolio Performance? Evidence from Monte Carlo Simulations,” Working Paper.

3.  Becker, Connie, Wayne Ferson, David H. Myers and Michael J. Schill, 1999, “Conditional Market Timing With Benchmark Investors,” Journal of Financial Economics, 52, pp. 119-148.

4.  Jagannathan, Ravi and Robert A. Korajczyk, 1986, “Assessing The Market Timing Performance Of Managed Portfolios,” Journal of Business, 59, pp. 217-236.

5.  Bauer, Richard and Julie Dahlquist, 2001, “Market Timing and Roulette Wheels,” Financial Analysts Journal, January/February, pp. 28-40.

6.  Blake, David, Bruce Lehmann, and Allan Timmermann, 1999, “Asset Allocation Dynamics and Pension Fund Performance,” Journal of Business, 72, pp. 429-461.

7.  Henriksson, Roy D. and Robert C. Merton, 1981, “On Market Timing And Investment Performance. II. Statistical Procedures For Evaluating Forecasting Skills,” Journal of Business, 54, pp. 513-534.
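
To fix notation for this session, the two classic timing regressions underlying several of the readings take roughly the following forms, with \(r_{p,t}\) and \(r_{m,t}\) denoting fund and market excess returns:

\[
\text{Treynor-Mazuy:}\quad r_{p,t} = \alpha_p + \beta_p\, r_{m,t} + \gamma_p\, r_{m,t}^{2} + \varepsilon_{p,t},
\]
\[
\text{Henriksson-Merton:}\quad r_{p,t} = \alpha_p + \beta_p\, r_{m,t} + \gamma_p \max(0,\,-r_{m,t}) + \varepsilon_{p,t}.
\]

In both cases a positive \(\gamma_p\) is interpreted as evidence of market-timing ability, since it implies that the fund's market exposure is higher when the market return is higher.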

6 Potential Problems With Computing Returns and Abnormal Returns, and How to Avoid Them

· Different methods of computing “average” returns, and what can go wrong

· Potential problems with long-term benchmarks
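
A stylized example of the bid-ask bounce bias discussed in the first two readings: suppose a stock's true value stays at 100 while its recorded closing prices alternate between the bid of 99 and the ask of 101. The measured returns are

\[
r_1 = \frac{101}{99} - 1 \approx +2.02\%, \qquad r_2 = \frac{99}{101} - 1 \approx -1.98\%,
\]

so the arithmetic average return is about +0.02% per period even though the buy-and-hold (compounded) return, \((1+r_1)(1+r_2)-1\), is exactly zero. Averaging short-horizon returns across many such stocks, as in a frequently rebalanced equal-weighted portfolio, inflates measured performance in exactly this way.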

Readings:

1. Blume, Marshall and Robert Stambaugh, 1983, “Biases In Computed Returns: An Application To The Size Effect,” Journal of Financial Economics, 12, 387-404.

2. Roll, Richard, 1983, “On Computing Mean Returns And The Small Firm Premium,” Journal of Financial Economics, 12, 371-386.

3. Barber, Brad and John Lyon, 1996, “Detecting Long-Run Abnormal Stock Returns: The Empirical Power and Specification of Test Statistics,” Journal of Financial Economics, 43, 341-372.

4. Lyon, John D., Brad M. Barber and Chih-Ling Tsai, 1999, “Improved Methods For Tests Of Long-Run Abnormal Stock Returns,” Journal of Finance, 54, 165-201.

5. Roll, Richard, 1983, “Vas Ist Das?,” Journal of Portfolio Management, 9, 18-28.

6. Shumway, Tyler, 1997, “The Delisting Bias in CRSP Data,” Journal of Finance, 52, 327-340.

7. Scholes, Myron and Joseph Williams, 1977, “Estimating Betas From Nonsynchronous Data,” Journal of Financial Economics, 5, 309-327.

8. Cheng, Pao and M. King Deets, 1971, “Statistical Biases And Security Rates Of Return,” Journal of Financial and Quantitative Analysis, 6, 977-994.

9. Kothari, S. P., and Jerold Warner, 1996, “Measuring Long-Horizon Security Price Performance,” Journal of Financial Economics, 43, 301-339.

10. Barber, Brad and John Lyon, 1996, “How Can Long-Run Abnormal Stock Returns be Both Positively and Negatively Biased?” UC-Davis Working Paper.

7 Deeper Philosophies of Performance Evaluation

· Roll’s “ambiguity paper”

· Mayers and Rice's reply to Roll

· Roll's reply to Mayers and Rice

· Cornell’s thoughts

Readings:

1. Roll, Richard, 1978, “Ambiguity When Performance is Measured by the Securities Market Line,” Journal of Finance, 33, pp. 1051-1069.

2. Mayers, David and Edward Rice, 1979, “Measuring Portfolio Performance and the Empirical Content of Asset Pricing Models,” Journal of Financial Economics, 7, pp. 3-29.

3. Roll, Richard, 1979, “A Reply to Mayers and Rice,” Journal of Financial Economics, 7, pp. 391-400.

4. Cornell, Bradford, 1979, “Asymmetric Information and Portfolio Performance Measurement,” Journal of Financial Economics, 7, pp. 381-390.

5. Lo, Andrew and Craig MacKinlay, 1990, “Data-Snooping Biases In Tests Of Financial Asset Pricing Models,” Review of Financial Studies, 3, pp. 431-468.

8 Recent Performance Evaluation and Attribution Techniques for Stock Selectivity, Style Timing, and Trading Costs

· Measuring selectivity with only returns information (the Fama-French and Carhart methods)

· Measuring selectivity with portfolio holdings information (the Daniel, Grinblatt, Titman, and Wermers method)

· Attribution analysis with the Wermers (2000) method
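
As a sketch of the contrast between the two approaches, the returns-based (Carhart) regression and the holdings-based DGTW characteristic-selectivity measure take roughly the following forms:

\[
r_{p,t} = \alpha_p + b_p\,\mathit{RMRF}_t + s_p\,\mathit{SMB}_t + h_p\,\mathit{HML}_t + m_p\,\mathit{MOM}_t + \varepsilon_{p,t},
\]
\[
\mathit{CS}_t = \sum_{j} w_{j,t-1}\left( R_{j,t} - R_t^{\,b(j,\,t-1)} \right),
\]

where the first regression measures selectivity as the intercept relative to market, size, value, and momentum factors, and the second compares each holding's return with the return \(R_t^{\,b(j,t-1)}\) of a benchmark portfolio of stocks matched on size, book-to-market, and momentum characteristics, weighted by the fund's beginning-of-period portfolio weights \(w_{j,t-1}\).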

Readings:

1.  Wermers, Russ, 2000, “Mutual Fund Performance: An Empirical Decomposition into Stock-Picking Talent, Style, Transactions Costs, and Expenses,” The Journal of Finance, 55, pp. 1655-1695.

2.  Carhart, Mark, 1997, “On Persistence in Mutual Fund Performance,” Journal of Finance, 52, pp. 57-82.

3.  Kothari, S.P., and Jerold Warner, 2001, “Evaluating Mutual Fund Performance,” The Journal of Finance, 56, pp. 1985-2010.

4.  Daniel, Kent, Mark Grinblatt, Sheridan Titman, and Russ Wermers, 1997, “Measuring Mutual Fund Performance with Characteristic-Based Benchmarks,” Journal of Finance, 52, pp. 1035-1058.

5.  Keim, Donald, and Ananth Madhavan, 1998, “The Cost of Institutional Equity Trades,” Financial Analysts Journal, July/August.

6. Wermers, Russ, 2001, “Style Drift,” Working Paper.

7. DeRoon, Frans, Theo Nijman, and Jenke TerHorst, 2000, “Evaluating Style Analysis,” Working Paper.

9 The Positive-Period Weighting Measure as a Class of Performance Measures

Readings:

1.  Grinblatt, Mark, and Sheridan Titman, 1989, “Portfolio Performance Evaluation: Old Issues And New Insights,” Review of Financial Studies, 2, pp. 393-422.
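
Loosely stated, the positive-period-weighting idea in this paper is to replace the equal period weights behind the Jensen alpha with strictly positive weights \(w_t\) chosen so that an uninformed manager earns a measure of zero:

\[
\alpha_{\mathrm{PPW}} = \sum_{t=1}^{T} w_t\, r_{p,t}, \qquad w_t > 0, \qquad \sum_{t=1}^{T} w_t\, r_{b,t} = 0,
\]

with \(r_{p,t}\) and \(r_{b,t}\) the fund and benchmark excess returns. The positivity restriction is what prevents a successful market timer from being assigned negative measured performance, a well-known shortcoming of the unconditional Jensen measure.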

10 What To Do When You Have No Idea of the Proper Model or Benchmark (Or, the Model and Benchmark are Not Available): Performance Evaluation Using Bootstrapped Benchmarks (Weights vs. Returns)

· The Copeland and Mayers technique

· The Grinblatt and Titman technique
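
As a rough sketch, the Grinblatt and Titman "performance measurement without benchmarks" idea uses only the fund's own portfolio weights and subsequent asset returns, in a measure of approximately the form

\[
\mathrm{PCM} = \frac{1}{T} \sum_{t=1}^{T} \sum_{j} \left( w_{j,t-1} - w_{j,t-1-k} \right) R_{j,t},
\]

where \(w_{j,t-1}\) is the fund's weight in asset \(j\) entering period \(t\), \(k\) is a lag of several quarters, and \(R_{j,t}\) is the asset's period-\(t\) return. Under the null of no private information, weight changes are uncorrelated with subsequent returns, so the measure has expectation zero without specifying any external benchmark or asset pricing model.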

Readings:

1.  Copeland, Thomas E. and David Mayers, 1982, “The Value Line Enigma (1965-1978): A Case Study of Performance Evaluation Issues,” Journal of Financial Economics, 10, 289-322.

2.  Grinblatt, Mark, and Sheridan Titman (1993), “Performance Measurement without Benchmarks: An Examination of Mutual Fund Returns,” Journal of Business, 66, 47-68.

3. Cornell, Bradford, 1979, “Asymmetric Information and Portfolio Performance Measurement,” Journal of Financial Economics, 7, 381-390.

11 Recent Conditional vs. Unconditional Models of Performance (And Why It Makes a Difference!)

· Conditional alpha models, conditional alpha and beta models
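
Schematically, the conditional specifications covered here let alpha and beta move with a vector \(z_t\) of lagged, publicly available information variables (dividend yield, term spread, short-term interest rate, and the like):

\[
r_{p,t+1} = \alpha_p(z_t) + \beta_p(z_t)\, r_{b,t+1} + \varepsilon_{p,t+1},
\qquad
\beta_p(z_t) = b_0 + B' z_t,
\qquad
\alpha_p(z_t) = a_0 + A' z_t,
\]

where \(z_t\) is demeaned. Setting \(A = 0\) gives a conditional-beta, unconditional-alpha model, and setting \(A = B = 0\) recovers the unconditional Jensen regression; the distinction matters because a manager who merely trades on public information can appear skilled under the unconditional model.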

Readings:

1. Ferson, Wayne, Jon Christopherson, and Andrew Turner, 1999, “Performance Evaluation Using Conditional Alphas and Betas,” The Journal of Portfolio Management, Fall.

12 Measuring Alternative Investment Performance: Options, Futures, Hedge Fund Performance, and Other Non-Normal Return Portfolios—Various Approaches

· Dealing with non-normal asset returns as well as dynamic strategies

Readings:

1. Leland, Hayne, 1999, “Beyond Mean-Variance: Performance Measurement in a Nonsymmetrical World,” Financial Analysts Journal, January/February.

2. Rubinstein, Mark, 2001, “Derivatives Performance Attribution,” Journal of Financial and Quantitative Analysis, March.

3. Brown, Stephen, and William N. Goetzmann, 2001, “Hedge Funds with Style,” Working Paper.

4. Agarwal, Vikas, and Narayan Naik, 2000, “Performance Evaluation of Hedge Funds with Option-Based and Buy-and-Hold Strategies,” Working Paper.

5. Fung, William, and David Hsieh, 2000, “Performance Characteristics of Hedge Funds and Commodity Funds: Natural vs. Spurious Biases,” Journal of Financial and Quantitative Analysis, September.

6. Kraus, Alan, and Robert Litzenberger, 1976, “Skewness Preference and the Valuation of Risky Assets,” Journal of Finance, 31, 1085-1100.

13 Factor-Based Regression Models of Fixed-Income Portfolio Performance

· Factors that tend to be important in explaining the cross-section of bond returns

· Results with U.S. bond funds.

· Computer Exercise: Computing and comparing some basic performance evaluation models for fixed-income portfolios
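
Ahead of the computer exercise, the following minimal Python sketch shows one way a multi-index regression for a bond fund, broadly in the spirit of the readings below, might be set up; the particular factor choices and all names are illustrative assumptions, not the specification used in any single paper.

import numpy as np
import statsmodels.api as sm

def bond_fund_factor_alpha(fund_excess, factor_excess):
    """Illustrative multi-index regression for a bond fund.

    fund_excess   : 1-D array of the fund's monthly returns in excess of
                    the risk-free rate
    factor_excess : 2-D array (months x factors) of factor excess returns,
                    e.g. an aggregate bond index, a high-yield index, a
                    mortgage index, and a stock index (illustrative choice)
    """
    X = sm.add_constant(np.asarray(factor_excess))
    fit = sm.OLS(np.asarray(fund_excess), X).fit()
    return {"alpha_monthly": fit.params[0],
            "alpha_tstat": fit.tvalues[0],
            "factor_loadings": fit.params[1:]}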

Readings:

1.  Blake, Christopher R., Edwin J. Elton and Martin J. Gruber, 1993, “The Performance Of Bond Mutual Funds,” Journal of Business, 66, pp. 371-403.

2.  Elton, Edwin, Martin Gruber, and Christopher Blake, 1995, “Fundamental Economic Variables, Expected Returns, and Bond Fund Performance,” Journal of Finance, 50, pp. 1229-1256 (also listed under Session 2 above).