Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Steven Thorley is active.

Publications


Featured research published by Steven Thorley.


The Journal of Portfolio Management | 2006

Minimum-Variance Portfolios in the U.S. Equity Market

Roger G. Clarke; Harindra de Silva; Steven Thorley

In the minimum-variance portfolio, far to the left on the efficient frontier, security weights are independent of expected security returns. Portfolios can be constructed using only the estimated security covariance matrix, without reference to equilibrium expected or actively forecasted returns. Empirical results illustrate the practical value of large-scale numerical optimizations using return-based covariance matrix estimation methodologies, providing new perspective on the factor characteristics of low-volatility portfolios. Optimizations that go back to 1968 reveal that the long-only minimum-variance portfolio has about three-fourths the realized risk of the capitalization-weighted market portfolio, with higher average returns.
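In unconstrained form the construction is simple: given an estimated security covariance matrix Σ, minimum-variance weights are proportional to Σ⁻¹1, and a long-only version can be obtained numerically. The sketch below is a minimal illustration on simulated data with placeholder dimensions, not the paper's CRSP universe or covariance estimation methodology.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated return history for illustration only (the paper uses CRSP data
# and more sophisticated return-based covariance estimators).
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.02, size=(250, 5))   # 250 periods, 5 assets
cov = np.cov(returns, rowvar=False)              # sample covariance matrix

# Unconstrained minimum-variance weights: w proportional to inv(cov) @ 1.
ones = np.ones(cov.shape[0])
w_unconstrained = np.linalg.solve(cov, ones)
w_unconstrained /= w_unconstrained.sum()

# Long-only minimum-variance weights via numerical optimization.
def portfolio_variance(w, cov):
    return w @ cov @ w

n = cov.shape[0]
result = minimize(
    portfolio_variance, x0=np.full(n, 1.0 / n), args=(cov,),
    bounds=[(0.0, 1.0)] * n,                              # long-only constraint
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
w_long_only = result.x
print("unconstrained:", w_unconstrained.round(3))
print("long-only:    ", w_long_only.round(3))
```

Note that expected returns never enter the optimization; only the covariance estimate does, which is the point the abstract emphasizes.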


Journal of Financial and Quantitative Analysis | 1994

Bubbles, Stock Returns, and Duration Dependence

Grant Richard McQueen; Steven Thorley

A new testable implication is derived from the rational speculative bubbles model: the presence of bubbles implies positive duration dependence in runs of high returns. Specifically, the probability of observing an end to a run of high returns declines with the length of the run. Traditional duration dependence tests are adapted for use with discrete stock-run data and, consistent with the existence of bubbles, evidence of duration dependence is found in monthly real stock returns.
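The flavor of the test can be conveyed by tabulating run lengths of above-median returns and the fraction of runs that end at each length (a nonparametric hazard). The sketch below uses simulated returns and is only an illustration of the run-length/hazard idea, not the authors' test statistic or data.

```python
import numpy as np

# Simulated monthly returns for illustration; the paper uses real
# (inflation-adjusted) monthly stock returns.
rng = np.random.default_rng(1)
returns = rng.normal(0.005, 0.04, size=600)

# Define a "high" return as one above the sample median and collect the
# lengths of consecutive runs of high returns.
high = returns > np.median(returns)
run_lengths = []
count = 0
for flag in high:
    if flag:
        count += 1
    elif count > 0:
        run_lengths.append(count)
        count = 0
if count > 0:
    run_lengths.append(count)
run_lengths = np.array(run_lengths)

# Sample hazard at length i: share of runs lasting at least i months that end
# at exactly i months.  Under the bubble model this should decline with i.
for i in range(1, run_lengths.max() + 1):
    at_risk = (run_lengths >= i).sum()
    ended = (run_lengths == i).sum()
    print(f"length {i}: hazard = {ended / at_risk:.2f} (runs at risk = {at_risk})")
```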


Journal of Monetary Economics | 1993

Asymmetric business cycle turning points

Grant Richard McQueen; Steven Thorley

This paper presents evidence that business cycles are characterized by ‘sharp’ troughs and ‘round’ peaks. Changes in growth rates surrounding NBER troughs are found to be larger than changes surrounding peaks, and the probability of a direct contraction-to-recovery transition is found to be higher than the probability of a direct recovery-to-contraction transition. These findings suggest caution in interpreting empirical tests of economic series that assume symmetry and motivate theoretical models in which additions to capacity, production, and employment at the end of a recession are not mirror images of the cutbacks at the end of an expansion.


Pacific-basin Finance Journal | 1998

Are there rational speculative bubbles in Asian stock markets?

Kalok Chan; Grant Richard McQueen; Steven Thorley

Six Asian stock markets (Hong Kong, Japan, Korea, Malaysia, Thailand and Taiwan) and the U.S. stock market are evaluated for evidence of rational speculative bubbles using two types of tests. First, the duration dependence and conditional skewness tests of McQueen and Thorley (1994) are used on the complete time series of returns. Second, explosiveness tests are applied to specific episodes of apparent bubbles. In general, the Asian stock returns exhibit some unusual characteristics, but these characteristics do not conform to the predictions of the rational speculative bubbles model.


Financial Analysts Journal | 2001

Return Dispersion and Active Management

Harindra de Silva; Steven G. Sapra; Steven Thorley

The cross-sectional variation of U.S. stock returns has been unusually high in the past few years. The wide dispersion in security returns has led to correspondingly wide dispersion in fund returns. For example, the cross-sectional standard deviation of returns on actively managed domestic equity mutual funds was 24 percent in 1999, compared with only 5 percent in 1996. We argue that the wide dispersion in fund performance is a natural result of increased security return dispersion and has little to do with changes in the informational efficiency of the market or the range of managerial talent. The dramatic increase in return dispersion warrants a reexamination of traditional methodologies for measuring fund performance that implicitly assume constant dispersion. We show how performance benchmarking can be extended to incorporate the information embedded in return dispersion, as well as the benchmark mean return, by correcting fund alphas with a period- and asset-class-specific measure of security return dispersion.

The cross-sectional variation in U.S. stock returns jumped to unusually high levels in the fall of 1998. The spread in returns between individual stocks was wider in 1999 and 2000 than at any other point in modern market history. The return spread is best measured by dispersion—the cross-sectional standard deviation of individual security returns within an asset class. Dispersion can be thought of as the cross-sectional analog to volatility—the standard deviation of returns on a security or portfolio over time.

Economic historians believe that periods of wide equity return dispersion are associated with structural shifts in the underlying economy resulting from political or technological disruptions. The fundamental restructuring of the economic order leads to large corporate revaluations, with some companies going up in value while others decline. A possible candidate for the current episode of equity market dispersion is a technological shift—the emergence of new information technologies and the perceived changes in corporate competitive advantages associated with their use.

The recent increase in security return dispersion has important implications for active management. Portfolio theory predicts that wide security dispersion will translate into wide dispersion of fund returns. We document the accuracy of this prediction. We found a very high correspondence between individual-security return dispersion and fund return dispersion on a year-to-year basis. For example, not only was 1999 a year of unusually wide dispersion for security returns, but it was also a year in which the dispersion of returns of actively managed domestic equity mutual funds was at an all-time high—24 percent compared with the typical range of 5–10 percent.

An appreciation for the correspondence between dispersion in security and fund returns can help reverse some common misconceptions about active management. For example, publicity about the recently large spread in fund returns can be misinterpreted as evidence of a larger variation in managerial talent. In fact, it is simply an artifact of wider-than-normal return dispersion in the security pool from which managers can choose. Misunderstanding the cause of increased cross-sectional variation in fund performance can also lead to the counterintuitive conclusion that market efficiency has suddenly decreased.
We reiterate Sharpe's logic that active management measured against marketwide benchmarks is a zero-sum game before costs and a negative-sum game after costs. The arithmetic of active management dictates that when the performance of all investor groups is properly accounted for, exactly half will outperform a total market index before costs. After research and transaction costs, fewer than half will outperform. Thus, the percentage of all actively managed funds that beat the market in any period is unrelated to market efficiency. Rather, it is determined by the magnitude of return dispersion around the mean and the costs of active management. We show that as return dispersion increases, the percentage of outperformers also increases.

Perhaps the most important implication of intertemporal variation in return dispersion is in the area of individual-fund performance measurement. During a year with marketwide fund dispersion of 5 percent, a positive alpha (return in excess of the benchmark) of 10 percentage points is a significant achievement. In a year when fund dispersion is 20 percent, a 10 percentage point alpha means a lot less. Averaging alphas over time without consideration for intertemporal variations in dispersion can lead to a material misstatement of relative performance.

We show how performance benchmarking can be extended to incorporate the information embedded in return dispersion, as well as information on the benchmark mean, by correcting fund alphas with a period- and asset-class-specific measure of security return dispersion. Weighting alpha observations by the inverse of return dispersion can be characterized as an econometric correction for heteroscedasticity. We argue that multiperiod performance statistics that correct for intertemporal variations in return dispersion are better indicators of managerial talent and may provide improved predictions of future added value. Return dispersion corrections are particularly relevant in the measurement of U.S. equity portfolio performance over the past several years.
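The correction described above amounts to scaling each period's alpha by that period's cross-sectional dispersion before averaging, an inverse-dispersion weighting analogous to a heteroscedasticity correction. A minimal sketch with made-up numbers (the fund returns, benchmark returns, and dispersion figures are placeholders, not the study's data):

```python
import numpy as np

# Hypothetical annual data: fund return, benchmark return, and the
# cross-sectional dispersion of returns in the asset class for each year.
fund_returns      = np.array([0.18, 0.31, -0.02, 0.12])
benchmark_returns = np.array([0.10, 0.21, -0.09, 0.11])
dispersion        = np.array([0.06, 0.24,  0.20, 0.07])  # cross-sectional std dev

alpha = fund_returns - benchmark_returns

# Naive multiperiod skill measure: simple average of alphas.
naive_average = alpha.mean()

# Dispersion-adjusted measure: scale each alpha by that period's dispersion,
# so a 10% alpha in a 20%-dispersion year counts for less than in a 6% year.
adjusted = alpha / dispersion
print(f"average alpha:                      {naive_average:.3f}")
print(f"average dispersion-adjusted alpha:  {adjusted.mean():.3f}")
```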


The Journal of Portfolio Management | 2013

Risk Parity, Maximum Diversification, and Minimum Variance: An Analytic Perspective

Roger G. Clarke; Harindra de Silva; Steven Thorley

Portfolio construction techniques based on predicted risk, without expected returns, have become popular in the last decade. In terms of individual asset selection, minimum-variance and (more recently) maximum diversification objective functions have been explored, motivated in part by the cross-sectional equity risk anomaly first documented in Ang, Hodrick, Xing, and Zhang [2006]. Application of these objective functions to large (e.g., 1,000 stock) investable sets requires sophisticated estimation techniques for the risk model. On the other end of the spectrum, the principle of risk parity, traditionally applied to small-set (e.g., 2 to 10) asset allocation decisions, has been proposed for large-set security selection applications.

Unfortunately, most of the published research on these low-risk structures is based on standard unconstrained portfolio theory, matched with long-only simulations. The empirical results in such studies are specific to the investable set, time period, maximum weight constraints, and other portfolio limitations, as well as the risk model. This article compares and contrasts risk-based portfolio construction techniques using long-only analytic solutions. We also provide a simulation of risk-based portfolios for large-cap U.S. stocks, using the CRSP database from 1968 to 2012. We perform this back-test using a single-index model, standard OLS risk estimates, and no maximum position or other portfolio constraints, which leads to easily replicable results.

The concept of risk parity has evolved over time from the original idea that Bridgewater embedded in research in the 1990s. Initially, an asset allocation portfolio was said to be in parity when weights are proportional to asset-class inverse volatility. For example, if the equity subportfolio has a forecasted volatility of 15 percent and the fixed-income subportfolio has a volatility of just 5 percent, then a combined portfolio of 75 percent fixed income and 25 percent equity (i.e., three times as much fixed income) is said to be in parity. This early definition of risk parity ignored correlations, even as the concept was applied to more than two asset classes. Qian [2006] formalized a more complete definition that considers correlations, couching the property in terms of a risk budget where weights are adjusted so that each asset has the same contribution to portfolio risk. Maillard, Roncalli, and Teiletche [2010] call this an “equal risk contribution” portfolio and analyze the properties of an unconstrained analytic solution. Lee’s [2011] equivalent “portfolio beta” interpretation says that risk parity is achieved when weights are proportional to the inverse of their beta with respect to the final portfolio. Anderson, Bianchi, and Goldberg [2012] analyze the historical track record of risk parity in an asset allocation context, while this article focuses on analytic solutions.
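In the simplest version, parity means weights proportional to inverse volatility; the fuller definition (Qian [2006]; Maillard, Roncalli, and Teiletche [2010]) equalizes each asset's contribution to portfolio variance. The sketch below illustrates both on a toy three-asset covariance matrix; it is a numerical illustration under assumed inputs, not the article's analytic long-only solutions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy three-asset covariance matrix (illustration only, not the article's data).
vols = np.array([0.15, 0.05, 0.10])
corr = np.array([[1.0, 0.2, 0.6],
                 [0.2, 1.0, 0.1],
                 [0.6, 0.1, 1.0]])
cov = np.outer(vols, vols) * corr

# Naive risk parity: weights proportional to inverse volatility (ignores correlations).
w_naive = (1.0 / vols) / (1.0 / vols).sum()

# Equal risk contribution: choose weights so that each asset's contribution to
# portfolio variance, w_i * (cov @ w)_i, is the same for all assets.
def contribution_spread(w, cov):
    contrib = w * (cov @ w)
    return ((contrib - contrib.mean()) ** 2).sum()

n = len(vols)
res = minimize(
    contribution_spread, x0=np.full(n, 1.0 / n), args=(cov,),
    bounds=[(1e-6, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
w_erc = res.x
print("inverse-volatility weights:      ", w_naive.round(3))
print("equal risk contribution weights: ", w_erc.round(3))
```

With nontrivial correlations the two weightings differ, which is exactly the gap between the early inverse-volatility definition and the later equal-risk-contribution formalization.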


The Journal of Portfolio Management | 2010

Know Your VMS Exposure

Roger G. Clarke; Harindra de Silva; Steven Thorley

One of the ongoing debates in equity market research is the set of common factors that explains the cross section of individual stock returns. With the influential backing of Fama and French [1993], a three-factor model that includes the market, size, and value factors is frequently cited in academic research and widely used in portfolio management. More recently, momentum has joined the list of accepted factors, resulting in references to a four-factor model. Lately, security volatility has begun to be used, along with the factors just mentioned, in describing portfolio risk.

The authors introduce a specific measure of the idiosyncratic volatility factor that mirrors the Fama–French methodology, calling it VMS for volatile-minus-stable stocks. VMS is calculated for the entire span of the CRSP database and found to have strong credentials. VMS appears to be more important than SMB (small-minus-big market capitalization) and HML (high-minus-low book-to-market ratio), and similar to UMD (up-minus-down past return), in explaining the covariance structure of stock returns. The relative importance of VMS holds over the entire history for which it can be measured in the U.S. market (1931–2008) and continues to be an important factor in the covariance structure of stock returns in recent decades (1983–2008).

VMS, however, lacks the orthogonality to the better-known factors that is desirable in a new factor. Specifically, VMS is highly correlated with the general market (e.g., volatile stocks outperform stable stocks when the general equity market goes up), despite the fact that the authors measure security volatility in a market-idiosyncratic setting. VMS is also positively correlated with SMB (e.g., volatile stocks tend to outperform when small-cap stocks outperform), despite the Fama–French process of double sorting on market capitalization. Finally, VMS is negatively correlated with HML (e.g., volatile stocks tend to outperform when growth stocks outperform), although this correlation was not pronounced until the last few decades. In contrast to the other Fama–French factors, the average return of the VMS factor has been close to zero over time and negative in recent decades.
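The construction mirrors the familiar Fama–French factor recipe: rank stocks on trailing idiosyncratic volatility, form capitalization-weighted portfolios of the most volatile and most stable names, and take the return difference. The sketch below is a simplified single-sort illustration on simulated data; the authors' procedure (including the size double sort and the idiosyncratic volatility estimation) is richer.

```python
import numpy as np

# Simulated cross-section for one month: trailing idiosyncratic volatility,
# market caps, and next-month returns (placeholders, not CRSP data).
rng = np.random.default_rng(2)
n_stocks = 1000
idio_vol = rng.uniform(0.10, 0.60, n_stocks)
market_cap = rng.lognormal(mean=8.0, sigma=1.5, size=n_stocks)
next_return = rng.normal(0.01, 0.08, n_stocks)

# Single sort into terciles on idiosyncratic volatility (the paper also
# double-sorts on market capitalization, as in Fama-French).
lo, hi = np.quantile(idio_vol, [1 / 3, 2 / 3])
stable = idio_vol <= lo
volatile = idio_vol >= hi

def cap_weighted_return(mask):
    w = market_cap[mask] / market_cap[mask].sum()
    return w @ next_return[mask]

# VMS: volatile-minus-stable factor return for the month.
vms = cap_weighted_return(volatile) - cap_weighted_return(stable)
print(f"VMS factor return this month: {vms:.4f}")
```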


Communications in Statistics - Simulation and Computation | 2005

Estimating Hazard Functions for Discrete Lifetimes

Scott D. Grimshaw; James B. McDonald; Grant Richard McQueen; Steven Thorley

Frequently in inference, the observed data are modeled as a sample from a continuous probability model, implying the observed data are precisely measured. Usually, the actual data available to the investigator are discrete: either because they are rounded, meaning the exact measurement is known only to lie within an interval defined by some small measurement unit related to the precision of the measuring device, or because the data are inherently discrete, meaning the time periods until the event of interest are countable instead of continuous. This article is motivated by the common practice of testing for duration dependence (a non-constant hazard function) in economic and financial data using the continuous Weibull distribution when the data are discrete. A simulation study shows that biased parameter estimates and distorted hypothesis tests result when the degree of discretization is severe. When observations are rounded, as in measuring the time between stock trades, it is proper to treat them as interval-censored. When observations are discrete, as in measuring the length of stock runs, a discrete hazard function must be specified. Both cases are examined in simulation studies and demonstrated on financial data.
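The interval-censoring point can be made concrete with a small maximum-likelihood comparison: fit a Weibull model to rounded lifetimes once treating the rounded values as exact continuous observations and once treating each value t as the interval (t-1, t]. The sketch below uses simulated data and is only an illustration of the two likelihoods, not the article's simulation design or estimators.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Simulated continuous lifetimes, then rounded up to whole units to mimic
# coarse measurement (illustration only).
rng = np.random.default_rng(3)
true_shape, true_scale = 0.8, 3.0
lifetimes = weibull_min.rvs(true_shape, scale=true_scale, size=2000, random_state=rng)
rounded = np.ceil(lifetimes)                 # observed in whole time units

def neg_loglik_exact(params):
    # Incorrect treatment: rounded values taken as exact continuous observations.
    shape, scale = params
    return -weibull_min.logpdf(rounded, shape, scale=scale).sum()

def neg_loglik_interval(params):
    # Interval-censored treatment: T is known only to lie in (t - 1, t].
    shape, scale = params
    upper = weibull_min.cdf(rounded, shape, scale=scale)
    lower = weibull_min.cdf(rounded - 1, shape, scale=scale)
    return -np.log(np.maximum(upper - lower, 1e-300)).sum()

x0 = np.array([1.0, 2.0])
bounds = [(0.1, 5.0), (0.1, 20.0)]
fit_exact = minimize(neg_loglik_exact, x0, bounds=bounds)
fit_interval = minimize(neg_loglik_interval, x0, bounds=bounds)
print("treated as exact:  shape, scale =", fit_exact.x.round(2))
print("interval-censored: shape, scale =", fit_interval.x.round(2))
```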


Financial Analysts Journal | 2005

Performance Attribution and the Fundamental Law

Roger G. Clarke; Steven Thorley

The reported study operationalized the “fundamental law of active management” in the context of a factor-based performance attribution system. The system incorporates factor payoffs in the linear regression framework that many portfolio managers and external reviewers use to judge what is being rewarded in the market. The study indicates that parameters of the fundamental law can be used to approximate and interpret the results of the regression-based performance attribution system. The procedure is illustrated by the use of security holdings, returns, and factor exposure data for two portfolios benchmarked to the S&P 500 Index for April 1995 to March 2004.

The study reported here operationalized the “fundamental law of active management” by using a factor-based performance attribution system that identifies the sources of benchmark-relative returns in actively managed portfolios. Some of the relative return can be ascribed to marketwide factor exposures that differ from the benchmark, such as beta, company size, and company sector membership, and the realized payoffs to those factors. Relative performance not captured by these marketwide factors is generally attributed to security selection. In practice, the information content of the security-ranking system is often measured by an information coefficient or the performance of stocks grouped within quantile rankings, with little attempt to relate the success of the security-ranking system to its actual basis point contribution to performance. In this article, we show how a regression-based attribution system can be extended to decompose the active return associated with stock selection into the information content of the rankings and constraint-induced noise.

The fundamental law of active management shows that, in addition to the forecasting power of the ranking system, performance is also influenced by how well the manager is able to structure the portfolio to capture the most attractive securities. The relationship between the security rankings and actual over- and underweight positions in the portfolio is measured by the transfer coefficient. A previous extension of the fundamental law demonstrated that the lower the transfer coefficient, the more noise in the active return. The procedures we discuss here allow the contribution from the security rankings to be separated from the noise component and give the manager insight into the determinants of portfolio performance.

To illustrate the attribution procedure and test the accuracy of the fundamental law, we collected data on two portfolios benchmarked to the S&P 500 Index for the 108 months of April 1995 to March 2004. We examined performance attribution results for both a long-only portfolio and a long-short portfolio constructed on the basis of the same signal. The results illustrate the advantages in implementation efficiency of long-short strategies. Despite the simplifying assumptions used in the fundamental law mathematics, our estimates of signal and noise contributions were within a basis point per month of the contributions from regression analysis.

We next used the 108 monthly time-series observations to test two key predictions of the fundamental law: an ex ante or expectational relationship for the information ratio and an ex post relationship describing the sources of realized variance in active returns. The fundamental law yields predictions about the expected value and variance of active returns under the assumption of fixed parameter values. Thus, the perfect empirical test of the fundamental law predictions requires repeated observations of the same month (or a time series without any structural changes in the market). In practice, covariance matrices and the underlying effectiveness of security-ranking procedures change over time, so our nine years of monthly observations provided only a rough check on the fundamental law predictions. Nevertheless, using the time-series averages as proxies for fixed parameter values, we found that the average information ratio in our sample is reasonably close to the value predicted by using the ex ante fundamental law equation with a transfer coefficient. In addition, the proportions of realized performance variance attributable to signal success and to constraint-induced noise are related to the squared transfer coefficient but with a bias toward more signal contribution than the ex post fundamental law equation predicts. Our subperiod analysis suggests that this bias results from nonstationarities inherent in real markets over time.
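The generalized fundamental law referred to above states that the expected information ratio is approximately IR ≈ TC × IC × √BR, where IC is the information coefficient of the ranking signal, BR the breadth (number of roughly independent bets), and TC the transfer coefficient linking the signal to the actual active weights; the ex post version assigns the fraction TC² of realized active-return variance to the signal and 1 − TC² to constraint-induced noise. A minimal numerical sketch with illustrative parameter values, not the study's estimates:

```python
import numpy as np

# Illustrative parameter values (not the study's estimates).
ic = 0.05            # information coefficient of the ranking signal
breadth = 500        # number of roughly independent bets per year
tc_long_only = 0.3   # transfer coefficient of a constrained long-only portfolio
tc_long_short = 0.8  # transfer coefficient of a less constrained long-short portfolio

def expected_information_ratio(tc, ic, breadth):
    # Generalized fundamental law of active management: IR ~= TC * IC * sqrt(BR).
    return tc * ic * np.sqrt(breadth)

for label, tc in [("long-only", tc_long_only), ("long-short", tc_long_short)]:
    ir = expected_information_ratio(tc, ic, breadth)
    # Ex post decomposition: TC^2 of active-return variance from the signal,
    # 1 - TC^2 from constraint-induced noise.
    print(f"{label}: expected IR = {ir:.2f}, signal share of variance = {tc**2:.2f}")
```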


The Journal of Portfolio Management | 2017

Pure Factor Portfolios and Multivariate Regression Analysis

Roger G. Clarke; Harindra de Silva; Steven Thorley

Linking factor portfolio construction to cross-sectional regressions of security returns on standardized factor exposures leads to a transparent and investable perspective on factor performance. Under capitalization weighting, multivariate regression coefficients translate to portfolio returns that are benchmark relative and cleared of secondary factor exposures. The methodological contributions in this article are illustrated using a 50-year data set of 1,000 large U.S. stocks and five factor exposures: value, momentum, small size, low beta, and profitability. Using two case studies in factor portfolio analysis, the authors focus on cheapness, as measured by earnings yield, and interest rate risk, as measured by sensitivity to the 10-year Treasury bond return.
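The mechanics are a weighted cross-sectional regression of security returns on an intercept and standardized factor exposures: with capitalization weights, the slope coefficients can be read as returns on pure factor portfolios that are benchmark relative and cleared of the other factors' exposures. A minimal one-period sketch on simulated data (the exposures, returns, and two-factor setup are placeholders, not the article's 1,000-stock, five-factor data set):

```python
import numpy as np

# Simulated one-period cross-section: market caps, two raw factor exposures,
# and security returns.
rng = np.random.default_rng(4)
n = 1000
market_cap = rng.lognormal(8.0, 1.5, n)
raw_value = rng.normal(size=n)
raw_momentum = rng.normal(size=n)
returns = 0.01 + 0.002 * raw_value - 0.001 * raw_momentum + rng.normal(0, 0.05, n)

w = market_cap / market_cap.sum()           # capitalization weights

def standardize(x, w):
    # Cap-weighted mean of zero and unit standard deviation.
    x = x - w @ x
    return x / np.sqrt(w @ x**2)

X = np.column_stack([
    np.ones(n),                              # intercept: cap-weighted market return
    standardize(raw_value, w),
    standardize(raw_momentum, w),
])

# Cap-weighted (WLS) cross-sectional regression: solve (X'WX) b = X'W r.
W = np.diag(w)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ returns)
print("market (intercept) return:   ", round(coef[0], 4))
print("pure value factor return:    ", round(coef[1], 4))
print("pure momentum factor return: ", round(coef[2], 4))
```

Each slope coefficient is the return of a portfolio with unit exposure to its own standardized factor and zero exposure to the others, which is the "pure factor portfolio" interpretation the article develops.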

Collaboration


Dive into Steven Thorley's collaborations.

Top Co-Authors

Steven G. Sapra

Claremont Graduate University


Keith Vorkink

Brigham Young University
