Andrew C. Chang
Federal Reserve System
Publications
Featured research published by Andrew C. Chang.
Social Science Research Network | 2015
Andrew C. Chang; Phillip Li
We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files. Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research.
Journal of Economics and Business | 2016
Andrew C. Chang; Tyler J. Hanson
We analyze forecasts of consumption, nonresidential investment, residential investment, government spending, exports, imports, inventories, gross domestic product, inflation, and unemployment prepared by the staff of the Board of Governors of the Federal Reserve System for meetings of the Federal Open Market Committee from 1997 to 2008, called the Greenbooks. We compare the root mean squared error, mean absolute error, and the proportion of directional errors of Greenbook forecasts of these macroeconomic indicators with the errors from three forecasting benchmarks: a random walk, a first-order autoregressive model, and a Bayesian model averaged forecast from a suite of univariate time-series models commonly taught to first-year economics graduate students. We estimate our forecasting benchmarks both on end-of-sample vintage and real-time vintage data. We find that Greenbook forecasts significantly outperform our benchmark forecasts for horizons less than one quarter ahead. However, by the one-year forecast horizon, typically at least one of our forecasting benchmarks performs as well as Greenbook forecasts. Greenbook forecasts of personal consumption expenditures and unemployment tend to do relatively well, while Greenbook forecasts of inventory investment, government expenditures, and inflation tend to do poorly.
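The forecast comparison above rests on three standard loss measures: root mean squared error, mean absolute error, and the proportion of directional errors. A minimal sketch of how these might be computed for a single series (the function name and the sign convention for a "directional error" are illustrative, not taken from the paper):

```python
import math

def forecast_metrics(actual, forecast):
    """Compute RMSE, MAE, and the proportion of directional errors.

    A directional error is counted when the forecast predicts the
    wrong sign of change relative to the previous realized value.
    """
    errors = [f - a for f, a in zip(forecast, actual)]
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    mae = sum(abs(e) for e in errors) / len(errors)
    # Directional errors: compare the predicted change against the
    # realized change; opposite signs count as a miss.
    wrong = 0
    for t in range(1, len(actual)):
        pred_change = forecast[t] - actual[t - 1]
        real_change = actual[t] - actual[t - 1]
        if pred_change * real_change < 0:
            wrong += 1
    dir_err = wrong / (len(actual) - 1)
    return rmse, mae, dir_err
```

The paper applies losses like these to each macroeconomic indicator and horizon separately, then compares them across the Greenbook and the benchmark models.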
Social Science Research Network | 2015
Phillip Li; Andrew C. Chang
We analyze the effect of measurement error in macroeconomic data on economics research using two features of the estimates of latent US output produced by the Bureau of Economic Analysis (BEA). First, we use the fact that the BEA publishes two theoretically identical estimates of latent US output that differ only due to measurement error: the better-known gross domestic product (GDP), which the BEA constructs using expenditure data, and gross domestic income (GDI), which the BEA constructs using income data. Second, we use BEA revisions to previously published releases of GDP and GDI. Using a sample of 23 published economics papers from top economics journals that utilize GDP as a key component of an estimated model, we assess whether using either revised GDP or GDI instead of GDP in the published paper would change reported results. We find that estimating models using revised GDP generates the same qualitative result as the original paper in all 23 cases. Estimating models using GDI, both with the GDI data originally available to the authors and with revised GDI, instead of GDP generates larger differences in results than those obtained with revised GDP. For 3 of 23 papers (13%), the results we obtain with GDI are qualitatively different from the original published results.
Social Science Research Network | 2018
Andrew C. Chang
What is the policy uncertainty surrounding expiring taxes? How uncertain are the approvals of routine extensions of temporary tax policies? To answer these questions, I use event studies to measure cumulative abnormal returns (CARs) for firms that claimed the U.S. research and development (R&D) tax credit from 1996 to 2015. Over this period, the U.S. R&D tax credit was statutorily temporary but was routinely extended ten times, until it was made permanent in 2015. I take the event dates as both when these ten extensions of the R&D tax credit were introduced into committee and when the extensions were signed into law by the U.S. president. On average, I find no statistically significant CARs on these dates, which suggests that the market anticipated these extensions to become law. My results indicate that a routine extension of a temporary tax policy neither generates policy uncertainty nor constitutes a fiscal shock.
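An event study of this kind cumulates abnormal returns, the gap between a firm's realized return and a benchmark expected return, over a window around each event date. A minimal market-model sketch (the function name, and the assumption that `alpha` and `beta` were already estimated over a pre-event period, are illustrative, not the paper's specification):

```python
def cumulative_abnormal_return(firm_returns, market_returns, alpha, beta, window):
    """Cumulative abnormal return over an event window.

    The abnormal return on day t is the firm's realized return minus
    the market-model expectation (alpha + beta * market return).
    `window` is an inclusive (start, end) pair of day indices.
    """
    start, end = window
    car = 0.0
    for t in range(start, end + 1):
        expected = alpha + beta * market_returns[t]
        car += firm_returns[t] - expected
    return car
```

Averaging CARs like this across the sample of R&D-credit claimants around each extension date, and testing whether the average differs from zero, is the shape of the test the abstract describes.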
Economic Inquiry | 2018
Andrew C. Chang; Phillip Li
We use a preanalysis plan to analyze the effect of measurement error on economics research using the fact that the Bureau of Economic Analysis both revises its gross domestic product (GDP) data and also publishes a second, theoretically identical estimate of U.S. output that only differs from GDP due to measurement error: gross domestic income (GDI). Using a sample of 23 models published in top economics journals, we find that reestimating models using revised GDP always gives the same qualitative result as the original publication. Estimating models using GDI instead of GDP gives a different qualitative result for three of 23 models (13%).
Critical Finance Review | 2018
Andrew C. Chang; Phillip Li
Is Economics Research Replicable? Sixty Published Papers From Thirteen Journals Say “Often Not”
Applied Economics | 2016
Andrew C. Chang
This paper examines the effect of increased market concentration of the banking industry caused by the Riegle-Neal Interstate Banking and Branching Efficiency Act (IBBEA) on the availability of finance for small firms engaged in research and development (R&D). I measure the financing decisions of these small firms using a balanced panel of Small Business Innovation Research (SBIR) applications. Using difference-in-differences, I find IBBEA decreased the supply of finance for small R&D firms. This effect is larger for late adopters of IBBEA, which tended to be states with stronger small banking sectors pre-IBBEA.
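The difference-in-differences logic compares the change in outcomes for the treated group against the change for the control group, so that common trends cancel. A minimal 2x2 sketch (illustrative only; the paper's actual design exploits staggered state adoption of IBBEA in a panel, with controls):

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Canonical 2x2 difference-in-differences estimate:
    (post-minus-pre change for the treated group) minus
    (post-minus-pre change for the control group).
    Each argument is a list of outcomes for one group-period cell.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change
```

Under the parallel-trends assumption, this difference isolates the effect of the treatment (here, IBBEA adoption) from shocks common to both groups.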
The American Economic Review | 2017
Andrew C. Chang; Phillip Li
Social Science Research Network | 2014
Andrew C. Chang
Social Science Research Network | 2018
Andrew C. Chang