Kaiji Motegi
Kobe University
Publication
Featured research published by Kaiji Motegi.
Archive | 2017
Eric Ghysels; Jonathan B. Hill; Kaiji Motegi
This paper proposes a new test for a large set of zero restrictions in regression models based on a seemingly overlooked, but simple, dimension reduction technique. The procedure involves multiple parsimonious regression models where key regressors are split across simple regressions. Each parsimonious regression model has one key regressor and other regressors not associated with the null hypothesis. The test is based on the maximum of the squared parameters of the key regressors. Parsimony ensures sharper estimates and therefore improves power in small samples. We present the general theory of our test and focus on mixed frequency Granger causality as a prominent application involving many zero restrictions.

This paper presents simple Granger causality tests applicable to any mixed frequency sampling data setting, which feature remarkable power properties even with a relatively small sample size. Our tests are based on a seemingly overlooked, but simple, dimension reduction technique for regression models. If the number of parameters of interest is large, then in small or even large samples the standard trilogy of test statistics (Wald, Lagrange multiplier, and likelihood ratio) may not be well approximated by their asymptotic distributions. A bootstrap method can be employed to improve empirical test size, but this generally results in a loss of power. A shrinkage estimator can be employed, including the Lasso, adaptive Lasso, or ridge regression, but these are valid only under a sparsity assumption that does not apply to Granger causality tests. The procedure, which is of general interest when testing potentially large sets of parameter restrictions, involves multiple parsimonious regression models where each model regresses a low frequency variable onto only one individual lag or lead of a high frequency series, where that lag or lead slope parameter is necessarily zero under the null hypothesis of non-causality. Our test is then based on a max test statistic that selects the largest squared estimator among all parsimonious regression models. Parsimony ensures sharper estimates and therefore improved power in small samples. We show via Monte Carlo simulations that the max test is particularly powerful for causality with a large time lag.
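A minimal Python sketch of the max-test idea, for illustration only: simulated placeholder data, one key monthly lag per parsimonious regression, and a squared t-type statistic in place of the authors' exact scaling; in practice the statistic is compared with a bootstrapped critical value.

import numpy as np

# Illustrative max test: regress the low-frequency variable y on its own lag
# plus ONE high-frequency lag of x at a time, then take the largest squared
# (standardized) slope across these parsimonious regressions.
rng = np.random.default_rng(0)
n, m_lags = 200, 12                      # quarterly sample size, number of monthly lags tested
y = rng.standard_normal(n)               # placeholder low-frequency series
X = rng.standard_normal((n, m_lags))     # placeholder: column j holds the j-th monthly lag of x

def max_test_stat(y, X):
    """Max of squared t-type statistics over the parsimonious regressions."""
    y_lag = np.r_[0.0, y[:-1]]           # common regressor not tied to the null (own lag)
    stats = []
    for j in range(X.shape[1]):
        Z = np.column_stack([np.ones(len(y)), y_lag, X[:, j]])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        e = y - Z @ beta
        sigma2 = e @ e / (len(y) - Z.shape[1])
        var_beta = sigma2 * np.linalg.inv(Z.T @ Z)
        stats.append(beta[2] ** 2 / var_beta[2, 2])   # squared t-stat on the key regressor
    return max(stats)

print(max_test_stat(y, X))               # compare against a bootstrapped critical value in practice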
arXiv: Methodology | 2016
Jonathan B. Hill; Kaiji Motegi
This paper presents bootstrapped p-value white noise tests based on the max-correlation, for a time series that may be weakly dependent under the null hypothesis. The time series may be prefiltered residuals. Our test statistic is a scaled maximum sample correlation coefficient where the maximum lag increases at a rate slower than the sample size. We only require uncorrelatedness under the null hypothesis, along with a moment contraction dependence property that includes mixing and non-mixing sequences, and exploit two wild bootstrap methods for p-value computation. We operate either on a first order expansion of the sample correlation, or on Delgado and Velasco's (2011) orthogonalized correlation for fixed lag length, both to control for the impact of residual estimation. A numerical study shows the first order expansion is superior, especially for large lag lengths. When the filter involves a GARCH model, the orthogonalization breaks down, while the first order expansion works quite well. We show that Shao's (2011) dependent wild bootstrap is valid for a much larger class of processes than originally considered. Since only the most relevant sample serial correlation is exploited among a set of asymptotically consistent sample correlations, empirical size tends to be sharp and power is comparatively large for many time series processes. The test has non-trivial power against local alternatives and can detect very weak and distant serial dependence better than a variety of other tests. Finally, we prove that our bootstrapped p-value leads to a valid test without exploiting extreme value theoretic arguments, the standard in the literature.
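A hedged Python sketch of a max-correlation statistic with a wild-bootstrap p-value, for illustration only: iid multipliers on simulated data, with the paper's dependent wild bootstrap, lag growth conditions, and residual-filter corrections simplified away.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(500)             # placeholder series (white noise under the null)
n = len(x)
L = int(np.sqrt(n))                      # maximum lag, growing slower than the sample size

xc = x - x.mean()
gamma0 = xc @ xc / n
rho = np.array([xc[h:] @ xc[:n - h] / n for h in range(1, L + 1)]) / gamma0
stat = np.sqrt(n) * np.max(np.abs(rho))  # scaled maximum sample correlation

def boot_stat(xc, gamma0, L, rng):
    """One wild-bootstrap draw: multiply centered cross-products by iid normals."""
    xi = rng.standard_normal(len(xc))
    vals = []
    for h in range(1, L + 1):
        prod = xc[h:] * xc[:len(xc) - h]
        vals.append(abs(np.sum(xi[h:] * (prod - prod.mean()))) / (np.sqrt(len(xc)) * gamma0))
    return max(vals)

B = 499
boot = np.array([boot_stat(xc, gamma0, L, rng) for _ in range(B)])
p_value = (1 + np.sum(boot >= stat)) / (B + 1)
print(round(stat, 3), round(p_value, 3))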
The North American Journal of Economics and Finance | 2018
Kaiji Motegi; Akira Sadahiro
It is well known that sluggish private investment plagued the Japanese macroeconomy during the Lost Decade. Previous empirical papers have not reached a clear consensus on what caused the investment slowdown. This paper sheds new light on this issue by fitting a mixed frequency vector autoregressive model to monthly stock prices and quarterly bank loans, firm profit, and private investment. Monthly stock prices explain as much as 50.7% of the long-run forecast error variance of investment. We also reveal a spiral of declining stock prices, profit, and investment. Finally, the stagnation of bank loans is a consequence of declining stock prices rather than a cause of the decline in investment.
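For illustration, a minimal Python sketch of how monthly observations can be stacked into a quarterly vector for a mixed frequency VAR; the data are placeholders, and estimation, identification, and the variance decomposition itself are not shown.

import numpy as np

rng = np.random.default_rng(2)
T_q = 80                                 # number of quarters
stock_m = rng.standard_normal(3 * T_q)   # monthly stock prices (placeholder)
loans_q = rng.standard_normal(T_q)       # quarterly bank loans (placeholder)
profit_q = rng.standard_normal(T_q)      # quarterly firm profit (placeholder)
invest_q = rng.standard_normal(T_q)      # quarterly private investment (placeholder)

# In each quarter, the observation vector stacks the three monthly stock price
# observations of that quarter with the quarterly variables.
stock_by_quarter = stock_m.reshape(T_q, 3)          # columns: months 1, 2, 3 of the quarter
X = np.column_stack([stock_by_quarter, loans_q, profit_q, invest_q])
print(X.shape)                           # (80, 6): a six-dimensional quarterly system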
Social Science Research Network | 2017
Jonathan B. Hill; Kaiji Motegi
Weak-form efficiency of stock markets implies unpredictability of stock returns in a time series sense, and the latter is tested predominantly under a serial independence or martingale difference assumption. Since these properties rule out weak dependence that may exist in stock returns, it is of interest to test whether returns are white noise. We perform white noise tests assisted by Shao's (2011) blockwise wild bootstrap. We reveal that, in rolling windows, the block structure inscribes an artificial periodicity in the bootstrapped confidence bands. We eliminate the periodicity by randomizing the block size. The white noise hypothesis is not rejected for the Chinese and Japanese markets, suggesting that those markets are weak-form efficient. The white noise hypothesis is rejected for the U.K. and U.S. markets during the Iraq War and the subprime mortgage crisis due to significantly negative autocorrelations, suggesting that those markets are inefficient in crisis periods.
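A minimal Python sketch of blockwise wild-bootstrap multipliers with a randomized block size, illustrating the fix described above; this is a simplified stand-in for the paper's rolling-window procedure, and all names and tuning constants are illustrative.

import numpy as np

def blockwise_multipliers(n, rng, b_min=5, b_max=25):
    """iid N(0,1) draws held constant within blocks of randomly drawn length."""
    b = rng.integers(b_min, b_max + 1)   # block size randomized per bootstrap replication
    reps = int(np.ceil(n / b))
    return np.repeat(rng.standard_normal(reps), b)[:n]

rng = np.random.default_rng(3)
x = rng.standard_normal(250)             # placeholder return series
xc = x - x.mean()
gamma0 = xc @ xc / len(x)

def boot_corr(xc, gamma0, h, rng):
    """One bootstrap draw of the lag-h sample correlation."""
    xi = blockwise_multipliers(len(xc), rng)
    prod = xc[h:] * xc[:len(xc) - h]
    return np.sum(xi[h:] * (prod - prod.mean())) / (len(xc) * gamma0)

band = np.quantile([abs(boot_corr(xc, gamma0, 1, rng)) for _ in range(499)], 0.95)
print(band)                              # bootstrap 95% band for the lag-1 autocorrelation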
Archive | 2015
Kaiji Motegi; Akira Sadahiro
Testing for the money illusion hypothesis in an aggregate consumption function generally involves a regression model that projects real consumption onto nominal disposable income and a consumer price index. Price data are usually available at the monthly level, but consumption and income data are sampled at the quarterly level in some countries, such as Japan. This paper takes advantage of mixed data sampling (MIDAS) regressions in order to exploit the monthly price data. We show via Monte Carlo simulations that our approach yields deeper economic insights and higher statistical precision than the previous single-frequency approach, which aggregates price data to the quarterly level. In particular, the MIDAS approach allows for heterogeneous effects of monthly prices on real consumption within each quarter. In empirical applications, we find that such heterogeneous effects indeed exist in Japan.
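A hedged Python sketch of an unrestricted MIDAS-style regression in this spirit, with each monthly price observation within the quarter entered as a separate regressor so that their effects can differ; the data are simulated placeholders, not the authors' specification.

import numpy as np

rng = np.random.default_rng(4)
T_q = 120
income_q = rng.standard_normal(T_q)      # quarterly nominal disposable income (placeholder)
price_m = rng.standard_normal(3 * T_q)   # monthly consumer price index (placeholder)
cons_q = rng.standard_normal(T_q)        # quarterly real consumption (placeholder)

price_by_month = price_m.reshape(T_q, 3)             # columns: price in months 1, 2, 3 of each quarter
Z = np.column_stack([np.ones(T_q), income_q, price_by_month])
beta = np.linalg.lstsq(Z, cons_q, rcond=None)[0]
print(beta[2:])                          # three monthly price coefficients, free to differ within the quarter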
Journal of Econometrics | 2016
Eric Ghysels; Jonathan B. Hill; Kaiji Motegi
Archive | 2015
Eric Ghysels; Jonathan B. Hill; Kaiji Motegi
Archive | 2016
Jonathan B. Hill; Kaiji Motegi
arXiv: econ.EM | 2018
Chunrong Ai; Oliver Linton; Kaiji Motegi; Zheng Zhang
Archive | 2018
Shigeyuki Hamori; Kaiji Motegi; Zheng Zhang