Featured Researches

Econometrics

Awareness of crash risk improves Kelly strategies in simulated financial time series

We simulate a simplified version of the price process with bubbles and crashes proposed in Kreuser and Sornette (2018). The price process is defined as a geometric random walk combined with jumps modelled by separate, discrete distributions associated with positive and negative bubbles. The key ingredient of the model is the assumption that jump sizes are proportional to the bubble size, so that jumps tend to bring excess bubble prices efficiently back close to a normal or fundamental value ("efficient crashes"). This differs from existing jump processes, which assume jumps that are independent of the mispricing. The present model simplifies Kreuser and Sornette (2018) by ignoring the possibility that the crash probability changes as the price accelerates above the normal price. We study the behaviour of investment strategies that maximize the expected logarithm of wealth (the Kelly criterion) over a risky asset and a risk-free asset. We show that the method behaves similarly to Kelly on geometric Brownian motion: it outperforms other methods in the long term and beats classical Kelly. We identify knowledge of the presence of crashes as the primary source of outperformance, but interestingly find that knowledge of only their size, and not their time of occurrence, already provides a significant and robust edge. An error analysis shows that the method is robust to variations in the parameters, and most sensitive to errors in the expected return.
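As a rough, self-contained illustration of this kind of setup (not the paper's calibration; all parameter values below are hypothetical), the sketch simulates a log-price as a Gaussian random walk with occasional crashes whose size is proportional to the current mispricing, alongside the classical Kelly fraction for geometric Brownian motion:

```python
import random

def simulate_efficient_crash_path(n=1000, mu=0.0004, sigma=0.01,
                                  crash_prob=0.01, kappa=0.8, seed=42):
    """Log-price as a Gaussian random walk plus occasional 'efficient'
    crashes that remove a fraction kappa of the mispricing, i.e. jumps
    whose size is proportional to the bubble size."""
    rng = random.Random(seed)
    log_p = log_fund = 0.0
    path = [log_p]
    for _ in range(n):
        log_fund += mu                           # fundamental value drifts
        log_p += mu + sigma * rng.gauss(0.0, 1.0)
        if rng.random() < crash_prob:            # a crash arrives
            log_p -= kappa * (log_p - log_fund)  # pulled back toward fundamentals
        path.append(log_p)
    return path

def kelly_fraction(mu, r, sigma):
    """Classical Kelly allocation to the risky asset under GBM:
    f* = (mu - r) / sigma**2."""
    return (mu - r) / sigma ** 2
```

The crash-aware strategies studied in the paper adjust this allocation for the jump component; the fraction above is the plain GBM benchmark they are compared against.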


Backward CUSUM for Testing and Monitoring Structural Change

It is well known that the conventional CUSUM test suffers from low power and large detection delay. We therefore propose two alternative detector statistics. The backward CUSUM detector sequentially cumulates the recursive residuals in reverse chronological order, whereas the stacked backward CUSUM detector considers a triangular array of backward cumulated residuals. While both the backward CUSUM detector and the stacked backward CUSUM detector are suitable for retrospective testing, only the stacked backward CUSUM detector can be monitored on-line. The limiting distributions of the maximum statistics under suitable sequences of alternatives are derived for retrospective testing and fixed endpoint monitoring. In the retrospective testing context, the local power of the tests is shown to be substantially higher than that for the conventional CUSUM test if a single break occurs after one third of the sample size. When applied to monitoring schemes, the detection delay of the stacked backward CUSUM is shown to be much shorter than that of the conventional monitoring CUSUM procedure. Moreover, an infinite horizon monitoring procedure and critical values are presented.
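The reverse-order cumulation at the heart of the backward CUSUM can be sketched as follows (a simplified illustration using ordinary standardized residuals; the paper's detectors are built from recursive residuals and use weighted boundary functions):

```python
def backward_cusum(residuals):
    """Backward CUSUM path: standardized partial sums of residuals
    cumulated in reverse chronological order, so stats[t] aggregates
    residuals t..n-1. Assumes a non-degenerate residual sequence."""
    n = len(residuals)
    sigma = (sum(r * r for r in residuals) / n) ** 0.5
    total, stats = 0.0, []
    for r in reversed(residuals):
        total += r
        stats.append(total / (sigma * n ** 0.5))
    stats.reverse()
    return stats

def max_statistic(stats):
    """Maximum-type test statistic over the detector path."""
    return max(abs(s) for s in stats)
```

Because the cumulation runs backwards, evidence from a late break is picked up immediately rather than being diluted by a long stretch of pre-break residuals.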


Bayesian Inference in High-Dimensional Time-varying Parameter Models using Integrated Rotated Gaussian Approximations

Researchers increasingly wish to estimate time-varying parameter (TVP) regressions involving a large number of explanatory variables. Including prior information to mitigate over-parameterization concerns has led many to use Bayesian methods. However, Bayesian Markov chain Monte Carlo (MCMC) methods can be very computationally demanding. In this paper, we develop computationally efficient Bayesian methods for estimating TVP models using an integrated rotated Gaussian approximation (IRGA). This exploits the fact that, whereas constant coefficients on regressors are often important, most of the TVPs are typically unimportant. Since Gaussian distributions are invariant to rotations, we can split the posterior into two parts: one involving the constant coefficients, the other involving the TVPs. Approximate methods are used on the latter and, conditional on these, the former are estimated precisely using MCMC methods. In empirical exercises involving artificial data and a large macroeconomic data set, we show the accuracy and computational benefits of IRGA methods.
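The rotation-invariance property underlying IRGA can be checked empirically with a toy simulation: rotating i.i.d. standard Gaussian draws leaves them standard Gaussian, which is what permits splitting the posterior into two blocks. A stdlib-only sketch (illustrative, not part of the authors' algorithm):

```python
import math
import random

rng = random.Random(0)
theta = math.pi / 6                      # an arbitrary rotation angle
c, s = math.cos(theta), math.sin(theta)

# Rotate i.i.d. standard bivariate normal draws by theta.
draws = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(50000)]
rotated = [(c * x - s * y, s * x + c * y) for x, y in draws]

# The rotated coordinates are again (approximately) standard and uncorrelated.
var_u = sum(u * u for u, _ in rotated) / len(rotated)
cov_uv = sum(u * v for u, v in rotated) / len(rotated)
```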


Bayesian Panel Quantile Regression for Binary Outcomes with Correlated Random Effects: An Application on Crime Recidivism in Canada

This article develops a Bayesian approach for estimating panel quantile regression with binary outcomes in the presence of correlated random effects. We construct a working likelihood using an asymmetric Laplace (AL) error distribution and combine it with suitable prior distributions to obtain the complete joint posterior distribution. For posterior inference, we propose two Markov chain Monte Carlo (MCMC) algorithms, but prefer the one that exploits a blocking procedure to produce lower autocorrelation in the MCMC draws. We also explain how to use the MCMC draws to calculate marginal effects, relative risks, and odds ratios. The performance of our preferred algorithm is demonstrated in multiple simulation studies and shown to be excellent. Furthermore, we implement the proposed framework to study crime recidivism in Quebec, a Canadian province, using novel data from administrative correctional files. Our results suggest that the recently implemented "tough-on-crime" policy of the Canadian government has been largely successful in reducing the probability of repeat offenses in the post-policy period. In addition, our results support existing findings on crime recidivism and offer new insights at various quantiles.
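The AL working likelihood ties posterior inference to the usual quantile check loss. A minimal stdlib sketch of the density (with a hypothetical scale parameter; the paper works with a full regression specification and priors):

```python
import math

def check_loss(u, tau):
    """Quantile check function: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0.0 else 0.0))

def al_log_density(y, mu, tau, scale=1.0):
    """Log-density of the asymmetric Laplace distribution at quantile
    level tau: log(tau * (1 - tau) / scale) - rho_tau(y - mu) / scale.
    Maximizing it in mu is equivalent to minimizing the check loss,
    which is why it serves as a working likelihood for quantile regression."""
    return math.log(tau * (1.0 - tau) / scale) - check_loss(y - mu, tau) / scale
```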


Bayesian analysis of seasonally cointegrated VAR model

This paper develops a Bayesian seasonally cointegrated model for quarterly data. We propose the prior structure, derive the set of full conditional posterior distributions, and propose the sampling scheme. Identification of the cointegrating spaces is obtained via orthonormality restrictions imposed on the vectors spanning them. At the annual frequency, the cointegrating vectors are complex, which must be taken into account when identifying them. Point estimation of the cointegrating spaces is also discussed. The presented methods are illustrated in a simulation experiment and employed in an analysis of money and prices in the Polish economy.
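The complex-vector point can be seen from the quarterly seasonal unit roots, the solutions of z⁴ = 1: the pair ±i sits at the annual frequency, so the cointegrating relations there involve complex-valued vectors. A quick stdlib check:

```python
import cmath

# Quarterly seasonal unit roots: the four solutions of z**4 = 1,
# at the zero (z = 1), semi-annual (z = -1), and annual (z = +i, -i)
# frequencies.
roots = [cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]

# The purely imaginary pair +i, -i corresponds to the annual frequency,
# where the cointegrating vectors become complex-valued.
annual_pair = [r for r in roots if abs(r.real) < 1e-12]
```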


Better Bunching, Nicer Notching

We study the bunching identification strategy for an elasticity parameter that summarizes agents' response to changes in slope (kink) or intercept (notch) of a schedule of incentives. A notch identifies the elasticity but a kink does not, when the distribution of agents is fully flexible. We propose new non-parametric and semi-parametric identification assumptions on the distribution of agents that are weaker than assumptions currently made in the literature. We revisit the original empirical application of the bunching estimator and find that our weaker identification assumptions result in meaningfully different estimates. We provide the Stata package "bunching" to implement our procedures.
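A stylized version of the excess-mass idea behind bunching estimators (purely illustrative: in practice the counterfactual density is estimated from the distribution away from the kink or notch, not supplied as a known constant):

```python
def excess_mass(values, kink, bandwidth, counterfactual_density):
    """Excess bunching mass: the observed share of agents within
    `bandwidth` of the kink, minus the share implied by a (here assumed
    known and locally constant) counterfactual density."""
    n = len(values)
    observed = sum(1 for v in values if abs(v - kink) <= bandwidth) / n
    expected = counterfactual_density * 2 * bandwidth
    return observed - expected
```

The identification results in the paper concern exactly how much the counterfactual distribution must be restricted before this excess mass pins down the elasticity.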


Better Lee Bounds

This paper develops methods for tightening Lee (2009) bounds on average causal effects when the number of pre-randomization covariates is large, potentially exceeding the sample size. These Better Lee Bounds are guaranteed to be sharp when few of the covariates affect the selection and the outcome. If this sparsity assumption fails, the bounds remain valid. I propose inference methods that enable hypothesis testing in either case. My results rely on a weakened monotonicity assumption that only needs to hold conditional on covariates. I show that the unconditional monotonicity assumption that motivates traditional Lee bounds fails for the JobCorps training program. After imposing only conditional monotonicity, Better Lee Bounds are found to be much more informative than standard Lee bounds in a variety of settings.
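For reference, classic unconditional Lee bounds can be sketched as a trimming exercise (a simplified illustration assuming equal numbers of treated and control units before selection and higher selection under treatment; the Better Lee Bounds of this paper instead impose monotonicity conditional on covariates):

```python
from statistics import mean

def lee_bounds(y_treated, y_control):
    """Classic Lee (2009) bounds via trimming: drop the excess selection
    share p from the top of the treated outcomes for the lower bound, or
    from the bottom for the upper bound, then compare group means."""
    p = 1.0 - len(y_control) / len(y_treated)   # excess selection share
    k = int(round(p * len(y_treated)))          # number of units to trim
    ys = sorted(y_treated)
    lower = mean(ys[:len(ys) - k]) - mean(y_control)
    upper = mean(ys[k:]) - mean(y_control)
    return lower, upper
```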


Bias and Consistency in Three-way Gravity Models

We study the incidental parameter problem for the "three-way" Poisson pseudo-maximum likelihood ("PPML") estimator recently recommended for identifying the effects of trade policies and in other panel data gravity settings. Despite the number and variety of fixed effects involved, we confirm that PPML is consistent for fixed T, and we show it is in fact the only estimator among a wide range of PML gravity estimators that is generally consistent in this context when T is fixed. At the same time, asymptotic confidence intervals in fixed-T panels are not correctly centered at the true point estimates, and the cluster-robust variance estimates used to construct standard errors are generally biased as well. We characterize each of these biases analytically and show both numerically and empirically that they are salient even for real-data settings with a large number of countries. We also offer practical remedies that can be used to obtain more reliable inferences about the effects of trade policies and other time-varying gravity variables, which we make available via an accompanying Stata package called ppml_fe_bias.
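For intuition only, a bare-bones PPML estimator with a single regressor and no fixed effects (hypothetical data; the paper's three-way setting adds exporter-time, importer-time, and pair fixed effects on top of this) can be written as a Newton solver for the Poisson pseudo-score equations:

```python
import math

def ppml_slope(x, y, iters=50):
    """One-regressor Poisson pseudo-maximum likelihood with mean
    exp(a + b*x): Newton's method on the pseudo-score equations
    sum_i (y_i - mu_i) = 0 and sum_i (y_i - mu_i) * x_i = 0.
    Assumes x is not constant (otherwise the Hessian is singular)."""
    a = b = 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in x]
        g_a = sum(yi - mi for yi, mi in zip(y, mu))
        g_b = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        h_aa = -sum(mu)
        h_ab = -sum(mi * xi for mi, xi in zip(mu, x))
        h_bb = -sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h_aa * h_bb - h_ab * h_ab
        a -= (h_bb * g_a - h_ab * g_b) / det     # Newton step: -H^{-1} g
        b -= (h_aa * g_b - h_ab * g_a) / det
    return a, b
```

Because the pseudo-score only requires the conditional mean to be correctly specified, the estimator is consistent even when y is not Poisson; the incidental parameter issues analyzed in the paper arise once the many fixed effects are added.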


Bias optimal vol-of-vol estimation: the role of window overlapping

We derive a feasible criterion for the bias-optimal selection of the tuning parameters involved in estimating the integrated volatility of the spot volatility via the simple realized estimator of Barndorff-Nielsen and Veraart (2009). Our analytic results are obtained under the assumptions that the spot volatility is a continuous mean-reverting process and that consecutive local windows for estimating the spot volatility are allowed to overlap in a finite-sample setting. Moreover, our analytic results support some optimal tuning-parameter selections that have been prescribed in the literature on the basis of numerical evidence. Interestingly, it emerges that window overlapping is crucial for optimizing the finite-sample bias of volatility-of-volatility estimates.
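The role of overlapping can be made concrete with a toy two-step estimator: estimate the spot variance on local windows (overlapping whenever the step between window starts is smaller than the window length), then cumulate squared increments of those estimates. This is only a schematic version of the estimator discussed in the abstract, with no bias corrections or tuning:

```python
def spot_variances(returns, window, step):
    """Local spot-variance estimates from squared returns; step < window
    yields overlapping windows, step == window disjoint ones."""
    return [sum(r * r for r in returns[s:s + window]) / window
            for s in range(0, len(returns) - window + 1, step)]

def vol_of_vol(returns, window, step):
    """Sum of squared increments of the spot-variance estimates: a crude
    realized variance of the variance path."""
    v = spot_variances(returns, window, step)
    return sum((b - a) ** 2 for a, b in zip(v, v[1:]))
```

Here `window` and `step` play the role of the tuning parameters whose bias-optimal choice the paper characterizes.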


Bias-Aware Inference in Regularized Regression Models

We consider inference on a regression coefficient under a constraint on the magnitude of the control coefficients. We show that a class of estimators based on an auxiliary regularized regression of the regressor of interest on the control variables exactly solves a tradeoff between worst-case bias and variance. We derive "bias-aware" confidence intervals (CIs) based on these estimators, which take possible bias into account when forming the critical value. We show that these estimators and CIs are near-optimal in finite samples for mean squared error and CI length. Our finite-sample results are based on an idealized setting with normal regression errors with known homoskedastic variance, and we provide conditions for asymptotic validity with unknown and possibly heteroskedastic error distributions. Focusing on the case where the constraint on the magnitude of the control coefficients is based on an ℓp norm (p ≥ 1), we derive rates of convergence for optimal estimators and CIs under high-dimensional asymptotics that allow the number of regressors to increase more quickly than the number of observations.
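The critical-value adjustment can be sketched numerically: for a standardized worst-case bias t, the half-length multiplier solves P(|Z + t| ≤ cv) = 1 − α for Z ~ N(0,1). A stdlib-only bisection sketch (an illustration of this folded-normal construction, not the authors' code):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bias_aware_cv(t, alpha=0.05):
    """Critical value cv solving P(|Z + t| <= cv) = 1 - alpha, Z ~ N(0,1):
    the interval estimate +/- cv * se then covers the target whenever the
    standardized worst-case bias is at most t."""
    lo, hi = 0.0, 20.0
    for _ in range(100):                 # bisection on the coverage equation
        mid = 0.5 * (lo + hi)
        coverage = norm_cdf(mid - t) - norm_cdf(-mid - t)
        if coverage < 1.0 - alpha:
            lo = mid                     # need a wider interval
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With t = 0 this reduces to the usual 1.96; a larger worst-case bias widens the interval rather than leaving coverage to luck.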

