Featured Research

Econometrics

Monotonicity-Constrained Nonparametric Estimation and Inference for First-Price Auctions

We propose a new nonparametric estimator for first-price auctions with independent private values that imposes the monotonicity constraint on the estimated inverse bidding strategy. We show that our estimator has a smaller asymptotic variance than that of Guerre, Perrigne and Vuong's (2000) estimator. In addition to establishing pointwise asymptotic normality of our estimator, we provide a bootstrap-based approach to constructing uniform confidence bands for the density function of latent valuations.
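
As a concrete illustration of the two-step idea, the sketch below forms Guerre-Perrigne-Vuong (GPV) pseudo-values v = b + G(b)/((I-1)g(b)) from observed bids and then imposes monotonicity on the estimated inverse bidding strategy via isotonic regression. This is a minimal stand-in under stated assumptions, not the paper's estimator: the bandwidth h, the Gaussian kernel, and the isotonic projection are all illustrative choices.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def pseudo_values(bids, n_bidders, h):
    """GPV pseudo private values v = b + G(b) / ((I - 1) g(b)), with G the
    empirical CDF and g a Gaussian kernel density estimate of the bid density."""
    b = np.sort(np.asarray(bids, dtype=float))
    n = len(b)
    G = np.arange(1, n + 1) / n                      # empirical CDF at each bid
    diffs = (b[:, None] - b[None, :]) / h
    g = np.exp(-0.5 * diffs ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    return b, b + G / ((n_bidders - 1) * np.maximum(g, 1e-12))

def monotone_inverse_bid(bids, n_bidders, h):
    """Project the estimated inverse bidding strategy onto the set of
    non-decreasing functions (isotonic regression)."""
    b, v = pseudo_values(bids, n_bidders, h)
    return b, IsotonicRegression(increasing=True).fit_transform(b, v)
```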

Read more
Econometrics

Mostly Harmless Machine Learning: Learning Optimal Instruments in Linear IV Models

We offer straightforward theoretical results that justify incorporating machine learning in the standard linear instrumental variable setting. The key idea is to use machine learning, combined with sample-splitting, to predict the treatment variable from the instrument and any exogenous covariates, and then use this predicted treatment and the covariates as technical instruments to recover the coefficients in the second stage. This allows the researcher to extract non-linear covariation between the treatment and instrument that may dramatically improve estimation precision and robustness by boosting instrument strength. Importantly, we constrain the machine-learned predictions to be linear in the exogenous covariates, thus avoiding spurious identification arising from non-linear relationships between the treatment and the covariates. We show that this approach delivers consistent and asymptotically normal estimates under weak conditions and that it may be adapted to be semiparametrically efficient (Chamberlain, 1992). Our method preserves standard intuitions and interpretations of linear instrumental variable methods, including under weak identification, and provides a simple, user-friendly upgrade to the applied economics toolbox. We illustrate our method with an example in law and criminal justice, examining the causal effect of appellate court reversals on district court sentencing decisions.
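
A minimal sketch of the cross-fitted first stage described above, with a random forest standing in for the generic machine learner. The function name, the two-fold split, and the forest are illustrative assumptions; note the paper's additional constraint that the learned prediction be linear in the covariates is flagged but not implemented here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ml_iv(y, d, z, x, seed=0):
    """Cross-fitted ML prediction of the treatment d from the instrument z and
    covariates x, then linear IV using (1, d_hat, x) as technical instruments.
    NOTE: the paper additionally constrains the learned prediction to be
    linear in x; this simplified sketch omits that step."""
    n = len(y)
    folds = np.random.default_rng(seed).permutation(n) % 2
    d_hat = np.empty(n)
    for k in (0, 1):   # sample splitting: predict each half from the other
        train, test = folds != k, folds == k
        model = RandomForestRegressor(n_estimators=200, random_state=seed)
        model.fit(np.column_stack([z[train], x[train]]), d[train])
        d_hat[test] = model.predict(np.column_stack([z[test], x[test]]))
    W = np.column_stack([np.ones(n), d, x])       # second-stage regressors
    Z = np.column_stack([np.ones(n), d_hat, x])   # technical instruments
    return np.linalg.solve(Z.T @ W, Z.T @ y)      # just-identified IV solution
```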

Read more
Econometrics

Multi-frequency-band tests for white noise under heteroskedasticity

This paper proposes a new family of multi-frequency-band (MFB) tests for the white noise hypothesis using the maximum overlap discrete wavelet packet transform (MODWPT). The MODWPT allows the variance of a process to be decomposed into the variances of its components on different equal-length frequency sub-bands, and the MFB tests measure the distance between the MODWPT-based variance ratio and its theoretical null value jointly over several frequency sub-bands. The resulting MFB tests have chi-squared asymptotic null distributions under mild conditions that allow the data to be heteroskedastic. Simulation studies show that the MFB tests have desirable size and power, and two applications further illustrate their usefulness.
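
The building block is a vector of variance shares over equal-length frequency bands, which under white noise should all be close to 1/M. The sketch below computes such shares from the periodogram as a loose stand-in for the paper's MODWPT decomposition; the actual MFB statistic studentises the deviations from 1/M with a heteroskedasticity-robust covariance to obtain the chi-squared limit.

```python
import numpy as np

def band_variance_shares(x, n_bands=4):
    """Share of sample variance falling in each of n_bands equal-width
    frequency bands (here via the periodogram, NOT the MODWPT).
    Under the white noise null every share is close to 1 / n_bands."""
    x = np.asarray(x, dtype=float)
    per = np.abs(np.fft.rfft(x - x.mean())) ** 2   # periodogram ordinates
    per = per[1:]                                  # drop the zero frequency
    return np.array([b.sum() for b in np.array_split(per, n_bands)]) / per.sum()
```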

Read more
Econometrics

Multiway Cluster Robust Double/Debiased Machine Learning

This paper investigates double/debiased machine learning (DML) in multiway clustered sampling environments. We propose a novel multiway cross-fitting algorithm and a multiway DML estimator based on it, and we develop a multiway cluster robust standard error formula. Simulations indicate that the proposed procedure has favorable finite sample performance. Applying the proposed method to market share data for demand analysis, we obtain larger two-way cluster robust standard errors than non-robust ones.
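
A sketch of the fold structure behind multiway cross-fitting for the two-way case: each cluster dimension is partitioned independently, and the nuisance model for cell (k, l) is trained only on observations whose row cluster is outside fold k and whose column cluster is outside fold l. The function name and the two-fold choice are illustrative assumptions.

```python
import numpy as np

def twoway_crossfit_cells(cluster_i, cluster_j, K=2, seed=0):
    """Assign observations to (k, l) cross-fitting cells by independently
    partitioning the row clusters and the column clusters into K folds.
    For cell (k, l): train nuisances where fold_i != k AND fold_j != l,
    then evaluate where fold_i == k AND fold_j == l."""
    rng = np.random.default_rng(seed)
    ui, uj = np.unique(cluster_i), np.unique(cluster_j)
    fi = dict(zip(ui, rng.permutation(len(ui)) % K))
    fj = dict(zip(uj, rng.permutation(len(uj)) % K))
    return (np.array([fi[c] for c in cluster_i]),
            np.array([fj[c] for c in cluster_j]))
```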

Read more
Econometrics

New Approaches to Robust Inference on Market (Non-)Efficiency, Volatility Clustering and Nonlinear Dependence

Many key variables in finance, economics and risk management, including financial returns and foreign exchange rates, exhibit nonlinear dependence, heterogeneity and heavy-tailedness of some usually largely unknown type. The presence of nonlinear dependence (usually modelled using GARCH-type dynamics) and heavy-tailedness may complicate the analysis of (non-)efficiency, volatility clustering and predictive regressions in economic and financial markets using traditional approaches that appeal to asymptotic normality of sample autocorrelation functions (ACFs) of returns and their squares. The paper presents several new approaches to these problems. We provide results that motivate the use of measures of market (non-)efficiency, volatility clustering and nonlinear dependence based on (small) powers of absolute returns and their signed versions. The paper provides asymptotic theory for sample analogues of these measures for general time series, including GARCH-type processes, and develops new approaches to robust inference on them for general GARCH-type processes exhibiting heavy-tailedness. The approaches build on robust inference methods that exploit conservativeness properties of t-statistics (Ibragimov and Muller, 2010, 2016), together with several new results on their applicability in the settings considered: estimates of the parameters of interest are computed for groups of data, and inference is based on t-statistics in the resulting group estimates. This yields valid robust inference under a wide range of heterogeneity and dependence assumptions satisfied in financial and economic markets. Numerical results and empirical applications confirm the advantages of the new approaches over existing ones and their wide applicability.
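
The core of the t-statistic approach is simple to sketch: split the series into q groups, compute the measure of interest on each group, and run a one-sample t-test on the q group estimates. A minimal sketch; q and the example measure (lag-1 autocorrelation of a small power of absolute returns, capturing volatility clustering) are illustrative choices.

```python
import numpy as np
from scipy import stats

def group_tstat_test(x, stat, q=8, mu0=0.0):
    """Ibragimov-Muller style robust inference: compute `stat` on q
    consecutive groups of the series and run a one-sample t-test
    (df = q - 1) on the group estimates against the null value mu0."""
    est = np.array([stat(g) for g in np.array_split(np.asarray(x, float), q)])
    t = np.sqrt(q) * (est.mean() - mu0) / est.std(ddof=1)
    return t, 2 * stats.t.sf(abs(t), df=q - 1)

# Example measure of volatility clustering: lag-1 autocorrelation of |r|^0.5.
vol_cluster = lambda g: np.corrcoef(np.abs(g[1:]) ** 0.5,
                                    np.abs(g[:-1]) ** 0.5)[0, 1]
```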

Read more
Econometrics

New robust inference for predictive regressions

We propose two robust methods for testing hypotheses on the unknown parameters of predictive regression models under heterogeneous and persistent volatility as well as endogenous, persistent and/or fat-tailed regressors and errors. The proposed testing approaches are applicable to both discrete- and continuous-time models. Both methods use the Cauchy estimator to handle endogeneity, persistence and/or fat-tailedness in regressors and errors; they differ in how heterogeneous volatility is controlled. The first method relies on robust t-statistic inference using group estimators of the regression parameter of interest, as proposed in Ibragimov and Muller (2010). It is simple to implement but requires an exogenous volatility assumption. To relax this assumption, the second method relies on a nonparametric correction for volatility. Both methods perform well relative to widely used alternative inference procedures in terms of their finite sample properties.
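
The Cauchy estimator mentioned above instruments the lagged regressor with its sign, which is what delivers robustness to persistence, endogeneity and fat tails. A minimal sketch of the slope estimate in y_t = alpha + beta x_{t-1} + e_t; the paper's volatility corrections and test statistics are not reproduced here.

```python
import numpy as np

def cauchy_slope(y, x_lag):
    """Cauchy estimator of beta: use sign(x_{t-1} - mean) as the instrument
    for the demeaned lagged regressor."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x_lag, dtype=float)
    s = np.sign(x - x.mean())
    return np.sum(s * (y - y.mean())) / np.sum(s * (x - x.mean()))
```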

Read more
Econometrics

Non-Identifiability in Network Autoregressions

We study identification in autoregressions defined on a general network. Most identification conditions that are available for these models either rely on repeated observations, are only sufficient, or require strong distributional assumptions. We derive conditions that apply even if only one observation of a network is available, are necessary and sufficient for identification, and require weak distributional assumptions. We find that the models are generically identified even without repeated observations, and analyze the combinations of the interaction matrix and the regressor matrix for which identification fails. This is done both in the original model and after certain transformations in the sample space, the latter case being important for some fixed effects specifications.
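
For intuition, a classical sufficient (but, unlike the paper's characterisation, not necessary) rank condition for the network autoregression y = lambda*Wy + X*beta + eps is that X, WX and W^2 X be linearly independent; the sketch below checks it numerically. The function name and tolerance are illustrative assumptions.

```python
import numpy as np

def classical_rank_condition(W, X, tol=1e-10):
    """Check whether [X, WX, W^2 X] has full column rank, a classical
    sufficient condition for identification of y = lam*W y + X beta + eps.
    The paper derives sharper, necessary-and-sufficient conditions."""
    M = np.column_stack([X, W @ X, W @ (W @ X)])
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[1]
```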

Read more
Econometrics

Non-Manipulable Machine Learning: The Incentive Compatibility of Lasso

We consider situations where a user feeds her attributes to a machine learning method that tries to predict her best option based on a random sample of other users. The predictor is incentive compatible if the user has no incentive to misreport her covariates. Focusing on the popular Lasso estimation technique, we borrow tools from high-dimensional statistics to characterize sufficient conditions that ensure that Lasso is incentive compatible in large samples. In particular, we show that incentive compatibility is achieved if the tuning parameter is kept above some threshold. We present simulations that illustrate how this can be done in practice.
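
A crude illustration of the mechanism, not the paper's formal condition: as the Lasso tuning parameter grows, more coefficients are set exactly to zero, and predictions become invariant to reports of the corresponding attributes, removing any incentive to misreport them. The function name is hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def insensitive_attributes(X, y, lam):
    """Indices of attributes whose Lasso coefficient is exactly zero at
    tuning parameter lam: the fitted predictor ignores (mis)reports of
    these attributes entirely."""
    model = Lasso(alpha=lam).fit(X, y)
    return np.flatnonzero(model.coef_ == 0.0)
```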

Read more
Econometrics

Non-linear interlinkages and key objectives amongst the Paris Agreement and the Sustainable Development Goals

The United Nations' ambitions to combat climate change and advance human development are manifested in the Paris Agreement and the Sustainable Development Goals (SDGs), respectively. These agendas are inherently interlinked, as progress towards some of their objectives may accelerate or hinder progress towards others. We investigate how the two agendas influence each other by defining networks of 18 nodes, consisting of the 17 SDGs and climate change, for various groupings of countries. We compute a non-linear measure of conditional dependence, the partial distance correlation, between each pair of nodes given any subset of the remaining 16 variables. These correlations are treated as weights on edges, and weighted eigenvector centralities are calculated to determine the most important nodes. We find that SDG 6 (clean water and sanitation) and SDG 4 (quality education) are most central across nearly all groupings of countries. In developing regions, SDG 17 (partnerships for the goals) is strongly connected to the progress of other objectives in the two agendas whilst, somewhat surprisingly, SDG 8 (decent work and economic growth) is not as important in terms of eigenvector centrality.
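
A sketch of the pipeline under stated assumptions: data is a mapping from node names (the 17 SDG indices plus climate change) to equal-length series, the dcor package supplies the partial distance correlation, and networkx supplies weighted eigenvector centrality. Clipping negative weights at zero is an illustrative choice.

```python
import numpy as np
import dcor
import networkx as nx

def node_centralities(data):
    """Weighted eigenvector centralities of a network whose edge weights
    are partial distance correlations between each pair of series,
    conditional on all remaining series."""
    names = list(data)
    G = nx.Graph()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            rest = np.column_stack([data[c] for c in names if c not in (a, b)])
            w = dcor.partial_distance_correlation(data[a], data[b], rest)
            G.add_edge(a, b, weight=max(float(w), 0.0))
    return nx.eigenvector_centrality(G, weight="weight")
```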

Read more
Econometrics

Non-stationary GARCH modelling for fitting higher order moments of financial series within moving time windows

Here, we analyse a GARCH(1,1) model with the aim of fitting higher order moments of different companies' stock prices. When we assume a Gaussian conditional distribution, we fail to capture any of the empirical data when fitting the first three even moments of the financial time series. We show instead that a double Gaussian conditional probability distribution better captures the higher order moments of the data. To demonstrate this point, we construct regions (phase diagrams) in the fourth- and sixth-order standardised moment space where a GARCH(1,1) model can fit these moments, and compare them with the corresponding moments from empirical data for different sectors of the economy. We find that the ability of the GARCH model with a double Gaussian conditional distribution to fit higher order moments is dictated by the time window the data spans: we can only fit data collected within specific time window lengths, and only with certain parameters of the conditional double Gaussian distribution. To incorporate the non-stationarity of financial series, we allow the parameters of the GARCH model to depend on time.
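
A minimal simulation sketch of the object being fitted: a GARCH(1,1) whose innovations follow a two-component ("double") Gaussian mixture, returning the standardised fourth and sixth moments that the text matches to data. Parameter names and the simulation length are illustrative assumptions.

```python
import numpy as np

def garch_mixture_moments(omega, alpha, beta, p, s1, s2, n=200_000, seed=0):
    """Simulate r_t = sqrt(h_t) z_t with h_t = omega + alpha r_{t-1}^2 + beta h_{t-1},
    where z_t is a unit-variance two-component Gaussian mixture (std s1 with
    probability p, else s2). Requires alpha + beta < 1 for stationarity.
    Returns the standardised 4th and 6th moments of the simulated returns."""
    rng = np.random.default_rng(seed)
    scale = np.sqrt(p * s1 ** 2 + (1 - p) * s2 ** 2)
    z = rng.standard_normal(n) * np.where(rng.random(n) < p, s1, s2) / scale
    r = np.empty(n)
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(h) * z[t]
        h = omega + alpha * r[t] ** 2 + beta * h
    u = r / r.std()
    return float(np.mean(u ** 4)), float(np.mean(u ** 6))
```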

Read more
