Featured Research

Econometrics

High-dimensional mixed-frequency IV regression

This paper introduces a high-dimensional linear IV regression for data sampled at mixed frequencies. We show that the high-dimensional slope parameter of a high-frequency covariate can be identified and accurately estimated by leveraging a low-frequency instrumental variable. The distinguishing feature of the model is that it can handle high-dimensional datasets without imposing approximate sparsity restrictions. We propose a Tikhonov-regularized estimator and derive the convergence rate of its mean-integrated squared error for time series data. The estimator has a closed-form expression that is easy to compute and demonstrates excellent performance in our Monte Carlo experiments. We estimate the real-time price elasticity of supply on the Australian electricity spot market. Our estimates suggest that supply is relatively inelastic and that its elasticity is heterogeneous throughout the day.
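
For intuition, the closed-form structure referred to in the abstract resembles a ridge-type (Tikhonov-regularized) two-stage least squares estimator. The sketch below is only a generic illustration under that assumption, with hypothetical variable names, and is not the paper's exact mixed-frequency specification.

```python
import numpy as np

def tikhonov_iv(y, X, Z, alpha=1.0):
    """Generic Tikhonov-regularized IV estimator (illustrative sketch only).

    y     : (n,)   outcome observed at the low frequency
    X     : (n, p) high-frequency covariates arranged into n low-frequency rows
    Z     : (n, k) low-frequency instruments
    alpha : Tikhonov regularization parameter (illustrative choice)
    """
    # First stage: project the covariates onto the instrument space, as in 2SLS.
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    X_hat = P_Z @ X
    # Closed-form regularized second stage:
    # beta = (X' P_Z X + alpha * I)^(-1) X' P_Z y
    p = X.shape[1]
    return np.linalg.solve(X_hat.T @ X + alpha * np.eye(p), X_hat.T @ y)
```

The term alpha * I keeps the inverse well behaved when p is large relative to n, which is the role Tikhonov regularization plays here in place of sparsity restrictions.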

Read more
Econometrics

Hopf Bifurcation from new-Keynesian Taylor rule to Ramsey Optimal Policy

This paper compares different implementations of monetary policy in a new-Keynesian setting. We show that a shift from Ramsey optimal policy under short-term commitment (based on a negative feedback mechanism) to a Taylor rule (based on a positive feedback mechanism) corresponds to a Hopf bifurcation, with opposite policy advice and a change in the dynamic properties of the system. This bifurcation occurs because of the ad hoc assumption that the interest rate is a forward-looking variable, whereas in new-Keynesian theory the policy targets (inflation and the output gap) are the forward-looking variables.
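
For readers unfamiliar with the setting, the textbook new-Keynesian building blocks can be written as below. This is the standard formulation rather than the paper's exact specification; the positive feedback of the Taylor rule corresponds to the inflation coefficient exceeding one.

```latex
% Standard new-Keynesian block with a Taylor rule (textbook form):
\begin{align}
  \pi_t &= \beta\, \mathbb{E}_t \pi_{t+1} + \kappa\, x_t
        && \text{(forward-looking Phillips curve)} \\
  x_t   &= \mathbb{E}_t x_{t+1} - \sigma^{-1}\!\left(i_t - \mathbb{E}_t \pi_{t+1} - r_t^{n}\right)
        && \text{(forward-looking IS curve)} \\
  i_t   &= r^{*} + \phi_\pi\,(\pi_t - \pi^{*}) + \phi_x\, x_t,
        \qquad \phi_\pi > 1
        && \text{(Taylor rule, positive feedback)}
\end{align}
```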

Read more
Econometrics

Horseshoe Prior Bayesian Quantile Regression

This paper extends the horseshoe prior of Carvalho et al. (2010) to Bayesian quantile regression (HS-BQR) and provides a fast sampling algorithm for computation in high dimensions. The performance of the proposed HS-BQR is evaluated in Monte Carlo simulations and in a high-dimensional Growth-at-Risk (GaR) forecasting application for the U.S. The Monte Carlo design considers several sparsity and error structures. Compared to alternative shrinkage priors, the proposed HS-BQR yields better (or at worst similar) performance in coefficient bias and forecast error. The HS-BQR is particularly potent in sparse designs and in estimating extreme quantiles. As expected, the simulations also highlight that identifying quantile-specific location and scale effects for individual regressors in dense DGPs requires substantial data. In the GaR application, we forecast tail risks as well as complete forecast densities using the McCracken and Ng (2020) database. Quantile-specific and density calibration score functions show that the HS-BQR provides the best performance, especially at short- and medium-run horizons. The ability to produce well-calibrated density forecasts and accurate downside risk measures in large-data contexts makes the HS-BQR a promising tool for nowcasting applications and recession modelling.
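
To make the two ingredients concrete, the sketch below shows the quantile (check) loss that defines quantile regression and a draw from the horseshoe prior of Carvalho et al. (2010). It is a minimal illustration of the components combined in the HS-BQR, not the paper's sampling algorithm.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile (pinball) loss at level tau: quantile regression coefficients
    minimize the sum of this loss over residuals u."""
    return u * (tau - (u < 0))

def horseshoe_draw(p, rng=None):
    """One draw of p coefficients from the horseshoe prior:
    beta_j ~ N(0, (lambda_j * tau)^2) with half-Cauchy local scales lambda_j
    and a half-Cauchy global scale tau."""
    rng = np.random.default_rng() if rng is None else rng
    tau = np.abs(rng.standard_cauchy())           # global shrinkage
    lam = np.abs(rng.standard_cauchy(size=p))     # local shrinkage
    return rng.normal(0.0, lam * tau)
```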

Read more
Econometrics

Hours Worked and the U.S. Distribution of Real Annual Earnings 1976-2016

We examine the impact of annual hours worked on annual earnings by decomposing changes in the real annual earnings distribution into composition, structural and hours effects. We do so via a nonseparable simultaneous model of hours, wages and earnings. Using the Current Population Survey for the survey years 1976-2019, we find that changes in the female distribution of annual hours of work are important in explaining movements in inequality in female annual earnings. This captures the substantial changes in women's employment behavior over this period. Movements in the male hours distribution only affect the lower part of the male earnings distribution and reflect the sensitivity of these workers' annual hours of work to cyclical factors.

Read more
Econometrics

How is Machine Learning Useful for Macroeconomic Forecasting?

We move beyond "Is Machine Learning Useful for Macroeconomic Forecasting?" by adding the "how". The current forecasting literature has focused on matching specific variables and horizons with a particularly successful algorithm. In contrast, we study the usefulness of the underlying features driving ML gains over standard macroeconometric methods. We distinguish four such features (nonlinearities, regularization, cross-validation and alternative loss functions) and study their behavior in both data-rich and data-poor environments. To do so, we design experiments that allow us to identify the "treatment" effects of interest. We conclude that (i) nonlinearity is the true game changer for macroeconomic prediction, (ii) the standard factor model remains the best regularization, (iii) K-fold cross-validation is the best practice and (iv) the L2 loss is preferred to the ε̄-insensitive in-sample loss. The forecasting gains of nonlinear techniques are associated with high macroeconomic uncertainty, financial stress and housing bubble bursts. This suggests that machine learning is useful for macroeconomic forecasting mostly because it captures important nonlinearities that arise in the context of uncertainty and financial frictions.
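
As a concrete illustration of the "alternative loss function" feature, the snippet below contrasts the standard L2 in-sample loss with the epsilon-insensitive loss used by support vector regression; the tolerance eps is an illustrative value, not the paper's calibration.

```python
import numpy as np

def l2_loss(residuals):
    """Standard squared-error (L2) in-sample loss."""
    return np.mean(np.asarray(residuals) ** 2)

def eps_insensitive_loss(residuals, eps=0.1):
    """Epsilon-insensitive loss: residuals within the eps band are ignored,
    larger ones are penalized linearly."""
    return np.mean(np.maximum(np.abs(residuals) - eps, 0.0))
```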

Read more
Econometrics

Hypothetical bias in stated choice experiments: Part I. Integrative synthesis of empirical evidence and conceptualisation of external validity

The notion of hypothetical bias (HB) constitutes, arguably, the most fundamental issue in relation to the use of hypothetical survey methods. Whether or to what extent choices of survey participants and subsequent inferred estimates translate to real-world settings continues to be debated. While HB has been extensively studied in the broader context of contingent valuation, it is much less understood in relation to choice experiments (CEs). This paper reviews the empirical evidence for HB in CEs across various fields of applied economics and presents an integrative framework for how HB relates to external validity. Results suggest mixed evidence on the prevalence, extent and direction of HB, as well as considerable context and measurement dependency. While HB is found to be an undeniable issue when conducting CEs, the empirical evidence on HB does not render CEs unable to represent real-world preferences. While health-related choice experiments often find negligible degrees of HB, experiments in consumer behaviour and transport domains suggest that significant degrees of HB are ubiquitous. Assessments of bias in environmental valuation studies provide mixed evidence. Also, across these disciplines many studies display HB in their total willingness-to-pay estimates and opt-in rates but not in their hypothetical marginal rates of substitution (subject to scale correction). Further, recent findings in psychology and brain imaging studies suggest neurocognitive mechanisms underlying HB that may explain some of the discrepancies and unexpected findings in the mainstream CE literature. The review also observes how the variety of operational definitions of HB prevents consistent measurement of HB in CEs. The paper further identifies major sources of HB and possible moderating factors. Finally, it explains how HB represents one component of the wider concept of external validity.

Read more
Econometrics

Hypothetical bias in stated choice experiments: Part II. Macro-scale analysis of literature and effectiveness of bias mitigation methods

This paper reviews methods of hypothetical bias (HB) mitigation in choice experiments (CEs). It presents a bibliometric analysis and summary of empirical evidence of their effectiveness. The paper follows the review of empirical evidence on the existence of HB presented in Part I of this study. While the number of CE studies has rapidly increased since 2010, the critical issue of HB has been studied in only a small fraction of CE studies. The present review includes both ex-ante and ex-post bias mitigation methods. Ex-ante bias mitigation methods include cheap talk, real talk, consequentiality scripts, solemn oath scripts, opt-out reminders, budget reminders, honesty priming, induced truth telling, indirect questioning, time to think and pivot designs. Ex-post methods include follow-up certainty calibration scales, respondent perceived consequentiality scales, and revealed-preference-assisted estimation. It is observed that the use of mitigation methods varies markedly across different sectors of applied economics. The existing empirical evidence points to their overall effectiveness in reducing HB, although there is some variation. The paper further discusses how each mitigation method can counter a certain subset of HB sources. Considering the prevalence of HB in CEs and the effectiveness of bias mitigation methods, it is recommended that implementation of at least one bias mitigation method (or a suitable combination where possible) become standard practice in conducting CEs. Mitigation method(s) suited to the particular application should be implemented to ensure that inferences and subsequent policy decisions are as free of HB as possible.

Read more
Econometrics

Identifiability and Estimation of Possibly Non-Invertible SVARMA Models: A New Parametrisation

This article deals with the parameterisation, identifiability, and maximum likelihood (ML) estimation of possibly non-invertible structural vector autoregressive moving average (SVARMA) models driven by independent, non-Gaussian shocks. In contrast to previous literature, the novel representation of the MA polynomial matrix using the Wiener-Hopf factorisation (WHF) focuses on the multivariate nature of the model, generates insights into its structure, and uses this structure to devise optimisation algorithms. In particular, it makes it possible to parameterise the location of determinantal zeros inside and outside the unit circle, and it allows for MA zeros at zero, which can be interpreted as informational delays. This is highly relevant for data-driven evaluation of Dynamic Stochastic General Equilibrium (DSGE) models. The identifying restrictions typically imposed on the shock transmission matrix, as well as on the location of the determinantal roots, are made testable. Furthermore, we provide low-level conditions for asymptotic normality of the ML estimator and analytic expressions for the score and the information matrix. As an application, we estimate the Blanchard and Quah model and show that our method provides further insights regarding non-invertibility in a standard macroeconometric model. These and further analyses are implemented in a well-documented R package.
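
As background, a generic SVARMA(p, q) system and the invertibility condition the abstract alludes to can be written as follows; this is textbook notation, not the paper's WHF-based parametrisation.

```latex
% Generic SVARMA(p, q) driven by independent, non-Gaussian shocks:
\begin{align}
  a(L)\, y_t &= b(L)\, \varepsilon_t,
  \qquad \varepsilon_t \ \text{i.i.d.\ with independent, non-Gaussian components},\\
  a(z) &= I_n - a_1 z - \cdots - a_p z^p,
  \qquad
  b(z) = b_0 + b_1 z + \cdots + b_q z^q .
\end{align}
% The MA part is invertible when det b(z) has no zeros inside the unit circle;
% determinantal zeros inside the circle (including at z = 0, the informational
% delays mentioned above) give the non-invertible case the paper parameterises.
```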

Read more
Econometrics

Identification and Estimation of A Rational Inattention Discrete Choice Model with Bayesian Persuasion

This paper studies the semi-parametric identification and estimation of a rational inattention model with Bayesian persuasion. Identification requires the observation of a cross-section of market-level outcomes. The empirical content of the model can be characterized by three moment conditions. A two-step estimation procedure is proposed to avoid the computational complexity of the structural model. In the empirical application, I study the persuasion effect of Fox News in the 2000 presidential election. Welfare analysis shows that persuasion does not influence voters with a high school education, but generates higher dispersion in the welfare of voters with a partial college education and decreases the dispersion in the welfare of voters with a bachelor's degree.

Read more
Econometrics

Identification and Estimation of Discrete Choice Models with Unobserved Choice Sets

We propose a framework for nonparametric identification and estimation of discrete choice models with unobserved choice sets. We recover the joint distribution of choice sets and preferences from a panel dataset on choices. We assume either that the latent choice sets are sparse or that the panel is sufficiently long. Sparsity requires the number of possible choice sets to be relatively small; it is satisfied, for instance, when the choice sets are nested or when they form a partition. Our estimation procedure is computationally fast and uses mixed-integer optimization to recover the sparse support of choice sets. Analyzing the ready-to-eat cereal industry with a household scanner dataset, we find that ignoring the unobservability of choice sets can lead to biased estimates of preferences due to significant latent heterogeneity in choice sets.
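
The model structure being recovered can be sketched as a mixture over latent choice sets; the toy code below assumes logit preferences purely for illustration (the paper's framework is nonparametric, and its estimation step uses mixed-integer optimization rather than this direct computation).

```python
import numpy as np

def mixture_choice_probs(utilities, choice_sets, set_probs):
    """Observed choice probabilities implied by a distribution over latent
    choice sets combined with (here, logit) preferences.

    utilities   : (J,) deterministic utilities of the J alternatives
    choice_sets : list of boolean masks of length J, one per latent choice set
    set_probs   : probability of each latent choice set (sums to 1)
    """
    probs = np.zeros(len(utilities))
    for mask, w in zip(choice_sets, set_probs):
        exp_u = np.exp(utilities) * np.asarray(mask)   # unavailable items get weight 0
        probs += w * exp_u / exp_u.sum()               # logit choice within the set
    return probs
```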

Read more
