Featured Research

Econometrics

Filtered and Unfiltered Treatment Effects with Targeting Instruments

Multivalued treatments are commonplace in applications. We explore the use of discrete-valued instruments to control for selection bias in this setting. We establish conditions under which counterfactual averages and treatment effects are identified for heterogeneous complier groups. These conditions restrict (i) the unobserved heterogeneity in treatment assignment, (ii) how the instruments target the treatments, and optionally (iii) the extent to which counterfactual averages are heterogeneous. We allow for limitations in the analyst's information via the concept of a filtered treatment. Finally, we illustrate the usefulness of our framework by applying it to data from the Student Achievement and Retention Project and the Head Start Impact Study.

Read more
Econometrics

Finite-Sample Average Bid Auction

The paper studies auction design in a setting where the auctioneer knows the valuation distribution only through statistical samples. A new framework is established that combines statistical decision theory with mechanism design. Two optimality criteria, maxmin and equivariance, are studied along with their implications for the form of the auction. The simplest equivariant auction is the average bid auction, which sets each bidder's reservation price proportional to the average of the other bids and the historical samples. This form of auction can be motivated by the Gamma distribution, and it sheds new light on the estimation of the optimal price, an irregular parameter. Theoretical results show that it is often possible to use a regular parameter, the population mean, to approximate the optimal price. An adaptive average bid estimator is developed from this idea; it has the same asymptotic properties as the empirical Myerson estimator but performs significantly better in terms of value at risk and expected shortfall when the sample size is small.
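The leave-one-out pricing rule described above can be sketched in a few lines. The proportionality constant `c` and the symmetric pooling of current bids with historical samples are our illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def average_bid_reserve(bids, samples, c=1.0):
    """For each bidder i, set a reservation price proportional to the
    average of the OTHER bids and the historical samples (leave-one-out,
    so bidder i's own bid never enters their own reserve)."""
    bids = np.asarray(bids, dtype=float)
    samples = np.asarray(samples, dtype=float)
    total = bids.sum() + samples.sum()
    count = (len(bids) - 1) + len(samples)
    # Subtracting each bid from the pooled total gives the leave-one-out sum.
    return c * (total - bids) / count

reserves = average_bid_reserve([10.0, 20.0, 30.0], [15.0, 25.0])
```

Because each reserve excludes the bidder's own bid, truthful bidding incentives are unaffected by the pricing rule itself.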

Read more
Econometrics

Fixed Effects Binary Choice Models with Three or More Periods

We consider fixed effects binary choice models with a fixed number of periods T and without a large-support condition on the regressors. If the time-varying unobserved terms are i.i.d. with known distribution F, Chamberlain (2010) shows that the common slope parameter is point-identified if and only if F is logistic. His proof, however, considers only T=2. We show that the result does not in fact generalize to T>2: the common slope parameter and some parameters of the distribution of the shocks can be identified whenever F belongs to a family that includes the logit distribution. Identification is based on a conditional moment restriction, and we give necessary and sufficient conditions on the covariates for this restriction to identify the parameters. In addition, we show that under mild conditions, the corresponding GMM estimator reaches the semiparametric efficiency bound when T=3.
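The identification logic in the T=2 logit case, which the paper generalizes, can be checked numerically: conditioning on the sum of outcomes makes the individual fixed effect cancel. This is a sketch of the textbook conditional-logit calculation, not the paper's more general conditional moment restriction:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def cond_logit_prob(x1, x2, beta):
    """P(y1=0, y2=1 | y1 + y2 = 1, x) in a two-period fixed effects
    logit. The individual effect alpha_i drops out of this conditional
    probability, which depends only on the covariate change and beta --
    the reason the slope is identified without knowing alpha_i."""
    z = (np.asarray(x2) - np.asarray(x1)) @ np.asarray(beta)
    return sigmoid(z)
```

One can verify directly that the conditional probability computed from period-by-period logit probabilities is the same for any value of the fixed effect.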

Read more
Econometrics

Fixed-k Inference for Conditional Extremal Quantiles

We develop a new extreme value theory for repeated cross-sectional and panel data to construct asymptotically valid confidence intervals (CIs) for conditional extremal quantiles from a fixed number k of nearest-neighbor tail observations. As a by-product, we also construct CIs for extremal quantiles of coefficients in linear random coefficient models. For any fixed k, the CIs are uniformly valid without parametric assumptions over a set of nonparametric data generating processes associated with various tail indices. Simulation studies show that our CIs exhibit better small-sample coverage and length properties than alternative nonparametric methods based on asymptotic normality. Applying the proposed method to Natality Vital Statistics, we study the determinants of extremely low birth weights. We find that the signs of the major effects agree with those found in earlier studies based on parametric models, but the magnitudes differ.
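The data step behind the method, extracting a fixed number k of tail observations from the neighborhood of a covariate value, can be sketched as follows. The neighborhood size `m` and the choice of k here are illustrative, and the fixed-k extreme value limit theory that turns these order statistics into a CI is not reproduced:

```python
import numpy as np

def nn_tail_order_stats(X, y, x0, m=50, k=5):
    """Among the m nearest neighbors of x0 in covariate space, return
    the k largest responses in descending order: the tail observations
    from which a fixed-k confidence interval would be constructed."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Euclidean distance of every observation's covariates to x0.
    dist = np.linalg.norm(X - np.asarray(x0, dtype=float), axis=1)
    nn = np.argsort(dist)[:m]
    return np.sort(y[nn])[::-1][:k]
```

Keeping k fixed as the sample grows is what distinguishes this approach from methods that rely on an intermediate order of tail observations and asymptotic normality.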

Read more
Econometrics

Flexible Mixture Priors for Large Time-varying Parameter Models

Time-varying parameter (TVP) models often assume that the TVPs evolve according to a random walk. This assumption, however, is questionable since it implies that coefficients change smoothly and in an unbounded manner. In this paper, we relax this assumption by proposing a flexible law of motion for the TVPs in large-scale vector autoregressions (VARs). Instead of imposing a restrictive random walk evolution of the latent states, we carefully design hierarchical mixture priors on the coefficients in the state equation. These priors allow us to discriminate between periods where the coefficients evolve according to a random walk and periods where the TVPs are better characterized by a stationary stochastic process. Moreover, the approach can introduce dynamic sparsity by pushing small parameter changes towards zero where necessary. The merits of the model are illustrated by means of two applications. Using synthetic data, we show that our approach yields precise parameter estimates. When applied to US data, the model reveals interesting patterns of low-frequency dynamics in coefficients and forecasts well relative to a wide range of competing models.
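The kind of coefficient path this law of motion allows can be illustrated by forward simulation. This is only a caricature of the prior: the regime indicator, the AR(1) stationary regime, the hard sparsity threshold, and all parameter values below are our simplifications, not the paper's hierarchical specification:

```python
import numpy as np

def simulate_mixture_tvp(T, p_rw=0.3, sigma_rw=0.1, phi=0.8, sigma_st=0.05, seed=0):
    """Simulate one coefficient under a mixture law of motion: each period
    it either takes a random-walk step (with probability p_rw) or
    mean-reverts as a stationary AR(1); negligible changes are set to
    zero, mimicking dynamic sparsity."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(T)
    for t in range(1, T):
        if rng.random() < p_rw:
            proposal = theta[t - 1] + sigma_rw * rng.standard_normal()
        else:
            proposal = phi * theta[t - 1] + sigma_st * rng.standard_normal()
        # Dynamic sparsity: tiny parameter changes are pushed to zero.
        theta[t] = proposal if abs(proposal - theta[t - 1]) > 1e-3 else theta[t - 1]
    return theta
```

In the paper the regime discrimination is learned from the data through the hierarchical prior rather than fixed in advance as here.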

Read more
Econometrics

Forecasting Quarterly Brazilian GDP: Univariate Models Approach

Gross domestic product (GDP) is an important economic indicator that aggregates useful information to assist economic agents and policymakers in their decision-making. GDP forecasting therefore becomes a powerful decision-support tool in several areas. To contribute in this direction, we investigate the performance of classical time series models and of state-space models applied to Brazilian GDP. The models used are a Seasonal Autoregressive Integrated Moving Average (SARIMA) model and the Holt-Winters method, both classical time series models, and the dynamic linear model, a state-space model. Based on statistical metrics of model comparison, the dynamic linear model achieved the best forecasting and fit performance for the analyzed period, while also incorporating the growth-rate structure.
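One of the classical benchmarks compared, the additive Holt-Winters method, can be written out directly as level, trend, and seasonal recursions. The smoothing constants and quarterly seasonality below are illustrative choices, not the values fitted in the paper:

```python
import numpy as np

def holt_winters_additive(y, s=4, alpha=0.3, beta=0.1, gamma=0.2, h=4):
    """Additive Holt-Winters: update level, trend, and s seasonal
    components recursively, then return an h-step-ahead forecast path.
    Requires len(y) >= 2*s for the simple initialization used here."""
    y = np.asarray(y, dtype=float)
    level = y[:s].mean()
    trend = (y[s:2 * s].mean() - y[:s].mean()) / s
    season = list(y[:s] - level)
    for t in range(s, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t - s]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * season[t - s])
    T = len(y)
    # Forecast: extrapolate the trend and recycle the latest seasonal factors.
    return np.array([level + j * trend + season[T - s + (j - 1) % s]
                     for j in range(1, h + 1)])
```

The dynamic linear model favored in the paper generalizes these fixed-constant recursions by estimating the latent level, trend, and seasonal states within a state-space framework.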

Read more
Econometrics

Forecasting With Factor-Augmented Quantile Autoregressions: A Model Averaging Approach

This paper considers forecasts of the growth and inflation distributions of the United Kingdom using factor-augmented quantile autoregressions within a model averaging framework. We investigate model combinations with weights that minimise the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Quantile Regression Information Criterion (QRIC), and the leave-one-out cross-validation criterion. The unobserved factors are estimated by principal components of a large panel of N predictors over T periods under a recursive estimation scheme. Applying these methods to UK GDP growth and CPI inflation, we find that, on average, in terms of coverage and final prediction error for GDP growth, equal weights and the weights obtained from the AIC and BIC perform equally well but are outperformed by the QRIC and the jackknife approach on the majority of the quantiles of interest. For inflation, in contrast, the naive QAR(1) model outperforms all model averaging methodologies.
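The information-criterion weighting step can be illustrated with the standard smoothed-IC construction, in which each model's weight decays exponentially in its criterion gap to the best model. We assume this common construction here; the paper's exact weighting scheme may differ:

```python
import numpy as np

def ic_weights(ic_values):
    """Smoothed information-criterion weights: w_m proportional to
    exp(-0.5 * (IC_m - min IC)), normalized to sum to one. Works with
    AIC, BIC, or QRIC values; lower criterion means larger weight."""
    ic = np.asarray(ic_values, dtype=float)
    # Subtracting the minimum stabilizes the exponentials numerically.
    w = np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()
```

The combined quantile forecast is then the weighted average of the individual models' quantile forecasts under these weights.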

Read more
Econometrics

Forecasting with Bayesian Grouped Random Effects in Panel Data

In this paper, we estimate and leverage a latent constant group structure to generate point, set, and density forecasts for short dynamic panel data. We implement a nonparametric Bayesian approach that simultaneously identifies the coefficients and the group membership of the random effects, which are heterogeneous across groups but fixed within a group. This method allows us to flexibly incorporate subjective prior knowledge on the group structure, which can improve predictive accuracy. In Monte Carlo experiments, we demonstrate that our Bayesian grouped random effects (BGRE) estimators produce accurate estimates and achieve predictive gains over standard panel data estimators. With a data-driven group structure, the BGRE estimators attain clustering accuracy comparable to the K-means algorithm and outperform a two-step Bayesian grouped estimator whose group structure relies on K-means. In the empirical analysis, we apply our method to forecast investment rates across a broad range of firms and illustrate that the estimated latent group structure improves forecasts relative to standard panel data estimators.

Read more
Econometrics

Forecasts with Bayesian vector autoregressions under real time conditions

This paper investigates the sensitivity of forecast performance measures to taking a real-time versus a pseudo-out-of-sample perspective. We use monthly vintages for the United States (US) and the Euro Area (EA) and estimate a set of vector autoregressive (VAR) models of different sizes, with constant and time-varying parameters (TVPs) and stochastic volatility (SV). Our results suggest that the relative ordering of model performance for point and density forecasts differs depending on whether forecasts are evaluated on real-time data or on truncated final vintages in pseudo-out-of-sample simulations. No specification is clearly superior for the US or the EA across variable types and forecast horizons, although larger models featuring TVPs appear to be affected least by missing values and data revisions. We also identify substantial differences in performance metrics depending on whether forecasts are produced for the US or the EA.

Read more
Econometrics

Forward-Selected Panel Data Approach for Program Evaluation

Policy evaluation is central to economic data analysis, but economists mostly work with observational data given the limited opportunities to carry out controlled experiments. In the potential outcome framework, the panel data approach (Hsiao, Ching and Wan, 2012) constructs the counterfactual by exploiting the correlation between cross-sectional units in panel data. The choice of cross-sectional control units, a key step in its implementation, remains unresolved in data-rich environments where many possible controls are at the researcher's disposal. We propose a forward selection method to choose the control units and establish the validity of post-selection inference. Our asymptotic framework allows the number of possible controls to grow much faster than the time dimension. The easy-to-implement algorithms and their theoretical guarantees extend the panel data approach to big data settings.
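The greedy selection step can be sketched as follows: at each round, add the control unit that most reduces the pre-treatment sum of squared residuals from an OLS fit of the treated unit on the controls selected so far. This is a minimal version of forward selection; the paper's stopping rule and post-selection inference are not shown:

```python
import numpy as np

def forward_select_controls(y_pre, X_pre, max_k):
    """Forward selection of control units for the panel data approach.
    y_pre: pre-treatment outcomes of the treated unit (length T0).
    X_pre: T0 x J matrix of pre-treatment outcomes of candidate controls.
    Returns the indices of the max_k greedily selected controls."""
    selected = []
    remaining = list(range(X_pre.shape[1]))
    for _ in range(max_k):
        best_j, best_ssr = None, np.inf
        for j in remaining:
            # OLS of the treated unit on an intercept plus the candidate set.
            Z = np.column_stack([np.ones(len(y_pre))]
                                + [X_pre[:, c] for c in selected + [j]])
            coef, *_ = np.linalg.lstsq(Z, y_pre, rcond=None)
            ssr = ((y_pre - Z @ coef) ** 2).sum()
            if ssr < best_ssr:
                best_j, best_ssr = j, ssr
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Because each round fits only small OLS problems, the procedure stays feasible even when the number of candidate controls far exceeds the time dimension.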

Read more
