Featured Research

Econometrics

A Semiparametric Network Formation Model with Unobserved Linear Heterogeneity

This paper analyzes a semiparametric model of network formation in the presence of unobserved agent-specific heterogeneity. The objective is to identify and estimate the preference parameters associated with homophily on observed attributes when the distributions of the unobserved factors are not parametrically specified. This paper offers two main contributions to the literature on network formation. First, it establishes a new point identification result for the vector of parameters that relies on the existence of a special regressor. The identification proof is constructive and characterizes a closed-form expression for the parameter of interest. Second, it introduces a simple two-step semiparametric estimator for the vector of parameters with a first-step kernel estimator. The estimator is computationally tractable and can be applied to both dense and sparse networks. Moreover, I show that the estimator is consistent and has a limiting normal distribution as the number of individuals in the network increases. Monte Carlo experiments demonstrate that the estimator performs well in finite samples and in networks with different levels of sparsity.

Read more
Econometrics

A Simple, Short, but Never-Empty Confidence Interval for Partially Identified Parameters

This paper revisits the simple, but empirically salient, problem of inference on a real-valued parameter that is partially identified through upper and lower bounds with asymptotically normal estimators. A simple confidence interval is proposed and is shown to have the following properties:
- It is never empty or awkwardly short, including when the sample analog of the identified set is empty.
- It is valid for a well-defined pseudotrue parameter whether or not the model is well-specified.
- It involves no tuning parameters and minimal computation.
Computing the interval requires concentrating out one scalar nuisance parameter. In most cases, the practical result will be simple: to achieve 95% coverage, report the union of a simple 90% (!) confidence interval for the identified set and a standard 95% confidence interval for the pseudotrue parameter. For uncorrelated estimators -- notably if bounds are estimated from distinct subsamples -- and conventional coverage levels, validity of this simple procedure can be shown analytically. This case obtains in the motivating empirical application (de Quidt, Haushofer, and Roth, 2018), in which improvement over existing inference methods is demonstrated. More generally, simulations suggest that the novel confidence interval has excellent length and size control. This is partly because, in anticipation of never being empty, the interval can be made shorter than conventional ones in relevant regions of sample space.
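As a back-of-the-envelope illustration of the union recipe quoted above, the Python sketch below combines a Bonferroni-style 90% interval for the identified set with a 95% Wald interval for a pseudotrue parameter. Treating the midpoint of the estimated bounds as the pseudotrue parameter is a simplifying assumption of this sketch, not the paper's construction.

```python
import numpy as np
from scipy.stats import norm

def union_ci(lo_hat, hi_hat, se_lo, se_hi, rho=0.0):
    """Union of a 90% interval for the identified set and a 95% Wald interval
    for a pseudotrue parameter (here: the midpoint of the estimated bounds,
    a simplifying assumption). Never empty, even if hi_hat < lo_hat."""
    z90, z95 = norm.ppf(0.95), norm.ppf(0.975)

    # Bonferroni-style 90% confidence interval for the identified set [lo, hi]
    set_ci = (lo_hat - z90 * se_lo, hi_hat + z90 * se_hi)

    # Standard 95% Wald interval for the midpoint of the estimated bounds
    mid = 0.5 * (lo_hat + hi_hat)
    se_mid = 0.5 * np.sqrt(se_lo**2 + se_hi**2 + 2.0 * rho * se_lo * se_hi)
    point_ci = (mid - z95 * se_mid, mid + z95 * se_mid)

    return min(set_ci[0], point_ci[0]), max(set_ci[1], point_ci[1])

# Example: crossed sample bounds (empty sample analog of the identified set)
print(union_ci(lo_hat=0.6, hi_hat=0.4, se_lo=0.2, se_hi=0.2))
```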

Read more
Econometrics

A Test for Kronecker Product Structure Covariance Matrix

We propose a test of the null hypothesis that a covariance matrix has Kronecker Product Structure (KPS). KPS implies a reduced rank restriction on an invertible transformation of the covariance matrix, and the new procedure is an adaptation of the Kleibergen and Paap (2006) reduced rank test. KPS is a generalization of homoscedasticity and allows for more powerful subvector inference in linear Instrumental Variables (IV) regressions than can be achieved under general covariance matrices. Re-examining sixteen highly cited papers conducting IV regressions, we find that KPS is not rejected in 24 of 30 specifications for moderate sample sizes at the 5% nominal size.
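For intuition, Kronecker product structure is equivalent to a rank-one restriction on a particular rearrangement of the covariance matrix (Van Loan and Pitsianis). The sketch below computes that rearrangement and an informal singular-value ratio as a diagnostic; it is illustrative only and is not the Kleibergen and Paap (2006) style statistic developed in the paper.

```python
import numpy as np

def kps_rearrangement(sigma, k, m):
    """Rearrange a (k*m x k*m) matrix into a k^2 x m^2 matrix whose rank is one
    exactly when sigma = A kron B (Van Loan-Pitsianis rearrangement)."""
    rows = []
    for i in range(k):
        for j in range(k):
            block = sigma[i * m:(i + 1) * m, j * m:(j + 1) * m]
            rows.append(block.reshape(-1))        # vectorize each m x m block
    return np.array(rows)

def kps_diagnostic(sigma, k, m):
    """Ratio of the second to the first singular value of the rearranged matrix;
    values near zero are consistent with Kronecker product structure."""
    s = np.linalg.svd(kps_rearrangement(sigma, k, m), compute_uv=False)
    return s[1] / s[0]

# An exact Kronecker product yields a ratio of (numerically) zero.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.3], [0.3, 2.0]])
print(kps_diagnostic(np.kron(A, B), k=2, m=2))
```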

Read more
Econometrics

A Unified Framework for Specification Tests of Continuous Treatment Effect Models

We propose a general framework for the specification testing of continuous treatment effect models. We assume a general residual function, which includes the average and quantile treatment effect models as special cases. The null models are identified under the unconfoundedness condition and contain a nonparametric weighting function. We propose a test statistic for the null model in which the weighting function is estimated by solving an expanding set of moment equations. We establish the asymptotic distributions of our test statistic under the null hypothesis and under fixed and local alternatives. The proposed test statistic is shown to be more efficient than that constructed from the true weighting function and can detect local alternatives that deviate from the null models at the rate of O_P(N^{-1/2}). A simulation method is provided to approximate the null distribution of the test statistic. Monte Carlo simulations show that our test exhibits satisfactory finite-sample performance, and an application shows its practical value.

Read more
Econometrics

A User's Guide to Approximate Randomization Tests with a Small Number of Clusters

This paper provides a user's guide to the general theory of approximate randomization tests developed in Canay, Romano, and Shaikh (2017) when specialized to linear regressions with clustered data. Such regressions include settings in which the data is naturally grouped into clusters, such as villages or repeated observations over time on individual units, as well as settings with weak temporal dependence, in which pseudo-clusters may be formed using blocks of consecutive observations. An important feature of the methodology is that it applies to settings in which the number of clusters is small -- even as small as five. We provide a step-by-step algorithmic description of how to implement the test and construct confidence intervals for the quantity of interest. We additionally articulate the main requirements underlying the test, emphasizing in particular common pitfalls that researchers may encounter. Finally, we illustrate the use of the methodology with two applications that further elucidate these points. The required software to replicate these empirical exercises and to aid researchers wishing to employ the methods elsewhere is provided in both R and Stata.
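The core sign-change idea can be sketched in a few lines, assuming a separate regression has already been fit within each cluster; this toy version is not a substitute for the paper's algorithm or its R and Stata implementations.

```python
import itertools
import numpy as np

def cluster_sign_change_pvalue(beta_hats, beta_null=0.0):
    """Sign-change randomization p-value computed from q cluster-level
    coefficient estimates (one regression fit per cluster). A stylized
    illustration only -- see the paper and its R/Stata code for the actual
    procedure and its requirements."""
    b = np.asarray(beta_hats, dtype=float) - beta_null
    q = b.size
    t_obs = abs(b.mean()) / (b.std(ddof=1) / np.sqrt(q))

    # Recompute the statistic under every one of the 2^q sign assignments.
    exceed, total = 0, 0
    for signs in itertools.product((-1.0, 1.0), repeat=q):
        bs = b * np.array(signs)
        t_s = abs(bs.mean()) / (bs.std(ddof=1) / np.sqrt(q))
        exceed += (t_s >= t_obs)
        total += 1
    return exceed / total

# Hypothetical example with five clusters, testing beta = 0:
print(cluster_sign_change_pvalue([0.8, 1.1, 0.4, 0.9, 0.7]))
```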

Read more
Econometrics

A Vector Monotonicity Assumption for Multiple Instruments

When a researcher wishes to use multiple instrumental variables for a single binary treatment, the familiar LATE monotonicity assumption can become restrictive: it requires that all units share a common direction of response even when different instruments are shifted in opposing directions. What I call vector monotonicity, by contrast, simply restricts treatment status to be monotonic in each instrument separately. This is a natural assumption in many contexts, capturing the intuitive notion of "no defiers" for each instrument. I show that in a setting with a binary treatment and multiple discrete instruments, a class of causal parameters is point identified under vector monotonicity, including the average treatment effect among units that are responsive to any particular subset of the instruments. I propose a simple "2SLS-like" estimator for the family of identified treatment effect parameters. An empirical application revisits the labor market returns to college education.
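A minimal check of the vector monotonicity condition for a single unit with two binary instruments might look as follows; the assumed direction of monotonicity and the binary-instrument setup are illustrative choices, not part of the paper.

```python
def satisfies_vector_monotonicity(d):
    """Check that a unit's potential-treatment map d[(z1, z2)] over two binary
    instruments is weakly increasing in each instrument separately (the
    'increasing' direction is assumed here for illustration)."""
    increasing_in_z1 = all(d[(1, z2)] >= d[(0, z2)] for z2 in (0, 1))
    increasing_in_z2 = all(d[(z1, 1)] >= d[(z1, 0)] for z1 in (0, 1))
    return increasing_in_z1 and increasing_in_z2

# A unit that takes treatment whenever the first instrument is switched on:
d_unit = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}
print(satisfies_vector_monotonicity(d_unit))   # True
```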

Read more
Econometrics

A dynamic ordered logit model with fixed effects

We study a fixed-T panel data logit model for ordered outcomes that accommodates fixed effects and state dependence. We provide identification results for the autoregressive parameter, regression coefficients, and the threshold parameters in this model. Our results require only four observations on the outcome variable. We provide conditions under which a composite conditional maximum likelihood estimator is consistent and asymptotically normal. We use our estimator to explore the determinants of self-reported health in a panel of European countries over the period 2003-2016. We find that: (i) the autoregressive parameter is positive and analogous to a linear AR(1) coefficient of about 0.25, indicating persistence in health status; (ii) the association between income and health becomes insignificant once we control for unobserved heterogeneity and persistence.
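A hypothetical data-generating process in this spirit can be simulated as below; the way the lagged outcome enters (an indicator for being at or above the median category) and all parameter values are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def simulate_dynamic_ordered_logit(n, T, beta, gamma, cuts, seed=None):
    """Simulate a panel y (n x T) from a dynamic ordered logit with
    individual fixed effects. State dependence enters via the indicator
    1{y_{t-1} >= median category} -- an illustrative choice only."""
    rng = np.random.default_rng(seed)
    cuts = np.asarray(cuts, dtype=float)
    K = len(cuts) + 1                         # number of outcome categories
    alpha = rng.normal(size=n)                # unobserved fixed effects
    x = rng.normal(size=(n, T))               # a single time-varying regressor
    y = np.zeros((n, T), dtype=int)
    for t in range(T):
        lag = (y[:, t - 1] >= K // 2).astype(float) if t > 0 else np.zeros(n)
        ystar = beta * x[:, t] + gamma * lag + alpha + rng.logistic(size=n)
        y[:, t] = np.searchsorted(cuts, ystar)   # category = number of cuts below y*
    return y, x

y, x = simulate_dynamic_ordered_logit(n=500, T=4, beta=1.0, gamma=0.5,
                                      cuts=[-1.0, 0.0, 1.0], seed=0)
```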

Read more
Econometrics

A first-stage representation for instrumental variables quantile regression

This paper develops a first-stage linear regression representation for the instrumental variables (IV) quantile regression (QR) model. The quantile first stage is analogous to the least squares case, i.e., a conditional mean regression of the endogenous variables on the instruments, with the difference that in the QR case it is a weighted regression. The weights are given by the conditional density function of the innovation term in the QR structural model, conditional on the endogenous and exogenous covariates as well as the instruments, at a given quantile. In addition, we show that the required Jacobian identification conditions for IVQR models are embedded in the quantile first stage. The first-stage regression is a natural framework to evaluate the validity of instruments and, in particular, the validity of the Jacobian identification conditions. Hence, we suggest testing procedures that evaluate the adequacy of instruments by assessing their statistical significance in the first-stage result. This procedure may be especially useful in QR since the instruments may be relevant at some quantiles but not at others, which indicates the need for weak-identification robust inference. Monte Carlo experiments provide numerical evidence that the proposed tests work as expected in terms of empirical size and power in finite samples. An empirical application illustrates that checking for the statistical significance of the instruments at different quantiles is important.
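Mechanically, the quantile first stage amounts to a weighted least-squares regression of the endogenous variable on the instruments and covariates. The sketch below takes the weights as given; estimating them from the conditional density of the QR innovation at the chosen quantile is the substantive step and is not shown here.

```python
import numpy as np

def weighted_first_stage(d, Z, X, w):
    """Weighted least-squares fit of the endogenous variable d on a constant,
    instruments Z, and exogenous covariates X, with observation weights w.
    In the paper the weights come from the conditional density of the QR
    innovation at the chosen quantile; here they are user-supplied."""
    R = np.column_stack([np.ones(len(d)), Z, X])
    WR = R * w[:, None]                           # row-wise weighting
    coef = np.linalg.solve(R.T @ WR, R.T @ (w * d))
    return coef                                   # coefficients on Z signal instrument relevance

# Hypothetical example with one instrument and one exogenous covariate
rng = np.random.default_rng(1)
n = 200
Z, X = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
d = 0.8 * Z[:, 0] + 0.3 * X[:, 0] + rng.normal(size=n)
w = np.full(n, 1.0)                               # uniform weights for illustration
print(weighted_first_stage(d, Z, X, w))
```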

Read more
Econometrics

A mixture autoregressive model based on Gaussian and Student's t-distributions

We introduce a new mixture autoregressive model which combines Gaussian and Student's t mixture components. The model has very attractive properties analogous to those of the Gaussian and Student's t mixture autoregressive models, but it is more flexible, as it allows for modeling series that consist of both conditionally homoscedastic Gaussian regimes and conditionally heteroscedastic Student's t regimes. The usefulness of our model is demonstrated in an empirical application to the monthly U.S. interest rate spread between the 3-month Treasury bill rate and the effective federal funds rate.
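A stripped-down simulation of the two-regime idea is sketched below with a constant mixing probability; the actual model lets the mixing weights depend on past observations, and all parameter values here are made up.

```python
import numpy as np

def simulate_gaussian_t_mixture_ar1(T, phi=(0.5, 0.9), sigma=(1.0, 1.0),
                                    nu=4.0, p1=0.7, seed=None):
    """Toy two-regime mixture AR(1): regime 1 has Gaussian shocks, regime 2
    has Student's t shocks. The constant mixing probability p1 is a
    simplification of the model described above."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        if rng.random() < p1:                    # conditionally Gaussian regime
            y[t] = phi[0] * y[t - 1] + sigma[0] * rng.normal()
        else:                                    # conditionally heteroscedastic t regime
            y[t] = phi[1] * y[t - 1] + sigma[1] * rng.standard_t(nu)
    return y

series = simulate_gaussian_t_mixture_ar1(T=500, seed=0)
```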

Read more
Econometrics

A model of discrete choice based on reinforcement learning under short-term memory

A family of models of individual discrete choice is constructed by means of statistical averaging of choices made by a subject in a reinforcement learning process, where the subject has a short, k-term memory span. The choice probabilities in these models combine, in a non-trivial and non-linear way, the initial learning bias and the experience gained through learning. The properties of such models are discussed, and, in particular, it is shown that the choice probabilities deviate from Luce's Choice Axiom, even if the initial bias adheres to it. Moreover, we show that the latter property is recovered as the memory span becomes large. Two applications in utility theory are considered. In the first, we use the discrete choice model to generate a binary preference relation on simple lotteries. We show that the preferences violate the transitivity and independence axioms of expected utility theory. Furthermore, we establish the dependence of the preferences on frames, with risk aversion for gains and risk seeking for losses. Based on these findings, we next propose a parametric model of choice based on the probability maximization principle, as a model of deviations from the expected utility principle. To illustrate the approach, we apply it to the classical problem of demand for insurance.

Read more
