Featured Researches

Econometrics

Sparse HP Filter: Finding Kinks in the COVID-19 Contact Rate

In this paper, we estimate the time-varying COVID-19 contact rate of a Susceptible-Infected-Recovered (SIR) model. Our measurement of the contact rate is constructed using data on actively infected, recovered and deceased cases. We propose a new trend filtering method that is a variant of the Hodrick-Prescott (HP) filter, constrained by the number of possible kinks. We term it the sparse HP filter and apply it to daily data from five countries: Canada, China, South Korea, the UK and the US. Our new method yields kinks that are well aligned with actual events in each country. We find that the sparse HP filter provides fewer kinks than the ℓ1 trend filter, while both methods fit the data equally well. Theoretically, we establish risk consistency of both the sparse HP and ℓ1 trend filters. Ultimately, we propose to use time-varying contact growth rates to document and monitor outbreaks of COVID-19.
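
For context, the classical HP filter that the sparse variant builds on solves a penalized least-squares problem with a closed-form solution. A minimal numpy sketch (illustrative only; the paper's sparse HP filter additionally constrains the number of kinks, which is not implemented here):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Classical HP filter: argmin_x ||y - x||^2 + lam * ||D2 @ x||^2,
    where D2 is the second-difference operator."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, np.asarray(y, float))
```

A linear trend has zero second differences, so it passes through the filter unchanged; the sparse variant instead limits how many times the trend's slope may change.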

Sparse network asymptotics for logistic regression

Consider a bipartite network where N consumers choose to buy or not to buy M different products. This paper considers the properties of the logistic regression of the N×M array of i-buys-j purchase decisions, [Y_ij], 1 ≤ i ≤ N, 1 ≤ j ≤ M, onto known functions of consumer and product attributes under asymptotic sequences where (i) both N and M grow large and (ii) the average number of products purchased per consumer is finite in the limit. This latter assumption implies that the network of purchases is sparse: only a (very) small fraction of all possible purchases are actually made (concordant with many real-world settings). Under sparse network asymptotics, the first and last terms in an extended Hoeffding-type variance decomposition of the score of the logit composite log-likelihood are of equal order. In contrast, under dense network asymptotics, the last term is asymptotically negligible. Asymptotic normality of the logistic regression coefficients is shown using a martingale central limit theorem (CLT) for triangular arrays. Unlike in the dense case, the normality result derived here also holds under degeneracy of the network graphon. Relatedly, when there happens to be no dyadic dependence in the dataset at hand, it specializes to recently derived results on the behavior of logistic regression with rare events and iid data. Sparse network asymptotics may lead to better inference in practice since they suggest variance estimators which (i) incorporate additional sources of sampling variation and (ii) are valid under varying degrees of dyadic dependence.
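
To fix ideas, here is a simulation sketch of the setting (the attribute and coefficient values are hypothetical): a large negative intercept makes purchases rare, and the dyad-wise logit composite likelihood is fit by plain Newton-Raphson. The paper's contribution concerns the variance of the resulting coefficients under this sparse regime, not the point estimation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 150                          # consumers x products
x = rng.normal(size=(N, M))              # one dyad-level attribute
b0, b1 = -4.0, 0.5                       # large negative intercept => sparse network
y = rng.binomial(1, 1 / (1 + np.exp(-(b0 + b1 * x))))   # i-buys-j indicators

# Fit the dyad-wise logit composite likelihood by Newton-Raphson
X = np.column_stack([np.ones(N * M), x.ravel()])
yv = y.ravel().astype(float)
beta = np.zeros(2)
for _ in range(30):
    mu = 1 / (1 + np.exp(-X @ beta))
    score = X.T @ (yv - mu)                         # composite-likelihood score
    info = (X * (mu * (1 - mu))[:, None]).T @ X     # observed information
    beta += np.linalg.solve(info, score)
```

Only a few percent of the N×M dyads are ones here, mimicking the sparse regime the asymptotics are designed for.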

Sparse time-varying parameter VECMs with an application to modeling electricity prices

In this paper we propose a time-varying parameter (TVP) vector error correction model (VECM) with heteroscedastic disturbances. We combine a set of econometric techniques for dynamic model specification in an automatic fashion. We employ continuous global-local shrinkage priors for pushing the parameter space towards sparsity. In a second step, we post-process the cointegration relationships, the autoregressive coefficients and the covariance matrix via minimizing Lasso-type loss functions to obtain truly sparse estimates. This two-step approach alleviates overfitting concerns and reduces parameter estimation uncertainty, while providing estimates of the time-varying number of cointegrating relationships. Our proposed econometric framework is applied to modeling European electricity prices and shows gains in forecast performance over a set of established benchmark models.
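
The second, sparsification step can be illustrated in its simplest form: for an orthogonal design, the Lasso-type loss is minimized in closed form by soft-thresholding (a stylized sketch, not the paper's full post-processing of the VECM coefficient blocks):

```python
import numpy as np

def soft_threshold(b_dense, lam):
    """Elementwise minimizer of 0.5 * (b - b_dense)**2 + lam * |b|:
    shrink each coefficient toward zero, setting small ones exactly to zero."""
    return np.sign(b_dense) * np.maximum(np.abs(b_dense) - lam, 0.0)
```

Because the minimizer returns exact zeros, the post-processed estimates are truly sparse rather than merely shrunken, which is the point of the second step.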

Spatial Correlation Robust Inference

We propose a method for constructing confidence intervals that account for many forms of spatial correlation. The interval has the familiar 'estimator plus or minus a standard error times a critical value' form, but we propose new methods for constructing the standard error and the critical value. The standard error is constructed using population principal components from a given 'worst-case' spatial covariance model. The critical value is chosen to ensure coverage in a benchmark parametric model for the spatial correlations. The method is shown to control coverage in large samples whenever the spatial correlation is weak, i.e., with average pairwise correlations that vanish as the sample size gets large. We also provide results on correct coverage in a restricted but nonparametric class of strong spatial correlations, as well as on the efficiency of the method. In a design calibrated to match economic activity in U.S. states, the method outperforms previous suggestions for spatially robust inference about the population mean.
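
A stylized one-dimensional sketch of the standard-error construction (the distance metric, the exponential covariance model, and the normalization below are illustrative simplifications, not the paper's exact recipe): take the leading principal components of the demeaned worst-case covariance, scale each so its projection has the benchmark variance of the sample mean, and average the squared projections.

```python
import numpy as np

def worst_case_se(y, locations, rho=0.5, q=10):
    n = len(y)
    dist = np.abs(locations[:, None] - locations[None, :])
    Sigma = rho ** dist                        # benchmark 'worst-case' covariance
    M = np.eye(n) - np.ones((n, n)) / n        # demeaning projection
    _, vecs = np.linalg.eigh(M @ Sigma @ M)
    R = vecs[:, -q:]                           # leading q principal components
    var_mean = np.ones(n) @ Sigma @ np.ones(n) / n**2   # Var(ybar) under Sigma
    R = R * np.sqrt(var_mean / np.einsum('ji,jk,ki->i', R, Sigma, R))
    return np.sqrt(np.mean((R.T @ y) ** 2))    # average squared PC projection
```

By construction each scaled projection has variance equal to Var(ybar) under the benchmark model, so the averaged squared projections estimate that variance; the paper pairs such a standard error with a specially chosen critical value.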

Spatial Differencing for Sample Selection Models with Unobserved Heterogeneity

This paper derives identification, estimation, and inference results using spatial differencing in sample selection models with unobserved heterogeneity. We show that under the assumption of smooth changes across space of the unobserved sub-location specific heterogeneities and inverse Mills ratio, key parameters of a sample selection model are identified. The smoothness of the sub-location specific heterogeneities implies a correlation in the outcomes. We assume that the correlation is restricted within a location or cluster and derive asymptotic results showing that as the number of independent clusters increases, the estimators are consistent and asymptotically normal. We also propose a formula for standard error estimation. A Monte Carlo experiment illustrates the small sample properties of our estimator. The application of our procedure to estimate the determinants of the municipality tax rate in Finland shows the importance of accounting for unobserved heterogeneity.
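
The differencing idea can be demonstrated in isolation from the selection correction (the simulated data and coefficient values below are hypothetical): subtracting within-location means wipes out the location-specific heterogeneity, after which OLS on the differenced data recovers the slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loc, n_per = 50, 4
loc = np.repeat(np.arange(n_loc), n_per)
alpha = rng.normal(size=n_loc)[loc]        # sub-location specific heterogeneity
x = rng.normal(size=n_loc * n_per)
y = 2.0 * x + alpha + 0.1 * rng.normal(size=n_loc * n_per)

def within_demean(v):
    # Subtract within-location means: differencing eliminates alpha
    totals = np.zeros(n_loc)
    np.add.at(totals, loc, v)
    return v - (totals / n_per)[loc]

yd, xd = within_demean(y), within_demean(x)
beta_hat = (xd @ yd) / (xd @ xd)           # OLS on spatially differenced data
```

In the paper, smoothness across space additionally removes the inverse Mills ratio term, so the same differencing logic handles the selection bias.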

Specification Testing in Nonparametric Instrumental Quantile Regression

There are many environments in econometrics which require nonseparable modeling of a structural disturbance. In a nonseparable model with endogenous regressors, key conditions are validity of instrumental variables and monotonicity of the model in a scalar unobservable variable. Under these conditions the nonseparable model is equivalent to an instrumental quantile regression model. A failure of the key conditions, however, makes instrumental quantile regression potentially inconsistent. This paper develops a methodology for testing whether the instrumental quantile regression model is correctly specified. Our test statistic is asymptotically normally distributed under correct specification and consistent against any alternative model. In addition, test statistics to justify model simplification are established. Finite sample properties are examined in a Monte Carlo study and an empirical illustration is provided.

Specification tests for generalized propensity scores using double projections

This paper proposes a new class of nonparametric tests for the correct specification of generalized propensity score models. The test procedure is based on two different projection arguments, which lead to test statistics with several appealing properties. They accommodate high-dimensional covariates; are asymptotically invariant to the estimation method used to estimate the nuisance parameters and do not require estimators to be root-n asymptotically linear; are fully data-driven and do not require tuning parameters; and can be written in closed form, facilitating the implementation of an easy-to-use multiplier bootstrap procedure. We show that our proposed tests are able to detect a broad class of local alternatives converging to the null at the parametric rate. Monte Carlo simulation studies indicate that our double projected tests have much higher power than other tests available in the literature, highlighting their practical appeal.
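
As a point of comparison, a textbook (non-projected) specification check for a binary-treatment propensity model cumulates fitted residuals over indicator functions of the covariate; the paper's double-projection statistics refine this idea to gain invariance and power. Everything below is an illustrative baseline, not the proposed test:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
d = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment; logit spec is correct here

# Fit the logit propensity score by Newton-Raphson
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(30):
    mu = 1 / (1 + np.exp(-X @ beta))
    beta += np.linalg.solve((X * (mu * (1 - mu))[:, None]).T @ X, X.T @ (d - mu))

# Cramer-von Mises-type statistic over the indicators 1{x <= t}
resid = d - 1 / (1 + np.exp(-X @ beta))
T_stat = n * np.mean([np.mean(resid * (x <= t)) ** 2 for t in x])
```

Large values of such a cumulated-residual statistic indicate misspecification; the critical values are typically obtained by a multiplier bootstrap, as in the paper.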

Spectral Targeting Estimation of λ-GARCH models

This paper presents a novel estimator of orthogonal GARCH models, which combines (eigenvalue and eigenvector) targeting estimation with stepwise (univariate) estimation. We denote this the spectral targeting estimator. This two-step estimator is consistent under finite second order moments, while asymptotic normality holds under finite fourth order moments. The estimator is especially well suited for modelling larger portfolios: we compare the empirical performance of the spectral targeting estimator to that of the quasi maximum likelihood estimator for five portfolios of 25 assets. The spectral targeting estimator dominates in terms of computational complexity, being up to 57 times faster in estimation, while both estimators produce similar out-of-sample forecasts, indicating that the spectral targeting estimator is well suited for high-dimensional empirical applications.
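
The two steps can be sketched as follows (illustrative data; the univariate GARCH fits themselves are omitted): the targeting step estimates the eigenvalues and eigenvectors from the sample covariance, and the stepwise part then fits one univariate GARCH(1,1) per rotated, empirically uncorrelated component.

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 2000, 5
A = rng.normal(size=(k, k))
returns = rng.normal(size=(T, k)) @ A.T        # cross-correlated returns

# Step 1 (spectral targeting): eigenvalues/-vectors of the sample covariance
S = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(S)

# Step 2 (stepwise): rotate into uncorrelated components; each column of
# `pcs` would then receive its own univariate GARCH(1,1) fit.
pcs = returns @ eigvec
```

Estimating k univariate models instead of one k-dimensional multivariate likelihood is what drives the reported speed advantage.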

Spillovers of Program Benefits with Mismeasured Networks

In studies of program evaluation under network interference, correctly measuring spillovers of the intervention is crucial for making appropriate policy recommendations. However, increasing empirical evidence has shown that network links are often measured with errors. This paper explores the identification and estimation of treatment and spillover effects when the network is mismeasured. I propose a novel method to nonparametrically point-identify the treatment and spillover effects, when two network observations are available. The method can deal with a large network with missing or misreported links and possesses several attractive features: (i) it allows heterogeneous treatment and spillover effects; (ii) it does not rely on modelling network formation or its misclassification probabilities; and (iii) it accommodates samples that are correlated in overlapping ways. A semiparametric estimation approach is proposed, and the analysis is applied to study the spillover effects of an insurance information program on the insurance adoption decisions.

Split-then-Combine simplex combination and selection of forecasters

This paper considers the Split-Then-Combine (STC) approach (Arroyo and de Juan, 2014) to combine forecasts inside the simplex space, the sample space of positive weights adding up to one. As it turns out, the simplicial statistic given by the center of the simplex compares favorably against the fixed-weight average forecast. We also develop a Combine-After-Selection (CAS) method to discard redundant forecasters. We apply these two approaches to make out-of-sample one-step ahead combinations and subcombinations of forecasts for several economic variables. This methodology is particularly useful when the sample size is smaller than the number of forecasts, a case where other methods (e.g., Least Squares (LS) or Principal Component Analysis (PCA)) are not applicable.
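
Assuming the center is the standard Aitchison (compositional) center, i.e. the component-wise geometric mean of the weight vectors renormalized to sum to one, it can be computed in two lines (an illustrative sketch, not necessarily the paper's exact definition):

```python
import numpy as np

def simplex_center(weights):
    """weights: (n_samples, n_forecasters) array, each row a point in the
    simplex (positive entries summing to one). Returns the compositional
    center: the geometric mean per component, re-closed onto the simplex."""
    g = np.exp(np.log(weights).mean(axis=0))   # geometric mean per component
    return g / g.sum()                         # renormalize to sum to one
```

Unlike the arithmetic mean, this center respects the simplex geometry: it always produces a valid weight vector and treats relative (ratio) differences between weights symmetrically.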
