Featured Research

Econometrics

Estimation of Tempered Stable Lévy Models of Infinite Variation

In this paper we propose a new method for the estimation of a semiparametric tempered stable Lévy model. The estimation procedure iteratively combines an approximate semiparametric method of moments estimator, the Truncated Realized Quadratic Variation (TRQV), with a newly derived small-time high-order approximation for the optimal threshold of the TRQV of tempered stable processes. The method is tested via simulations to estimate the volatility and the Blumenthal-Getoor index of the generalized CGMY model, as well as the integrated volatility of a Heston-type model with CGMY jumps. The method outperforms other efficient alternatives proposed in the literature.
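
A minimal sketch of the estimator's outer loop may help fix ideas: the TRQV sums squared increments whose magnitude stays below a threshold, and the threshold and volatility estimate are iterated to a fixed point. The first-order threshold formula used below is a standard small-time approximation; the paper's contribution is a higher-order refinement, which is not reproduced here.

```python
import numpy as np

def trqv(increments, eps):
    """Truncated realized quadratic variation: sum of squared increments
    whose magnitude does not exceed the threshold eps."""
    kept = increments[np.abs(increments) <= eps]
    return np.sum(kept**2)

def iterate_volatility(increments, h, n_iter=20, sigma0=1.0):
    """Fixed-point iteration between the volatility estimate and the
    threshold.  The first-order threshold sqrt(2 * sigma^2 * h * log(1/h))
    stands in for the paper's higher-order expansion."""
    sigma = sigma0
    T = len(increments) * h  # time horizon covered by the increments
    for _ in range(n_iter):
        eps = np.sqrt(2.0 * sigma**2 * h * np.log(1.0 / h))
        sigma = np.sqrt(trqv(increments, eps) / T)
    return sigma

# Toy check on a pure Brownian path (no jumps): estimate should be near 0.2
rng = np.random.default_rng(0)
h = 1.0 / 252
dX = 0.2 * np.sqrt(h) * rng.standard_normal(252 * 5)
print(iterate_volatility(dX, h))
```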

Econometrics

Evaluating (weighted) dynamic treatment effects by double machine learning

We consider evaluating the causal effects of dynamic treatments, i.e. of multiple treatment sequences in various periods, based on double machine learning to control for observed, time-varying covariates in a data-driven way under a selection-on-observables assumption. To this end, we make use of so-called Neyman-orthogonal score functions, which imply that treatment effect estimation is robust to moderate (local) misspecifications of the dynamic outcome and treatment models. This robustness property permits approximating the outcome and treatment models by double machine learning even under high-dimensional covariates, and is combined with data splitting to prevent overfitting. In addition to effect estimation for the total population, we consider weighted estimation that permits assessing dynamic treatment effects in specific subgroups, e.g. among those treated in the first treatment period. We demonstrate that the estimators are asymptotically normal and √n-consistent under specific regularity conditions, and investigate their finite-sample properties in a simulation study. Finally, we apply the methods to the Job Corps study in order to assess different sequences of training programs under a large set of covariates.
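
For intuition, here is a one-period simplification using the static AIPW (doubly robust) score, which is Neyman-orthogonal, together with cross-fitting; the paper's dynamic version nests treatment and outcome models across periods. Random forests stand in for arbitrary machine learners.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dml_ate(y, d, x, n_folds=2, seed=0):
    """Cross-fitted ATE via the Neyman-orthogonal (AIPW) score:
    nuisances are fit on one fold and the score evaluated on the other,
    which is the data-splitting step that prevents overfitting."""
    scores = np.zeros_like(y, dtype=float)
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(x):
        ps = RandomForestClassifier(random_state=seed).fit(x[train], d[train])
        p = np.clip(ps.predict_proba(x[test])[:, 1], 0.01, 0.99)
        m1 = RandomForestRegressor(random_state=seed).fit(
            x[train][d[train] == 1], y[train][d[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(
            x[train][d[train] == 0], y[train][d[train] == 0])
        mu1, mu0 = m1.predict(x[test]), m0.predict(x[test])
        # orthogonal score: regression adjustment plus inverse-propensity
        # weighted residual corrections
        scores[test] = (mu1 - mu0
                        + d[test] * (y[test] - mu1) / p
                        - (1 - d[test]) * (y[test] - mu0) / (1 - p))
    return scores.mean(), scores.std() / np.sqrt(len(y))
```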

Econometrics

Evaluating Policies Early in a Pandemic: Bounding Policy Effects with Nonrandomly Missing Data

During the early stages of the Covid-19 pandemic, national and local governments introduced a large number of policies, particularly non-pharmaceutical interventions, to combat the spread of Covid-19. Understanding the effects of these policies (both on Covid-19 cases and on other outcomes) is particularly challenging, however, because (i) Covid-19 testing was not widely available, (ii) the availability of tests varied across locations, and (iii) the tests that were available were generally targeted towards individuals meeting certain eligibility criteria. In this paper, we propose a new approach to evaluating the effect of policies early in the pandemic that accommodates limited and nonrandom testing. Our approach yields (generally informative) bounds on the effect of the policy on actual cases and point identification of the effect of the policy on other outcomes. We apply our approach to study the effect of Tennessee's open-testing policy during the early stage of the pandemic. For this policy, we find suggestive evidence that it decreased the number of Covid-19 cases in the state relative to what it would have been had the policy not been implemented.
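
The flavor of the identification problem can be seen in a deliberately crude worst-case bound (the paper's bounds are tighter because they exploit the testing eligibility criteria): confirmed positives are a lower bound on true cases, and every untested person could in principle be a case.

```python
def case_bounds(positives, tested, population):
    """Crude worst-case bounds on true cases under limited testing:
    each untested person contributes between zero and one case."""
    return positives, positives + (population - tested)

# Hypothetical numbers, purely for illustration
lo, hi = case_bounds(positives=1_200, tested=10_000, population=100_000)
print(lo, hi)  # 1200 91200
```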

Econometrics

Evaluating the Effectiveness of Regional Lockdown Policies in the Containment of Covid-19: Evidence from Pakistan

To slow the spread of Covid-19, administrative regions within Pakistan imposed complete and partial lockdown restrictions on socio-economic activities, religious congregations, and human movement. Here we examine the impact of regional lockdown strategies on Covid-19 outcomes. After conducting econometric analyses (regression discontinuity and negative binomial regressions) on official data from the National Institute of Health (NIH) Pakistan, we find that the strategies did not lead to similar Covid-19 outcomes (positive cases and deaths) across regions. In terms of reducing the overall caseload (positive cases and deaths), compared to no lockdown, complete and partial lockdowns appeared to be effective in four regions: Balochistan, Gilgit-Baltistan (GB), Islamabad Capital Territory (ICT), and Azad Jammu and Kashmir (AJK). In contrast, complete and partial lockdowns did not appear to be effective in containing the virus in the three largest provinces of Punjab, Sindh, and Khyber Pakhtunkhwa (KPK). The observed regional heterogeneity in the effectiveness of lockdowns argues for careful use of lockdown strategies based on demographic, social, and economic factors.
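
As a rough illustration of the second design the abstract names, a negative binomial regression of the caseload on the lockdown regime might look like the following; the variable names and the simulated data are placeholders, not the NIH data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: daily case counts by region under different
# lockdown regimes (none / partial / complete).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cases": rng.poisson(20, 300),
    "lockdown": rng.choice(["none", "partial", "complete"], 300),
    "days_since_first_case": np.tile(np.arange(100), 3),
})

# Negative binomial regression of cases on the lockdown regime,
# with "none" as the reference category.
model = smf.negativebinomial(
    "cases ~ C(lockdown, Treatment('none')) + days_since_first_case",
    data=df).fit()
print(model.summary())
```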

Econometrics

Exact Computation of Maximum Rank Correlation Estimator

In this paper we provide a computational algorithm that obtains a global solution for the maximum rank correlation estimator using the mixed integer programming (MIP) approach. We construct a new constrained optimization problem by transforming all indicator functions into binary parameters to be estimated, and show that it is equivalent to the original problem. We also consider an application to best-subset rank prediction and show that the original optimization problem can be reformulated as an MIP. We derive a non-asymptotic bound for the tail probability of the predictive performance measure. We investigate the performance of the MIP algorithm through an empirical example and Monte Carlo simulations.
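
A sketch of the reformulation, assuming the usual scale normalization (first coefficient fixed to 1) and a box on the remaining coefficients: each comparison indicator becomes a binary variable tied to the coefficient vector through a big-M constraint. PuLP and its bundled CBC solver stand in for whichever MIP solver the paper uses, and the pairwise enumeration makes this practical only for small samples.

```python
import itertools
import numpy as np
import pulp

def mrc_mip(y, x, bound=10.0, big_m=1e3, eps=1e-6):
    """Maximum rank correlation via mixed integer programming: maximize
    the number of concordant pairs, with each indicator 1{x_i'b > x_j'b}
    replaced by a binary d_ij linked to b by a big-M constraint."""
    n, k = x.shape
    prob = pulp.LpProblem("MRC", pulp.LpMaximize)
    b = [pulp.LpVariable(f"b{m}", -bound, bound) for m in range(k)]
    prob += b[0] == 1  # scale normalization, as is standard for MRC
    pairs = [(i, j) for i, j in itertools.permutations(range(n), 2)
             if y[i] > y[j]]
    d = {(i, j): pulp.LpVariable(f"d_{i}_{j}", cat="Binary")
         for i, j in pairs}
    for (i, j), dij in d.items():
        diff = pulp.lpSum((x[i, m] - x[j, m]) * b[m] for m in range(k))
        # setting d_ij = 1 is only feasible when x_i'b exceeds x_j'b
        prob += diff >= eps - big_m * (1 - dij)
    prob += pulp.lpSum(d.values())  # objective: count of concordant pairs
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([v.value() for v in b])
```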

Econometrics

Exact Trend Control in Estimating Treatment Effects Using Panel Data with Heterogeneous Trends

For a panel model considered by Abadie et al. (2010), the counterfactual outcomes constructed by Abadie et al., Hsiao et al. (2012), and Doudchenko and Imbens (2017) may all be confounded by uncontrolled heterogeneous trends. Based on exact matching on the trend predictors, I propose new methods of estimating the model-specific treatment effects that are free from heterogeneous trends. When applied to Abadie et al.'s (2010) model and data, the new estimators suggest considerably smaller effects of California's tobacco control program.
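
One hypothetical reading of the exact-matching step (the function and the post-period differencing below are illustrative, not the paper's estimator): restrict the donor pool to control units whose discrete trend predictors coincide with the treated unit's, so any trend heterogeneity driven by those predictors is identical across the comparison.

```python
import numpy as np

def exact_match_effect(y_treated, y_controls, z_treated, z_controls):
    """Illustrative sketch: keep only control units whose (discrete)
    trend predictors z exactly match the treated unit's, then estimate
    the effect as a post-period difference from the matched mean."""
    matched = [g for g, z in zip(y_controls, z_controls)
               if np.array_equal(z, z_treated)]
    if not matched:
        raise ValueError("no exact matches on the trend predictors")
    return y_treated - np.mean(matched, axis=0)
```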

Econometrics

Experimental Design under Network Interference

This paper discusses the design of a two-wave experiment under network interference. We consider (i) a possibly fully connected network, (ii) spillover effects occurring across neighbors, and (iii) local dependence of unobservable characteristics. We allow for a class of estimands of interest which includes the average effect of treating the entire network, average spillover effects, average direct effects, and interactions of the latter two. We propose a design mechanism in which the experimenter optimizes over participants and treatment assignments to minimize the variance of the estimators of interest, using the first-wave experiment to estimate the variance. We characterize conditions on the first- and second-wave experiments that guarantee unconfounded experimentation, we showcase tradeoffs in the choice of the pilot's size, and we formally characterize the pilot's size relative to the main experiment. We derive asymptotic properties of the estimators of interest under the proposed design mechanism, as well as regret guarantees for the proposed method. Finally, we illustrate the advantage of the method over state-of-the-art methodologies on simulated and real-world networks.
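
The optimization step could be caricatured as follows, with `var_fn` standing in for a plug-in variance of the chosen estimand computed from pilot estimates; the paper optimizes over both participants and assignments, and does so formally rather than by random search.

```python
import numpy as np

def design_assignment(adj, var_fn, n_draws=1000, seed=0):
    """Sketch of the second-wave design step: search over candidate
    treatment assignments on a network (adjacency matrix adj) and keep
    the one minimizing a plug-in variance criterion estimated from the
    pilot (var_fn, a user-supplied placeholder here)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    best, best_var = None, np.inf
    for _ in range(n_draws):
        t = rng.integers(0, 2, n)  # candidate 0/1 assignment
        v = var_fn(t, adj)
        if v < best_var:
            best, best_var = t, v
    return best, best_var
```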

Econometrics

Extreme dependence for multivariate data

This article proposes a generalized notion of extreme multivariate dependence between two random vectors which relies on the extremality of the cross-covariance matrix between these two vectors. Using a partial ordering on cross-covariance matrices, we also generalize the notion of positive upper dependence. We then propose a means to quantify the strength of the dependence between two given multivariate series and to increase this strength while preserving the marginal distributions. This allows for the design of stress tests of the dependence between two sets of financial variables, which can be useful in portfolio management or derivatives pricing.
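
The object the ordering acts on is straightforward to compute; the leading singular value of the cross-covariance matrix is one natural scalar summary of dependence strength (an assumed summary for illustration, not necessarily the paper's exact measure).

```python
import numpy as np

def cross_dependence(x, y):
    """Cross-covariance matrix between two multivariate samples
    (rows = observations) and its largest singular value as a scalar
    summary of the strength of dependence between the two vectors."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    sigma_xy = xc.T @ yc / (len(x) - 1)
    return sigma_xy, np.linalg.svd(sigma_xy, compute_uv=False)[0]
```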

Econometrics

Fair Policy Targeting

One of the major concerns with targeting interventions on individuals in social welfare programs is discrimination: individualized treatments may induce disparities across sensitive attributes such as age, gender, or race. This paper addresses the design of fair and efficient treatment allocation rules. We adopt the non-maleficence perspective of "first do no harm": we propose to select the fairest allocation within the Pareto frontier. We provide envy-freeness justifications for novel counterfactual notions of fairness. We discuss easy-to-implement estimators of the policy function by casting the optimization as a mixed-integer linear program. We derive regret bounds on the unfairness of the estimated policy function and small-sample guarantees on the Pareto frontier. Finally, we illustrate our method using an application from education economics.
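
The selection rule is easy to sketch once candidate allocations come with estimated welfare and unfairness (here assumed given; the paper obtains the optimum via a mixed-integer linear program rather than enumeration): drop Pareto-dominated candidates, then pick the fairest survivor.

```python
import numpy as np

def fairest_pareto_policy(welfare, unfairness):
    """Among candidate allocation rules, keep those not Pareto-dominated
    in (higher welfare, lower unfairness), then return the index of the
    fairest surviving rule."""
    w, u = np.asarray(welfare), np.asarray(unfairness)
    frontier = [i for i in range(len(w))
                if not any(w[j] >= w[i] and u[j] <= u[i]
                           and (w[j] > w[i] or u[j] < u[i])
                           for j in range(len(w)))]
    return min(frontier, key=lambda i: u[i])

# Candidate 2 is dominated by candidate 1; the fairest frontier rule is 1.
print(fairest_pareto_policy([1.0, 0.9, 0.8], [0.5, 0.2, 0.3]))  # -> 1
```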

Econometrics

Fast Algorithms for the Quantile Regression Process

The widespread use of quantile regression methods depends crucially on the existence of fast algorithms. Despite numerous algorithmic improvements, computation time remains non-negligible because researchers often estimate many quantile regressions and use the bootstrap for inference. We suggest two new fast algorithms for estimating a sequence of quantile regressions at many quantile indexes. The first algorithm applies the preprocessing idea of Portnoy and Koenker (1997) but exploits a previously estimated quantile regression to guess the signs of the residuals; this step allows for a reduction of the effective sample size. The second algorithm starts from a previously estimated quantile regression at a similar quantile index and updates it using a single Newton-Raphson iteration. The first algorithm is exact, while the second is only asymptotically equivalent to the traditional quantile regression estimator. We also apply the preprocessing idea to the bootstrap by using the sample estimates to guess the signs of the residuals in the bootstrap sample. Simulations show that our new algorithms provide very large improvements in computation time without significant (if any) cost in the quality of the estimates. For instance, we reduce by a factor of 100 the time required to estimate 99 quantile regressions with 20 regressors and 50,000 observations.
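
A simplified sketch of the preprocessing idea (the band choice below is ad hoc, and the real algorithm verifies the sign guesses and repairs any violations, which is omitted here): residuals from a previous fit flag which observations could plausibly switch sides of the new fit, and the rest collapse into two aggregate pseudo-observations, shrinking the problem handed to the solver.

```python
import numpy as np
import statsmodels.api as sm

def qr_preprocessed(y, X, tau, beta_prev, frac_keep=0.2):
    """Guess residual signs from a previously estimated quantile
    regression (beta_prev), keep only observations near that fit, and
    collapse the confidently negative / positive residuals into two
    'glob' pseudo-observations: x summed within each group, y pushed
    far below / above so the guessed sign is preserved.  The check-
    function gradient of each glob equals the sum of its members'."""
    r = y - X @ beta_prev
    cut = np.quantile(np.abs(r), frac_keep)  # band of 'uncertain' obs
    keep = np.abs(r) <= cut
    lo, hi = ~keep & (r < 0), ~keep & (r > 0)
    X_small = np.vstack([X[keep], X[lo].sum(axis=0), X[hi].sum(axis=0)])
    y_small = np.concatenate([y[keep], [-1e12, 1e12]])
    return sm.QuantReg(y_small, X_small).fit(q=tau).params
```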

