Featured Research

Methodology

Exact Multivariate Two-Sample Density-Based Empirical Likelihood Ratio Tests Applicable to Retrospective and Group Sequential Studies

Nonparametric tests for the equality of multivariate distributions are frequently needed in research. It is commonly required that test procedures based on relatively small samples of vectors accurately control the corresponding Type I Error (TIE) rates. In multivariate testing, extensions of null-distribution-free univariate methods, e.g., Kolmogorov-Smirnov and Cramér-von Mises type schemes, are often not exact, since their null distributions depend on the underlying data distributions. The present paper extends the density-based empirical likelihood technique to nonparametrically approximate the most powerful test for the multivariate two-sample (MTS) problem, yielding an exact finite-sample test statistic. We rigorously establish and apply a one-to-one mapping between the equality of the distributions of vectors and the equality of the distributions of relevant univariate linear projections. In this framework, we prove the validity of an algorithm that simplifies the use of projection pursuit, employing only a few of the infinitely many linear combinations of the observed vectors' components. The resulting distribution-free strategy can be employed in both retrospective and group sequential settings. The asymptotic consistency of the proposed technique is shown. Monte Carlo studies demonstrate that the proposed procedures exhibit extremely high and stable power characteristics across a variety of settings. Supplementary materials for this article are available online.
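
To make the projection idea concrete, here is a minimal sketch in which a multivariate two-sample comparison is reduced to univariate tests on a few random linear projections. The Kolmogorov-Smirnov statistic, the random directions, and the Bonferroni combination are all illustrative stand-ins, not the paper's density-based empirical likelihood procedure.

```python
import numpy as np
from scipy.stats import ks_2samp

def projected_two_sample_test(X, Y, n_projections=5, seed=0):
    """Compare two multivariate samples via a few univariate projections."""
    rng = np.random.default_rng(seed)
    p_values = []
    for _ in range(n_projections):
        a = rng.standard_normal(X.shape[1])
        a /= np.linalg.norm(a)          # random direction on the unit sphere
        # Univariate two-sample test on the projected data (KS as a stand-in).
        p_values.append(ks_2samp(X @ a, Y @ a).pvalue)
    # Bonferroni combination across the handful of projections.
    return min(1.0, n_projections * min(p_values))

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=100)
Y = rng.multivariate_normal([1.0, 0.0], np.eye(2), size=100)
print(projected_two_sample_test(X, Y))  # small p-value: distributions differ
```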

Methodology

Experimentation for Homogeneous Policy Change

When the Stable Unit Treatment Value Assumption (SUTVA) is violated and there is interference among units, the Average Treatment Effect (ATE) is no longer uniquely defined, and alternative estimands may be of interest, among them average unit-level differences in outcomes under different homogeneous treatment policies. We term this target the Homogeneous Assignment Average Treatment Effect (HAATE). We consider approaches to experimental design with multiple treatment conditions under partial interference and, for this estimand, show that difference-in-means estimators may outperform correctly specified regression models in finite samples on root mean squared error (RMSE). When errors are correlated at the cluster level, we demonstrate that two-stage randomization procedures with intra-cluster correlation of treatment strictly between zero and one may dominate one-stage randomization designs on the same metric. Simulations demonstrate the performance of this approach, and an application to online experiments at Facebook is discussed.
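
A minimal sketch of the two-stage design described above: clusters first draw a treatment saturation strictly between zero and one, then units are randomized within clusters at that rate. The saturation values and the naive difference-in-means contrast are illustrative assumptions, not the paper's exact design or HAATE estimator.

```python
import numpy as np

def two_stage_assignment(cluster_ids, saturations=(0.25, 0.75), seed=0):
    """Stage 1: clusters draw a saturation; stage 2: units randomized at that rate."""
    rng = np.random.default_rng(seed)
    sat = {c: rng.choice(saturations) for c in np.unique(cluster_ids)}
    p = np.array([sat[c] for c in cluster_ids])  # unit-level treatment probability
    return (rng.random(len(cluster_ids)) < p).astype(int)

cluster_ids = np.repeat(np.arange(20), 50)       # 20 clusters of 50 units each
treat = two_stage_assignment(cluster_ids)
outcome = treat + np.random.default_rng(1).normal(size=treat.size)
# Naive difference in means between treated and control units.
print(outcome[treat == 1].mean() - outcome[treat == 0].mean())
```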

Methodology

Explaining predictive models using Shapley values and non-parametric vine copulas

The original development of Shapley values for prediction explanation relied on the assumption that the features being described were independent. If the features are in fact dependent, this assumption may lead to incorrect explanations. Hence, there have recently been attempts at appropriately modelling/estimating the dependence between the features. Although the proposed methods clearly outperform the traditional approach of assuming independence, they have their weaknesses. In this paper we propose two new approaches for modelling the dependence between the features. Both approaches are based on vine copulas, flexible tools for modelling multivariate non-Gaussian distributions that can characterise a wide range of complex dependencies. The performance of the proposed methods is evaluated on simulated data sets and a real data set. The experiments demonstrate that the vine copula approaches give more accurate approximations to the true Shapley values than their competitors.
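
To illustrate the role of feature dependence, the sketch below computes Shapley values with out-of-coalition features drawn from a conditional distribution rather than independently. A Gaussian conditional sampler stands in for the paper's non-parametric vine copula model; the toy linear model and the exact coalition enumeration are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from math import factorial

def cond_sample(x, S, mu, cov, n, rng):
    """Draw n feature vectors with X_S fixed at x[S] and the rest drawn from
    the Gaussian conditional distribution (vine copula stand-in)."""
    S = list(S)
    Sbar = [j for j in range(len(x)) if j not in S]
    out = np.tile(x, (n, 1))
    if not Sbar:
        return out
    if not S:
        out[:, Sbar] = rng.multivariate_normal(mu[Sbar], cov[np.ix_(Sbar, Sbar)], n)
        return out
    A = cov[np.ix_(Sbar, S)] @ np.linalg.inv(cov[np.ix_(S, S)])
    m = mu[Sbar] + A @ (x[S] - mu[S])
    C = cov[np.ix_(Sbar, Sbar)] - A @ cov[np.ix_(S, Sbar)]
    out[:, Sbar] = rng.multivariate_normal(m, C, n)
    return out

def shapley_values(f, x, mu, cov, n_mc=2000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x)
    def v(S):  # coalition value: E[f(X) | X_S = x_S]
        return f(cond_sample(x, S, mu, cov, n_mc, rng)).mean()
    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for size in range(d):
            for S in combinations(others, size):
                w = factorial(size) * factorial(d - size - 1) / factorial(d)
                phi[j] += w * (v(S + (j,)) - v(S))
    return phi

f = lambda X: X @ np.array([1.0, 2.0, -1.0])  # toy linear model
cov = np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(shapley_values(f, np.ones(3), np.zeros(3), cov))
```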

Methodology

Exploring the space-time pattern of log-transformed infectious count of COVID-19: a clustering-segmented autoregressive sigmoid model

As of April 20, 2020, only a few new COVID-19 cases remained in China, whereas the rest of the world was showing increases in the number of new cases. It is therefore of great importance to develop an efficient statistical model of COVID-19 spread, which could help in the global fight against the virus. We propose a clustering-segmented autoregressive sigmoid (CSAS) model to explore the space-time pattern of the log-transformed infectious count. The CSAS model includes four key components: unknown clusters, change points, stretched S-curves, and autoregressive terms. These allow us to understand how the outbreak is spreading in time and in space, to assess how the spread is affected by epidemic control strategies, and to apply the model to updated data from an extended period of time. We propose a nonparametric graph-based clustering method for discovering dissimilarity among the curve time series in space, and we justify it theoretically under mild and easily verified conditions. We also propose a strict purity score that penalizes overestimation of the number of clusters. Simulations show that our nonparametric graph-based clustering method is faster and more accurate than its parametric counterpart regardless of the size of the data set. We provide a Bayesian information criterion (BIC) to identify multiple change points and calculate a confidence interval for the mean response. By applying the CSAS model to the collected data, we can explain the differences between prevention and control policies in China and in selected countries.
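
A minimal sketch of the S-curve component only: fitting a stretched (generalized) logistic curve to a log-transformed count series. The particular sigmoid form and the synthetic data are illustrative assumptions; the full CSAS model additionally involves clustering, change points, and autoregressive terms.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_sigmoid(t, K, r, t0, nu):
    """Generalized logistic: capacity K, growth rate r, midpoint t0, shape nu."""
    return K / (1.0 + np.exp(-r * (t - t0))) ** nu

# Synthetic log-transformed cumulative counts following one stretched S-curve.
t = np.arange(60, dtype=float)
rng = np.random.default_rng(0)
log_counts = stretched_sigmoid(t, 10.0, 0.2, 30.0, 1.5) + rng.normal(0.0, 0.1, t.size)

params, _ = curve_fit(stretched_sigmoid, t, log_counts, p0=(9.0, 0.1, 25.0, 1.0))
print(dict(zip(["K", "r", "t0", "nu"], params.round(3))))
```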

Methodology

Factor analysis in high dimensional biological data with dependent observations

Factor analysis is a critical component of high dimensional biological data analysis. However, modern biological data contain two key features that irrevocably corrupt existing methods. First, these data, which include longitudinal, multi-treatment and multi-tissue data, contain samples that violate the critical independence assumptions required by prevailing methods. Second, biological data contain factors with large, moderate and small signal strengths, and therefore violate the ubiquitous "pervasive factor" assumption essential to the performance of many methods. In this work, I develop a novel statistical framework to perform factor analysis and interpret its results in data with dependent observations and factors whose signal strengths span several orders of magnitude. I then prove that my methodology can be used to solve many important and previously unsolved problems that routinely arise when analyzing dependent biological data, including high dimensional covariance estimation, subspace recovery, latent factor interpretation and data denoising. Additionally, I show that my estimator for the number of factors overcomes both the notorious "eigenvalue shadowing" problem and the biases due to the pervasive factor assumption that plague existing estimators. Simulated and real data demonstrate the superior performance of my methodology in practice.
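
A toy illustration of why weak factors defeat naive eigenvalue-based estimators of the factor number: in the simulation below, a strong pervasive factor produces a clearly separated sample eigenvalue, while a weak sparse factor stays buried in the noise bulk. The dimensions and signal strengths are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 1000
L = np.zeros((p, 2))
L[:, 0] = rng.normal(size=p)              # strong factor loading on all features
L[:100, 1] = 0.1 * rng.normal(size=100)   # weak factor loading on a few features
X = rng.normal(size=(n, 2)) @ L.T + rng.normal(size=(n, p))

# Nonzero eigenvalues of the sample covariance, via the n x n Gram matrix.
eigvals = np.linalg.eigvalsh(X @ X.T / n)[::-1]
# One eigenvalue separates cleanly; the weak factor's eigenvalue does not
# leave the noise bulk, so a naive eigenvalue-gap rule reports one factor.
print(eigvals[:5].round(2))
```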

Methodology

Factor-augmented Smoothing Model for Functional Data

We propose modeling raw functional data as a mixture of a smooth function and a high-dimensional factor component. The conventional approach to retrieving the smooth function from the raw data is through various smoothing techniques. However, the smoothing model alone is not adequate to recover the smooth curve or capture the data variation in some situations. These include cases where there is a large amount of measurement error, the smoothing basis functions are incorrectly identified, or step jumps in the functional mean levels are neglected. To address these challenges, a factor-augmented smoothing model is proposed, and an iterative numerical estimation approach is implemented in practice. Including the factor model component in the proposed method solves the aforementioned problems, since a few common factors often drive the variation that cannot be captured by the smoothing model. Asymptotic theorems are also established to demonstrate the effects of including factor structures on the smoothing results. Specifically, we show that the smoothing coefficients projected onto the complement space of the factor loading matrix are asymptotically normal. As a byproduct of independent interest, an estimator for the population covariance matrix of the raw data is presented based on the proposed model. Extensive simulation studies illustrate that these factor adjustments are essential for improving estimation accuracy and avoiding the curse of dimensionality. The superiority of our model is also shown in modeling Canadian weather data and Australian temperature data.
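
A minimal sketch of the iterative estimation idea: alternate between smoothing the data with a basis regression and extracting a low-rank factor component from the residuals via an SVD. The polynomial basis, the fixed rank, and the fixed iteration count are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def factor_augmented_smooth(Y, B, n_factors=1, n_iter=25):
    """Y: (n_obs x n_curves) raw data; B: (n_obs x K) smoothing basis."""
    hat = B @ np.linalg.solve(B.T @ B, B.T)     # projection onto the basis
    F = np.zeros_like(Y)
    for _ in range(n_iter):
        smooth = hat @ (Y - F)                  # smooth part, given the factors
        U, s, Vt = np.linalg.svd(Y - smooth, full_matrices=False)
        F = (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors]  # low-rank part
    return smooth, F

t = np.linspace(0.0, 1.0, 100)
B = np.vander(t, 6)                             # crude polynomial basis
rng = np.random.default_rng(0)
truth = np.sin(2 * np.pi * t)[:, None]
jump = (t > 0.5).astype(float)[:, None] * rng.normal(size=(1, 30))  # step factor
Y = truth + jump + rng.normal(0.0, 0.3, size=(100, 30))             # 30 raw curves
smooth, F = factor_augmented_smooth(Y, B)
print(np.abs(smooth.mean(axis=1) - truth.ravel()).mean())  # error of smooth mean
```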

Methodology

Fairness in Risk Assessment Instruments: Post-Processing to Achieve Counterfactual Equalized Odds

Algorithmic fairness is a topic of increasing concern both within research communities and among the general public. Conventional fairness criteria place restrictions on the joint distribution of a sensitive feature A, an outcome Y, and a predictor S. For example, the criterion of equalized odds requires that S be conditionally independent of A given Y, or equivalently, when all three variables are binary, that the false positive and false negative rates of the predictor be the same for both levels of A. However, fairness criteria based on the observable outcome Y are misleading when applied to Risk Assessment Instruments (RAIs), such as predictors designed to estimate the risk of recidivism or child neglect. It has been argued instead that RAIs ought to be trained and evaluated with respect to the potential outcome Y₀. Here, Y₀ represents the outcome that would be observed under no intervention; for example, whether recidivism would occur if a defendant were to be released pretrial. In this paper, we develop a method to post-process an existing binary predictor to satisfy approximate counterfactual equalized odds, which requires S to be nearly conditionally independent of A given Y₀, within a tolerance specified by the user. Our predictor converges to an optimal fair predictor at √n rates under appropriate assumptions. We propose doubly robust estimators of the risk and fairness properties of a fixed post-processed predictor, and we show that they are √n-consistent and asymptotically normal under appropriate assumptions.
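
A minimal sketch of rate-equalizing post-processing: within each sensitive-group/outcome cell, a random fraction of erroneous predictions is flipped so that both groups attain the lower group's error rates. The observable outcome Y stands in for the potential outcome Y₀ here, so this illustrates ordinary rather than counterfactual equalized odds, and the flipping rule is an illustrative assumption rather than the paper's estimator.

```python
import numpy as np

def equalize_error_rates(S, A, Y, seed=0):
    """Flip a random share of errors so all groups match the lowest error rates."""
    rng = np.random.default_rng(seed)
    S = S.astype(int).copy()
    for y in (0, 1):
        rate = {a: (S[(A == a) & (Y == y)] != y).mean() for a in np.unique(A)}
        target = min(rate.values())             # the lower group's error rate
        for a, r in rate.items():
            if r > target:
                cell = (A == a) & (Y == y) & (S != y)
                pick = rng.random(cell.sum()) < 1.0 - target / r
                S[np.flatnonzero(cell)[pick]] = y  # correct a random fraction
    return S

rng = np.random.default_rng(1)
n = 20000
A = rng.integers(0, 2, n)                       # sensitive feature
Y = rng.integers(0, 2, n)                       # outcome (stand-in for Y0)
noisy = rng.random(n) < np.where(A == 1, 0.3, 0.1)
S = np.where(noisy, 1 - Y, Y)                   # predictor noisier for group 1
S_fair = equalize_error_rates(S, A, Y)
for a in (0, 1):
    fpr = S_fair[(A == a) & (Y == 0)].mean()
    fnr = 1 - S_fair[(A == a) & (Y == 1)].mean()
    print(f"group {a}: FPR={fpr:.3f}, FNR={fnr:.3f}")
```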

Methodology

Fast and frugal time series forecasting

Over the years, families of forecasting models, such as the exponential smoothing and Autoregressive Integrated Moving Average (ARIMA) families, have expanded to contain multiple possible forms and forecasting profiles. In this paper, we question the need to consider such large families of models. We argue that parsimoniously identifying suitable subsets of models will neither decrease forecasting accuracy nor reduce the ability to estimate forecast uncertainty. We propose a framework that balances forecasting performance against computational cost, resulting in a set of reduced families of models, and we empirically demonstrate this trade-off. We translate the computational benefits into monetary cost savings and discuss the implications of our results in the context of large retailers.
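
A minimal sketch of selection within a reduced family: only simple and trend (Holt) exponential smoothing are considered, with a coarse grid on the smoothing parameter, and the member with the smallest in-sample one-step error is chosen. The two-member family and the grids are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def one_step_errors_ses(y, alpha):
    """In-sample one-step-ahead errors for simple exponential smoothing."""
    level, errs = y[0], []
    for obs in y[1:]:
        errs.append(obs - level)
        level = alpha * obs + (1 - alpha) * level
    return np.array(errs)

def one_step_errors_holt(y, alpha, beta):
    """In-sample one-step-ahead errors for Holt's linear (trend) method."""
    level, trend, errs = y[0], y[1] - y[0], []
    for obs in y[1:]:
        errs.append(obs - (level + trend))
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return np.array(errs)

def select_from_reduced_family(y, grid=(0.2, 0.5, 0.8)):
    """Pick the family member with the smallest in-sample one-step MSE."""
    candidates = {("SES", a): one_step_errors_ses(y, a) for a in grid}
    candidates.update({("Holt", a): one_step_errors_holt(y, a, 0.1) for a in grid})
    return min(candidates, key=lambda k: np.mean(candidates[k] ** 2))

y = np.cumsum(np.random.default_rng(0).normal(0.1, 1.0, 200))  # trended series
print(select_from_reduced_family(y))            # a Holt variant should win
```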

Methodology

Fast calculation of Gaussian Process multiple-fold cross-validation residuals and their covariances

We generalize fast Gaussian process leave-one-out formulae to multiple-fold cross-validation, highlighting in broad settings the covariance structure of cross-validation residuals. The approach, which relies on block matrix inversion via Schur complements, applies to both Simple and Universal Kriging frameworks. We illustrate how the resulting covariances affect model diagnostics and how to properly transform residuals in the first place. Beyond that, we examine how accounting for the dependency between such residuals affects cross-validation-based estimation of the scale parameter. In two distinct cases, namely scale estimation and broader covariance parameter estimation via pseudo-likelihood, we find that correcting for covariances between cross-validation residuals leads back to maximum likelihood estimation or to an original variation thereof. The proposed fast calculation of Gaussian process multiple-fold cross-validation residuals is implemented and benchmarked against a naive implementation, both in the R language. Numerical experiments highlight the accuracy of our approach as well as the substantial speed-ups that it enables. However, as supported by a discussion of the main drivers of computational costs and by a dedicated numerical benchmark, these speed-ups steeply decline as the number of folds (assuming folds of equal size) decreases. Overall, our results enable fast multiple-fold cross-validation, have direct consequences for GP model diagnostics, and pave the way to future work on hyperparameter fitting as well as on the promising field of goal-oriented fold design.
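
A minimal sketch of the block-inverse shortcut for a zero-mean (simple kriging) GP: all fold residuals and their covariances are read off a single inverse covariance matrix via the identity y_I − ŷ_I = [(K⁻¹)_II]⁻¹ (K⁻¹y)_I, and checked against a naive per-fold refit. The squared-exponential kernel and the fold layout are illustrative assumptions; the paper also treats universal kriging.

```python
import numpy as np

def fast_fold_residuals(K, y, folds):
    """All fold residuals and residual covariances from one matrix inverse."""
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y
    out = []
    for I in folds:
        block = np.linalg.inv(Kinv[np.ix_(I, I)])  # = Cov of the fold residuals
        out.append((block @ alpha[I], block))
    return out

def naive_fold_residuals(K, y, folds):
    """Reference: re-solve the simple kriging system for every fold."""
    out = []
    for I in folds:
        J = [j for j in range(len(y)) if j not in I]
        W = K[np.ix_(I, J)] @ np.linalg.inv(K[np.ix_(J, J)])
        out.append(y[I] - W @ y[J])
    return out

x = np.linspace(0.0, 1.0, 60)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2) + 1e-8 * np.eye(60)
y = np.random.default_rng(0).multivariate_normal(np.zeros(60), K)
folds = [list(range(i, i + 10)) for i in range(0, 60, 10)]
fast = fast_fold_residuals(K, y, folds)
naive = naive_fold_residuals(K, y, folds)
print(max(np.abs(f[0] - e).max() for f, e in zip(fast, naive)))  # numerical zero
```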

Methodology

Fast marginal likelihood estimation of penalties for group-adaptive elastic net

Nowadays, clinical research routinely uses omics data, such as gene expression, for predicting clinical outcomes or selecting markers. Additionally, so-called co-data are often available, providing complementary information on the covariates, like p-values from previously published studies or groups of genes corresponding to pathways. Elastic net penalisation is widely used for prediction and covariate selection. Group-adaptive elastic net penalisation learns from co-data to improve prediction and covariate selection by penalising important groups of covariates less than other groups. Existing methods are, however, computationally expensive. Here we present a fast method for marginal likelihood estimation of group-adaptive elastic net penalties for generalised linear models. We first derive a low-dimensional representation of the Taylor approximation of the marginal likelihood and its first derivative for group-adaptive ridge penalties, allowing these penalties to be estimated efficiently. Then, using the asymptotic normality of the linear predictors, we show that the marginal likelihood for elastic net models may be approximated well by the marginal likelihood for ridge models. The ridge group penalties are then transformed to elastic net group penalties via the variance function. The method allows for overlapping groups and unpenalised variables. We demonstrate the method in a model-based simulation study and an application to cancer genomics. The method substantially decreases computation time and outperforms or matches other methods by learning from co-data.
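
A minimal sketch of the group-adaptive ridge idea at the heart of the method: each covariate inherits a penalty from its co-data group, so informative groups are shrunk less. The closed-form linear-model solution and the hand-picked penalty values are illustrative assumptions; the paper estimates the group penalties by approximate marginal likelihood for generalised linear models.

```python
import numpy as np

def group_ridge(X, y, groups, penalties):
    """Solve beta = argmin ||y - X beta||^2 + sum_g lambda_g ||beta_g||^2."""
    lam = np.array([penalties[g] for g in groups])  # per-covariate penalty
    return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

rng = np.random.default_rng(0)
n, p = 100, 40
X = rng.normal(size=(n, p))
groups = np.repeat([0, 1], p // 2)                  # co-data: two covariate groups
beta = np.where(groups == 0, 1.0, 0.0)              # only group 0 is informative
y = X @ beta + rng.normal(size=n)
fit = group_ridge(X, y, groups, {0: 1.0, 1: 50.0})  # penalise group 1 more
print(fit[:3].round(2), fit[-3:].round(2))          # group 1 shrunk toward zero
```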
