Featured Research

Methodology

Finding Stable Groups of Cross-Correlated Features in Multi-View Data

Multi-view data, in which data of different types are obtained from a common set of samples, are now common in many scientific problems. An important problem in the analysis of multi-view data is identifying interactions between groups of features from different data types. A bimodule is a pair (A,B) of feature sets from two different data types such that the aggregate cross-correlation between the features in A and those in B is large. A bimodule (A,B) is stable if A coincides with the set of features having significant aggregate correlation with the features in B, and vice versa. At the population level, stable bimodules correspond to connected components of the cross-correlation network, which is the bipartite graph whose edges are pairs of features with non-zero cross-correlations. We develop an iterative, testing-based procedure, called BSP, to identify stable bimodules in two moderate- to high-dimensional data sets. BSP relies on permutation-based p-values for sums of squared cross-correlations. We efficiently approximate the p-values using tail probabilities of gamma distributions that are fit using analytical estimates of the permutation moments of the test statistic. Our moment estimates depend on the eigenvalues of the intra-correlation matrices of A and B; as a result, the significance of observed cross-correlations accounts for the correlations within each data type. We carry out a thorough simulation study to assess the performance of BSP and present an extended application of BSP to the problem of expression quantitative trait loci (eQTL) analysis using recent data from the GTEx project. In addition, we apply BSP to climatology data in order to identify regions in North America where annual temperature variation affects precipitation.
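
The core computation can be sketched in a few lines. The toy Python version below estimates the permutation moments of the test statistic empirically from a modest number of permutations and matches a gamma distribution to them by its first two moments; BSP instead derives the moments analytically from the eigenvalues of the intra-correlation matrices, so this is an illustrative stand-in (with function names of our choosing), not the BSP implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cross_corr_stat(X, Y):
    # test statistic: sum of squared cross-correlations between
    # the columns (features) of X and the columns of Y
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    Ys = (Y - Y.mean(axis=0)) / Y.std(axis=0)
    R = Xs.T @ Ys / len(X)          # cross-correlation matrix
    return np.sum(R ** 2)

def gamma_pvalue(X, Y, n_perm=200):
    # approximate the permutation p-value by a moment-matched gamma tail;
    # BSP computes these moments analytically rather than by permuting
    t_obs = cross_corr_stat(X, Y)
    null = np.array([cross_corr_stat(X, Y[rng.permutation(len(Y))])
                     for _ in range(n_perm)])
    m, v = null.mean(), null.var()
    shape, scale = m ** 2 / v, v / m   # gamma moment matching
    return stats.gamma.sf(t_obs, a=shape, scale=scale)
```

The analytical moment estimates are what make BSP efficient: they avoid recomputing the statistic over many permutations while still letting the null calibration account for correlation within each data type.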

Finite mixture modeling of censored and missing data using the multivariate skew-normal distribution

Finite mixture models have been widely used to model and analyze data from heterogeneous populations. Moreover, data of this kind can be missing or subject to upper and/or lower detection limits because of the restrictions of experimental apparatuses. A further complication arises when measures in each population depart significantly from normality, for instance, through asymmetric behavior. For such data structures, we propose a robust model for censored and/or missing data based on finite mixtures of multivariate skew-normal distributions. This approach allows us to model data with great flexibility, accommodating multimodality and skewness simultaneously, depending on the structure of the mixture components. We develop an analytically simple, yet efficient, EM-type algorithm for conducting maximum likelihood estimation of the parameters. The algorithm has closed-form expressions at the E-step that rely on formulas for the mean and variance of truncated multivariate skew-normal distributions. Furthermore, a general information-based method for approximating the asymptotic covariance matrix of the estimators is also presented. Results obtained from the analysis of both simulated and real datasets are reported to demonstrate the effectiveness of the proposed method. The proposed algorithm and method are implemented in the new R package CensMFM.
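
The overall EM-type structure can be illustrated on a deliberately simplified problem: a two-component univariate Gaussian mixture with fully observed data. In the paper's method, the closed-form E-step below is replaced by moments of truncated multivariate skew-normal distributions so that censoring, missingness, and skewness are handled; this sketch and its names are ours, not the CensMFM implementation.

```python
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(x, n_iter=100):
    # EM for a two-component univariate Gaussian mixture: the E-step computes
    # posterior component responsibilities, the M-step does weighted ML updates
    pi = 0.5
    mu = np.array([x.min(), x.max()])        # spread-out initial means
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d1 = pi * norm.pdf(x, mu[0], sd[0])
        d2 = (1 - pi) * norm.pdf(x, mu[1], sd[1])
        w = d1 / (d1 + d2)
        # M-step: closed-form weighted updates
        pi = w.mean()
        mu = np.array([np.average(x, weights=w), np.average(x, weights=1 - w)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=w),
                      np.average((x - mu[1]) ** 2, weights=1 - w)])
    return pi, mu, sd
```

The appeal of the E-step being closed-form, in both this toy and the paper's algorithm, is that no numerical integration is needed inside the iteration.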

Firth's logistic regression with rare events: accurate effect estimates and predictions?

Firth-type logistic regression has become a standard approach for the analysis of binary outcomes with small samples. While it reduces the bias in maximum likelihood estimates of the coefficients, it introduces a bias towards 1/2 in the predicted probabilities. The stronger the imbalance of the outcome, the more severe is the bias in the predicted probabilities. We propose two simple modifications of Firth-type logistic regression resulting in unbiased predicted probabilities. The first corrects the predicted probabilities by a post-hoc adjustment of the intercept. The other is based on an alternative formulation of Firth-type estimation as an iterative data augmentation procedure. Our suggested modification consists of introducing an indicator variable that distinguishes between original and pseudo-observations in the augmented data. In a comprehensive simulation study, these approaches are compared to other attempts to improve predictions based on Firth-type penalization and to other published penalization strategies intended for routine use. For instance, we consider a recently suggested compromise between maximum likelihood and Firth-type logistic regression. Simulation results are scrutinized with regard to both prediction and regression coefficients. Finally, the methods considered are illustrated and compared for a study on arterial closure devices in minimally invasive cardiac surgery.
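
The first modification is easy to state concretely: after a Firth-type fit, shift the intercept so that the average predicted probability equals the observed event rate. A minimal sketch, assuming linear predictors eta from some penalized fit are already in hand (the function name and bracketing interval are ours; the second modification operates inside the data-augmentation fit itself and is not sketched here):

```python
import numpy as np
from scipy.optimize import brentq

def correct_intercept(eta, y):
    # find the intercept shift delta such that the mean predicted
    # probability matches the observed event rate mean(y)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    f = lambda delta: sigmoid(eta + delta).mean() - y.mean()
    delta = brentq(f, -30.0, 30.0)   # f is monotone increasing in delta
    return sigmoid(eta + delta)
```

Because the mean predicted probability is monotone in the intercept, the shift is unique and a one-dimensional root-finder suffices.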

Fisher Scoring for crossed factor Linear Mixed Models

The analysis of longitudinal, heterogeneous or unbalanced clustered data is of primary importance to a wide range of applications. The Linear Mixed Model (LMM) is a popular and flexible extension of the linear model specifically designed for such purposes. Historically, a large proportion of material published on the LMM concerns the application of popular numerical optimization algorithms, such as Newton-Raphson, Fisher Scoring and Expectation Maximization, to single-factor LMMs (i.e. LMMs that only contain one "factor" by which observations are grouped). However, in recent years, the focus of the LMM literature has moved towards the development of estimation and inference methods for more complex, multi-factored designs. In this paper, we present and derive new expressions for the extension of an algorithm classically used for single-factor LMM parameter estimation, Fisher Scoring, to multiple, crossed-factor designs. Through simulation and real data examples, we compare five variants of the Fisher Scoring algorithm with one another, as well as against a baseline established by the R package lmer, and find evidence of correctness and strong computational efficiency for four of the five proposed approaches. Additionally, we provide a new method for LMM Satterthwaite degrees of freedom estimation based on analytical results, which does not require iterative gradient estimation. Via simulation, we find that this approach produces estimates with both lower bias and lower variance than the existing methods.
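
Fisher Scoring itself is the generic update beta ← beta + I(beta)^(-1) U(beta), with U the score vector and I the expected Fisher information. As an illustration of that update (not of the paper's crossed-factor LMM derivations, where U and I are taken with respect to the variance parameters), here it is for logistic regression, where it coincides with iteratively reweighted least squares; the example is ours:

```python
import numpy as np

def fisher_scoring_logistic(X, y, n_iter=25):
    # generic Fisher scoring: beta <- beta + solve(I, U) at each iteration
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
        U = X.T @ (y - p)                            # score vector
        I = X.T @ (X * (p * (1 - p))[:, None])       # expected information
        beta = beta + np.linalg.solve(I, U)
    return beta
```

The paper's contribution is the analogous derivation of U and I for multiple crossed random factors, where the random-effects design structure makes these quantities far less straightforward.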

Fisher transformation based Confidence Intervals of Correlations in Fixed- and Random-Effects Meta-Analysis

Meta-analyses of correlation coefficients are an important technique to integrate results from many cross-sectional and longitudinal research designs. Uncertainty in pooled estimates is typically assessed with the help of confidence intervals, which can double as hypothesis tests for two-sided hypotheses about the underlying correlation. A standard approach to construct confidence intervals for the main effect is the Hedges-Olkin-Vevea Fisher-z (HOVz) approach, which is based on the Fisher-z transformation. Results from previous studies (Field, 2005; Hafdahl and Williams, 2009), however, indicate that in random-effects models the performance of the HOVz confidence interval can be unsatisfactory. To address this, we propose improvements of the HOVz approach, which are based on enhanced variance estimators for the main effect estimate. In order to study the coverage of the new confidence intervals in both fixed- and random-effects meta-analysis models, we perform an extensive simulation study, comparing them to established approaches. Data were generated via a truncated normal and beta distribution model. The results show that our newly proposed confidence intervals based on a Knapp-Hartung-type variance estimator or robust heteroscedasticity consistent sandwich estimators in combination with the integral z-to-r transformation (Hafdahl, 2009) provide more accurate coverage than existing approaches in most scenarios, especially in the more appropriate beta distribution simulation model.
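
For reference, the baseline HOVz interval is simple to compute: transform each study correlation with Fisher's z, pool by inverse variance (each study's z-variance is 1/(n_i - 3)), build a normal-theory interval, and back-transform. A fixed-effect sketch in Python (a random-effects version would add a between-study variance estimate tau^2 to each study's variance; the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def hovz_ci(r, n, level=0.95):
    # fixed-effect HOVz confidence interval for a pooled correlation
    z = np.arctanh(np.asarray(r, float))    # Fisher z-transform of each r_i
    w = np.asarray(n, float) - 3.0          # inverse variances: n_i - 3
    z_bar = np.sum(w * z) / np.sum(w)       # inverse-variance pooled z
    se = np.sqrt(1.0 / np.sum(w))
    q = norm.ppf(0.5 + level / 2.0)
    lo, hi = z_bar - q * se, z_bar + q * se
    return np.tanh(z_bar), (np.tanh(lo), np.tanh(hi))   # back-transform
```

The paper's proposed intervals replace the simple variance 1/sum(w) with Knapp-Hartung-type or sandwich estimators, and the naive tanh back-transform with the integral z-to-r transformation.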

Flexible Validity Conditions for the Multivariate Matérn Covariance in any Spatial Dimension and for any Number of Components

Flexible multivariate covariance models for spatial data are in demand. This paper addresses the problem of parametric constraints for positive semidefiniteness of the multivariate Matérn model. Much attention has been given to the bivariate case, while highly multivariate cases have been explored only to a limited extent. The existing conditions often imply severe restrictions on the upper bounds for the collocated correlation coefficients, which makes the multivariate Matérn model appealing only for the case of weak spatial cross-dependence. We provide a collection of validity conditions for the multivariate Matérn covariance model that allows for more flexible parameterizations than those currently available. We also prove that, in several cases, we can attain much higher upper bounds for the collocated correlation coefficients in comparison with our competitors. We conclude with a simple illustration on a trivariate geochemical dataset and show that our enlarged parametric space allows for better fitting performance with respect to our competitors.
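
For orientation, the univariate Matérn correlation with smoothness nu and scale a is M(h) = 2^(1-nu)/Gamma(nu) * (h/a)^nu * K_nu(h/a), with M(0) = 1; the multivariate model assembles a matrix of such functions across component pairs, and the paper's conditions ensure the resulting matrix-valued covariance is positive semidefinite. A sketch of the univariate ingredient (our code, not the paper's):

```python
import numpy as np
from scipy.special import kv, gamma

def matern(h, nu, a=1.0):
    # univariate Matérn correlation function; K_nu is the modified
    # Bessel function of the second kind (scipy.special.kv)
    h = np.asarray(h, float)
    out = np.ones_like(h)              # M(0) = 1 by continuity
    pos = h > 0
    u = h[pos] / a
    out[pos] = (2.0 ** (1.0 - nu) / gamma(nu)) * (u ** nu) * kv(nu, u)
    return out
```

Two useful sanity checks: nu = 1/2 recovers the exponential correlation exp(-h/a), and the function decreases with distance for any fixed nu.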

Flexible estimation of the state dwell-time distribution in hidden semi-Markov models

Hidden semi-Markov models generalise hidden Markov models by explicitly modelling the time spent in a given state, the so-called dwell time, using some distribution defined on the natural numbers. While the (shifted) Poisson and negative binomial distribution provide natural choices for such distributions, in practice, parametric distributions can lack the flexibility to adequately model the dwell times. To overcome this problem, a penalised maximum likelihood approach is proposed that allows for a flexible and data-driven estimation of the dwell-time distributions without the need to make any distributional assumption. This approach is suitable for direct modelling purposes or as an exploratory tool to investigate the latent state dynamics. The feasibility and potential of the suggested approach are illustrated by modelling muskox movements in northeast Greenland using GPS tracking data. The proposed method is implemented in the R package PHSMM, which is available on CRAN.
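
The penalised-likelihood idea can be sketched in a stripped-down form: estimate a dwell-time pmf by maximising a multinomial log-likelihood minus a roughness penalty on second differences of the log-probabilities. This stand-in ignores the HSMM machinery entirely (dwell-time counts are treated as directly observed rather than latent), and all names and the penalty form are ours, not PHSMM's:

```python
import numpy as np
from scipy.optimize import minimize

def penalized_dwell_pmf(counts, lam=10.0):
    # penalised ML: multinomial log-likelihood minus lam times the sum of
    # squared second differences of the unnormalised log-probabilities theta
    counts = np.asarray(counts, float)
    def neg_pen_loglik(theta):
        logp = theta - np.log(np.sum(np.exp(theta)))   # softmax log-probs
        pen = lam * np.sum(np.diff(theta, n=2) ** 2)   # roughness penalty
        return -(counts @ logp) + pen
    res = minimize(neg_pen_loglik, np.zeros(len(counts)), method="BFGS")
    return np.exp(res.x) / np.sum(np.exp(res.x))
```

With lam = 0 this reduces to the empirical pmf; increasing lam trades likelihood for smoothness, which is the data-driven flexibility the abstract describes.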

Fully Bayesian Estimation under Dependent and Informative Cluster Sampling

Survey data are often collected under multistage sampling designs where units are binned into clusters that are sampled in a first stage. The unit-indexed population variables of interest are typically dependent within a cluster. We propose a fully Bayesian method that constructs an exact likelihood for the observed sample, incorporating unit-level marginal sampling weights to perform unbiased inference for population parameters while simultaneously accounting for the dependence induced by sampling clusters of units, thereby producing correct uncertainty quantification. Our approach parameterizes cluster-indexed random effects in both a marginal model for the response and a conditional model for published, unit-level sampling weights. We compare our method to plug-in Bayesian and frequentist alternatives in a simulation study and demonstrate that our method most closely achieves correct uncertainty quantification for model parameters, including the generating variances for cluster-indexed random effects. We demonstrate our method in two applications with NHANES data.

Functional random effects modeling of brain shape and connectivity

We present a statistical framework that jointly models brain shape and functional connectivity, which are two complex aspects of the brain that have been classically studied independently. We adopt a Riemannian modeling approach to account for the non-Euclidean geometry of the space of shapes and the space of connectivity that constrains trajectories of co-variation to be valid statistical estimates. In order to disentangle genetic sources of variability from those driven by unique environmental factors, we embed a functional random effects model in the Riemannian framework. We apply the proposed model to the Human Connectome Project dataset to explore spontaneous co-variation between brain shape and connectivity in young healthy individuals.

G-Formula for Observational Studies with Partial Interference, with Application to Bed Net Use on Malaria

Assessing population-level effects of vaccines and other infectious disease prevention measures is important to the field of public health. In infectious disease studies, one person's treatment may affect another individual's outcome, i.e., there may be interference between units. For example, use of bed nets to prevent malaria by one individual may have an indirect or spillover effect on other individuals living in close proximity. In some settings, individuals may form groups or clusters where interference only occurs within groups, i.e., there is partial interference. Inverse probability weighted estimators have previously been developed for observational studies with partial interference. Unfortunately, these estimators are not well suited for studies with large clusters. Therefore, in this paper, the parametric g-formula is extended to allow for partial interference. G-formula estimators of overall effects, spillover effects when treated, and spillover effects when untreated are proposed. The proposed estimators can accommodate large clusters and do not suffer from the g-null paradox that may occur in the absence of interference. The large sample properties of the proposed estimators are derived, and simulation studies are presented demonstrating the finite-sample performance of the proposed estimators. The Demographic and Health Survey from the Democratic Republic of the Congo is then analyzed using the proposed g-formula estimators to assess the overall and spillover effects of bed net use on malaria.
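
The standardisation step behind such estimators can be sketched for small clusters: given an outcome model m(a, i, L) for individual i in a cluster with covariates L under cluster treatment vector a, the g-formula mean under a policy that treats each individual independently with probability alpha averages m over all possible treatment vectors, weighted by the policy's probabilities. An illustrative sketch (the model-fitting step, the large-cluster machinery, and the paper's specific spillover estimands are all omitted; names are ours):

```python
import numpy as np
from itertools import product

def g_formula_policy_mean(m, L_clusters, alpha):
    # parametric g-formula under partial interference: for each cluster,
    # average the modeled outcomes over all treatment vectors a, weighted
    # by iid Bernoulli(alpha) policy probabilities, then average clusters
    total, n_clusters = 0.0, 0
    for L in L_clusters:                # L: covariates of one cluster
        k = len(L)
        for a in product([0, 1], repeat=k):
            w = alpha ** sum(a) * (1 - alpha) ** (k - sum(a))
            total += w * np.mean([m(a, i, L) for i in range(k)])
        n_clusters += 1
    return total / n_clusters
```

Enumerating treatment vectors is exponential in cluster size, which hints at why estimation for large clusters requires the additional development the paper provides.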
