Featured Researches

Methodology

Incompletely observed nonparametric factorial designs with repeated measurements: A wild bootstrap approach

In many life science experiments and medical studies, subjects are repeatedly observed and measurements are collected in factorial designs with multivariate data. The analysis of such multivariate data is typically based on multivariate analysis of variance (MANOVA) or mixed models, which require complete data and certain assumptions about the underlying parametric distribution, such as continuity or a specific covariance structure, e.g., compound symmetry. However, these methods are usually not applicable when discrete or even ordered categorical data are present. In such cases, nonparametric rank-based methods that do not require stringent distributional assumptions are the preferred choice. In the multivariate case, however, most rank-based approaches have been developed only for complete observations. The aim of this work is to develop asymptotically correct procedures that can handle missing values, allow for singular covariance matrices, and are applicable to ordinal or ordered categorical data. This is achieved by applying a wild bootstrap procedure in combination with quadratic-form-type test statistics. Beyond proving their asymptotic correctness, extensive simulation studies validate their applicability for small samples. Finally, two real data examples are analyzed.
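The wild-bootstrap recipe behind such tests can be illustrated in its generic form. The sketch below is not the paper's rank-based procedure; it shows only the generic pattern, using a simple mean-based quadratic-form statistic and simulated Gaussian data as stand-ins for the real design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multivariate sample: n subjects, d repeated measurements.
n, d = 30, 4
X = rng.normal(size=(n, d))              # here H0 (zero mean vector) is true

def quad_stat(Z):
    """ANOVA-type quadratic-form statistic: n * ||sample mean||^2."""
    m = Z.mean(axis=0)
    return len(Z) * m @ m

T_obs = quad_stat(X)

# Wild bootstrap: multiply the centred rows by independent Rademacher
# signs and recompute the statistic to approximate its null distribution.
Xc = X - X.mean(axis=0)
B = 2000
T_boot = np.empty(B)
for b in range(B):
    signs = rng.choice([-1.0, 1.0], size=n)
    T_boot[b] = quad_stat(Xc * signs[:, None])

p_value = float((T_boot >= T_obs).mean())
print(round(p_value, 3))
```

Because the resampled rows keep each subject's own (centred) observation vector, the wild bootstrap preserves the covariance structure without assuming a parametric model for it.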

Methodology

Independent finite approximations for Bayesian nonparametric inference

Bayesian nonparametric priors based on completely random measures (CRMs) offer a flexible modeling approach when the number of latent components in a dataset is unknown. However, managing the infinite dimensionality of CRMs typically requires practitioners to derive ad-hoc algorithms, preventing the use of general-purpose inference methods and often leading to long compute times. We propose a general but explicit recipe to construct a simple finite-dimensional approximation that can replace the infinite-dimensional CRMs. Our independent finite approximation (IFA) is a generalization of important cases that are used in practice. The independence of atom weights in our approximation (i) makes the construction well-suited for parallel and distributed computation and (ii) facilitates more convenient inference schemes. We quantify the approximation error between IFAs and the target nonparametric prior. We compare IFAs with an alternative approximation scheme -- truncated finite approximations (TFAs), where the atom weights are constructed sequentially. We prove that, for worst-case choices of observation likelihoods, TFAs are a more efficient approximation than IFAs. However, in real-data experiments with image denoising and topic modeling, we find that IFAs perform very similarly to TFAs in terms of task-specific accuracy metrics.
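One concrete IFA that is easy to state is the finite approximation of a gamma process: K independent Gamma(gamma/K, 1) atom weights, whose sum is exactly Gamma(gamma, 1) for every K because independent gamma variables add their shape parameters. The sketch below illustrates only this special case, not the paper's general recipe:

```python
import numpy as np

rng = np.random.default_rng(1)

# IFA of a gamma process with total mass gamma_mass: K independent
# Gamma(gamma_mass/K, 1) atom weights.  The independence of the K draws
# is what makes the construction embarrassingly parallel -- no sequential
# (size-biased) ordering is needed, unlike a truncated construction.
gamma_mass, K = 3.0, 500
weights = rng.gamma(shape=gamma_mass / K, scale=1.0, size=K)

# Monte Carlo check that the expected total mass matches gamma_mass.
totals = rng.gamma(shape=gamma_mass / K, scale=1.0, size=(2000, K)).sum(axis=1)
print(round(float(totals.mean()), 1))
```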

Methodology

Inference for BART with Multinomial Outcomes

The multinomial probit Bayesian additive regression trees (MPBART) framework was proposed by Kindo et al. (KD), approximating the latent utilities in the multinomial probit (MNP) model with BART (Chipman et al. 2010). Compared to multinomial logistic models, MNP does not assume independence of alternatives, and the correlation structure among alternatives can be specified through multivariate Gaussian distributed latent utilities. We introduce two new algorithms for fitting MPBART and show that the theoretical mixing rates of our proposals are equal to or better than those of the existing algorithm in KD. Through simulations, we explore the robustness of the methods to the choice of reference level, imbalance in outcome frequencies, and the specification of prior hyperparameters for the utility error term. The work is motivated by the application of generating posterior predictive distributions for mortality and engagement in care among HIV-positive patients based on electronic health records (EHRs) from the Academic Model Providing Access to Healthcare (AMPATH) in Kenya. In both the application and the simulations, we observe better performance with our proposals than with KD in terms of MCMC convergence rate and posterior predictive accuracy.
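The multinomial probit layer that MPBART builds on can be sketched directly: choices arise as the argmax of correlated Gaussian latent utilities. The snippet below uses fixed systematic utilities in place of the BART sums-of-trees, purely to illustrate the choice mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)

# Multinomial probit (MNP): each subject has one latent utility per
# alternative; the observed choice is the alternative with the largest
# utility.  Correlated Gaussian errors mean the model does not impose the
# independence-of-irrelevant-alternatives assumption of the logit.
n, J = 5000, 3
mu = np.array([0.5, 0.0, -0.5])            # fixed systematic utilities (BART stand-in)
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])        # error covariance across alternatives
U = mu + rng.multivariate_normal(np.zeros(J), Sigma, size=n)
y = U.argmax(axis=1)

shares = np.bincount(y, minlength=J) / n
print(shares)                              # alternative 0 is chosen most often
```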

Methodology

Inference for extreme earthquake magnitudes accounting for a time-varying measurement process

Investment in measuring a process more completely or accurately is only useful if these improvements can be utilised during modelling and inference. We consider how improvements to data quality over time can be incorporated when selecting a modelling threshold and in the subsequent inference of an extreme value analysis. Motivated by earthquake catalogues, we consider variable data quality in the form of rounded and incompletely observed data. We develop an approach to select a time-varying modelling threshold that makes best use of the available data, accounting for uncertainty in the magnitude model and for the rounding of observations. We show the benefits of the proposed approach on simulated data and apply the method to a catalogue of earthquakes induced by gas extraction in the Netherlands. This more than doubles the usable catalogue size and greatly increases the precision of high magnitude quantile estimates. This has important consequences for the design and cost of earthquake defences. For the first time, we find compelling data-driven evidence against the applicability of the Gutenberg-Richter law to these earthquakes. Furthermore, our approach to automated threshold selection appears to have much potential for generic applications of extreme value methods.
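The interval-likelihood treatment of rounded magnitudes can be sketched for the simplest stationary case: exceedances above a fixed threshold following an exponential (Gutenberg-Richter) law, recorded to the nearest 0.1 magnitude units. This is only a stand-in for the paper's time-varying setup:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

# Gutenberg-Richter above a threshold u: exceedances are Exponential(beta),
# with b-value 1 corresponding to beta = ln(10).  The catalogue, however,
# reports magnitudes rounded to the nearest delta = 0.1.
u, beta_true, delta = 1.5, np.log(10), 0.1
m = u + rng.exponential(1.0 / beta_true, size=2000)
m_rounded = np.round(m / delta) * delta

def neg_log_lik(beta):
    # Interval likelihood: a rounded magnitude corresponds to a true value
    # in [m - delta/2, m + delta/2), truncated below at the threshold u.
    lo = np.maximum(m_rounded - delta / 2, u) - u
    hi = m_rounded + delta / 2 - u
    return -np.sum(np.log(np.exp(-beta * lo) - np.exp(-beta * hi)))

beta_hat = minimize_scalar(neg_log_lik, bounds=(0.1, 10.0), method="bounded").x
print(round(float(beta_hat), 2))   # close to beta_true = ln(10) ~ 2.30
```

Treating each rounded value as an interval rather than a point is what lets coarsely recorded events enter the likelihood honestly instead of being discarded.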

Methodology

Inference of dynamic systems from noisy and sparse data via manifold-constrained Gaussian processes

Parameter estimation for nonlinear dynamic system models, represented by ordinary differential equations (ODEs), using noisy and sparse data is a vital task in many fields. We propose a fast and accurate method, MAGI (MAnifold-constrained Gaussian process Inference), for this task. MAGI uses a Gaussian process model over time-series data, explicitly conditioned on the manifold constraint that derivatives of the Gaussian process must satisfy the ODE system. By doing so, we completely bypass the need for numerical integration and achieve substantial savings in computational time. MAGI is also suitable for inference with unobserved system components, which often occur in real experiments. MAGI is distinct from existing approaches as we provide a principled statistical construction under a Bayesian framework, which incorporates the ODE system through the manifold constraint. We demonstrate the accuracy and speed of MAGI using realistic examples based on physical experiments.
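MAGI's key move of avoiding numerical integration is related to gradient matching, which can be sketched with a polynomial smoother standing in for the Gaussian process. The example below, using the toy ODE x'(t) = -theta * x(t), illustrates only the principle, not the MAGI algorithm itself:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

# Gradient matching for x'(t) = -theta * x(t): fit a smooth curve to the
# noisy trajectory, then choose theta so the curve's derivative matches
# the ODE right-hand side.  No numerical ODE solver is called anywhere.
theta_true = 1.5
t = np.linspace(0.0, 2.0, 41)
x_obs = np.exp(-theta_true * t) + rng.normal(0.0, 0.01, t.size)

coef = np.polyfit(t, x_obs, deg=6)         # polynomial smoother (GP stand-in)
x_fit = np.polyval(coef, t)
dx_fit = np.polyval(np.polyder(coef), t)

def mismatch(theta):
    # Squared discrepancy between the smoother's derivative and -theta*x.
    return np.sum((dx_fit + theta * x_fit) ** 2)

theta_hat = minimize_scalar(mismatch, bounds=(0.1, 5.0), method="bounded").x
print(round(float(theta_hat), 1))
```

MAGI replaces this two-stage heuristic with a single Bayesian posterior in which the GP derivative is conditioned to satisfy the ODE exactly, but the computational payoff, bypassing integration, is the same.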

Methodology

Inference on Heterogeneous Quantile Treatment Effects via Rank-Score Balancing

Understanding treatment effect heterogeneity in observational studies is of great practical importance to many scientific fields because the same treatment may affect different individuals differently. Quantile regression provides a natural framework for modeling such heterogeneity. In this paper, we propose a new method for inference on heterogeneous quantile treatment effects that incorporates high-dimensional covariates. Our estimator combines a debiased ℓ1-penalized regression adjustment with a quantile-specific covariate balancing scheme. We present a comprehensive study of the theoretical properties of this estimator, including weak convergence of the heterogeneous quantile treatment effect process to the sum of two independent, centered Gaussian processes. We illustrate the finite-sample performance of our approach through Monte Carlo experiments and an empirical example, dealing with the differential effect of mothers' education on infant birth weights.
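The quantile-regression backbone of such estimators can be illustrated on a low-dimensional toy problem (no penalization, debiasing, or balancing, which are the paper's actual contributions): when the effect is heterogeneous, the coefficient on the treatment indicator changes across quantile levels.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Heterogeneous effect: treatment shifts the upper quantiles of the
# outcome more than the lower ones (true QTE(tau) = 1 + z_tau here,
# because treatment also doubles the noise scale).
n = 4000
x = rng.normal(size=n)
t = rng.integers(0, 2, size=n).astype(float)
y = 1.0 + 0.5 * x + t + (1.0 + t) * rng.normal(size=n)

X = np.column_stack([np.ones(n), x, t])

def pinball(beta, tau):
    # Check-function (pinball) loss minimised by quantile regression.
    r = y - X @ beta
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]       # warm start
qte = {}
for tau in (0.25, 0.75):
    res = minimize(pinball, beta_ols, args=(tau,), method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 2000})
    qte[tau] = res.x[2]              # coefficient on the treatment indicator

print(round(qte[0.25], 2), round(qte[0.75], 2))
```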

Methodology

Informative Goodness-of-Fit for Multivariate Distributions

This article introduces an informative goodness-of-fit (iGOF) approach to study multivariate distributions. When the null model is rejected, iGOF allows us to identify the underlying sources of mismodeling and naturally equips practitioners with additional insights on the nature of the deviations from the true distribution. The informative character of the procedure is achieved by exploiting smooth tests and random fields theory to facilitate the analysis of multivariate data. Simulation studies show that iGOF enjoys high power for different types of alternatives. The methods presented here directly address the problem of background mismodeling arising in physics and astronomy, the areas in which the motivation for this work is rooted.
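The smooth-test machinery that iGOF exploits can be sketched in one dimension: project the (probability-integral-transformed) data onto orthonormal polynomials and read off which component drives a rejection. This toy version tests uniformity; the per-component contributions are what make the test informative:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import chi2

rng = np.random.default_rng(5)

def smooth_test(u, k=4):
    """Neyman smooth test of uniformity on [0, 1].

    Projects the sample onto the first k orthonormal shifted Legendre
    polynomials; each component n * mean(phi_j)^2 is informative about a
    specific kind of deviation (j=1 location, j=2 scale, j=3 skewness, ...).
    Under H0 the total is asymptotically chi-squared with k df.
    """
    n = len(u)
    x = 2.0 * np.asarray(u) - 1.0          # map [0, 1] -> [-1, 1]
    comps = []
    for j in range(1, k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        phi = legendre.legval(x, c) * np.sqrt(2 * j + 1)  # orthonormal on [0, 1]
        comps.append(n * phi.mean() ** 2)
    T = float(sum(comps))
    return T, chi2.sf(T, df=k), comps

# A mismodeled sample: Beta(2, 2) is symmetric but under-dispersed.
_, p_alt, comps_alt = smooth_test(rng.beta(2, 2, size=500))
print(round(p_alt, 4), int(np.argmax(comps_alt)) + 1)
```

Beyond the rejection itself, the dominant component (here the second one) points at the direction of mismodeling, a scale deviation, which is the "informative" idea in miniature.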

Methodology

Integrative Learning for Population of Dynamic Networks with Covariates

Although there is a rapidly growing literature on dynamic connectivity methods, the primary focus has been on separate network estimation for each individual, which fails to leverage common patterns of information. We propose novel graph-theoretic approaches for estimating a population of dynamic networks that are able to borrow information across multiple heterogeneous samples in an unsupervised manner and guided by covariate information. Specifically, we develop a Bayesian product mixture model that imposes independent mixture priors at each time scan and uses covariates to model the mixture weights, which results in time-varying clusters of samples designed to pool information. The computation is carried out using an efficient Expectation-Maximization algorithm. Extensive simulation studies illustrate sharp gains in recovering the true dynamic network over existing dynamic connectivity methods. An analysis of fMRI block task data with behavioral interventions reveals sub-groups of individuals having similar dynamic connectivity, and identifies intervention-related dynamic network changes that are concentrated in biologically interpretable brain regions. In contrast, existing dynamic connectivity approaches detect minimal or no changes in connectivity over time, which seems biologically unrealistic and highlights the challenges resulting from the inability to systematically borrow information across samples.
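The Expectation-Maximization alternation at the heart of the computation can be shown in its simplest textbook form, a two-component one-dimensional Gaussian mixture with known unit variances; the paper's product mixture model applies the same E-step/M-step logic in a far richer setting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Textbook EM for a two-component 1-D Gaussian mixture (unit variances):
# alternate soft assignments (E-step) with weighted re-estimation (M-step).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
pi1, mu = 0.5, np.array([-1.0, 1.0])       # initial guesses

for _ in range(100):
    # E-step: responsibility of each component for each observation.
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2)
    w = dens * np.array([pi1, 1.0 - pi1])
    r = w / w.sum(axis=1, keepdims=True)
    # M-step: update the mixing weight and the component means.
    pi1 = r[:, 0].mean()
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.round(np.sort(mu), 1))
```

In the model above, the fixed mixing weight would instead be a covariate-dependent quantity re-estimated at each time scan, which is what yields the time-varying clusters of samples.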

Methodology

Interaction between two exposures: determining odds ratios and confidence intervals for risk estimates

In epidemiological research, it is common to investigate the interaction between risk factors for an outcome such as a disease, and hence to estimate the risk associated with exposure to either or both of the two risk factors under investigation. Interactions can be estimated on both the additive and the multiplicative scale using the same regression model. Here we present a review of how to calculate interaction and how to estimate the risk and confidence interval for two exposures using a single regression model, and of the relationships between measures, particularly the standard error for the combined exposure risk group.
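For a logistic model with logit p = b0 + b1*A + b2*B + b3*A*B, the combined-exposure odds ratio and its confidence interval follow from the coefficient sum and its full covariance; the covariance terms are the part that is easy to forget. The numbers below are hypothetical, chosen only to show the arithmetic, including the additive-scale RERI:

```python
import numpy as np

# Logistic model: logit p = b0 + b1*A + b2*B + b3*A*B.
# Doubly exposed (A=1, B=1) vs unexposed (A=0, B=0): log OR11 = b1 + b2 + b3,
# and its variance needs the full covariance block, not just the variances:
#   var = v11 + v22 + v33 + 2*(c12 + c13 + c23).
b = np.array([0.40, 0.30, 0.25])            # hypothetical b1, b2, b3
V = np.array([[0.0400, 0.0010, 0.0020],
              [0.0010, 0.0360, 0.0015],
              [0.0020, 0.0015, 0.0500]])    # hypothetical covariance of (b1, b2, b3)

c = np.ones(3)                              # contrast picking out b1 + b2 + b3
log_or11 = c @ b
se = np.sqrt(c @ V @ c)
or11 = np.exp(log_or11)
ci = np.exp(log_or11 + np.array([-1.96, 1.96]) * se)

# Additive-scale interaction: relative excess risk due to interaction.
or10, or01 = np.exp(b[0]), np.exp(b[1])
reri = or11 - or10 - or01 + 1.0

print(round(or11, 2), np.round(ci, 2), round(reri, 2))   # 2.59 [1.26 5.31] 0.74
```

Using only the three variances here would give se = sqrt(0.126) instead of sqrt(0.135), visibly narrowing the interval, which is exactly the standard-error subtlety the review discusses.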

Methodology

Interactive identification of individuals with positive treatment effect while controlling false discoveries

Out of the participants in a randomized experiment with anticipated heterogeneous treatment effects, is it possible to identify which ones have a positive treatment effect, even though each has only taken either treatment or control but not both? While subgroup analysis has received attention, claims about individual participants are more challenging. We frame the problem in terms of multiple hypothesis testing: we think of each individual as a null hypothesis (the potential outcomes are equal, for example) and aim to identify individuals for whom the null is false (the treatment potential outcome stochastically dominates the control, for example). We develop a novel algorithm that identifies such a subset, with nonasymptotic control of the false discovery rate (FDR). Our algorithm allows for interaction -- a human data scientist (or a computer program acting on the human's behalf) may adaptively guide the algorithm in a data-dependent manner to gain high identification power. We also propose several extensions: (a) relaxing the null to nonpositive effects, (b) moving from unpaired to paired samples, and (c) subgroup identification. We demonstrate via numerical experiments and theoretical analysis that the proposed method has valid FDR control in finite samples and reasonably high identification power.
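As background for the FDR guarantee, the classical (non-interactive) Benjamini-Hochberg step-up procedure is easy to state; the paper's interactive algorithm is a different, more flexible construction, but it targets the same error rate:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Classical Benjamini-Hochberg step-up procedure for FDR control."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Compare the sorted p-values to the BH staircase alpha * k / m.
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True          # reject the k smallest p-values
    return rejected

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74], alpha=0.1)
print(rejected.tolist())   # [True, True, True, True, False, False]
```

Note that 0.041 is rejected even though it exceeds its own threshold 0.1 * 3/6; the step-up rule rejects everything up to the largest k whose p-value clears the staircase.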

