Featured Research

Methodology

Left-censored recurrent event analysis in epidemiological studies: a proposal when the number of previous episodes is unknown

Left censoring occurs relatively frequently when analysing recurrent events in epidemiological studies, especially observational ones. In particular, when a cohort study includes individuals who were already at risk before follow-up effectively begins, the number of episodes they had previously experienced may be unknown, which easily leads to biased and inefficient estimates. The objective of this paper is to propose a statistical method that performs well in these circumstances. Our proposal is based on models with specific baseline hazards, imputation of the number of prior episodes when it is unknown, stratification according to whether or not the individual had previously been at risk, and the use of a frailty term. Performance is examined in different scenarios through a comprehensive simulation study. The proposed method performs notably well even when the percentage of subjects at risk before the beginning of follow-up is very high, with biases that are often under 10% and coverages of around 95%, sometimes somewhat conservative. If the baseline hazard is constant, the "Gap Time" approach seems better; if it is not constant, the "Counting Process" approach seems the better choice. Because the prior episodes experienced by some (or all) subjects are unknown, common-baseline methods are not advised. Our proposal performs acceptably in the majority of the scenarios considered, making it an interesting alternative in this context.
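
As a rough, hypothetical illustration of the two timescales mentioned above, the sketch below (Python with pandas; all column names and values are made up) lays out the same recurrent-event records in the "Counting Process" format, with (start, stop] intervals on the total-time scale, and in the "Gap Time" format, where the clock restarts after every episode. The episode counter built at the end is the quantity the proposal imputes when prior episodes are unknown.

```python
import pandas as pd

# Hypothetical recurrent-event records: one row per observed episode (or final censoring).
events = pd.DataFrame({
    "id":     [1, 1, 1, 2, 2],
    "time":   [5, 12, 20, 7, 15],   # calendar time at the end of each interval
    "status": [1, 1, 0, 1, 0],      # 1 = episode observed, 0 = censored
})

# "Counting Process" layout: (start, stop] intervals on the total-time scale.
events["start"] = events.groupby("id")["time"].shift(1).fillna(0)
events["stop"] = events["time"]

# "Gap Time" layout: the clock resets to zero after every episode.
events["gap"] = events["stop"] - events["start"]

# Episode number, typically used to stratify the baseline hazard; this is the count
# that is unknown (and imputed in the paper's proposal) for left-censored subjects.
events["episode"] = events.groupby("id").cumcount() + 1

print(events)
```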

Read more
Methodology

Local biplots for multi-dimensional scaling, with application to the microbiome

We present local biplots, an extension of the classical principal components biplot to multi-dimensional scaling. Noting that principal components biplots can be interpreted as the Jacobian of a map from data space to the principal subspace, we define local biplots as the Jacobian of the analogous map for multi-dimensional scaling. In the process, we show a close relationship between our local biplot axes, generalized Euclidean distances, and generalized principal components. In simulations and real data we show how local biplots can shed light on which variables, or combinations of variables, are important for the low-dimensional embedding provided by multi-dimensional scaling. They give particular insight into a class of phylogenetically informed distances commonly used in the analysis of microbiome data, showing that different variants of these distances can be interpreted as implicitly smoothing the data along the phylogenetic tree, and that the extent of this smoothing varies.
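
For intuition only, here is a small numerical sketch (Python/NumPy; it assumes classical metric MDS on Euclidean distances and uses a plain finite-difference approximation) of the Jacobian idea behind local biplots. The paper's actual construction for generalized distances is analytical, not finite differences.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def classical_mds(D, k=2):
    """Classical (metric) MDS: principal coordinates from a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

def local_biplot_axes(X, i, k=2, eps=1e-5):
    """Finite-difference Jacobian of sample i's embedding coordinates w.r.t. its features."""
    base = classical_mds(squareform(pdist(X)), k)
    jac = np.zeros((k, X.shape[1]))
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[i, j] += eps
        emb = classical_mds(squareform(pdist(Xp)), k)
        signs = np.sign(np.sum(emb * base, axis=0))   # embedding axes are sign-ambiguous
        signs[signs == 0] = 1.0
        jac[:, j] = (emb * signs)[i] - base[i]
    return jac / eps                                  # column j = local biplot axis for feature j

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                          # toy data, 50 samples x 4 variables
print(local_biplot_axes(X, i=0))
```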

Read more
Methodology

Local linear tie-breaker designs

Tie-breaker experimental designs are hybrids of Randomized Controlled Trials (RCTs) and Regression Discontinuity Designs (RDDs) in which subjects with moderate scores are placed in an RCT while subjects with extreme scores are deterministically assigned to the treatment or control group. The design maintains the benefits of randomization for causal estimation while avoiding the possibility of excluding the most deserving recipients from the treatment group. The causal effect in a tie-breaker design can be estimated by fitting local linear regressions for both the treatment and control groups, as is typically done for RDDs. We study the statistical efficiency of such local linear regression-based causal estimators as a function of the radius of the interval in which treatment randomization occurs. In particular, we determine the efficiency of the estimator as a function of this radius for a fixed, arbitrary bandwidth under the assumption of a uniform assignment variable. To generalize beyond uniform assignment variables and asymptotic regimes, we also demonstrate on the Angrist and Lavy (1999) classroom size dataset that, prior to conducting an experiment, a designer can estimate the efficiency for various choices of experimental radius using Monte Carlo simulation, as long as they have access to the distribution of the assignment variable. For both uniform and triangular kernels, we show that increasing the radius of the randomized experimental interval increases efficiency until the radius reaches the size of the local linear regression bandwidth, after which no additional efficiency benefits are conferred.
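
The following sketch (Python/NumPy; the data-generating process, effect size, sample size, and bandwidth are all invented for illustration) mimics the Monte Carlo recipe described above: for several randomization radii, it simulates a tie-breaker assignment, estimates the effect at the cutoff with separate triangular-kernel local linear fits, and reports the Monte Carlo variance of the estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_tie_breaker(x, delta, tau=1.0):
    """Randomize treatment for |x| <= delta, assign deterministically otherwise."""
    z = np.where(np.abs(x) <= delta, rng.integers(0, 2, size=x.size), (x > 0).astype(int))
    y = 0.5 * x + tau * z + rng.normal(scale=1.0, size=x.size)
    return z, y

def local_linear_effect(x, z, y, h=0.5):
    """Effect at the cutoff from separate local linear fits with a triangular kernel."""
    est = {}
    for arm in (0, 1):
        m = (z == arm) & (np.abs(x) <= h)
        w = 1.0 - np.abs(x[m]) / h                    # triangular kernel weights
        A = np.column_stack([np.ones(m.sum()), x[m]])
        beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (y[m] * w))
        est[arm] = beta[0]                            # intercept = fitted value at x = 0
    return est[1] - est[0]

# Monte Carlo variance of the estimator for several randomization radii.
x_pool = rng.uniform(-1, 1, size=2000)                # stand-in for the assignment variable
for delta in (0.1, 0.25, 0.5, 1.0):
    effects = [local_linear_effect(x_pool, *simulate_tie_breaker(x_pool, delta))
               for _ in range(200)]
    print(f"radius {delta:>4}: Monte Carlo variance = {np.var(effects):.4f}")
```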

Read more
Methodology

Logistic Normal Multinomial Factor Analyzers for Clustering Microbiome Data

The human microbiome plays an important role in human health and disease status. Next-generation sequencing technologies allow the composition of the human microbiome to be quantified. Clustering these microbiome data can provide valuable information by identifying underlying patterns across samples. Recently, Fang and Subedi (2020) proposed a logistic normal multinomial mixture model (LNM-MM) for clustering microbiome data. As microbiome data tend to be high dimensional, here we develop a family of logistic normal multinomial factor analyzers (LNM-FA) by incorporating a factor analyzer structure into the LNM-MM. This family of models is more suitable for high-dimensional data, as the number of parameters in LNM-FA can be greatly reduced by assuming that the number of latent factors is small. Parameter estimation is carried out using a computationally efficient variant of the alternating expectation conditional maximization algorithm that utilizes variational Gaussian approximations. The proposed method is illustrated using simulated and real datasets.
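
As a very loose, purely illustrative stand-in (Python/scikit-learn; the count table, reference taxon, factor dimension, and number of clusters are all made up), the sketch below applies the additive log-ratio transform that underlies the logistic normal multinomial representation and then clusters factor scores in two separate stages. The paper instead estimates the factor structure and the mixture jointly with a variational AECM algorithm, which this snippet does not do.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical taxa-count table: 100 samples x 11 taxa (values are made up).
counts = rng.poisson(rng.uniform(1, 20, size=(100, 11))) + 1   # +1 avoids log(0)

# Additive log-ratio transform (the logistic-normal representation of compositions),
# using the last taxon as the reference.
alr = np.log(counts[:, :-1] / counts[:, [-1]])

# Crude two-stage stand-in for LNM-FA: reduce dimension with a factor analyzer,
# then cluster the factor scores with a Gaussian mixture.
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(alr)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
print(np.bincount(labels))
```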

Read more
Methodology

Low incidence rate of COVID-19 undermines confidence in estimation of the vaccine efficacy

Knowing the true effect size of clinical interventions in randomised clinical trials is key to informing public health policy. Vaccine efficacy is defined in terms of the relative risk, the ratio of two disease risks. However, only approximate methods are available for estimating the variance of the relative risk. In this article, we show using a probabilistic model that uncertainty in the efficacy rate can be underestimated when the disease risk is low. Factoring in the baseline rate of the disease, we estimate broader confidence intervals for the efficacy rates of the vaccines recently developed for COVID-19. We propose new confidence intervals for the relative risk. We further show that the sample sizes required for phase 3 efficacy trials are routinely underestimated, and we propose a new method for sample size calculation when efficacy is the outcome of interest. We also discuss the deleterious effects of classification bias, which is particularly relevant at low disease prevalence.
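
As a worked example of the quantities involved (Python/SciPy; the case counts and arm sizes below are invented, not taken from any trial), vaccine efficacy is one minus the relative risk, and the usual large-sample interval is built on the log relative-risk scale. The paper's point is that this kind of approximation can understate the uncertainty when incidence is low.

```python
import numpy as np
from scipy import stats

# Illustrative (not trial-reported) counts: cases and participants in each arm.
cases_vax, n_vax = 8, 15000        # hypothetical vaccinated arm
cases_plc, n_plc = 80, 15000       # hypothetical placebo arm

rr = (cases_vax / n_vax) / (cases_plc / n_plc)
ve = 1 - rr                        # vaccine efficacy = 1 - relative risk

# Standard large-sample (Katz) interval on the log relative-risk scale.
se_log_rr = np.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_plc - 1 / n_plc)
z = stats.norm.ppf(0.975)
lo, hi = np.exp(np.log(rr) - z * se_log_rr), np.exp(np.log(rr) + z * se_log_rr)
print(f"VE = {ve:.3f}, approximate 95% CI = ({1 - hi:.3f}, {1 - lo:.3f})")
```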

Read more
Methodology

Low-Rank Covariance Function Estimation for Multidimensional Functional Data

Multidimensional functional data arise in many fields nowadays. The covariance function plays an important role in the analysis of such increasingly common data. In this paper, we propose a novel nonparametric covariance function estimation approach under the framework of reproducing kernel Hilbert spaces (RKHS) that can handle both sparse and dense functional data. We extend multilinear rank structures for (finite-dimensional) tensors to functions, which allows for flexible modeling of both covariance operators and marginal structures. The proposed framework guarantees that the resulting estimator is automatically positive semi-definite, and it can incorporate various spectral regularizations. The trace-norm regularization in particular promotes low ranks for both the covariance operator and the marginal structures. Despite the lack of a closed form, under mild assumptions the proposed estimator achieves unified theoretical results that hold for any relative magnitude between the sample size and the number of observations per sample field, and the rate of convergence reveals the "phase-transition" phenomenon from sparse to dense functional data. Based on a new representer theorem, an ADMM algorithm is developed for the trace-norm regularization. The appealing numerical performance of the proposed estimator is demonstrated by a simulation study and by an analysis of a dataset from the Argo project.
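
For intuition about the low-rank idea only, here is a naive sketch (Python/NumPy; the grid size, rank, and noise level are arbitrary) for densely observed surfaces: truncating the eigendecomposition of the empirical covariance of the vectorized fields gives a low-rank, positive semi-definite estimate. It involves none of the RKHS machinery, marginal rank structure, trace-norm penalty, or ADMM algorithm that the paper actually develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense design: 200 random surfaces on a 20 x 20 grid, rank-2 signal plus noise.
n, g = 200, 20
s = np.linspace(0, 1, g)
phi1 = np.outer(np.sin(np.pi * s), np.sin(np.pi * s)).ravel()
phi2 = np.outer(np.cos(np.pi * s), np.cos(np.pi * s)).ravel()
scores = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])
X = scores @ np.vstack([phi1, phi2]) + 0.1 * rng.normal(size=(n, g * g))

# Naive estimate: empirical covariance of the vectorized surfaces, then keep the top
# r eigenpairs, which yields a low-rank, positive semi-definite covariance function.
C = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(C)
r = 2
C_lowrank = (vecs[:, -r:] * vals[-r:]) @ vecs[:, -r:].T
print(np.linalg.matrix_rank(C_lowrank), np.allclose(C_lowrank, C_lowrank.T))
```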

Read more
Methodology

Lévy Adaptive B-spline Regression via Overcomplete Systems

The estimation of functions with varying degrees of smoothness is a challenging problem in nonparametric function estimation. In this paper, we propose the LABS (Lévy Adaptive B-Spline regression) model, an extension of the LARK models, for the estimation of functions with varying degrees of smoothness. The LABS model is a LARK model with B-spline bases as generating kernels. The B-spline basis consists of piecewise polynomials of degree k with k-1 continuous derivatives and can systematically express functions with varying degrees of smoothness. By changing the order of the B-spline basis, LABS can adapt to features such as jump discontinuities and sharp peaks. Results of simulation studies and real data examples show that the model captures not only smooth regions but also jumps and sharp peaks of functions. The proposed model also has the best performance in almost all examples. Finally, we provide theoretical results showing that the mean function of the LABS model belongs to certain Besov spaces determined by the order of the B-spline basis and that the prior of the model has full support on those Besov spaces.
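
The snippet below (Python/SciPy; the target function, knot grid, and noise level are invented) is not the Bayesian LABS model itself, only a least-squares illustration of the mechanism it exploits: the degree of the B-spline basis controls how many continuous derivatives the fit has, and hence how well it can track a jump versus a smooth region.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)

# Toy target with a jump at 0.5 plus smooth variation, observed with noise.
x = np.linspace(0, 1, 400)
y = np.sin(4 * np.pi * x) + (x > 0.5) + 0.1 * rng.normal(size=x.size)

interior = np.linspace(0, 1, 22)[1:-1]        # interior knots shared by both fits
for k in (1, 3):                              # B-spline degrees: piecewise linear vs cubic
    t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]   # knots with boundary multiplicity
    spl = make_lsq_spline(x, y, t, k=k)
    rmse = np.sqrt(np.mean((spl(x) - y) ** 2))
    print(f"degree k={k}: residual RMSE = {rmse:.3f}")
```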

Read more
Methodology

Manifold-adaptive dimension estimation revisited

Data dimensionality informs us about data complexity and sets limits on the structure of successful signal processing pipelines. In this work we revisit and improve the manifold-adaptive Farahmand-Szepesvári-Audibert (FSA) dimension estimator, making it one of the best nearest neighbor-based dimension estimators available. We compute the probability density function of local FSA estimates when the local manifold density is uniform. Based on this density, we propose using the median of the local estimates as a basic global measure of intrinsic dimensionality, and we demonstrate the advantages of this asymptotically unbiased estimator over the previously proposed statistics, the mode and the mean. Additionally, from the probability density function we derive the maximum likelihood formula for the global intrinsic dimensionality under an i.i.d. assumption. We tackle edge and finite-sample effects with an exponential correction formula, calibrated on hypercube datasets. We compare the performance of the corrected median-FSA estimator with kNN estimators: maximum likelihood (ML, Levina-Bickel) and two implementations of DANCo (R and MATLAB). We show that the corrected median-FSA estimator beats the ML estimator and is on an equal footing with DANCo on standard synthetic benchmarks according to mean percentage error and error rate metrics. With the median-FSA algorithm, we reveal diverse changes in neural dynamics during resting state and epileptic seizures. We identify brain areas with lower-dimensional dynamics that are possible causal sources and candidate seizure onset zones.
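
Here is a minimal sketch of the local estimator and median aggregation discussed above (Python/scikit-learn; k, the sample size, and the test manifold are arbitrary, and the paper's exponential edge-effect correction is not included). In one common form, the local FSA estimate at a point compares its distances to the k-th and k/2-th nearest neighbors.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fsa_median_dimension(X, k=10):
    """Median of local FSA estimates: d_hat(x) = ln 2 / ln(R_k(x) / R_{k/2}(x))."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    r_k = dist[:, k]            # distance to the k-th neighbor (column 0 is the point itself)
    r_half = dist[:, k // 2]    # distance to the (k/2)-th neighbor
    local = np.log(2.0) / np.log(r_k / r_half)
    return np.median(local)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5)) @ rng.normal(size=(5, 20))   # 5-dimensional subspace in R^20
print(fsa_median_dimension(X, k=10))                        # should be close to 5
```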

Read more
Methodology

Marginal modeling of cluster-period means and intraclass correlations in stepped wedge designs with binary outcomes

Stepped wedge cluster randomized trials (SW-CRTs) with binary outcomes are increasingly used in prevention and implementation studies. Marginal models represent a flexible tool for analyzing SW-CRTs with population-averaged interpretations, but the joint estimation of the mean and intraclass correlation coefficients (ICCs) can be computationally intensive due to large cluster-period sizes. Motivated by the need for marginal inference in SW-CRTs, we propose a simple and efficient estimating equations approach to analyze cluster-period means. We show that the quasi-score for the marginal mean defined from individual-level observations can be reformulated as the quasi-score for the same marginal mean defined from the cluster-period means. An additional mapping of the individual-level ICCs into correlations for the cluster-period means further provides a rigorous justification for the cluster-period approach. The proposed approach addresses a long-recognized computational burden associated with estimating equations based on individual-level observations, and enables fast point and interval estimation of the intervention effect and correlations. We further propose matrix-adjusted estimating equations to improve the finite-sample inference for ICCs. By providing a valid approach to estimate ICCs within the class of generalized linear models for correlated binary outcomes, this article operationalizes key recommendations from the CONSORT extension to SW-CRTs, including the reporting of ICCs.
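
The sketch below (Python/statsmodels and pandas; the number of clusters, periods, cluster-period size, crossover schedule, and outcome probabilities are all made up) fits the conventional individual-level GEE with an exchangeable working correlation and then forms the cluster-period means. The paper's contribution is an equivalent but much cheaper estimating-equations formulation that operates directly on those cluster-period means, together with matrix-adjusted equations for better finite-sample ICC inference, neither of which is implemented here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical stepped wedge layout: 6 clusters, 4 periods, 50 individuals per cluster-period,
# with clusters crossing over to the intervention at staggered (made-up) times.
rows = []
for c in range(6):
    crossover = 1 + c // 2                            # period at which cluster c starts treatment
    for p in range(4):
        treat = int(p >= crossover)
        prob = 0.2 + 0.05 * p + 0.1 * treat           # invented period and treatment effects
        y = rng.binomial(1, prob, size=50)
        rows.append(pd.DataFrame({"cluster": c, "period": p, "treat": treat, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Conventional individual-level GEE with an exchangeable working correlation.
fit = sm.GEE.from_formula("y ~ C(period) + treat", groups="cluster", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.params["treat"])

# Cluster-period means: the summary statistics the proposed approach operates on.
cp_means = df.groupby(["cluster", "period"], as_index=False)["y"].mean()
print(cp_means.head())
```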

Read more
Methodology

MatchThem:: Matching and Weighting after Multiple Imputation

Balancing the distributions of confounders across exposure levels in an observational study through matching or weighting is an accepted method for controlling confounding due to these variables when estimating the association between an exposure and an outcome, and for reducing the degree of dependence on certain modeling assumptions. Despite their increasing popularity in practice, these procedures cannot be immediately applied to datasets with missing values. Multiple imputation of the missing data is a popular approach that accounts for missing values while preserving the number of units in the dataset and reflecting the uncertainty about the missing values. However, to the best of our knowledge, there is no comprehensive matching and weighting software that can be easily used with multiply imputed datasets. In this paper, we review this problem and suggest a framework that maps matching and weighting of multiply imputed datasets to five actions, as well as best practices for assessing balance in these datasets after matching and weighting. We also illustrate these approaches using a companion package for R, MatchThem.
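
Since the paper's software is in R, the Python sketch below is only a bare-bones illustration of the workflow it automates (all variable names, models, and numbers are invented, and scikit-learn is used purely for convenience): impute several times, match on a propensity score within each imputed dataset (the "within" approach), estimate the effect in each matched dataset, and pool with Rubin's rules. In practice one would use MatchThem itself rather than hand-rolling these steps.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical data: confounders x1, x2 (x2 partly missing), exposure a, outcome y.
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x2))))
y = 1.0 * a + x1 + x2 + rng.normal(size=n)
x2_obs = np.where(rng.random(n) < 0.3, np.nan, x2)
df = pd.DataFrame({"x1": x1, "x2": x2_obs, "a": a, "y": y})

M = 5
estimates, within_vars = [], []
for m in range(M):
    d = df.copy()
    # 1) impute within each dataset (the "within" approach)
    d[["x1", "x2"]] = IterativeImputer(sample_posterior=True,
                                       random_state=m).fit_transform(d[["x1", "x2"]])
    # 2) 1:1 nearest-neighbor matching of treated to controls on the propensity score
    ps = LogisticRegression().fit(d[["x1", "x2"]], d["a"]).predict_proba(d[["x1", "x2"]])[:, 1]
    treated, controls = d[d["a"] == 1], d[d["a"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[(d["a"] == 0).to_numpy()].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[(d["a"] == 1).to_numpy()].reshape(-1, 1))
    matched_controls = controls.iloc[idx.ravel()]
    # 3) effect estimate and a simple within-imputation variance in the matched sample
    estimates.append(treated["y"].mean() - matched_controls["y"].mean())
    within_vars.append(treated["y"].var(ddof=1) / len(treated)
                       + matched_controls["y"].var(ddof=1) / len(matched_controls))

# 4) pool across imputations with Rubin's rules
qbar, ubar = np.mean(estimates), np.mean(within_vars)
b = np.var(estimates, ddof=1)
total_var = ubar + (1 + 1 / M) * b
print(f"pooled effect = {qbar:.3f}, pooled SE = {np.sqrt(total_var):.3f}")
```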

Read more
