Featured Research

Methodology

Designing experiments for estimating an appropriate outlet size for a silo type problem

The problem of jam formation during the gravity-driven discharge of granular material through a two-dimensional silo has a number of practical applications. In many problems, estimating the minimum outlet size that guarantees a sufficiently long time to the next jamming event is crucial. Assuming that this time follows an exponential distribution with two unknown parameters, the goal translates into the optimal estimation of a non-linear transformation of the parameters. We obtain c-optimum experimental designs for that purpose, applying the graphical Elfving method. Since the optimal designs depend on the nominal values of the parameters, a sensitivity study is additionally provided. Finally, a simulation study checks the performance of the approximations made, first with the Fisher information matrix and then with the linearization of the function to be estimated. The results are useful for experimenting in a laboratory and then translating the results to a larger scenario. Apart from the application, a general methodology is developed in the paper for the precise estimation of a one-dimensional parametric transformation in a non-linear model.
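As a rough illustration of the c-optimality criterion mentioned above, the sketch below evaluates the delta-method variance c'M(design)^{-1}c for two candidate designs under a hypothetical log-linear mean model for the jamming time; the model, the nominal parameter values, the target mean time and the designs are assumptions for illustration only, not taken from the paper.

```python
# Hedged sketch: evaluating a c-optimality criterion for estimating a nonlinear
# transformation of the parameters of an exponential-time model. The log-linear
# mean model mu(d) = exp(a + b*d), the nominal values of (a, b), the target mean
# time T0 and the candidate designs are all illustrative assumptions.
import numpy as np

a0, b0, T0 = 0.5, 2.0, 50.0          # nominal parameter values and target mean time

def c_vector(a, b, T0):
    # Quantity of interest: outlet size d* with mean time T0, i.e. g(a,b) = (log T0 - a)/b.
    # c is the gradient of g, used in the delta-method variance c' M^{-1} c.
    return np.array([-1.0 / b, -(np.log(T0) - a) / b**2])

def info_matrix(points, weights):
    # Per-observation Fisher information for T ~ Exp(mean exp(a + b*d)) is f(d) f(d)',
    # with f(d) = (1, d)'; the design information is the weighted sum over support points.
    M = np.zeros((2, 2))
    for d, w in zip(points, weights):
        f = np.array([1.0, d])
        M += w * np.outer(f, f)
    return M

def c_criterion(points, weights):
    c = c_vector(a0, b0, T0)
    M = info_matrix(points, weights)
    return c @ np.linalg.solve(M, c)   # approximate variance (up to 1/n) of the plug-in estimate

# Two hypothetical two-point designs on the experimental range of outlet sizes.
print(c_criterion([0.5, 3.0], [0.5, 0.5]))
print(c_criterion([1.0, 2.5], [0.3, 0.7]))
```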

Read more
Methodology

Detecting differentially methylated regions in bisulfite sequencing data using quasi-binomial mixed models with smooth covariate effect estimates

Identifying disease-associated changes in DNA methylation can help to gain a better understanding of disease etiology. Bisulfite sequencing technology allows methylation profiles to be generated at single-base resolution. We previously developed a method for estimating smooth covariate effects and identifying differentially methylated regions (DMRs) from bisulfite sequencing data which copes with experimental errors and variable read depths; this method uses the binomial distribution to characterize the variability in the methylated counts. However, bisulfite sequencing data frequently include low-count integers and can exhibit over- or under-dispersion relative to the binomial distribution. We present a substantial improvement to our previous work by proposing a quasi-likelihood-based regional testing approach that accounts for multiplicative and additive sources of dispersion. We demonstrate the theoretical properties of the resulting tests, as well as their marginal and conditional interpretations. Simulations show that the proposed method provides correct inference for smooth covariate effects and captures the major methylation patterns with excellent power.
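The dispersion issue the abstract addresses can be illustrated with a site-level quasi-binomial fit on simulated methylated counts, using a Pearson-based dispersion estimate; this toy is not the paper's regional test with smooth covariate effects, and the simulated data and logistic mean model are assumptions.

```python
# Minimal sketch of a quasi-binomial-style fit for methylated read counts with a
# single covariate. The simulated data and the use of a Pearson-based dispersion
# estimate are illustrative; they do not reproduce the paper's regional testing approach.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
depth = rng.integers(5, 40, size=n)             # variable read depths
x = rng.normal(size=n)                          # covariate (e.g. age or a group indicator)
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))         # true methylation proportions
meth = rng.binomial(depth, p)                   # methylated read counts

endog = np.column_stack([meth, depth - meth])   # (successes, failures) for a binomial GLM
exog = sm.add_constant(x)
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit(scale="X2")

print(fit.params)   # covariate effect on the logit of methylation
print(fit.scale)    # Pearson-based dispersion; > 1 suggests over-dispersion, < 1 under-dispersion
```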

Read more
Methodology

Detection of Change Points in Piecewise Polynomial Signals Using Trend Filtering

While many approaches have been proposed for discovering abrupt changes in piecewise constant signals, few methods are available to capture these changes in piecewise polynomial signals. In this paper, we propose a change point detection method, PRUTF, based on trend filtering. By providing a comprehensive dual solution path for trend filtering, PRUTF allows us to discover change points of the underlying signal for either a given value of the regularization parameter or a specific number of steps of the algorithm. We demonstrate that the dual solution path constitutes a Gaussian bridge process that enables us to derive an exact and efficient stopping rule for terminating the search algorithm. We also prove that the estimates produced by this algorithm are asymptotically consistent in pattern recovery. This result holds even in the case of staircases (consecutive change points of the same sign) in the signal. Finally, we investigate the performance of our proposed method for various signals and then compare its performance against some state-of-the-art methods in the context of change point detection. We apply our method to three real-world datasets: the UK House Price Index (HPI), the GISS Surface Temperature Analysis (GISTEMP), and the coronavirus disease (COVID-19) pandemic.
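A small numerical illustration of the premise trend filtering exploits (not of PRUTF itself): for a piecewise polynomial signal of degree k, the (k+1)-th order discrete differences are sparse, with nonzeros concentrated at the change points. The signal and the threshold below are illustrative choices.

```python
# Toy illustration of the sparsity pattern behind trend filtering: the (k+1)-th
# order differences of a piecewise polynomial of degree k are nonzero only near
# the change points, which is what the penalty in trend filtering targets.
import numpy as np

n, k = 100, 1                                             # piecewise linear signal (degree 1)
t = np.arange(n)
signal = np.where(t < 40, 0.5 * t, 20 - 0.8 * (t - 40))   # continuous, slope change at t = 40

diffs = np.diff(signal, n=k + 1)                          # (k+1)-th order differences
change_points = np.where(np.abs(diffs) > 1e-8)[0] + k     # indices with a nonzero penalty term
print(change_points)                                      # -> [40], up to the indexing convention
```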

Read more
Methodology

Diagnostic tools for a multivariate negative binomial model for fitting correlated data with overdispersion

We focus on the development of diagnostic tools and an R package called MNB for a multivariate negative binomial (MNB) regression model for detecting atypical and influential subjects. The MNB model is deduced from a Poisson mixed model in which the random intercept follows the generalized log-gamma (GLG) distribution. The MNB model for correlated count data leads to an MNB regression model that inherits the features of a hierarchical model to accommodate the intraclass correlation and the occurrence of overdispersion simultaneously. The asymptotic consistency of the dispersion parameter estimator depends on the asymmetry of the GLG distribution. Inferential procedures for the MNB regression model are simple, although they can provide inconsistent estimates of the asymptotic variance when the correlation structure is misspecified. We propose the randomized quantile residual for checking the adequacy of the multivariate model, and derive global and local influence measures from the multivariate model to assess influential subjects. Finally, two applications are presented in the data analysis section. The code for installing the MNB package and the code used in the two examples are exhibited in the Appendix.
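The randomized quantile residuals mentioned above can be sketched for a simple univariate negative binomial working model as follows; the parameter values are assumptions, and this does not use the MNB package or its multivariate model.

```python
# Sketch of randomized quantile residuals (Dunn & Smyth) for count data under a
# negative binomial working model; a univariate toy with an assumed mean/size
# parameterization, not the multivariate MNB model described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, r = 4.0, 2.0                                 # assumed mean and size (dispersion) of the working model
p = r / (r + mu)
y = rng.negative_binomial(r, p, 250)             # simulated counts

F_lo = stats.nbinom.cdf(y - 1, r, p)              # lower limit of the CDF jump at each observed count
F_hi = stats.nbinom.cdf(y, r, p)                  # upper limit of the jump
u = rng.uniform(F_lo, F_hi)                       # randomize within the jump to break discreteness
residuals = stats.norm.ppf(u)                     # approximately standard normal if the model is adequate

print(residuals.mean(), residuals.std())          # rough check; a Q-Q plot is the usual diagnostic
```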

Read more
Methodology

Diagnostics for Conditional Density Models and Bayesian Inference Algorithms

There has been growing interest in the AI community in precise uncertainty quantification. Conditional density models f(y|x), where x represents potentially high-dimensional features, are an integral part of uncertainty quantification in prediction and Bayesian inference. However, it is challenging to assess conditional density estimates and gain insight into modes of failure. While existing diagnostic tools can determine whether an approximated conditional density is compatible overall with a data sample, they lack a principled framework for identifying, locating, and interpreting the nature of statistically significant discrepancies over the entire feature space. In this paper, we present rigorous and easy-to-interpret diagnostics such as (i) the "Local Coverage Test" (LCT), which distinguishes an arbitrarily misspecified model from the true conditional density of the sample, and (ii) "Amortized Local P-P plots" (ALP), which can quickly provide interpretable graphical summaries of distributional differences at any location x in the feature space. Our validation procedures scale to high dimensions and can potentially adapt to any type of data at hand. We demonstrate the effectiveness of LCT and ALP through a simulated experiment and applications to prediction and parameter inference for image data.
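The core idea of locally checking a conditional density estimate can be sketched with probability integral transform (PIT) values weighted by proximity to a target location x0; the Gaussian working model, the kernel and the bandwidth below are illustrative choices, and this is not the paper's LCT or ALP procedure.

```python
# Hedged sketch of a local P-P style check: compute PIT values of a fitted
# conditional model and compare their distribution, weighted by proximity to a
# target location x0, against the uniform reference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(-2, 2, n)
y = np.sin(x) + rng.normal(scale=0.3 + 0.2 * np.abs(x), size=n)   # heteroscedastic truth

# Misspecified fitted model: Gaussian with the correct mean but constant variance.
pit = stats.norm.cdf(y, loc=np.sin(x), scale=0.3)

x0, h = 1.5, 0.25                                 # evaluation point and kernel bandwidth
w = np.exp(-0.5 * ((x - x0) / h) ** 2)            # Gaussian kernel weights around x0
w /= w.sum()

grid = np.linspace(0.05, 0.95, 19)
local_pp = np.array([np.sum(w * (pit <= g)) for g in grid])   # local empirical CDF of PIT values
print(np.max(np.abs(local_pp - grid)))            # a large deviation from the diagonal flags local misfit
```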

Read more
Methodology

Directional quantile classifiers

We introduce classifiers based on directional quantiles. We derive theoretical results for selecting optimal quantile levels given a direction and, conversely, an optimal direction given a quantile level. We also show that the misclassification rate is infinitesimal if the population distributions differ by at most a location shift and if the number of directions is allowed to diverge at the same rate as the problem's dimension. We illustrate the satisfactory performance of our proposed classifiers in both small- and high-dimensional settings via a simulation study and a real data example. The code implementing the proposed methods is publicly available in the R package Qtools.
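A naive sketch of a quantile-based directional classifier is given below; the random directions, the fixed quantile level and the plain L1 aggregation are simplistic stand-ins for the optimal choices derived in the paper and implemented in Qtools.

```python
# Toy sketch: project onto random directions, compute class-specific quantiles of
# the projections, and assign a test point to the class whose directional quantiles
# are closest on average. All tuning choices here are naive illustrative defaults.
import numpy as np

rng = np.random.default_rng(3)
d, n_dir, tau = 5, 50, 0.5                        # dimension, number of directions, quantile level

# Two classes differing by a location shift.
X0 = rng.normal(size=(100, d))
X1 = rng.normal(size=(100, d)) + 1.0

U = rng.normal(size=(n_dir, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)     # unit directions

q0 = np.quantile(X0 @ U.T, tau, axis=0)           # class-0 quantiles of the projections, one per direction
q1 = np.quantile(X1 @ U.T, tau, axis=0)

def classify(x):
    proj = U @ x
    return int(np.sum(np.abs(proj - q1)) < np.sum(np.abs(proj - q0)))

x_new = rng.normal(size=d) + 1.0                  # a point drawn from class 1
print(classify(x_new))
```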

Read more
Methodology

Distributed Bootstrap for Simultaneous Inference Under High Dimensionality

We propose a distributed bootstrap method for simultaneous inference on high-dimensional massive data that are stored and processed with many machines. The method produces an ℓ∞-norm confidence region based on a communication-efficient de-biased lasso, and we propose an efficient cross-validation approach to tune the method at every iteration. We theoretically prove a lower bound on the number of communication rounds τ_min that warrants the statistical accuracy and efficiency. Furthermore, τ_min only increases logarithmically with the number of workers and the intrinsic dimensionality, while being nearly invariant to the nominal dimensionality. We test our theory by extensive simulation studies, and a variable screening task on a semi-synthetic dataset based on the US Airline On-time Performance dataset. The code to reproduce the numerical results is available at GitHub: this https URL.
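To convey the communication-efficient multiplier-bootstrap idea, the sketch below builds a sup-norm simultaneous confidence region for a simple multivariate mean from worker-level averages only; the de-biased lasso, the cross-validation tuning and the paper's specific bootstrap variants are not reproduced, and all sizes and levels are illustrative.

```python
# Minimal sketch of a multiplier bootstrap for a sup-norm (simultaneous) confidence
# region in a distributed setting. The de-biased lasso of the paper is replaced by a
# plain multivariate mean so the example stays self-contained; each worker communicates
# only its local average, and the bootstrap runs over worker-level deviations.
import numpy as np

rng = np.random.default_rng(4)
K, n_local, p = 20, 500, 50                       # workers, local sample size, dimension
mu = np.zeros(p)

# Each worker communicates only its local mean (one p-vector per worker).
worker_means = np.array([rng.normal(mu, 1.0, size=(n_local, p)).mean(axis=0) for _ in range(K)])
theta_hat = worker_means.mean(axis=0)             # aggregated estimator on the master

B = 1000
dev = worker_means - theta_hat                    # (K, p) worker-level deviations
boot = np.empty(B)
for b in range(B):
    g = rng.normal(size=K)                        # Gaussian multipliers, one per worker
    boot[b] = np.max(np.abs(g @ dev) / K)
crit = np.quantile(boot, 0.95)                    # half-width of the simultaneous (sup-norm) region

print(crit, np.max(np.abs(theta_hat - mu)) <= crit)   # does the region cover the truth?
```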

Read more
Methodology

Distributed Community Detection for Large Scale Networks Using Stochastic Block Model

With the rapid development of information technology, large-scale network data are ubiquitous. In this work we develop a distributed spectral clustering algorithm for community detection in large-scale networks. To handle the problem, we distribute l pilot network nodes on the master server and the others on worker servers. A spectral clustering algorithm is first conducted on the master to select pseudo centers. The indexes of the pseudo centers are then broadcast to the workers, which complete the distributed community detection task using an SVD-type algorithm. The proposed distributed algorithm has three merits. First, the communication cost is low, since only the indexes of the pseudo centers are communicated. Second, no further iterative algorithm is needed on the workers, and hence the method does not suffer from problems such as initialization and non-robustness. Third, both the computational complexity and the storage requirements are much lower than when using the whole adjacency matrix. A Python package DCD (this http URL) is developed to implement the distributed algorithm for a Spark system. Theoretical properties are provided with respect to the estimation accuracy and mis-clustering rates. Lastly, the advantages of the proposed methodology are illustrated by experiments on a variety of synthetic and empirical datasets.
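A toy, single-machine version of the pilot-node strategy: spectral clustering on a small pilot submatrix, selection of pseudo centers, and a naive connectivity-based assignment of the remaining nodes. The SBM simulation, the choice of pseudo centers and the worker-side assignment rule are illustrative simplifications, not the DCD package's SVD-type algorithm.

```python
# Hedged sketch of the pilot-nodes idea on a simulated stochastic block model.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n, k, n_pilot = 400, 2, 80
z = rng.integers(0, k, size=n)                        # true community labels
P = np.where(z[:, None] == z[None, :], 0.30, 0.05)    # SBM edge probabilities (illustrative)
A = rng.binomial(1, P)
A = np.triu(A, 1); A = A + A.T                        # symmetric adjacency matrix, no self-loops

pilot = np.arange(n_pilot)                            # pilot nodes kept on the master
eigvals, eigvecs = np.linalg.eigh(A[np.ix_(pilot, pilot)])
emb = eigvecs[:, -k:]                                 # leading eigenvectors as the spectral embedding
labels_pilot = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

# One pseudo center per community: the pilot node with the largest degree inside its cluster.
centers = [pilot[labels_pilot == c][np.argmax(A[np.ix_(pilot[labels_pilot == c], pilot)].sum(1))]
           for c in range(k)]

# Worker-side step: assign each remaining node to the community of the pseudo
# center whose pilot neighbourhood it overlaps with the most.
rest = np.arange(n_pilot, n)
overlap = np.array([A[np.ix_(rest, pilot)] @ A[c, pilot] for c in centers]).T
labels_rest = overlap.argmax(axis=1)
acc = np.mean(labels_rest == z[rest])
print(max(acc, 1 - acc))                              # rough accuracy up to label switching (k = 2)
```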

Read more
Methodology

Distributional Anchor Regression

Prediction models often fail if train and test data do not stem from the same distribution. Out-of-distribution (OOD) generalization to unseen, perturbed test data is a desirable but difficult-to-achieve property for prediction models and in general requires strong assumptions on the data generating process (DGP). In a causally inspired perspective on OOD generalization, the test data arise from a specific class of interventions on exogenous random variables of the DGP, called anchors. Anchor regression models, introduced by Rothenhäusler et al. (2018), protect against distributional shifts in the test data by employing causal regularization. However, so far anchor regression has only been used with a squared-error loss which is inapplicable to common responses such as censored continuous or ordinal data. Here, we propose a distributional version of anchor regression which generalizes the method to potentially censored responses with at least an ordered sample space. To this end, we combine a flexible class of parametric transformation models for distributional regression with an appropriate causal regularizer under a more general notion of residuals. In an exemplary application and several simulation scenarios we demonstrate the extent to which OOD generalization is possible.
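For context, the squared-error anchor regression that the abstract generalizes can be sketched as a reweighted least-squares problem; the simulated data-generating process and the value of the regularization parameter gamma below are illustrative, and this is not the proposed distributional method.

```python
# Sketch of classical (squared-error) anchor regression: residuals are split by the
# projection onto the anchor variables and the anchor-aligned part is up-weighted by
# gamma, which is equivalent to OLS on transformed data W X, W Y.
import numpy as np

rng = np.random.default_rng(6)
n, gamma = 1000, 5.0
A = rng.normal(size=(n, 1))                       # anchor (exogenous) variable
H = rng.normal(size=n)                            # hidden confounder
X = 0.8 * A[:, 0] + H + rng.normal(size=n)
Y = 1.5 * X + H + 0.5 * A[:, 0] + rng.normal(size=n)
X = X[:, None]

P = A @ np.linalg.solve(A.T @ A, A.T)             # projection onto the anchor column space
W = np.eye(n) + (np.sqrt(gamma) - 1) * P          # (I - P) + sqrt(gamma) * P
b_anchor = np.linalg.lstsq(W @ X, W @ Y, rcond=None)[0]
b_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
print(b_ols, b_anchor)                            # causal regularization pulls the fit away from plain OLS
```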

Read more
Methodology

Distributional data analysis via quantile functions and its application to modelling digital biomarkers of gait in Alzheimer's Disease

With the advent of continuous health monitoring via wearable devices, users now generate their unique streams of continuous data, such as minute-level physical activity or heart rate. Aggregating these streams into scalar summaries ignores the distributional nature of the data and often leads to the loss of critical information. We propose to capture the distributional properties of wearable data via user-specific quantile functions that are further used in functional regression and multi-modal distributional modelling. In addition, we propose to encode user-specific distributional information with user-specific L-moments, robust rank-based analogs of traditional moments. Importantly, this L-moment encoding results in mutually consistent functional and distributional interpretations of the results of scalar-on-function regression. We also demonstrate how L-moments can be flexibly employed for analyzing joint and individual sources of variation in multi-modal distributional data. The proposed methods are illustrated in a study of the association of accelerometry-derived digital gait biomarkers with Alzheimer's disease (AD) in individuals with AD and people with normal cognitive function. Our analysis shows that the proposed quantile-based representation achieves much higher predictive performance than simple distributional summaries and attains much stronger associations with clinical cognitive scales.
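The two distributional encodings described above can be sketched as a subject-level quantile function on a fixed probability grid and the first four sample L-moments computed from probability-weighted moments; the simulated minute-level stream and the grid size are assumptions for illustration.

```python
# Sketch of the two encodings: a quantile function on a common probability grid
# (usable in scalar-on-function regression) and the first four sample L-moments
# via Hosking's probability-weighted-moment estimators.
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(7)
activity = rng.gamma(shape=2.0, scale=30.0, size=1440)   # one subject's simulated minute-level stream

# Encoding 1: quantile function on a common probability grid.
p_grid = np.linspace(0.01, 0.99, 99)
qf = np.quantile(activity, p_grid)

# Encoding 2: first four sample L-moments from probability-weighted moments b_r.
def sample_l_moments(x):
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b = [np.sum(comb(i - 1, r) * x) / (n * comb(n - 1, r)) for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4           # ratios l3/l2 and l4/l2 give L-skewness and L-kurtosis

print(qf[:5])
print(sample_l_moments(activity))
```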

Read more
