Featured Research

Computation

Approximate Laplace approximations for scalable model selection

We propose the approximate Laplace approximation (ALA) to evaluate integrated likelihoods, a bottleneck in Bayesian model selection. The Laplace approximation (LA) is a popular tool that speeds up this computation and enjoys strong model selection properties. However, when the sample size is large or one considers many models, the cost of the required optimizations becomes impractical. ALA reduces the cost to that of solving a least-squares problem for each model. Further, it enables efficient computation across models, such as sharing pre-computed sufficient statistics and certain operations in matrix decompositions. We prove that, in generalized (possibly non-linear) models, ALA achieves a strong form of model selection consistency for a suitably-defined optimal model, at the same functional rates as exact computation. We consider fixed- and high-dimensional problems, group and hierarchical constraints, and the possibility that all models are misspecified. We also obtain ALA rates for Gaussian regression under non-local priors, an important example where the LA can be costly and does not consistently estimate the integrated likelihood. Our examples include non-linear regression, logistic, Poisson, and survival models. We implement the methodology in the R package mombf.
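
To make the idea concrete, here is a minimal sketch in one toy setting: logistic regression with a N(0, gI) prior. The exact LA finds each model's posterior mode by Newton iterations, while the ALA-style shortcut expands the log-joint once around a fixed starting value (zero here) and integrates that quadratic in closed form, so each model costs a single linear solve. All names are illustrative; this is not the mombf implementation.

    import numpy as np

    def log_joint(beta, X, y, g):
        # Logistic log-likelihood plus a N(0, g*I) log-prior density.
        eta = X @ beta
        loglik = y @ eta - np.logaddexp(0.0, eta).sum()
        logprior = -0.5 * (beta @ beta / g + len(beta) * np.log(2 * np.pi * g))
        return loglik + logprior

    def grad_neg_hess(beta, X, y, g):
        # Gradient and negative Hessian of the log-joint (log-concave model).
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - mu) - beta / g
        H = (X.T * (mu * (1.0 - mu))) @ X + np.eye(X.shape[1]) / g
        return grad, H

    def laplace_logml(X, y, g, iters=30):
        # Exact LA: Newton iterations to the posterior mode -- the per-model
        # optimization that becomes the bottleneck across many models.
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            grad, H = grad_neg_hess(beta, X, y, g)
            beta += np.linalg.solve(H, grad)
        grad, H = grad_neg_hess(beta, X, y, g)
        p = len(beta)
        return (log_joint(beta, X, y, g) + 0.5 * p * np.log(2 * np.pi)
                - 0.5 * np.linalg.slogdet(H)[1])

    def ala_logml(X, y, g):
        # ALA-style shortcut: quadratic expansion of the log-joint at beta0 = 0,
        # integrated in closed form; the only linear algebra is one solve.
        beta0 = np.zeros(X.shape[1])
        grad, H = grad_neg_hess(beta0, X, y, g)
        p = len(beta0)
        return (log_joint(beta0, X, y, g) + 0.5 * grad @ np.linalg.solve(H, grad)
                + 0.5 * p * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(H)[1])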

Computation

Approximate computation of projection depths

Data depth is a concept in multivariate statistics that measures the centrality of a point in a given data cloud in $\mathbb{R}^d$. If the depth of a point can be represented as the minimum of the depths with respect to all one-dimensional projections of the data, then the depth satisfies the so-called projection property. Such depths form an important class that includes many of the depths that have been proposed in the literature. For depths that satisfy the projection property, an approximate algorithm can easily be constructed, since taking the minimum of the depths with respect to only a finite number of one-dimensional projections yields an upper bound for the depth with respect to the multivariate data. Such an algorithm is particularly useful if no exact algorithm exists or if the exact algorithm has high computational complexity, as is the case with the halfspace depth or the projection depth. To compute these depths in high dimensions, the use of an approximate algorithm with better complexity is surely preferable. Instead of focusing on a single method, we provide a comprehensive and fair comparison of several methods, some already described in the literature and some original.
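
As a rough illustration of the projection idea, the sketch below approximates the halfspace (Tukey) depth by minimizing univariate depths over randomly drawn unit directions. The function name and the plain uniform sampling of directions are assumptions made for this sketch; the paper compares more refined direction-search strategies.

    import numpy as np

    def approx_halfspace_depth(z, X, n_dirs=1000, seed=0):
        # Upper-bound the halfspace depth of point z (length d) in the data
        # cloud X (n x d) via n_dirs random one-dimensional projections.
        rng = np.random.default_rng(seed)
        u = rng.standard_normal((n_dirs, X.shape[1]))
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # unit directions
        proj = X @ u.T                                 # n x n_dirs projected data
        zproj = u @ z                                  # projections of z
        below = (proj <= zproj).mean(axis=0)
        above = (proj >= zproj).mean(axis=0)
        # Each direction yields an upper bound; minimizing tightens it.
        return np.minimum(below, above).min()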

Computation

Approximate spectral gaps for Markov chains mixing times in high dimensions

This paper introduces a concept of approximate spectral gap to analyze the mixing time of Markov Chain Monte Carlo (MCMC) algorithms for which the usual spectral gap is degenerate or almost degenerate. We use the idea to analyze a class of MCMC algorithms for sampling from mixtures of densities. As an application, we study the mixing time of a Gibbs sampler for variable selection in linear regression models. Under some regularity conditions on the signal and the design matrix of the regression problem, we show that for well-chosen initial distributions the mixing time of the Gibbs sampler is polynomial in the dimension of the space.
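
For concreteness, here is a minimal sketch of the kind of Gibbs sampler analyzed: single-site updates of inclusion indicators in linear regression, using a simplified Zellner g-prior marginal likelihood with constants dropped. The prior inclusion probability and the value of g are illustrative choices, not values from the paper.

    import numpy as np

    def log_ml(gamma, X, y, g=100.0):
        # Simplified g-prior log marginal likelihood of model gamma (up to
        # constants shared across models, which cancel in Gibbs ratios).
        n = len(y)
        idx = np.flatnonzero(gamma)
        rss = y @ y
        if idx.size > 0:
            Xg = X[:, idx]
            coef = np.linalg.lstsq(Xg, y, rcond=None)[0]
            rss -= (g / (1.0 + g)) * (y @ (Xg @ coef))
        return -0.5 * idx.size * np.log1p(g) - 0.5 * n * np.log(rss)

    def gibbs_sweep(gamma, X, y, prior_incl=0.1, rng=None):
        # One systematic scan over the inclusion indicators gamma (0/1 array).
        if rng is None:
            rng = np.random.default_rng()
        for j in range(len(gamma)):
            lp = np.empty(2)
            for v in (0, 1):
                gamma[j] = v
                lp[v] = log_ml(gamma, X, y) + \
                    (np.log(prior_incl) if v else np.log1p(-prior_incl))
            gamma[j] = int(rng.random() < 1.0 / (1.0 + np.exp(lp[0] - lp[1])))
        return gamma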

Computation

Approximating posteriors with high-dimensional nuisance parameters via integrated rotated Gaussian approximation

Posterior computation for high-dimensional data with many parameters can be challenging. This article focuses on a new method for approximating posterior distributions of a low- to moderate-dimensional parameter in the presence of a high-dimensional or otherwise computationally challenging nuisance parameter. The focus is on regression models, and the key idea is to separate the likelihood into two components through a rotation: one component involves only the nuisance parameters, which can then be integrated out using a novel type of Gaussian approximation. We provide theory on approximation accuracy that holds for a broad class of nuisance components and priors. Applying our method to simulated and real data sets shows that it can outperform state-of-the-art posterior approximation approaches.
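
A minimal sketch of the rotation step, under an assumed linear model y = X·theta + Z·alpha + noise with a low-dimensional interest parameter theta and a high-dimensional nuisance alpha: a complete QR decomposition of X produces a rotation whose trailing rows involve only the nuisance term, which can then be integrated out approximately. Dimensions and names are illustrative, and the Gaussian integration itself is only indicated in the final comment.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, q = 200, 2, 500              # p interest parameters, q nuisance parameters
    X = rng.standard_normal((n, p))    # design for the parameter of interest
    Z = rng.standard_normal((n, q))    # high-dimensional nuisance design
    y = X @ np.array([1.0, -0.5]) + Z @ (0.05 * rng.standard_normal(q)) \
        + rng.standard_normal(n)

    # Rotate by Q' from a complete QR decomposition of X, so that Q'X = [R; 0].
    Q, R = np.linalg.qr(X, mode='complete')
    y_rot, Z_rot = Q.T @ y, Q.T @ Z
    head = slice(0, p)   # rows involving theta (plus a nuisance contribution)
    tail = slice(p, n)   # rows involving only the nuisance term Z_rot @ alpha
    # IRGA-style idea: use the tail block to form a Gaussian approximation to
    # the law of Z_rot[head] @ alpha, then integrate it out of the head block.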

Computation

Assessing and Visualizing Simultaneous Simulation Error

Monte Carlo experiments produce samples in order to estimate features of a given distribution. However, simultaneous estimation of means and quantiles has received little attention, despite being common practice. In this setting, we establish a multivariate central limit theorem for any finite combination of sample means and quantiles under the assumption of a strongly mixing process, which includes the standard Monte Carlo and Markov chain Monte Carlo settings. We build on this to provide a fast algorithm for constructing hyperrectangular confidence regions that have the desired simultaneous coverage probability and a convenient marginal interpretation. The methods are incorporated into standard ways of visualizing the results of Monte Carlo experiments, enabling the practitioner to assess the reliability of the results more easily. We demonstrate the utility of this approach in various Monte Carlo settings, including simulation studies based on independent and identically distributed samples and Bayesian analyses using Markov chain Monte Carlo sampling.
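
The sketch below conveys the flavor of a hyperrectangular region for a mean and two quantiles, using a crude moving-block bootstrap as a stand-in for the paper's CLT-based construction; the block length, quantile levels, and coverage search grid are all illustrative assumptions.

    import numpy as np

    def simultaneous_rect(draws, probs=(0.05, 0.95), level=0.9, n_boot=2000, seed=0):
        # Joint confidence rectangle for the sample mean and sample quantiles
        # of a (roughly stationary) Monte Carlo output stream.
        rng = np.random.default_rng(seed)
        n = len(draws)
        b = max(1, int(n ** 0.5))   # block length ~ sqrt(n)
        stat = lambda x: np.concatenate(([x.mean()], np.quantile(x, probs)))
        point = stat(draws)
        boots = np.empty((n_boot, len(point)))
        for i in range(n_boot):
            starts = rng.integers(0, n - b + 1, size=n // b + 1)
            boots[i] = stat(np.concatenate([draws[s:s + b] for s in starts])[:n])
        # Widen a componentwise rectangle until it attains simultaneous coverage.
        for alpha in np.linspace(1 - level, 0.0005, 200):
            lo = np.quantile(boots, alpha / 2, axis=0)
            hi = np.quantile(boots, 1 - alpha / 2, axis=0)
            if np.all((boots >= lo) & (boots <= hi), axis=1).mean() >= level:
                break
        return point, lo, hi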

Computation

Assessing the accuracy of individual links with varying block sizes and cut-off values using the MaCSim approach

Record linkage is the process of matching records from different data sources that belong to the same entity. It is increasingly used by many organizations, including statistical agencies, health bodies, and governments, to link administrative, survey, and other files in order to create a robust file for more comprehensive analysis. It therefore becomes necessary to assess the ability of a linking method to achieve high accuracy, or to compare methods with respect to accuracy. In this paper, we evaluate the accuracy of individual links using varying block sizes and different cut-off values by means of a Markov chain based Monte Carlo simulation approach (MaCSim). MaCSim utilizes two linked files to create an agreement matrix, which is then simulated to generate re-sampled versions of the agreement matrix. A defined linking method is used in each simulation to link the files, and the accuracy of the linking method is assessed. The aim of this paper is to facilitate the optimal choice of block size and cut-off value to achieve high accuracy, in the sense of minimizing the average False Discovery Rate and False Negative Rate. The analyses were performed using a synthetic dataset provided by the Australian Bureau of Statistics (ABS) and indicate promising results.
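
The error metrics themselves are straightforward to illustrate. The sketch below computes the False Discovery Rate and False Negative Rate of a thresholded linker over a grid of cut-off values, using synthetic match scores in place of a MaCSim agreement matrix; all names and distributions here are assumptions made for the sketch.

    import numpy as np

    def fdr_fnr(scores, truth, cutoff):
        # Declare a link when the score exceeds the cut-off, then compare
        # against ground truth (True = genuine match).
        declared = scores >= cutoff
        fp = np.sum(declared & ~truth)          # false links declared
        fn = np.sum(~declared & truth)          # genuine matches missed
        fdr = fp / max(declared.sum(), 1)
        fnr = fn / max(truth.sum(), 1)
        return fdr, fnr

    # Sweep cut-offs on synthetic scores to inspect the FDR/FNR trade-off.
    rng = np.random.default_rng(0)
    truth = rng.random(10_000) < 0.02
    scores = np.where(truth, rng.normal(3.0, 1.0, truth.shape),
                      rng.normal(0.0, 1.0, truth.shape))
    for c in np.linspace(0.5, 4.0, 8):
        print(f"cut-off {c:.2f}: FDR, FNR = {fdr_fnr(scores, truth, c)}")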

Computation

Augmented pseudo-marginal Metropolis-Hastings for partially observed diffusion processes

We consider the problem of inference for nonlinear, multivariate diffusion processes, satisfying Itô stochastic differential equations (SDEs), using data at discrete times that may be incomplete and subject to measurement error. Our starting point is a state-of-the-art correlated pseudo-marginal Metropolis-Hastings scheme, which uses correlated particle filters to induce strong and positive correlation between successive marginal likelihood estimates. However, unless the measurement error or the dimension of the SDE is small, this correlation can be eroded by the resampling steps in the particle filter. We therefore propose a novel augmentation scheme that allows for conditioning on values of the latent process at the observation times, completely avoiding the need for resampling steps. We integrate over the uncertainty at the observation times with an additional Gibbs step. Connections between the resulting pseudo-marginal scheme and existing inference schemes for diffusion processes are made. The methodology is applied in three examples of increasing complexity. We find that our approach offers substantial increases in overall efficiency compared to competing methods.
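
As context, here is a minimal sketch of the correlated pseudo-marginal Metropolis-Hastings baseline that the paper builds on (not the proposed augmentation scheme): the auxiliary normal draws driving the likelihood estimator are refreshed with a Crank-Nicolson move, which correlates successive estimates. The interface is an assumption: loglik_hat(theta, u) should return the log of a non-negative, unbiased likelihood estimate driven by u.

    import numpy as np

    def correlated_pm_mh(loglik_hat, logprior, theta0, n_iter, n_u,
                         rho=0.99, step=0.1, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        u = rng.standard_normal(n_u)       # auxiliary draws behind the estimator
        ll = loglik_hat(theta, u)
        chain = []
        for _ in range(n_iter):
            theta_p = theta + step * rng.standard_normal(theta.size)
            # Crank-Nicolson refresh keeps u_p close to u for rho near 1.
            u_p = rho * u + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_u)
            ll_p = loglik_hat(theta_p, u_p)
            log_accept = ll_p + logprior(theta_p) - ll - logprior(theta)
            if np.log(rng.random()) < log_accept:
                theta, u, ll = theta_p, u_p, ll_p
            chain.append(theta.copy())
        return np.array(chain)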

Computation

Automatic Backward Filtering Forward Guiding for Markov processes and graphical models

We incorporate discrete- and continuous-time Markov processes as building blocks into probabilistic graphical models with latent and observed variables. We introduce the automatic Backward Filtering Forward Guiding (BFFG) paradigm (Mider et al., 2020) for programmable inference on latent states and model parameters. Our starting point is a generative model, a forward description of the probabilistic process dynamics. We backpropagate the information provided by observations through the model to transform the generative (forward) model into a pre-conditional model guided by the data. The pre-conditional model approximates the actual conditional model, with a known likelihood ratio between the two. Because the backward filter and the forward change of measure can be formulated as a set of transformation rules, they are well suited to a probabilistic programming context. The guided generative model can be incorporated into different approaches to efficiently sample latent states and parameters conditional on observations. We show applicability in a variety of settings, including Markov chains with discrete state space, interacting particle systems, state space models, branching diffusions, and Gamma processes.
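
For a finite-state Markov chain the backward pass can be computed exactly, so no likelihood-ratio correction is needed; the sketch below shows that exact instance of the backward-filter/forward-guide pattern (BFFG's point is to allow approximate backward passes with a tractable likelihood ratio). Array shapes and names are illustrative.

    import numpy as np

    def backward_filter(P, emit_ll):
        # h[t, x] is proportional to p(observations from t onwards | state x at t).
        # P is the S x S transition matrix, emit_ll[t, x] the per-step
        # observation likelihood at time t in state x.
        T, S = emit_ll.shape
        h = np.empty((T, S))
        h[-1] = emit_ll[-1]
        for t in range(T - 2, -1, -1):
            h[t] = emit_ll[t] * (P @ h[t + 1])
            h[t] /= h[t].sum()              # normalize for numerical stability
        return h

    def forward_guide(P, h, x0, rng):
        # Forward sampling with transitions reweighted by the backward filter;
        # conditional on the initial state x0 this recovers exact smoothing.
        xs = [x0]
        for t in range(1, h.shape[0]):
            w = P[xs[-1]] * h[t]
            xs.append(rng.choice(len(w), p=w / w.sum()))
        return np.array(xs)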

Computation

Automating Involutive MCMC using Probabilistic and Differentiable Programming

Involutive MCMC is a unifying mathematical construction for MCMC kernels that generalizes many classic and state-of-the-art MCMC algorithms, from reversible jump MCMC to kernels based on deep neural networks. But as with MCMC samplers more generally, implementing involutive MCMC kernels is often tedious and error-prone, especially when sampling on complex state spaces. This paper describes a technique for automating the implementation of involutive MCMC kernels given (i) a pair of probabilistic programs defining the target distribution and an auxiliary distribution, respectively, and (ii) a differentiable program that transforms the execution traces of these probabilistic programs. The technique, which is implemented as part of the Gen probabilistic programming system, also automatically detects user errors in the specification of involutive MCMC kernels and exploits sparsity in the kernels for improved efficiency. The paper shows example Gen code for a split-merge reversible jump move in an infinite Gaussian mixture model and a state-dependent mixture of proposals on a combinatorial space of covariance functions for a Gaussian process.
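
Stripped of the probabilistic-programming machinery, the acceptance rule being automated looks roughly as follows. This is a generic Python sketch rather than the paper's Gen/Julia code, and the convention that the involution returns its own log Jacobian determinant is an assumption of the sketch; for volume-preserving involutions that term is zero.

    import numpy as np

    def involutive_mcmc_step(x, logp, logq, sample_q, involution, rng):
        # One involutive MCMC step: draw auxiliary v ~ q(. | x), apply the
        # deterministic involution f(x, v) -> (x', v'), and accept with the
        # Metropolis-Hastings ratio times |det Jf| at (x, v).
        v = sample_q(x, rng)
        (x_new, v_new), log_det_jac = involution(x, v)
        log_accept = (logp(x_new) + logq(v_new, x_new)
                      - logp(x) - logq(v, x) + log_det_jac)
        return x_new if np.log(rng.random()) < log_accept else x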

Computation

BAT.jl -- A Julia-based tool for Bayesian inference

We describe the development of a multi-purpose software package for Bayesian statistical inference, BAT.jl, written in the Julia language. The major design considerations and implemented algorithms are summarized here, together with a test suite that ensures the proper functioning of the algorithms. We also give an extended example from the realm of physics that demonstrates the functionality of BAT.jl.
