Publication


Featured research published by Christophe Andrieu.


Statistics and Computing | 2000

On sequential Monte Carlo sampling methods for Bayesian filtering

Arnaud Doucet; Simon J. Godsill; Christophe Andrieu

In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
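The sequential importance sampling/resampling scheme surveyed above can be sketched in a few lines. The nonlinear benchmark model, parameter values, and function name below are our own illustrative choices, not code from the paper:

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap (sequential importance resampling) filter for the classic
    nonlinear benchmark
        x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1 + x_{t-1}^2) + 8 cos(1.2 t) + v_t,
        y_t = x_t^2 / 20 + w_t,
    with Gaussian v_t, w_t. Returns the filtered posterior means of x_t."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, size=n_particles)  # initial particle cloud
    means = []
    for t, y in enumerate(ys):
        # Propagate particles through the state transition (prior proposal).
        x = (0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * t)
             + rng.normal(0.0, sigma_x, size=n_particles))
        # Reweight by the observation likelihood N(y; x^2/20, sigma_y^2).
        logw = -0.5 * ((y - x**2 / 20.0) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # Multinomial resampling to fight weight degeneracy.
        x = rng.choice(x, size=n_particles, p=w)
    return np.array(means)
```

Resampling after every weighting step gives the "bootstrap" variant; the local-linearisation and Rao-Blackwellised extensions discussed in the paper replace the prior proposal and the plain importance weights.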


Machine Learning | 2003

An Introduction to MCMC for Machine Learning

Christophe Andrieu; Nando de Freitas; Arnaud Doucet; Michael I. Jordan

The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses interesting new research horizons.
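As a minimal illustration of one MCMC building block the paper reviews, here is a random-walk Metropolis sampler; the function name and parameter choices are ours, for illustration only:

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps=5000, step=1.0, seed=0):
    """Random-walk Metropolis: propose a symmetric Gaussian move and accept
    with probability min(1, pi(prop) / pi(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_target(x)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()      # symmetric Gaussian proposal
        lp_prop = log_target(prop)
        # Accept/reject on the log scale for numerical stability.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Usage: sample from a standard normal target (log-density up to a constant).
draws = random_walk_metropolis(lambda x: -0.5 * x * x, x0=0.0,
                               n_steps=20000, step=2.0)
```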


IEEE Transactions on Signal Processing | 1999

Joint Bayesian model selection and estimation of noisy sinusoids via reversible jump MCMC

Christophe Andrieu; Arnaud Doucet

In this paper, the problem of joint Bayesian model selection and parameter estimation for sinusoids in white Gaussian noise is addressed. An original Bayesian model is proposed that allows us to define a posterior distribution on the parameter space. All Bayesian inference is then based on this distribution. Unfortunately, a direct evaluation of this distribution and of its features, including posterior model probabilities, requires evaluation of some complicated high-dimensional integrals. We develop an efficient stochastic algorithm based on reversible jump Markov chain Monte Carlo methods to perform the Bayesian computation. A convergence result for this algorithm is established. In simulations, detection based on posterior model probabilities outperforms conventional detection schemes.
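Trans-dimensional sampling of this kind can be illustrated on a much smaller problem. The toy two-model example below (a Gaussian mean present or absent, with the prior used as the birth proposal so the Jacobian is 1) is our own sketch, not the paper's sinusoid model:

```python
import numpy as np

def rjmcmc_mean_detection(y, n_steps=20000, seed=0):
    """Toy reversible-jump MCMC choosing between
       M0: y_i ~ N(0, 1)   and   M1: y_i ~ N(mu, 1), mu ~ N(0, 1),
    with equal prior model probabilities. Returns the estimated posterior
    probability of M1."""
    rng = np.random.default_rng(seed)
    def loglik(mu):
        return -0.5 * np.sum((y - mu) ** 2)
    k, mu = 0, 0.0            # start in the null model
    count1 = 0
    for _ in range(n_steps):
        if k == 0:
            # Birth: draw mu from its prior; with the prior as proposal the
            # acceptance ratio reduces to the likelihood ratio.
            mu_prop = rng.normal()
            if np.log(rng.uniform()) < loglik(mu_prop) - loglik(0.0):
                k, mu = 1, mu_prop
        else:
            # Death: acceptance is the inverse likelihood ratio.
            if np.log(rng.uniform()) < loglik(0.0) - loglik(mu):
                k, mu = 0, 0.0
            else:
                # Within-model random-walk update of mu (posterior = lik * prior).
                prop = mu + 0.5 * rng.normal()
                logr = (loglik(prop) - 0.5 * prop ** 2) - (loglik(mu) - 0.5 * mu ** 2)
                if np.log(rng.uniform()) < logr:
                    mu = prop
        count1 += k
    return count1 / n_steps
```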


IEEE Signal Processing Workshop on Higher-Order Statistics | 1999

Sequential MCMC for Bayesian model selection

Christophe Andrieu; Nando de Freitas; Arnaud Doucet

In this paper, we address the problem of sequential Bayesian model selection. This problem does not usually admit any closed-form analytical solution. We propose here an original sequential simulation-based method to solve the associated Bayesian computational problems. This method combines sequential importance sampling, a resampling procedure and reversible jump MCMC (Markov chain Monte Carlo) moves. We describe a generic algorithm and then apply it to the problem of sequential Bayesian model order estimation of autoregressive (AR) time series observed in additive noise.


Signal Processing | 2001

Model selection by MCMC computation

Christophe Andrieu; Petar M. Djuric; Arnaud Doucet

MCMC sampling is a methodology that is becoming increasingly important in statistical signal processing. It has been of particular importance to the Bayesian-based approaches to signal processing since it extends significantly the range of problems that they can address. MCMC techniques generate samples from desired distributions by embedding them as limiting distributions of Markov chains. There are many ways of categorizing MCMC methods, but the simplest one is to classify them in one of two groups: the first is used in estimation problems where the unknowns are typically parameters of a model, which is assumed to have generated the observed data; the second is employed in more general scenarios where the unknowns are not only model parameters, but models as well. In this paper, we address the MCMC methods from the second group, which allow for generation of samples from probability distributions defined on unions of disjoint spaces of different dimensions. More specifically, we show why sampling from such distributions is a nontrivial task. It will be demonstrated that these methods genuinely unify the operations of detection and estimation and thereby provide great potential for various important applications. The focus is mainly on the reversible jump MCMC (Green, Biometrika 82 (1995) 711), but other approaches are also discussed. Details of implementation of the reversible jump MCMC are provided for two examples.


Neural Computation | 2001

Robust Full Bayesian Learning for Radial Basis Networks

Christophe Andrieu; Nando de Freitas; Arnaud Doucet

We propose a hierarchical full Bayesian model for radial basis networks. This model treats the model dimension (number of neurons), model parameters, regularization parameters, and noise parameters as unknown random variables. We develop a reversible-jump Markov chain Monte Carlo (MCMC) method to perform the Bayesian computation. We find that the results obtained using this method are not only better than the ones reported previously, but also appear to be robust with respect to the prior specification. In addition, we propose a novel and computationally efficient reversible-jump MCMC simulated annealing algorithm to optimize neural networks. This algorithm enables us to maximize the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the parameters and number of parameters, thereby surmounting the problem of local minima to a large extent. We show that by calibrating the full hierarchical Bayesian prior, we can obtain the classical Akaike information criterion, Bayesian information criterion, and minimum description length model selection criteria within a penalized likelihood framework. Finally, we present a geometric convergence theorem for the algorithm with homogeneous transition kernel and a convergence theorem for the reversible-jump MCMC simulated annealing method.


Annals of Applied Probability | 2015

Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms

Christophe Andrieu; Matti Vihola

We study convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms (Andrieu and Roberts [Ann. Statist. 37 (2009) 697-725]). We find that the asymptotic variance of the pseudo-marginal algorithm is always at least as large as that of the marginal algorithm. We show that if the marginal chain admits a (right) spectral gap and the weights (normalised estimates of the target density) are uniformly bounded, then the pseudo-marginal chain has a spectral gap. In many cases, a similar result holds for the absolute spectral gap, which is equivalent to geometric ergodicity. We also consider unbounded weight distributions and recover polynomial convergence rates in more specific cases, when the marginal algorithm is uniformly ergodic or an independent Metropolis-Hastings or a random-walk Metropolis targeting a super-exponential density with regular contours. Our results on geometric and polynomial convergence rates imply central limit theorems. We also prove that under general conditions, the asymptotic variance of the pseudo-marginal algorithm converges to the asymptotic variance of the marginal algorithm if the accuracy of the estimators is increased.
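The pseudo-marginal construction under study can be sketched as follows: an unbiased, non-negative estimate replaces the exact target density, and the current estimate is carried along with the state until the next acceptance. The example below, with a lognormal weight of mean one, is our own illustration:

```python
import numpy as np

def pseudo_marginal_mh(log_target, noisy_log_weight, x0, n_steps=5000,
                       step=1.0, seed=0):
    """Pseudo-marginal Metropolis-Hastings: the exact log-density is replaced
    by a noisy unbiased estimate. Recycling the current estimate `lp` until a
    proposal is accepted is what keeps the marginal in x exact."""
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_target(x) + noisy_log_weight(rng)   # estimated log-density
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop) + noisy_log_weight(rng)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop                # store estimate with the state
        samples[i] = x
    return samples

# W = exp(z - s^2/2) with z ~ N(0, s^2) satisfies E[W] = 1, so
# exp(log_target + log W) is an unbiased estimate of the target density.
s = 0.5
draws = pseudo_marginal_mh(lambda x: -0.5 * x * x,
                           lambda rng: rng.normal(-0.5 * s * s, s),
                           x0=0.0, n_steps=20000, step=2.0)
```

Noisier weights (larger `s`) leave the invariant distribution unchanged but inflate the asymptotic variance, consistent with the paper's comparison against the marginal algorithm.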


IEEE Transactions on Signal Processing | 2001

Bayesian deconvolution of noisy filtered point processes

Christophe Andrieu; Éric Barat; Arnaud Doucet

The detection and estimation of filtered point processes using noisy data is an essential requirement in many seismic, ultrasonic, and nuclear applications. We address this joint detection/estimation problem using a Bayesian approach, which allows us to easily include any relevant prior information. Performing Bayesian inference for such a complex model is a challenging computational problem as it requires the evaluation of intricate high-dimensional integrals. We develop here an efficient stochastic procedure based on a reversible jump Markov chain Monte Carlo method to solve this problem and prove the geometric convergence of the algorithm. The proposed model and algorithm are demonstrated on an application arising in nuclear science.


IEEE Transactions on Information Theory | 2000

Simulated annealing for maximum a posteriori parameter estimation of hidden Markov models

Christophe Andrieu; Arnaud Doucet

Hidden Markov models are mixture models in which the populations from one observation to the next are selected according to an unobserved finite state-space Markov chain. Given a realization of the observation process, our aim is to estimate both the parameters of the Markov chain and of the mixture model in a Bayesian framework. We present an original simulated annealing algorithm which, in the same way as the EM (expectation-maximization) algorithm, relies on data augmentation, and is based on stochastic simulation of the hidden Markov chain. This algorithm is shown to converge toward the set of maximum a posteriori (MAP) parameters under suitable regularity conditions.
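Generic simulated annealing for MAP estimation can be sketched as below: Metropolis moves whose acceptance ratio is raised to an increasing inverse temperature, so that the chain concentrates on posterior modes. This is our illustration of the principle, not the paper's data-augmentation algorithm for HMMs:

```python
import numpy as np

def simulated_annealing_map(log_post, x0, n_steps=5000, step=1.0, seed=0):
    """Maximize a log-posterior by annealed Metropolis sampling with a
    logarithmic cooling schedule, tracking the best point visited."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    best_x, best_lp = x, lp
    for i in range(n_steps):
        beta = np.log(2 + i)                 # inverse temperature grows slowly
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        # Acceptance ratio raised to the power beta: log-accept = beta * delta.
        if np.log(rng.uniform()) < beta * (lp_prop - lp):
            x, lp = prop, lp_prop
            if lp > best_lp:
                best_x, best_lp = x, lp
    return best_x

# Usage: the MAP of a N(3, 1) log-density is 3.
x_map = simulated_annealing_map(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0)
```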


Bernoulli | 2018

Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers

Christophe Andrieu; Anthony Lee; Matti Vihola

We establish quantitative bounds for rates of convergence and asymptotic variances for iterated conditional sequential Monte Carlo (i-cSMC) Markov chains and associated particle Gibbs samplers [J. R. Stat. Soc. Ser. B. Stat. Methodol. 72 (2010) 269–342]. Our main finding is that the essential boundedness of potential functions associated with the i-cSMC algorithm provides necessary and sufficient conditions for the uniform ergodicity of the i-cSMC Markov chain, as well as quantitative bounds on its (uniformly geometric) rate of convergence. Furthermore, we show that the i-cSMC Markov chain cannot even be geometrically ergodic if this essential boundedness does not hold in many applications of interest. Our sufficiency and quantitative bounds rely on a novel non-asymptotic analysis of the expectation of a standard normalizing constant estimate with respect to a “doubly conditional” SMC algorithm. In addition, our results for i-cSMC imply that the rate of convergence can be improved arbitrarily by increasing N, the number of particles in the algorithm, and that in the presence of mixing assumptions, the rate of convergence can be kept constant by increasing N linearly with the time horizon. We translate the sufficiency of the boundedness condition for i-cSMC into sufficient conditions for the particle Gibbs Markov chain to be geometrically ergodic and quantitative bounds on its geometric rate of convergence, which imply convergence of properties of the particle Gibbs Markov chain to those of its corresponding Gibbs sampler. These results complement recently discovered, and related, conditions for the particle marginal Metropolis–Hastings (PMMH) Markov chain.

Collaboration


Dive into Christophe Andrieu's collaborations.

Top Co-Authors

Matti Vihola
University of Jyväskylä

Arnaud Doucet
University of Cambridge

Arthur Gretton
University College London