Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nick Whiteley is active.

Publication


Featured research published by Nick Whiteley.


IEEE Transactions on Aerospace and Electronic Systems | 2010

Auxiliary Particle Implementation of Probability Hypothesis Density Filter

Nick Whiteley; Sumeetpal S. Singh; Simon J. Godsill

Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The probability hypothesis density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the bootstrap particle filter, and the latter is well known to be inefficient. Drawing on ideas from the auxiliary particle filter (APF), we present an SMC implementation of the PHD filter, which employs auxiliary variables to enhance its efficiency. Numerical examples are presented for two scenarios, including a challenging nonlinear observation model.
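The auxiliary-variable idea the paper draws on can be illustrated, outside the multi-target PHD setting, with a single auxiliary particle filter step for a scalar state-space model. The linear-Gaussian model, the look-ahead density and all function names below are assumptions made purely for this sketch; they are not the paper's multi-target construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar state-space model (an assumption for this sketch):
# x_t = 0.9 x_{t-1} + N(0, 1),  y_t = x_t + N(0, 0.5^2)
phi, sig_x, sig_y = 0.9, 1.0, 0.5

def apf_step(particles, weights, y):
    """One auxiliary particle filter step: pre-weight particles by a
    look-ahead likelihood of y at the predicted mean, resample with those
    first-stage weights, propagate from the prior, then correct."""
    n = len(particles)
    mu = phi * particles                                    # predicted means
    first_stage = weights * np.exp(-0.5 * (y - mu) ** 2 / (sig_y**2 + sig_x**2))
    first_stage /= first_stage.sum()
    idx = rng.choice(n, size=n, p=first_stage)              # auxiliary indices
    new = phi * particles[idx] + sig_x * rng.standard_normal(n)
    # second-stage weights: true likelihood divided by the look-ahead used above
    w = np.exp(-0.5 * (y - new) ** 2 / sig_y**2) / \
        np.exp(-0.5 * (y - mu[idx]) ** 2 / (sig_y**2 + sig_x**2))
    return new, w / w.sum()

particles = rng.standard_normal(500)
weights = np.full(500, 1 / 500)
particles, weights = apf_step(particles, weights, y=1.2)
print("posterior mean estimate:", np.sum(weights * particles))
```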


Bernoulli | 2016

On the role of interaction in sequential Monte Carlo algorithms

Nick Whiteley; Anthony Lee; Kari Heine

We introduce a general form of sequential Monte Carlo algorithm defined in terms of a parameterized resampling mechanism. We find that a suitably generalized notion of the Effective Sample Size (ESS), widely used to monitor algorithm degeneracy, appears naturally in a study of its convergence properties. We are then able to phrase sufficient conditions for time-uniform convergence in terms of algorithmic control of the ESS, in turn achievable by adaptively modulating the interaction between particles. This leads us to suggest novel algorithms which are, in senses to be made precise, provably stable and yet designed to avoid the degree of interaction which hinders parallelization of standard algorithms. As a byproduct we prove time-uniform convergence of the popular adaptive resampling particle filter.
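As a concrete reminder of the quantity at the heart of this analysis, here is a minimal adaptive-resampling step: the ESS is computed from normalized weights and interaction (multinomial resampling) is triggered only when it drops below a threshold. The toy weights and the 0.5 threshold are assumptions for illustration; the paper's generalized ESS and parameterized resampling mechanisms are broader than this.

```python
import numpy as np

rng = np.random.default_rng(1)

def ess(w):
    """Effective sample size of normalized weights w."""
    return 1.0 / np.sum(w ** 2)

def adaptive_resample(particles, w, threshold=0.5):
    """Resample (multinomial) only when the ESS falls below threshold * N."""
    n = len(w)
    if ess(w) < threshold * n:
        idx = rng.choice(n, size=n, p=w)
        return particles[idx], np.full(n, 1.0 / n)
    return particles, w

# Toy usage: weights produced by one importance-sampling step.
particles = rng.standard_normal(1000)
logw = -0.5 * (particles - 2.0) ** 2            # e.g. a Gaussian likelihood term
w = np.exp(logw - logw.max()); w /= w.sum()
print("ESS before:", ess(w))
particles, w = adaptive_resample(particles, w)
print("ESS after :", ess(w))
```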


Annals of Applied Probability | 2013

Stability properties of some particle filters

Nick Whiteley

Under multiplicative drift and other regularity conditions, it is established that the asymptotic variance associated with a particle filter approximation of the prediction filter is bounded uniformly in time, and the non-asymptotic, relative variance associated with a particle approximation of the normalizing constant is bounded linearly in time. The conditions are demonstrated to hold for some hidden Markov models on non-compact state spaces. The particle stability results are obtained by proving v-norm multiplicative stability and exponential moment results for the underlying Feynman-Kac formulae.

Particle filters have become very popular devices for approximate solution of non-linear filtering problems in hidden Markov models (HMMs), and various aspects of their theoretical properties are now well understood. However, there are still very few results which establish some form of stability over time of particle filtering methods on non-compact spaces, at least without resorting to algorithmic modifications which involve a random computational expense. The aim of the present work is to establish theoretical guarantees about some stability properties of a standard particle filter, under assumptions which are verifiable for some HMMs with non-compact state spaces. It is now well known that, under mild conditions, the error associated with particle approximation of filtering distributions satisfies a central limit theorem. The first stability property we obtain is a time-uniform bound on the corresponding asymptotic variance. Making use of some recent results on functional expansions for particle approximation measures, the second stability property we obtain is a linear-in-time bound on the non-asymptotic, relative variance of the particle approximations of normalizing constants. These two properties are established by first proving some multiplicative stability and exponential moment results for the Feynman-Kac formulae underlying the particle filter. The adopted approach involves Lyapunov function and multiplicative stability ideas in a weighted ∞-norm setting, which allows treatment of a non-compact state space. We thus obtain stability results which hold under weaker assumptions than those existing in the literature. The main restriction is that our assumptions are typically satisfied under some constraints on the observation component of the HMM and/or the observation sequence driving the filter. On the other hand, subject to these constraints, our stability results …
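For orientation, the two approximations discussed above — the prediction filter and the normalizing constant — can be written down for a standard bootstrap particle filter in a few lines. The scalar model and observations below are assumptions made purely for illustration; the paper's results concern the long-time variance behaviour of estimates of exactly this kind.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative HMM (an assumption): x_t = 0.8 x_{t-1} + N(0,1), y_t = x_t + N(0,1).
def bootstrap_filter(ys, n=1000):
    x = rng.standard_normal(n)
    log_z = 0.0                                   # log normalizing-constant estimate
    for y in ys:
        x = 0.8 * x + rng.standard_normal(n)      # propagate (prediction filter particles)
        logw = -0.5 * (y - x) ** 2                # unnormalized log-likelihood weights
        # accumulate log of the average unnormalized weight:
        # an estimate of log p(y_1:t) up to an additive constant
        log_z += np.log(np.mean(np.exp(logw)))
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]         # multinomial resampling
    return x, log_z

ys = rng.standard_normal(50)                      # placeholder observations
particles, log_z = bootstrap_filter(ys)
print("log normalizing-constant estimate:", log_z)
```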


Advances in Applied Probability | 2014

Error bounds and normalising constants for sequential Monte Carlo samplers in high dimensions

Alexandros Beskos; Dan Crisan; Ajay Jasra; Nick Whiteley

In this paper we develop a collection of results associated with the analysis of the sequential Monte Carlo (SMC) samplers algorithm, in the context of high-dimensional independent and identically distributed target probabilities. The SMC samplers algorithm can be designed to sample from a single probability distribution, using Monte Carlo to approximate expectations with respect to this law. Given a target density in d dimensions our results are concerned with d → ∞, while the number of Monte Carlo samples, N, remains fixed. We deduce an explicit bound on the Monte Carlo error for estimates derived using the SMC sampler and the exact asymptotic relative 𝕃2-error of the estimate of the normalising constant associated with the target. We also establish marginal propagation of chaos properties of the algorithm. These results are deduced when the cost of the algorithm is O(Nd²).
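A minimal SMC sampler in the spirit of the setting analysed here: the target is a d-dimensional product (i.i.d.) density, the algorithm tempers from an easy initial distribution toward it, and with O(d) tempering steps, each costing O(Nd), the total cost is O(Nd²). The Gaussian target, the number of tempering steps and the random-walk move below are all assumptions for illustration, not the paper's algorithmic choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def smc_sampler(d=20, n=200):
    """Temper from N(0, 4 I_d) toward an i.i.d. N(1, I_d) target."""
    betas = np.linspace(0.0, 1.0, 2 * d + 1)        # O(d) tempering steps
    x = 2.0 * rng.standard_normal((n, d))           # samples from the initial law
    log_z = 0.0
    def log_init(z):                                 # log initial density (up to constants)
        return -0.5 * np.sum(z ** 2, axis=1) / 4.0
    def log_ratio(z):                                # log(target / initial), per particle
        return -0.5 * np.sum((z - 1.0) ** 2, axis=1) - log_init(z)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw = (b1 - b0) * log_ratio(x)              # incremental weights
        log_z += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]            # resample
        # one random-walk Metropolis move targeting the current tempered density,
        # an O(N d) operation per tempering step
        prop = x + 0.5 * rng.standard_normal((n, d))
        accept = np.log(rng.random(n)) < (log_init(prop) + b1 * log_ratio(prop)
                                          - log_init(x) - b1 * log_ratio(x))
        x[accept] = prop[accept]
    return x, log_z

x, log_z = smc_sampler()
print("log normalising-constant estimate:", log_z)
```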


IEEE Signal Processing Workshop on Statistical Signal Processing | 2014

Maximum marginal likelihood estimation of the granularity coefficient of a Potts-Markov random field within an MCMC algorithm

Marcelo Pereyra; Nick Whiteley; Christophe Andrieu; Jean-Yves Tourneret

This paper addresses the problem of estimating the Potts-Markov random field parameter β jointly with the unknown parameters of a Bayesian image segmentation model. We propose a new adaptive Markov chain Monte Carlo (MCMC) algorithm for performing joint maximum marginal likelihood estimation of β and maximum-a-posteriori unsupervised image segmentation. The method is based on a stochastic gradient adaptation technique whose computational complexity is significantly lower than that of competing MCMC approaches. This adaptation technique can be easily integrated into existing MCMC methods where β was previously assumed to be known. Experimental results on synthetic data and on a real 3D image show that the proposed method produces segmentation results that are as good as those obtained with state-of-the-art MCMC methods, at much lower computational cost.


Stochastic Analysis and Applications | 2012

Sequential Monte Carlo samplers: error bounds and insensitivity to initial conditions

Nick Whiteley

This article addresses finite sample stability properties of sequential Monte Carlo methods for approximating sequences of probability distributions. The results presented herein are applicable in the scenario where the start and end distributions in the sequence are fixed and the number of intermediate steps is a parameter of the algorithm. Under assumptions which hold on non-compact spaces, it is shown that the effect of the initial distribution decays exponentially fast in the number of intermediate steps and the corresponding stochastic error is stable in the 𝕃p norm.


Stochastic Analysis and Applications | 2014

Approximate Bayesian Computation for Smoothing

James S. Martin; Ajay Jasra; Sumeetpal S. Singh; Nick Whiteley; Pierre Del Moral; Emma J. McCoy

We consider a method for approximate inference in hidden Markov models (HMMs). The method circumvents the need to evaluate conditional densities of observations given the hidden states. It may be considered an instance of Approximate Bayesian Computation (ABC) and it involves the introduction of auxiliary variables valued in the same space as the observations. The quality of the approximation may be controlled to arbitrary precision through a parameter ε > 0. We provide theoretical results which quantify, in terms of ε, the ABC error in approximation of expectations of additive functionals with respect to the smoothing distributions. Under regularity assumptions, this error is O(nε), where n is the number of time steps over which smoothing is performed. For numerical implementation, we adopt the forward-only sequential Monte Carlo (SMC) scheme of [14] and quantify the combined error from the ABC and SMC approximations. These are among the first quantitative results for ABC methods which jointly treat the ABC and simulation errors, with a finite number of data and simulated samples.
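The basic ABC device described above can be sketched in a few lines for a bootstrap-style filter: the intractable conditional density of an observation is replaced by simulating a pseudo-observation per particle and weighting with a kernel of bandwidth ε. The toy model, the simulator and the Gaussian kernel are assumptions for this sketch; the paper's forward-only smoothing recursions and error analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy HMM (assumed for illustration): x_t = 0.7 x_{t-1} + N(0, 1); observations
# come from a simulator whose density we pretend is unavailable.
def simulate_obs(x):
    return x + 0.3 * rng.standard_normal(x.shape)

def abc_filter_step(particles, y, eps):
    """One ABC particle filter step: propagate, simulate pseudo-observations
    (the auxiliary variables), and weight with a Gaussian kernel of bandwidth eps."""
    x = 0.7 * particles + rng.standard_normal(len(particles))
    y_sim = simulate_obs(x)                              # auxiliary variables in obs space
    logw = -0.5 * ((y - y_sim) / eps) ** 2               # kernel comparison, no density needed
    w = np.exp(logw - logw.max()); w /= w.sum()
    return x[rng.choice(len(x), size=len(x), p=w)]

particles = rng.standard_normal(2000)
for y in [0.4, 0.9, 1.1]:                                # placeholder observations
    particles = abc_filter_step(particles, y, eps=0.2)
print("filtered mean estimate:", particles.mean())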


Stochastic Processes and their Applications | 2012

Linear variance bounds for particle approximations of time-homogeneous Feynman–Kac formulae

Nick Whiteley; Nikolas Kantas; Ajay Jasra

This article establishes sufficient conditions for a linear-in-time bound on the non-asymptotic variance for particle approximations of time-homogeneous Feynman–Kac formulae. These formulae appear in a wide variety of applications including option pricing in finance and risk sensitive control in engineering. In direct Monte Carlo approximation of these formulae, the non-asymptotic variance typically increases at an exponential rate in the time parameter. It is shown that a linear bound holds when a non-negative kernel, defined by the logarithmic potential function and Markov kernel which specify the Feynman–Kac model, satisfies a type of multiplicative drift condition and other regularity assumptions. Examples illustrate that these conditions are general and flexible enough to accommodate two rather extreme cases, which can occur in the context of a non-compact state space: (1) when the potential function is bounded above, not bounded below and the Markov kernel is not ergodic; and (2) when the potential function is not bounded above, but the Markov kernel itself satisfies a multiplicative drift condition.
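For concreteness, the "direct Monte Carlo" approximation mentioned here simply simulates independent Markov chain paths and averages the product of potentials along each path, whereas the particle approximation resamples according to the potentials at every step and multiplies the average potentials. The kernel and potential below are illustrative assumptions, not those studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed ingredients of a time-homogeneous Feynman-Kac model:
# Markov kernel M: x' = 0.9 x + N(0, 1); potential G(x) = exp(-x^2 / 4).
def direct_mc(n_steps, n_paths=10_000):
    """Direct Monte Carlo estimate of gamma_n(1) = E[ prod_k G(X_k) ];
    its relative variance typically grows exponentially in n_steps."""
    x = np.zeros(n_paths)
    prod_g = np.ones(n_paths)
    for _ in range(n_steps):
        x = 0.9 * x + rng.standard_normal(n_paths)
        prod_g *= np.exp(-x ** 2 / 4.0)
    return prod_g.mean()

def particle_estimate(n_steps, n_particles=10_000):
    """Particle approximation of the same quantity: multiply average potentials,
    resampling proportionally to G at each step."""
    x = np.zeros(n_particles)
    est = 1.0
    for _ in range(n_steps):
        x = 0.9 * x + rng.standard_normal(n_particles)
        g = np.exp(-x ** 2 / 4.0)
        est *= g.mean()
        x = x[rng.choice(n_particles, size=n_particles, p=g / g.sum())]
    return est

print(direct_mc(20), particle_estimate(20))
```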


Journal of Computational and Graphical Statistics | 2011

Monte Carlo filtering of piecewise deterministic processes

Nick Whiteley; Adam M. Johansen; Simon J. Godsill

We present efficient Monte Carlo algorithms for performing Bayesian inference in a broad class of models: those in which the distributions of interest may be represented by time marginals of continuous-time jump processes conditional on a realization of some noisy observation sequence. The sequential nature of the proposed algorithm makes it particularly suitable for online estimation in time series. We demonstrate that two existing schemes can be interpreted as particular cases of the proposed method. Results are provided which illustrate significant performance improvements relative to existing methods. The Appendix to this document can be found online.
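To fix ideas about the model class, the sketch below simulates a simple piecewise deterministic process: the state follows a deterministic flow between jump times, and jumps occur at random times with random magnitudes. The specific dynamics (constant-velocity flow, exponential jump times, Gaussian velocity jumps) are assumptions for illustration; the paper's sequential Monte Carlo inference scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_pdp(t_max=10.0, rate=0.5):
    """Simulate a toy piecewise deterministic process on [0, t_max]:
    between jumps the position follows a constant-velocity (deterministic) flow;
    at exponentially spaced jump times the velocity changes by a Gaussian amount."""
    t, pos, vel = 0.0, 0.0, 1.0
    jump_times, states = [0.0], [(pos, vel)]
    while True:
        dt = rng.exponential(1.0 / rate)           # time to the next jump
        if t + dt > t_max:
            break
        t += dt
        pos += vel * dt                            # deterministic flow up to the jump
        vel += rng.standard_normal()               # random jump in velocity
        jump_times.append(t)
        states.append((pos, vel))
    return jump_times, states

times, states = simulate_pdp()
print(f"{len(times)} jump times; final state {states[-1]}")
```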


Statistical Analysis and Data Mining | 2016

Forest resampling for distributed sequential Monte Carlo

Anthony Lee; Nick Whiteley

This paper brings explicit considerations of distributed computing architectures and data structures into the rigorous design of Sequential Monte Carlo (SMC) methods. A theoretical result established recently by the authors shows that adapting interaction between particles to suitably control the effective sample size (ESS) is sufficient to guarantee stability of SMC algorithms. Our objective is to leverage this result and devise algorithms which are thus guaranteed to work well in a distributed setting. We make three main contributions to achieve this. First, we study mathematical properties of the ESS as a function of matrices and graphs that parameterize the interaction among particles. Second, we show how these graphs can be induced by tree data structures which model the logical network topology of an abstract distributed computing environment. Finally, we present efficient distributed algorithms that achieve the desired ESS control, perform resampling and operate on forests associated with these trees.
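The flavour of restricted interaction can be conveyed with a toy sketch: particles are split into groups (think of leaves of a tree modelling the network topology), resampling is carried out only within each group, and the ESS is monitored globally. This is only a caricature invented for illustration; the forest resampling and adaptive ESS-controlled interaction described in the abstract are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(7)

def ess(w):
    """Effective sample size of normalized weights w."""
    return 1.0 / np.sum(w ** 2)

def within_group_resample(particles, logw, n_groups=4):
    """Resample only within contiguous groups of particles (a caricature of
    limiting interaction to subtrees of a logical network topology)."""
    n = len(particles)
    out = np.empty_like(particles)
    for g in np.array_split(np.arange(n), n_groups):
        w = np.exp(logw[g] - logw[g].max()); w /= w.sum()
        out[g] = particles[g][rng.choice(len(g), size=len(g), p=w)]
    return out

particles = rng.standard_normal(1024)
logw = -0.5 * (particles - 1.0) ** 2                 # illustrative weights
w_all = np.exp(logw - logw.max()); w_all /= w_all.sum()
print("global ESS before resampling:", ess(w_all))
particles = within_group_resample(particles, logw)
print("group-wise resampling done; groups never exchanged particles")
```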

Collaboration


Dive into Nick Whiteley's collaborations.

Top Co-Authors

Kari Heine

University College London

Ajay Jasra

National University of Singapore
