Omiros Papaspiliopoulos
Pompeu Fabra University
Publication
Featured research published by Omiros Papaspiliopoulos.
Statistical Science | 2007
Omiros Papaspiliopoulos; Gareth O. Roberts; Martin Sköld
In this paper, we describe centering and noncentering methodology as complementary techniques for use in parametrization of broad classes of hierarchical models, with a view to the construction of effective MCMC algorithms for exploring posterior distributions from these models. We give a clear qualitative understanding as to when centering and noncentering work well, and introduce theory concerning the convergence time complexity of Gibbs samplers using centered and noncentered parametrizations. We give general recipes for the construction of noncentered parametrizations, including an auxiliary variable technique called the state-space expansion technique. We also describe partially noncentered methods, and demonstrate their use in constructing robust Gibbs sampler algorithms whose convergence properties are not overly sensitive to the data.
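As a toy illustration of the centered/noncentered idea, consider a two-level normal model with conjugate Gibbs updates. The model, sample sizes, and all names below are illustrative assumptions, not the paper's general framework; both samplers target the same posterior and differ only in parametrization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hierarchy: y_i | x_i ~ N(x_i, 1),  x_i | theta ~ N(theta, 1),
# flat prior on theta.  Marginally y_i ~ N(theta, 2).
n = 200
theta_true = 3.0
y = theta_true + rng.normal(size=n) * np.sqrt(2.0)

iters = 2000

# Centered parametrization: alternate x | theta, y and theta | x.
theta = 0.0
theta_c = np.empty(iters)
for t in range(iters):
    x = rng.normal((y + theta) / 2.0, np.sqrt(0.5))   # x_i | theta, y_i
    theta = rng.normal(x.mean(), np.sqrt(1.0 / n))    # theta | x
    theta_c[t] = theta

# Noncentered parametrization: write x_i = theta + z_i with z_i ~ N(0, 1)
# a priori independent of theta, so y_i | theta, z_i ~ N(theta + z_i, 1).
theta = 0.0
theta_nc = np.empty(iters)
for t in range(iters):
    z = rng.normal((y - theta) / 2.0, np.sqrt(0.5))        # z_i | theta, y_i
    theta = rng.normal((y - z).mean(), np.sqrt(1.0 / n))   # theta | z, y
    theta_nc[t] = theta

# Both chains sample the same posterior, theta | y ~ N(mean(y), 2/n);
# which parametrization mixes faster depends on how informative the
# data are relative to the prior, which is the paper's qualitative point.
print(theta_c[iters // 2:].mean(), theta_nc[iters // 2:].mean())
```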
Annals of Statistics | 2009
Alexandros Beskos; Omiros Papaspiliopoulos; Gareth O. Roberts
This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models, and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte Carlo MLE converges a.s. to the true MLE. For data size n → ∞, we show that the number of Monte Carlo iterations should be tuned as O(n^{1/2}), and we demonstrate the consistency properties of the Monte Carlo MLE as an estimator of the true parameter value.
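The exact-simulation machinery for diffusions is beyond a short sketch, but the core device of an unbiased, a.s. continuous Monte Carlo likelihood, obtained by fixing the underlying random draws across parameter values (common random numbers) and then maximizing, can be illustrated on a toy latent-variable model. The model and all names below are illustrative assumptions, not the paper's diffusion setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: latent scale u ~ Exp(1); observation y | u ~ N(theta, 1 + u).
# The likelihood L(theta) = E_u[ phi(y; theta, 1 + u) ] is an expectation,
# estimated unbiasedly by Monte Carlo.  Reusing the SAME draws of u for
# every theta makes the estimator continuous in theta, so it can be
# maximized like an ordinary function of theta.
y = 1.7                                # a single illustrative observation
u = rng.exponential(1.0, size=5000)    # fixed latent draws, shared across theta

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mc_likelihood(theta):
    # Each term has expectation L(theta) (unbiasedness); holding u fixed
    # makes the average a smooth function of theta.
    return normal_pdf(y, theta, 1.0 + u).mean()

# Maximize the Monte Carlo likelihood over a grid of theta values.
grid = np.linspace(-3, 6, 901)
theta_hat = grid[np.argmax([mc_likelihood(t) for t in grid])]
print(theta_hat)   # the true MLE in this toy model is theta = y
```

Here the Monte Carlo maximizer recovers the true MLE exactly because every term in the average peaks at theta = y; in less symmetric models the paper's O(n^{1/2}) tuning of the Monte Carlo effort governs the error.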
Annals of Statistics | 2008
Omiros Papaspiliopoulos; Gareth O. Roberts
We characterize the convergence of the Gibbs sampler which samples from the joint posterior distribution of parameters and missing data in hierarchical linear models with arbitrary symmetric error distributions. We show that the convergence can be uniform, geometric or subgeometric depending on the relative tail behavior of the error distributions, and on the parametrization chosen. Our theory is applied to characterize the convergence of the Gibbs sampler on latent Gaussian process models. We indicate how the theoretical framework we introduce will be useful in analyzing more complex models.
arXiv: Statistics Theory | 2014
Sergios Agapiou; Johnathan M. Bardsley; Omiros Papaspiliopoulos; Andrew M. Stuart
Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in ℝ^N, with an understanding that refining the discretization, that is, increasing N, will often be desirable. In the context of Bayesian inversion this situation suggests the importance of two issues: (i) defining hyperparameters in such a way that they are interpretable in the continuum limit N → ∞ and so that their values may be compared between different discretization levels; and (ii) understanding the efficiency of algorithms for probing the posterior distribution as a function of large N. Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise within a hierarchical modeling framework based on a Gaussian prior for the unknown field and an inverse-gamma prior for a hyperparameter, namely the amplitude of the prior variance. The structure of the model is such that the Gibbs sampler can be easily implemented for probing the posterior distribution. Subscribing to the dogma that one should think infinite-dimensionally before implementing in finite dimensions, we present function space intuition and provide rigorous theory showing that as N increases, the component of the Gibbs sampler for sampling the amplitude of the prior variance becomes increasingly slower. We discuss a reparametrization of the prior variance that is robust with respect to the increase in dimension; we give numerical experiments which exhibit that our reparametrization prevents the slowing down. Our intuition on the behavior of the prior hyperparameter, with and without reparametrization, is sufficiently general to include a broad class of nonlinear inverse problems as well as other families of hyperpriors.
Random Structures and Algorithms | 2011
Krzysztof Łatuszyński; Ioannis Kosmidis; Omiros Papaspiliopoulos; Gareth O. Roberts
Let s∈(0,1) be uniquely determined, but suppose only its approximations can be obtained with a finite computational effort. Assume one aims to simulate an event of probability s. Such settings are often encountered in statistical simulations. We consider two specific examples. First, the exact simulation of non-linear diffusions ([3]). Second, the celebrated Bernoulli factory problem ([10, 13]) of generating an f(p)-coin given a sequence X1, X2, … of independent tosses of a p-coin (with known f and unknown p). We describe a general framework and provide algorithms into which problems of this kind can be fitted and solved. The algorithms are straightforward to implement and thus allow for effective simulation of desired events of probability s. Our methodology links the simulation problem to the existence and construction of unbiased estimators.
Statistical Science | 2017
Sergios Agapiou; Omiros Papaspiliopoulos; Daniel Sanz-Alonso; Andrew M. Stuart
The basic idea of importance sampling is to use independent samples from a proposal measure in order to approximate expectations with respect to a target measure. It is key to understand how many samples are required in order to guarantee accurate approximations. Intuitively, some notion of distance between the target and the proposal should determine the computational cost of the method. A major challenge is to quantify this distance in terms of parameters or statistics that are pertinent for the practitioner. The subject has attracted substantial interest from within a variety of communities. The objective of this paper is to overview and unify the resulting literature by creating an overarching framework. A general theory is presented, with a focus on the use of importance sampling in Bayesian inverse problems and filtering.
Journal of Computational and Graphical Statistics | 2013
Omiros Papaspiliopoulos; Gareth O. Roberts; Osnat Stramer
Advances in Applied Probability | 2016
Omiros Papaspiliopoulos; Gareth O. Roberts; Kasia B. Taylor
Bernoulli | 2014
Omiros Papaspiliopoulos; Matteo Ruggiero
Electronic Journal of Statistics | 2016
Omiros Papaspiliopoulos; Matteo Ruggiero; Dario Spanò
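One practitioner-facing statistic of the kind the importance-sampling survey above asks for is the effective sample size of the normalized weights, which collapses as the proposal moves away from the target. A minimal sketch; the Gaussian target/proposal pair and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: N(0, 1).  Proposal: N(mu, 1).  Self-normalized importance
# sampling; the effective sample size (ESS) summarizes how degenerate
# the weights become as the proposal drifts from the target.
def log_normal_pdf(x, mean):
    # log-density up to a constant shared by target and proposal
    return -0.5 * (x - mean) ** 2

def ess(mu, n=10_000):
    x = rng.normal(mu, 1.0, size=n)
    logw = log_normal_pdf(x, 0.0) - log_normal_pdf(x, mu)
    w = np.exp(logw - logw.max())   # stabilize before normalizing
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)     # ESS lies in [1, n]

for mu in (0.0, 1.0, 3.0):
    print(mu, ess(mu))
```

When the proposal equals the target (mu = 0) the weights are constant and ESS equals n; as mu grows, ESS decays roughly like n·exp(−mu²) in this Gaussian pair, a concrete instance of a target-proposal distance dictating computational cost.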