
Publication


Featured research published by Alexandros Beskos.


Annals of Applied Probability | 2005

Exact simulation of diffusions

Alexandros Beskos; Gareth O. Roberts

We describe a new, surprisingly simple algorithm that simulates exact sample paths of a class of stochastic differential equations. It involves rejection sampling and, when applicable, returns the location of the path at a random collection of time instants. The path can then be completed without further reference to the dynamics of the target process.
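
The retrospective idea admits a compact illustration. Below is a minimal sketch, assuming the toy SDE dX_t = sin(X_t) dt + dW_t on [0, T], a standard example where φ = (α² + α′)/2 is bounded; it is not the authors' implementation, and the helper names are made up. A candidate skeleton is drawn from a biased Brownian bridge and accepted by Poisson thinning, so an accepted skeleton is an exact draw at the skeleton times.

```python
# Hedged sketch of exact simulation for dX = sin(X) dt + dW on [0, T].
# Here phi(x) = (sin(x)^2 + cos(x))/2 + 1/2 lies in [0, M], M = 9/8.
import numpy as np

rng = np.random.default_rng(0)
T, x0 = 1.0, 0.0
M = 9.0 / 8.0
phi = lambda x: (np.sin(x) ** 2 + np.cos(x)) / 2.0 + 0.5

def sample_endpoint():
    """Draw X_T from h(y) ∝ exp(-cos y) N(y; x0, T) by rejection
    against the N(x0, T) proposal (note exp(-cos y) <= e)."""
    while True:
        y = rng.normal(x0, np.sqrt(T))
        if rng.uniform() < np.exp(-np.cos(y) - 1.0):  # exp(-cos y)/e
            return y

def exact_skeleton():
    """Return the times and values of one exactly simulated path skeleton."""
    while True:
        y = sample_endpoint()
        kappa = rng.poisson(M * T)                  # Poisson thinning
        t = np.sort(rng.uniform(0.0, T, size=kappa))
        v = rng.uniform(0.0, M, size=kappa)
        # Brownian bridge from (0, x0) to (T, y) at the Poisson times.
        w, t_prev, w_prev = np.empty(kappa), 0.0, x0
        for i, ti in enumerate(t):
            mean = w_prev + (ti - t_prev) / (T - t_prev) * (y - w_prev)
            var = (ti - t_prev) * (T - ti) / (T - t_prev)
            w[i] = rng.normal(mean, np.sqrt(var))
            t_prev, w_prev = ti, w[i]
        # Accept iff no Poisson mark falls below the graph of phi, so
        # P(accept | path) = exp(-integral of phi along the path).
        if np.all(v > phi(w)):
            return np.concatenate(([0.0], t, [T])), np.concatenate(([x0], w, [y]))

times, values = exact_skeleton()
```

Between skeleton points the path can later be filled in with ordinary Brownian bridges, which is what "completed without further reference to the dynamics" means in practice.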


Bernoulli | 2013

Optimal tuning of the hybrid Monte Carlo algorithm

Alexandros Beskos; Natesh S. Pillai; Gareth O. Roberts; Jesus M. Sanz-Serna; Andrew M. Stuart

We investigate the properties of the Hybrid Monte Carlo algorithm (HMC) in high dimensions. HMC constructs a Markov chain reversible with respect to a given target distribution π by using separable Hamiltonian dynamics with potential −log π. The additional momentum variables are chosen at random from the Boltzmann distribution, and the continuous-time Hamiltonian dynamics are then discretised using the leapfrog scheme. The induced bias is removed via a Metropolis–Hastings accept/reject rule. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to ∞, the leapfrog step-size h should be scaled as h = ℓ × d^{−1/4}. Therefore, in high dimensions, HMC requires O(d^{1/4}) steps to traverse the state space. We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places). This is the choice that optimally balances the cost of generating a proposal, which decreases as ℓ increases (because fewer steps are required to reach the desired final integration time), against the cost related to the average number of proposals required to obtain acceptance, which increases as ℓ increases.
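
As a rough numerical check of the d^{−1/4} law, here is a hedged sketch assuming an i.i.d. standard Gaussian target (the simplified scenario above); function and parameter names are illustrative. With h = ℓ d^{−1/4} and a fixed integration time, the empirical acceptance probability should stay O(1) as d grows.

```python
# Illustrative leapfrog HMC on N(0, I_d); names (ell, n_steps) are
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)

def hmc_acceptance(d, ell=1.0, T=1.0, iters=2000):
    """Average MH acceptance probability with step size h = ell * d**(-1/4)."""
    h = ell * d ** (-0.25)
    n_steps = max(1, int(round(T / h)))      # O(d^{1/4}) leapfrog steps
    x = rng.normal(size=d)
    acc = 0.0
    for _ in range(iters):
        p = rng.normal(size=d)               # momenta from the Boltzmann law
        H0 = 0.5 * (x @ x + p @ p)           # Hamiltonian with U = -log pi
        q, r = x.copy(), p.copy()
        r -= 0.5 * h * q                     # leapfrog: half momentum step
        for _ in range(n_steps - 1):
            q += h * r                       # full position step
            r -= h * q                       # full momentum step (grad U = q)
        q += h * r
        r -= 0.5 * h * q
        H1 = 0.5 * (q @ q + r @ r)
        a = min(1.0, np.exp(H0 - H1))        # accept/reject removes the bias
        acc += a
        if rng.uniform() < a:
            x = q
    return acc / iters

for d in (10, 100, 1000):
    print(d, hmc_acceptance(d))              # acceptance stays O(1) in d
```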


Annals of Applied Probability | 2014

On the stability of sequential Monte Carlo methods in high dimensions

Alexandros Beskos; Dan Crisan; Ajay Jasra

We investigate the stability of a Sequential Monte Carlo (SMC) method applied to the problem of sampling from a target distribution on R^d for large d. It is well known [Bengtsson, Bickel and Li, In Probability and Statistics: Essays in Honor of David A. Freedman, D. Nolan and T. Speed, eds. (2008) 316–334 IMS; see also Pushing the Limits of Contemporary Statistics (2008) 318–329 IMS; Mon. Weather Rev. 136 (2008) 4629–4640] that using a single importance sampling step, one produces an approximation for the target that deteriorates as the dimension d increases, unless the number of Monte Carlo samples N increases at an exponential rate in d. We show that this degeneracy can be avoided by introducing a sequence of artificial targets, starting from a "simple" density and moving to the one of interest, using an SMC method to sample from the sequence; see, for example, Chopin [Biometrika 89 (2002) 539–551]; see also [J. R. Stat. Soc. Ser. B Stat. Methodol. 68 (2006) 411–436; Phys. Rev. Lett. 78 (1997) 2690–2693; Stat. Comput. 11 (2001) 125–139]. Using this class of SMC methods with a fixed number of samples, one can produce an approximation for which the effective sample size (ESS) converges to a random variable e_N as d → ∞, with 1 < e_N < N. The convergence is achieved with a computational cost proportional to Nd². If e_N ≪ N, we can raise its value by introducing a number of resampling steps, say m (where m is independent of d). In this case, the ESS converges to a random variable e_{N,m} as d → ∞, and lim_{m→∞} e_{N,m} = N. Also, we show that the Monte Carlo error for estimating a fixed-dimensional marginal expectation is of order 1/√N uniformly in d. The results imply that, in high dimensions, SMC algorithms can efficiently control the variability of the importance sampling weights and estimate fixed-dimensional marginals at a cost which is less than exponential in d, and indicate that resampling leads to a reduction in the Monte Carlo error and an increase in the ESS. All of our analysis is made under the assumption that the target density has independent and identically distributed components.
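
The tempering construction can be sketched briefly. The code below is an illustrative toy, not the paper's algorithm: it bridges from a "simple" Gaussian to an i.i.d. Gaussian target through geometric intermediate densities, resamples when the ESS drops below N/2, and rejuvenates particles with one random-walk Metropolis step; all names and tuning constants are assumptions.

```python
# Toy tempered SMC sampler illustrating the sequence of artificial targets.
import numpy as np

rng = np.random.default_rng(2)
d, N = 50, 500
betas = np.linspace(0.0, 1.0, 40)            # artificial target sequence

log_p0 = lambda x: -0.5 * np.sum(x ** 2, axis=1) / 4.0   # "simple" density
log_pi = lambda x: -0.5 * np.sum(x ** 2, axis=1)         # target density

x = rng.normal(0.0, 2.0, size=(N, d))        # N particles from the start
logw = np.zeros(N)
for b_prev, b in zip(betas[:-1], betas[1:]):
    # incremental importance weight for the new bridging density
    logw += (b - b_prev) * (log_pi(x) - log_p0(x))
    w = np.exp(logw - logw.max()); w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)
    if ess < N / 2:                          # resample to reset the weights
        x = x[rng.choice(N, size=N, p=w)]
        logw = np.zeros(N)
    # one random-walk Metropolis move per particle, targeting
    # pi_b ∝ p0^(1-b) * pi^b
    prop = x + 0.3 * rng.normal(size=(N, d)) / np.sqrt(d)
    log_t = lambda z: (1 - b) * log_p0(z) + b * log_pi(z)
    accept = np.log(rng.uniform(size=N)) < log_t(prop) - log_t(x)
    x[accept] = prop[accept]

w = np.exp(logw - logw.max()); w /= w.sum()
print("final ESS:", 1.0 / np.sum(w ** 2))    # stays well above 1
```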


Annals of Applied Probability | 2009

Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions

Alexandros Beskos; Gareth O. Roberts; Andrew M. Stuart

We investigate local MCMC algorithms, namely the random-walk Metropolis and the Langevin algorithms, and identify the optimal choice of the local step-size as a function of the dimension n of the state space, asymptotically as n → ∞.
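
For the classical product-case baseline (which the paper's nonproduct theory generalizes), a small experiment illustrates the step-size law for the random-walk Metropolis algorithm: with σ = ℓ d^{−1/2} and ℓ ≈ 2.38, the acceptance rate settles near the well-known 0.234 across dimensions. The sketch below is illustrative, and the names are made up.

```python
# Random-walk Metropolis on N(0, I_d) with dimension-scaled step size.
import numpy as np

rng = np.random.default_rng(3)

def rwm_acceptance(d, ell=2.38, iters=5000):
    sigma = ell * d ** (-0.5)               # local step size shrinks with d
    x, acc = rng.normal(size=d), 0
    for _ in range(iters):
        y = x + sigma * rng.normal(size=d)
        if np.log(rng.uniform()) < 0.5 * (x @ x - y @ y):  # log pi ratio
            x, acc = y, acc + 1
    return acc / iters

for d in (10, 100, 1000):
    print(d, rwm_acceptance(d))             # acceptance near 0.234 for all d
```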


Stochastics and Dynamics | 2008

MCMC methods for diffusion bridges

Alexandros Beskos; Gareth O. Roberts; Andrew M. Stuart; Jochen Voss

We present and study a Langevin MCMC approach for sampling nonlinear diffusion bridges. The method is based on recent theory concerning stochastic partial differential equations (SPDEs) reversible with respect to the target bridge, derived by applying the Langevin idea on the bridge path space. In the process, a random-walk Metropolis algorithm and an independence sampler are also obtained. The novel algorithmic idea of the paper is that proposed moves for the MCMC algorithm are determined by discretising the SPDEs in the time direction using an implicit scheme, parametrised by θ ∈ [0,1]. We show that the resulting infinite-dimensional MCMC sampler is well-defined only if θ = 1/2, when the MCMC proposals have the correct quadratic variation. Previous Langevin-based MCMC methods used explicit schemes, corresponding to θ = 0. The significance of the choice θ = 1/2 is inherited by the finite-dimensional approximation of the algorithm used in practice. We present numerical results illustrating the phenomenon and the theory that explains it. Diffusion bridges (with additive noise) are representative of the family of laws defined as a change of measure from Gaussian distributions on arbitrary separable Hilbert spaces; the analysis in this paper can be readily extended to target laws from this family, and an example from signal processing illustrates this fact.
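
The significance of θ = 1/2 is easiest to see in its familiar finite-dimensional guise, the (preconditioned) Crank–Nicolson proposal, which exactly preserves a Gaussian reference measure so that only the change-of-measure potential enters the accept/reject ratio. The sketch below is a minimal illustration under assumed names (beta for the step parameter, Phi for a toy potential); it is not the paper's bridge sampler.

```python
# pCN sampler for a target mu(dx) ∝ exp(-Phi(x)) N(0, C)(dx).
import numpy as np

rng = np.random.default_rng(4)
d, beta = 200, 0.2
# illustrative reference covariance with decaying spectrum, C = S S^T
C_sqrt = np.diag(1.0 / (1.0 + np.arange(d)))

Phi = lambda x: 0.25 * np.sum(x ** 4)       # toy change-of-measure potential

x = C_sqrt @ rng.normal(size=d)
acc = 0
for _ in range(5000):
    xi = C_sqrt @ rng.normal(size=d)
    y = np.sqrt(1.0 - beta ** 2) * x + beta * xi   # preserves N(0, C) exactly
    if np.log(rng.uniform()) < Phi(x) - Phi(y):    # only Phi enters the ratio
        x, acc = y, acc + 1
print("acceptance:", acc / 5000)            # does not degenerate with d
```

An explicit (θ = 0) discretisation would lack this dimension-robustness, which is the finite-dimensional shadow of the well-posedness result stated in the abstract.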


Annals of Statistics | 2009

MONTE-CARLO MAXIMUM LIKELIHOOD ESTIMATION FOR DISCRETELY OBSERVED DIFFUSION PROCESSES

Alexandros Beskos; Omiros Papaspiliopoulos; Gareth O. Roberts

This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models, and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretisation error. We show that, under regularity conditions, the Monte Carlo MLE converges a.s. to the true MLE. For data size n → ∞, we show that the number of Monte Carlo iterations should be tuned as O(n^{1/2}), and we demonstrate the consistency properties of the Monte Carlo MLE as an estimator of the true parameter value.


Archive | 2009

MCMC methods for sampling function space

Alexandros Beskos; Andrew M. Stuart



Advances in Applied Probability | 2014

Error bounds and normalising constants for sequential Monte Carlo samplers in high dimensions

Alexandros Beskos; Dan Crisan; Ajay Jasra; Nick Whiteley

We consider target distributions defined as a change of measure from a product law. Such structures arise, for instance, in inverse problems or Bayesian contexts when a product prior is combined with the likelihood. We state analytical results on the asymptotic behavior of the algorithms under general conditions on the change of measure. Our theory is motivated by applications to conditioned diffusion processes and inverse problems related to the 2D Navier–Stokes equation.


arXiv: Computation | 2014

Sequential Monte Carlo Methods for High-Dimensional Inverse Problems: A Case Study for the Navier-Stokes Equations

Nikolas Kantas; Alexandros Beskos; Ajay Jasra



Stochastic Processes and their Applications | 2017

Multilevel sequential Monte Carlo samplers

Alexandros Beskos; Ajay Jasra; Kody J. H. Law; Raul Tempone; Yan Zhou


Collaboration


Alexandros Beskos's top co-authors and their affiliations.

Ajay Jasra, National University of Singapore
Andrew M. Stuart, California Institute of Technology
Dan Crisan, Imperial College London
Konstantinos Kalogeropoulos, London School of Economics and Political Science
Daniel Paulin, National University of Singapore
Yan Zhou, National University of Singapore