Joris Bierkens
Radboud University Nijmegen
Publications
Featured research published by Joris Bierkens.
Statistics and Computing | 2016
Joris Bierkens
The classical Metropolis-Hastings (MH) algorithm can be extended to generate non-reversible Markov chains by modifying the acceptance probability using the notion of a vorticity matrix. Results from the literature on asymptotic variance, large deviations theory and mixing time are reviewed, and in the case of a large deviations result adapted, to explain how non-reversible Markov chains have favorable properties in these respects. We provide an application of non-reversible Metropolis-Hastings (NRMH) in a continuous setting by developing the necessary theory and applying it, as first examples, to Gaussian distributions in three and nine dimensions. The empirical autocorrelation and estimated asymptotic variance for NRMH applied to these examples show significant improvement compared to MH with identical step size.
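As a concrete illustration of the mechanism in this abstract, the following is a minimal sketch of non-reversible Metropolis-Hastings on a three-state cycle with a uniform target, where a skew-symmetric vorticity matrix with zero row sums biases accepted moves into a net circulation. The example, the choice of vorticity matrix and all names are ours for illustration, not taken from the paper:

```python
import random

def nrmh_cycle(n_steps, eps=0.1, seed=0):
    """Non-reversible Metropolis-Hastings on the 3-cycle {0, 1, 2}.

    The target pi is uniform and the proposal picks either neighbour
    with probability 1/2. A skew-symmetric vorticity matrix Gamma with
    Gamma(x, x+1) = eps and Gamma(x, x-1) = -eps (indices mod 3), whose
    rows sum to zero, enters the acceptance probability
        A(x, y) = min(1, (Gamma(x, y) + pi(y) Q(y, x)) / (pi(x) Q(x, y)))
    and biases accepted moves clockwise while keeping pi invariant.
    Requires 0 <= eps <= 1/6 so that all probabilities stay valid.
    """
    assert 0.0 <= eps <= 1.0 / 6.0
    rng = random.Random(seed)
    pi_q = (1.0 / 3.0) * 0.5          # pi(x) * Q(x, y), constant here
    x = 0
    visits = [0, 0, 0]
    forward_moves = 0
    for _ in range(n_steps):
        step = 1 if rng.random() < 0.5 else -1
        y = (x + step) % 3
        gamma = eps if step == 1 else -eps
        accept = min(1.0, (gamma + pi_q) / pi_q)
        if rng.random() < accept:
            if step == 1:
                forward_moves += 1
            x = y
        visits[x] += 1
    return visits, forward_moves
```

With eps = 0 the scheme reduces to standard MH; any 0 < eps <= 1/6 makes forward proposals always accepted and backward proposals accepted with probability 1 - 6*eps, so the chain circulates yet still targets the uniform distribution.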
Systems & Control Letters | 2014
Joris Bierkens; Hilbert J. Kappen
We consider the minimization over probability measures of the expected value of a random variable, regularized by relative entropy with respect to a given probability distribution. In the general setting we provide a complete characterization of the situations in which a finite optimal value exists and the situations in which a minimizing probability distribution exists. Specializing to the case where the underlying probability distribution is Wiener measure, we characterize finite relative entropy changes of measure in terms of square integrability of the corresponding change of drift. For the optimal change of measure for the relative entropy weighted optimization, an expression involving the Malliavin derivative of the cost random variable is derived. The theory is illustrated by its application to several examples, including the case where the cost variable is the maximum of a standard Brownian motion over a finite time horizon. For this example we obtain an exact optimal drift, as well as an approximation of the optimal drift through a Monte Carlo algorithm.
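The characterization above rests on the Gibbs variational formula: the minimum of E_mu[V] + KL(mu || p) over probability measures mu equals -log E_p[exp(-V)], attained by the exponentially tilted measure d(mu*)/dp proportional to exp(-V). A small Monte Carlo sketch (our own illustrative code with hypothetical names, not from the paper) estimates this optimal value by sampling from p:

```python
import math
import random

def entropy_regularized_value(V, sample_p, n=50000, seed=0):
    """Monte Carlo estimate of min_mu E_mu[V] + KL(mu || p).

    By the Gibbs variational formula this minimum equals
    -log E_p[exp(-V)], so a plain average of exp(-V(X)) over draws
    X ~ p suffices. V is a callable cost; sample_p draws one sample
    from p given a random.Random instance.
    """
    rng = random.Random(seed)
    xs = [sample_p(rng) for _ in range(n)]
    mean_exp = sum(math.exp(-V(x)) for x in xs) / n
    return -math.log(mean_exp)
```

For instance, with p the standard normal and V(x) = x^2 / 2 the value is exactly (1/2) log 2, since E_p[exp(-x^2/2)] = 1/sqrt(2).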
Journal of Physics A | 2014
Vladimir Chernyak; Michael Chertkov; Joris Bierkens; Hilbert J. Kappen
In Stochastic Optimal Control (SOC) one minimizes the average cost-to-go, which consists of the cost-of-control (amount of effort), the cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We derive the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest a hydrodynamic interpretation and discuss examples, e.g. ergodic control of a particle within a circle, illustrating non-equilibrium space-time complexity.
Statistics & Probability Letters | 2018
Joris Bierkens; Alexandre Bouchard-Côté; Arnaud Doucet; Andrew Duncan; Paul Fearnhead; Thibaut Lienart; Gareth O. Roberts; Sebastian J. Vollmer
Piecewise Deterministic Monte Carlo algorithms enable simulation from a posterior distribution, whilst only needing to access a sub-sample of data at each iteration. We show how they can be implemented in settings where the parameters live on a restricted domain.
Statistical Science | 2018
Paul Fearnhead; Joris Bierkens; Murray Pollock; Gareth O. Roberts
Recently, there have been conceptually new developments in Monte Carlo methods through the introduction of new MCMC and sequential Monte Carlo (SMC) algorithms which are based on continuous-time, rather than discrete-time, Markov processes. This has led to some fundamentally new Monte Carlo algorithms which can be used to sample from, say, a posterior distribution. Interestingly, continuous-time algorithms seem particularly well suited to Bayesian analysis in big-data settings as they need only access a small sub-set of data points at each iteration, and yet are still guaranteed to target the true posterior distribution. Whilst continuous-time MCMC and SMC methods have been developed independently, we show here that they are related by the fact that both involve simulating a piecewise deterministic Markov process. Furthermore, we show that the methods developed to date are just specific cases of a potentially much wider class of continuous-time Monte Carlo algorithms. We give an informal introduction to piecewise deterministic Markov processes, covering the aspects relevant to these new Monte Carlo algorithms, with a view to making the development of new continuous-time Monte Carlo methods more accessible. We focus on how and why sub-sampling ideas can be used with these algorithms, and aim to give insight into how these new algorithms can be implemented, as well as some of the issues that affect their efficiency.
Advances in Applied Probability | 2017
Joris Bierkens; A. B. Duncan
Markov chain Monte Carlo (MCMC) methods provide an essential tool in statistics for sampling from complex probability distributions. While the standard approach to MCMC involves constructing discrete-time reversible Markov chains whose transition kernel is obtained via the Metropolis–Hastings algorithm, there has been recent interest in alternative schemes based on piecewise deterministic Markov processes (PDMPs). One such approach is based on the zig-zag process, introduced in Bierkens and Roberts (2016), which has been shown to provide a highly scalable sampling scheme in the big data regime; see Bierkens et al. (2016). In this paper we study the performance of the zig-zag sampler, focusing on the one-dimensional case. In particular, we identify conditions under which a central limit theorem holds and characterise the asymptotic variance. Moreover, we study the influence of the switching rate on the diffusivity of the zig-zag process by identifying a diffusion limit as the switching rate tends to ∞. Based on our results we compare the performance of the zig-zag sampler to existing Monte Carlo methods, both analytically and through simulations.
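In the one-dimensional setting studied here, the zig-zag process for a standard Gaussian target can be simulated exactly, since with switching rate lambda(x, theta) = max(0, theta * x) the integrated rate along the linear trajectory inverts in closed form. The following sketch (our own illustrative code, not from the paper) records the skeleton of switching events:

```python
import math
import random

def zigzag_gaussian(T, x=0.0, theta=1.0, seed=0):
    """Exactly simulate the 1D zig-zag process targeting N(0, 1).

    Between switches the particle moves linearly, x(t) = x + theta * t,
    and with rate lambda = max(0, theta * x) the time tau to the next
    switch solves the integrated-rate equation in closed form:
        tau = sqrt(max(theta * x, 0)^2 + 2 E) - theta * x,  E ~ Exp(1).
    Returns the list of (time, position) pairs at switching events.
    """
    rng = random.Random(seed)
    t = 0.0
    skeleton = [(t, x)]
    while t < T:
        a = theta * x                          # signed rate offset
        E = -math.log(1.0 - rng.random())      # Exp(1) draw
        tau = math.sqrt(max(a, 0.0) ** 2 + 2.0 * E) - a
        t += tau
        x += theta * tau                       # deterministic motion
        theta = -theta                         # flip velocity at event
        skeleton.append((t, x))
    return skeleton
```

Expectations under N(0, 1) are then estimated by time averages along the piecewise linear trajectory, not by averaging the skeleton points themselves.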
Stochastic Analysis and Applications | 2010
Joris Bierkens; Onno van Gaans; Sjoerd Verduyn Lunel
In this article, we study the problem of estimating the pathwise Lyapunov exponent for linear stochastic systems with multiplicative noise and constant coefficients. We present a Lyapunov type matrix inequality that is closely related to this problem, and show under what conditions we can solve the matrix inequality. From this we can deduce an upper bound for the Lyapunov exponent. In the converse direction, it is shown that a necessary condition for the stochastic system to be pathwise asymptotically stable can be formulated in terms of controllability properties of the matrices involved.
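For the scalar special case dX = aX dt + bX dW the pathwise Lyapunov exponent is known in closed form, lambda = a - b^2/2, which makes it a convenient check for simulation. A small sketch (our own illustrative code, not from the article) estimates it by integrating log|X|, which by Ito's formula evolves as an exact Gaussian random walk:

```python
import math
import random

def lyapunov_exponent_scalar(a, b, T=2000.0, dt=0.01, seed=0):
    """Estimate the pathwise Lyapunov exponent of dX = a X dt + b X dW.

    By Ito's formula, log|X| has increments
        (a - b^2 / 2) dt + b dW,
    which are exactly Gaussian for this scalar linear SDE, so the
    estimate log|X(T)| / T carries no discretization error and avoids
    overflow from simulating X itself.
    """
    rng = random.Random(seed)
    n = int(round(T / dt))
    log_x = 0.0
    drift = (a - 0.5 * b * b) * dt
    sd = b * math.sqrt(dt)
    for _ in range(n):
        log_x += drift + sd * rng.gauss(0.0, 1.0)
    return log_x / T
```

The estimate fluctuates around a - b^2/2 with standard deviation b / sqrt(T), so longer horizons T sharpen it.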
Linear Algebra and its Applications | 2014
Joris Bierkens; André C. M. Ran
The positive stability and D-stability of singular M-matrices, perturbed by (non-trivial) nonnegative rank one perturbations, are investigated. In special cases positive stability or D-stability can be established. In full generality this is not the case, as illustrated by a counterexample. However, matrices of the mentioned form are shown to be P-matrices.
arXiv: Computation | 2018
Joris Bierkens; Paul Fearnhead; Gareth O. Roberts
Journal of Evolution Equations | 2009
Joris Bierkens; Onno van Gaans; Sjoerd Verduyn Lunel