
Publication


Featured research published by Jimmy Olsson.


Bernoulli | 2008

Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models

Jimmy Olsson; Olivier Cappé; Randal Douc; Eric Moulines

This paper concerns the use of sequential Monte Carlo methods (SMC) for smoothing in general state space models. A well-known problem when applying the standard SMC technique in the smoothing mode is that the resampling mechanism introduces degeneracy of the approximation in the path space. However, when performing maximum likelihood estimation via the EM algorithm, all functionals involved are of additive form for a large subclass of models. To cope with the problem in this case, a modification of the standard method (based on a technique proposed by Kitagawa and Sato) is suggested. Our algorithm relies on forgetting properties of the filtering dynamics and the quality of the estimates produced is investigated, both theoretically and via simulations.


Annals of Applied Probability | 2011

Sequential Monte Carlo smoothing for general state space hidden Markov models

Randal Douc; Aurélien Garivier; Eric Moulines; Jimmy Olsson

Computing smoothing distributions, that is, the distributions of one or more states conditional on past, present, and future observations, is a recurring problem when operating on general hidden Markov models. The aim of this paper is to provide a foundation for particle-based approximation of such distributions and to analyze, in a common unifying framework, different schemes producing such approximations. In this setting, general convergence results, including exponential deviation inequalities and central limit theorems, are established. In particular, time-uniform bounds on the marginal smoothing error are obtained under appropriate mixing conditions on the transition kernel of the latent chain. In addition, we propose an algorithm approximating the joint smoothing distribution at a cost that grows only linearly with the number of particles.


Annals of Statistics | 2011

Consistency of the maximum likelihood estimator for general hidden Markov models

Randal Douc; Eric Moulines; Jimmy Olsson; Ramon van Handel

Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for V-uniformly ergodic Markov chains.


IEEE Transactions on Signal Processing | 2011

Rao-Blackwellization of Particle Markov Chain Monte Carlo Methods Using Forward Filtering Backward Sampling

Jimmy Olsson; Tobias Rydén

Smoothing in state-space models amounts to computing the conditional distribution of the latent state trajectory, given observations, or expectations of functionals of the state trajectory with respect to this distribution. In recent years there has been an increased interest in Monte Carlo-based methods, often involving particle filters, for approximate smoothing in nonlinear and/or non-Gaussian state-space models. One such method is to approximate filter distributions using a particle filter and then to simulate, using backward kernels, a state trajectory backwards on the set of particles. We show that by simulating multiple realizations of the particle filter and adding a Metropolis-Hastings step, one obtains a Markov chain Monte Carlo scheme whose stationary distribution is the exact smoothing distribution. This procedure expands upon a similar one recently proposed by Andrieu, Doucet, Holenstein, and Whiteley. We also show that simulating multiple trajectories from each realization of the particle filter can be beneficial from a perspective of variance versus computation time, and illustrate this idea using two examples.
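The forward filtering-backward sampling step the paper builds on can be sketched in a few lines. The following is a minimal pure-Python illustration (not the authors' implementation) for an assumed toy linear-Gaussian model X_t = phi*X_{t-1} + sigma*V_t, Y_t = X_t + tau*W_t; all names and parameters are illustrative. It runs a particle filter, stores the weighted clouds, and then draws one trajectory backwards using the particle approximation of the backward kernels.

```python
import math
import random

def ffbs_trajectory(ys, n_particles=100, phi=0.9, sigma=1.0, tau=1.0, seed=1):
    """Forward filtering-backward sampling for a toy linear-Gaussian model.

    Runs a bootstrap particle filter, stores particles and normalized weights
    at every step, then samples a single state trajectory backwards in time
    using the particle approximation of the backward kernels.
    """
    rng = random.Random(seed)
    X, W = [], []  # stored particle clouds and normalized weights per step
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    for y in ys:
        # Propagate through the state dynamics.
        parts = [phi * x + rng.gauss(0.0, sigma) for x in parts]
        # Weight by the local likelihood of the observation.
        logw = [-0.5 * ((y - x) / tau) ** 2 for x in parts]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        s = sum(w)
        w = [wi / s for wi in w]
        X.append(parts[:])
        W.append(w[:])
        # Multinomial resampling before the next step.
        parts = rng.choices(parts, weights=w, k=n_particles)
    # Backward pass: draw the final state from the filter, then move backwards.
    T = len(ys)
    traj = [0.0] * T
    traj[-1] = rng.choices(X[-1], weights=W[-1], k=1)[0]
    for t in range(T - 2, -1, -1):
        # Backward kernel: reweight the time-t filter particles by the
        # transition density to the already-drawn state at time t+1.
        bw = [W[t][i] * math.exp(-0.5 * ((traj[t + 1] - phi * X[t][i]) / sigma) ** 2)
              for i in range(n_particles)]
        traj[t] = rng.choices(X[t], weights=bw, k=1)[0]
    return traj
```

Repeating this backward pass across multiple realizations of the filter, with a Metropolis-Hastings acceptance step between them, is the ingredient the paper adds to turn such draws into an exact MCMC scheme.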


Annals of Applied Probability | 2014

Long-term stability of sequential Monte Carlo methods under verifiable conditions

Randal Douc; Eric Moulines; Jimmy Olsson

This paper discusses particle filtering in general hidden Markov models (HMMs) and presents novel theoretical results on the long-term stability of bootstrap-type particle filters. More specifically, we establish that the asymptotic variance of the Monte Carlo estimates produced by the bootstrap filter is uniformly bounded in time. In contrast to most previous results of this type, which generally presuppose that the state space of the hidden state process is compact (an assumption that is rarely satisfied in practice), our very mild assumptions are satisfied for a large class of HMMs with possibly non-compact state space. In addition, we derive a similar time-uniform bound on the asymptotic L-p error. Importantly, our results hold for misspecified models; that is, we do not assume that the data entering the particle filter originate from the model governing the dynamics of the particles, or even from an HMM at all.
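The bootstrap filter whose stability is analyzed above follows a simple mutate-weight-select loop. Below is a minimal pure-Python sketch for an assumed toy linear-Gaussian model X_t = phi*X_{t-1} + sigma*V_t, Y_t = X_t + tau*W_t (the parameter names are illustrative, not the paper's notation); it returns the sequence of filtering-mean estimates.

```python
import math
import random

def bootstrap_filter(ys, n_particles=200, phi=0.9, sigma=1.0, tau=1.0, seed=0):
    """Bootstrap particle filter for a toy linear-Gaussian state-space model.

    Returns the particle estimate of the filtering mean at each time step.
    """
    rng = random.Random(seed)
    # Initialize from the stationary distribution of the AR(1) state process.
    parts = [rng.gauss(0.0, sigma / math.sqrt(1 - phi ** 2))
             for _ in range(n_particles)]
    means = []
    for y in ys:
        # Mutate: propagate each particle through the state dynamics.
        parts = [phi * x + rng.gauss(0.0, sigma) for x in parts]
        # Weight: Gaussian local likelihood of the observation (log scale).
        logw = [-0.5 * ((y - x) / tau) ** 2 for x in parts]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * xi for wi, xi in zip(w, parts)))
        # Select: multinomial resampling to counteract weight degeneracy.
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means
```

The paper's result concerns the asymptotic variance of exactly such estimates as the time horizon grows, without requiring the state space (here, the real line) to be compact.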


Bernoulli | 2017

Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithm

Jimmy Olsson; Johan Westerborn

This thesis consists of two papers studying online inference in general hidden Markov models using sequential Monte Carlo methods. The first paper presents a novel algorithm, the particle-based, rapid incremental smoother (PaRIS), aimed at efficiently performing online approximation of smoothed expectations of additive state functionals in general hidden Markov models. The algorithm has, under weak assumptions, linear computational complexity and very limited memory requirements. It is also furnished with a number of convergence results, including a central limit theorem. The second paper focuses on the problem of online estimation of parameters in a general hidden Markov model. The proposed method is based on a forward implementation of the classical expectation-maximization algorithm and uses PaRIS to achieve computational efficiency.


IFAC Proceedings Volumes | 2011

An explicit variance reduction expression for the Rao-Blackwellised particle filter

Fredrik Lindsten; Thomas B. Schön; Jimmy Olsson

A standard PF with an increased number of particles, which would also increase the accuracy, could be used instead. In this paper, we have analysed the asymptotic variance of the RBPF and provided an explicit expression for the obtained variance reduction. This expression could be used to make an efficient discrimination of when to apply Rao-Blackwellisation, and when not to.


Annals of Statistics | 2014

Comparison of asymptotic variances of inhomogeneous Markov chains with application to Markov chain Monte Carlo methods

Florian Maire; Randal Douc; Jimmy Olsson

In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels P_i and Q_i, i ∈ {0, 1}, as soon as the kernels of each pair (P_0, P_1) and (Q_0, Q_1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.


arXiv: Computation | 2016

Convergence properties of weighted particle islands with application to the double bootstrap algorithm

Pierre Del Moral; Eric Moulines; Jimmy Olsson; Christelle Vergé

Particle island models [31] provide a means of parallelization of sequential Monte Carlo methods, and in this paper we present novel convergence results for algorithms of this sort. In particular we establish a central limit theorem—as the number of islands and the common size of the islands tend jointly to infinity—of the double bootstrap algorithm with possibly adaptive selection on the island level. For this purpose we introduce a notion of archipelagos of weighted islands and find conditions under which a set of convergence properties are preserved by different operations on such archipelagos. This theory allows arbitrary compositions of these operations to be straightforwardly analyzed, providing a very flexible framework covering the double bootstrap algorithm as a special case. Finally, we establish the long-term numerical stability of the double bootstrap algorithm by bounding its asymptotic variance under weak and easily checked assumptions satisfied typically for models with non-compact state space.
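The island structure described above can be illustrated with a small pure-Python sketch (an illustration only, not the paper's algorithm or notation): each island runs an independent bootstrap-filter step on an assumed toy linear-Gaussian model, islands carry weights built from their per-step likelihood estimates, and islands themselves are resampled adaptively when their effective sample size drops, giving the two levels of resampling behind the name "double bootstrap".

```python
import math
import random

def double_bootstrap(ys, n_islands=8, n_particles=50,
                     phi=0.9, sigma=1.0, tau=1.0, seed=3):
    """Two-level (island) particle scheme on a toy linear-Gaussian model.

    Each island runs an independent bootstrap-filter step; islands are
    weighted by their likelihood estimates and resampled adaptively.
    Returns the island-averaged filtering-mean estimate at the final step.
    """
    rng = random.Random(seed)
    islands = [[rng.gauss(0.0, 1.0) for _ in range(n_particles)]
               for _ in range(n_islands)]
    island_w = [1.0 / n_islands] * n_islands
    for y in ys:
        new_islands, likes = [], []
        for parts in islands:
            # Within-island bootstrap step: mutate, weight, resample.
            parts = [phi * x + rng.gauss(0.0, sigma) for x in parts]
            wv = [math.exp(-0.5 * ((y - x) / tau) ** 2) for x in parts]
            s = sum(wv)
            likes.append(s / n_particles)  # island-level likelihood estimate
            wn = [wi / s for wi in wv]
            new_islands.append(rng.choices(parts, weights=wn, k=n_particles))
        # Island-level weighting and adaptive selection.
        island_w = [iw * l for iw, l in zip(island_w, likes)]
        s = sum(island_w)
        island_w = [iw / s for iw in island_w]
        ess = 1.0 / sum(iw * iw for iw in island_w)
        if ess < n_islands / 2:  # resample islands when the ESS degenerates
            idx = rng.choices(range(n_islands), weights=island_w, k=n_islands)
            islands = [new_islands[i][:] for i in idx]
            island_w = [1.0 / n_islands] * n_islands
        else:
            islands = new_islands
    # Filtering-mean estimate: weighted average over islands of particle means.
    return sum(iw * (sum(p) / n_particles)
               for iw, p in zip(islands and island_w, islands))
```

The paper's archipelago formalism analyzes how convergence properties survive compositions of exactly these island-level operations (weighting, adaptive selection) as both levels grow.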


International Conference on Acoustics, Speech, and Signal Processing | 2014

Efficient particle-based online smoothing in general hidden Markov models

Johan Westerborn; Jimmy Olsson

This paper deals with the problem of estimating expectations of sums of additive functionals under the joint smoothing distribution in general hidden Markov models. Computing such expectations is a key ingredient in any kind of expectation-maximization-based parameter inference in models of this sort. The paper presents a computationally efficient algorithm for online estimation of these expectations in a forward manner. The proposed algorithm has linear computational complexity in the number of particles and does not require old particles and weights to be stored during the computations. The algorithm completely avoids the well-known particle path degeneracy problem of the standard forward smoother, which makes it highly applicable within the framework of online expectation-maximization methods. Simulations show that the proposed algorithm provides the same precision as existing algorithms at a considerably lower computational cost.
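The forward-only recursion behind such online smoothers can be sketched as follows. This pure-Python illustration (an assumption-laden sketch on a toy linear-Gaussian model, not the paper's implementation) makes each particle carry a running statistic for the additive functional h = sum of states, updated at every step by an exact backward-kernel average over the previous cloud; this exact update costs O(N^2) per step, and sampling-based accelerations of it (as in PaRIS) replace the full average by a few backward draws.

```python
import math
import random

def online_additive_smoother(ys, n_particles=100,
                             phi=0.9, sigma=1.0, tau=1.0, seed=2):
    """Forward-only particle estimate of E[sum_t X_t | y_1:T] in a toy
    linear-Gaussian model.

    Each particle carries a running statistic that is updated with the exact
    O(N^2) backward-kernel average; no past particles or weights need to be
    kept beyond the previous time step.
    """
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # time-0 cloud
    stats = [0.0] * n_particles      # running additive statistics
    w = [1.0 / n_particles] * n_particles
    for y in ys:
        prev, prev_stats, prev_w = parts, stats, w
        # Resample ancestors, then propagate through the state dynamics.
        idx = rng.choices(range(n_particles), weights=prev_w, k=n_particles)
        parts = [phi * prev[i] + rng.gauss(0.0, sigma) for i in idx]
        # Update each statistic by the backward-kernel average over the
        # previous cloud (weights proportional to w_j * q(x_j, x_new)).
        stats = []
        for xnew in parts:
            bw = [prev_w[j] * math.exp(-0.5 * ((xnew - phi * prev[j]) / sigma) ** 2)
                  for j in range(n_particles)]
            s = sum(bw)
            stats.append(sum(bw[j] * (prev_stats[j] + xnew)
                             for j in range(n_particles)) / s)
        # Reweight by the local likelihood of the new observation.
        logw = [-0.5 * ((y - x) / tau) ** 2 for x in parts]
        m = max(logw)
        wv = [math.exp(lw - m) for lw in logw]
        s = sum(wv)
        w = [wi / s for wi in wv]
    # Online estimate: weighted average of the carried statistics.
    return sum(wi * ti for wi, ti in zip(w, stats))
```

Only the current cloud, weights, and statistics are kept between steps, which is what makes this style of smoother usable inside online EM.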

Collaboration


Dive into Jimmy Olsson's collaborations.

Top Co-Authors


Randal Douc

Université Paris-Saclay


Johan Westerborn

Royal Institute of Technology


Florian Maire

University College Dublin


Tatjana Pavlenko

Royal Institute of Technology
