Publications


Featured research published by Jüri Lember.


Electronic Journal of Statistics | 2008

Nonparametric Bayesian model selection and averaging

Subhashis Ghosal; Jüri Lember; Aad van der Vaart

We consider nonparametric Bayesian estimation of a probability density p based on a random sample of size n from this density using a hierarchical prior. The prior consists, for instance, of prior weights on the regularity of the unknown density combined with priors that are appropriate given that the density has this regularity. More generally, the hierarchy consists of prior weights on an abstract model index and a prior on a density model for each model index. We present a general theorem on the rate of contraction of the resulting posterior distribution as n → ∞, which gives conditions under which the rate of contraction is the one attached to the model that best approximates the true density of the observations. This shows that, for instance, the posterior distribution can adapt to the smoothness of the underlying density. We also study the posterior distribution of the model index, and find that under the same conditions the posterior distribution gives negligible weight to models that are bigger than the optimal one, and thus selects the optimal model or smaller models that also approximate the true density well. We apply these results to log spline density models, where we show that the prior weights on the regularity index interact with the priors on the models, making the exact rates depend in a complicated way on the priors, but also that the rate is fairly robust to the specification of the prior weights.
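
A schematic of the hierarchical prior and of the two conclusions described above, in assumed notation (the index set A, the weights λ_α and the rates ε_{n,β} are generic placeholders, not the paper's exact conditions):

```latex
% Two-level hierarchical prior: weights on a model index, then a prior within each model
\Pi = \sum_{\alpha \in A} \lambda_\alpha \, \Pi_\alpha ,
\qquad \lambda_\alpha \ge 0 , \quad \sum_{\alpha \in A} \lambda_\alpha = 1 .

% Shape of the conclusions, with \varepsilon_{n,\beta} the contraction rate attached to
% model \beta and \beta_0 the index of the model best approximating the true density p_0:
\Pi\bigl( p : d(p, p_0) \ge M \, \varepsilon_{n,\beta_0} \mid X_1, \dots, X_n \bigr) \to 0 ,
\qquad
\Pi\bigl( \text{models bigger than } \beta_0 \mid X_1, \dots, X_n \bigr) \to 0 .
```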


Annals of Probability | 2009

Standard deviation of the longest common subsequence

Jüri Lember; Heinrich Matzinger

Let L_n be the length of the longest common subsequence of two independent i.i.d. sequences of Bernoulli variables of length n. We prove that the order of the standard deviation of L_n is √n, provided the parameter of the Bernoulli variables is small enough. This validates Waterman's conjecture in this situation [Philos. Trans. R. Soc. Lond. Ser. B 344 (1994) 383-390]. The order conjectured by Chvátal and Sankoff [J. Appl. Probab. 12 (1975) 306-315], however, is different.
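
A minimal Monte Carlo sketch of the quantity studied here: the length L_n of the longest common subsequence of two independent Bernoulli(p) sequences, with its standard deviation estimated by simulation. The parameter values and sample sizes are illustrative, not those of the paper.

```python
import random

def lcs_length(x, y):
    """Classic O(len(x) * len(y)) dynamic program for the LCS length."""
    prev = [0] * (len(y) + 1)
    for xi in x:
        cur = [0] * (len(y) + 1)
        for j, yj in enumerate(y, start=1):
            cur[j] = prev[j - 1] + 1 if xi == yj else max(prev[j], cur[j - 1])
        prev = cur
    return prev[-1]

def sd_of_lcs(n=200, p=0.1, trials=200, seed=0):
    """Estimate the standard deviation of L_n for two Bernoulli(p) sequences."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        x = [1 if rng.random() < p else 0 for _ in range(n)]
        y = [1 if rng.random() < p else 0 for _ in range(n)]
        samples.append(lcs_length(x, y))
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
    return var ** 0.5

if __name__ == "__main__":
    # The paper proves SD(L_n) is of order sqrt(n) for small enough p.
    print(sd_of_lcs())
```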


Acta Applicandae Mathematicae | 2003

On Bayesian adaptation

Subhashis Ghosal; Jüri Lember; Aad van der Vaart

We show that Bayes estimators of an unknown density can adapt to unknown smoothness of the density. We combine prior distributions on each element of a list of log spline density models of different levels of regularity with a prior on the regularity levels to obtain a prior on the union of the models in the list. If the true density of the observations belongs to the model with a given regularity, then the posterior distribution concentrates near this true density at the rate corresponding to this regularity.
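
For concreteness, the rate referred to in the last sentence, written as a sketch in assumed notation under the common assumption that the true density has smoothness α (the exact statement in the paper may carry logarithmic factors depending on the priors):

```latex
% If the true density p_0 belongs to the model of regularity \alpha,
% the posterior concentrates near p_0 at essentially the rate
\varepsilon_{n,\alpha} \asymp n^{-\alpha/(2\alpha + 1)} ,
% the usual rate for estimating an \alpha-smooth density from n observations,
% possibly up to a logarithmic factor.
```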


Bernoulli | 2008

The adjusted Viterbi training for hidden Markov models

Jüri Lember; Alexey Koloydenko

To estimate the emission parameters in hidden Markov models one commonly uses the EM algorithm or a variation of it. Our primary motivation, however, is the Philips speech recognition system, wherein the EM algorithm is replaced by the Viterbi training algorithm. Viterbi training is faster and computationally less involved than EM, but it is also biased and need not even be consistent. We propose an alternative to Viterbi training, called adjusted Viterbi training, that has the same order of computational complexity as Viterbi training but gives more accurate estimators. Elsewhere, we studied adjusted Viterbi training for the special case of mixtures, supporting the theory by simulations. This paper proves that adjusted Viterbi training is also possible for more general hidden Markov models.
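
For orientation, a schematic of plain (unadjusted) Viterbi training for a discrete-emission HMM: decode a Viterbi path with the current parameters, then re-estimate the emission probabilities from the decoded states. The decoder viterbi_path is assumed here (one possible implementation appears after the next abstract); the adjustment studied in the paper, which corrects the bias of this scheme, is not shown.

```python
import numpy as np

def viterbi_training(obs, trans, emis, n_iter=10):
    """Plain Viterbi training (hard EM) for the emission matrix of a discrete HMM.

    obs   : 1-D array of observed symbols (integers 0..M-1)
    trans : K x K transition matrix (kept fixed here for simplicity)
    emis  : K x M initial emission matrix, re-estimated instead of EM's soft update
    """
    K, M = emis.shape
    for _ in range(n_iter):
        # Hard decoding step: replace EM's posterior smoothing by the MAP path.
        path = viterbi_path(obs, trans, emis)      # assumed decoder, see next entry
        # Re-estimation step: empirical emission frequencies along the decoded path.
        counts = np.ones((K, M))                   # add-one smoothing avoids zeros
        for state, symbol in zip(path, obs):
            counts[state, symbol] += 1
        emis = counts / counts.sum(axis=1, keepdims=True)
    return emis
```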


IEEE Transactions on Information Theory | 2010

A Constructive Proof of the Existence of Viterbi Processes

Jüri Lember; Alexey Koloydenko

Since the early days of digital communication, hidden Markov models (HMMs) have also become routinely used in speech recognition, processing of natural languages, images, and in bioinformatics. In an HMM (X_t, Y_t)_{t ≥ 1}, observations X_1, X_2, ... are assumed to be conditionally independent given a Markov process Y_1, Y_2, ..., which itself is not observed; moreover, the conditional distribution of X_t depends solely on Y_t. Central to the theory and applications of HMMs is the Viterbi algorithm, which finds a maximum a posteriori probability (MAP) estimate v(x_{1:T}) = (v_1, v_2, ..., v_T) of Y_{1:T} given observed data x_{1:T}. Maximum a posteriori paths are also known as Viterbi paths, or alignments. Recently, attempts have been made to study the behavior of the Viterbi alignments as T → ∞. Thus, it has been shown that in some cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions and involved proofs which are existential. This work proves the existence of infinite Viterbi alignments in a more constructive manner and for a very general class of HMMs.
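
A standard log-space implementation of the Viterbi dynamic program for the MAP path v(x_{1:T}) described above; this is the textbook algorithm, not the constructive infinite-alignment argument of the paper, and the parameter names are illustrative.

```python
import numpy as np

def viterbi_path(obs, trans, emis, init=None):
    """MAP state path for a discrete-emission HMM, computed in log space.

    obs   : 1-D array of observed symbols (integers 0..M-1), length T
    trans : K x K transition matrix, trans[i, j] = P(Y_{t+1}=j | Y_t=i)
    emis  : K x M emission matrix,  emis[i, s]  = P(X_t=s | Y_t=i)
    init  : length-K initial distribution (uniform if omitted)
    """
    K, T = trans.shape[0], len(obs)
    init = np.full(K, 1.0 / K) if init is None else init
    log_t, log_e = np.log(trans), np.log(emis)

    delta = np.log(init) + log_e[:, obs[0]]        # best log score ending in each state
    back = np.zeros((T, K), dtype=int)             # argmax pointers for backtracking
    for t in range(1, T):
        scores = delta[:, None] + log_t            # scores[i, j]: come from i, move to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_e[:, obs[t]]

    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):                  # backtrack through the pointers
        path[t - 1] = back[t, path[t]]
    return path
```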


Probability in the Engineering and Informational Sciences | 2007

Adjusted Viterbi Training

Jüri Lember; Alexey Koloydenko

Viterbi training (VT) provides a fast but inconsistent estimator of hidden Markov models (HMMs). The inconsistency can be alleviated with a little extra computation by adjusting VT so that, asymptotically, the true parameter values become its fixed point. This relies on infinite Viterbi alignments and the limiting probability distributions associated with them. The first in a series, this article is a proof of concept; it focuses on mixture models, an important but special case of HMMs where the limiting distributions can be calculated exactly. A simulated Gaussian mixture shows that our central algorithm (VA1) can significantly improve the accuracy of VT with little extra cost. Next in the series, we present elsewhere a theory of adjusted VT for general HMMs, where the limiting distributions are more challenging to find. Here, we also present another, more advanced correction to VT and verify its fast convergence and high accuracy; its computational feasibility requires additional investigation.
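
A minimal sketch of plain Viterbi training in the mixture special case discussed here: hard-assign each observation to its most likely component, then re-estimate the component means. It illustrates the baseline that VA1 adjusts, not the adjustment itself; the two-component Gaussian setup and fixed unit variances are illustrative assumptions.

```python
import numpy as np

def viterbi_training_mixture(x, means, weights, n_iter=20):
    """Plain Viterbi training for a Gaussian mixture with known unit variances.

    Each iteration hard-assigns every observation to the component with the highest
    posterior (the mixture analogue of a Viterbi alignment) and then re-estimates
    the component means from the observations assigned to them.
    """
    means = np.asarray(means, dtype=float).copy()
    for _ in range(n_iter):
        # log posterior (up to a constant) of each component for each observation
        log_post = np.log(weights)[None, :] - 0.5 * (x[:, None] - means[None, :]) ** 2
        labels = log_post.argmax(axis=1)                  # hard assignment
        for k in range(len(means)):
            assigned = x[labels == k]
            if assigned.size:                             # keep old mean if empty
                means[k] = assigned.mean()
    return means

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated two-component Gaussian mixture; true means are -2 and 2.
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
    print(viterbi_training_mixture(x, means=[-1.0, 1.0], weights=np.array([0.5, 0.5])))
```

The hard assignment is what makes VT fast and also what biases it: observations near the decision boundary are always attributed entirely to one component, which is the effect the adjusted version compensates for.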


International Symposium on Information Theory | 2015

New bounds for permutation codes in Ulam metric

Faruk Göloğlu; Jüri Lember; Ago-Erik Riet; Vitaly Skachek

New bounds on the cardinality of permutation codes equipped with the Ulam distance are presented. First, an integer-programming upper bound is derived, which improves on the Singleton-type upper bound in the literature for some lengths. Second, several probabilistic lower bounds are developed, which improve on the known lower bounds for large minimum distances. The results of a computer search for permutation codes are also presented.
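
For reference, the Ulam distance between two permutations of length n equals n minus the length of their longest common subsequence, which can be computed via a longest increasing subsequence after relabeling. A small sketch (illustrative only, unrelated to the bounds derived in the paper):

```python
from bisect import bisect_left

def ulam_distance(p, q):
    """Ulam distance between two permutations of {0, ..., n-1}.

    d(p, q) = n - LCS(p, q); relabeling q's symbols by their positions in p turns
    the LCS into a longest increasing subsequence (LIS), computed here with the
    standard O(n log n) patience-sorting method.
    """
    pos = {v: i for i, v in enumerate(p)}     # position of each symbol in p
    seq = [pos[v] for v in q]                 # q rewritten in p's coordinates
    tails = []                                # tails[k] = smallest tail of an LIS of length k+1
    for value in seq:
        i = bisect_left(tails, value)
        if i == len(tails):
            tails.append(value)
        else:
            tails[i] = value
    return len(p) - len(tails)

if __name__ == "__main__":
    # One element (the symbol 2) must be deleted and reinserted, so the distance is 1.
    print(ulam_distance([0, 1, 2, 3, 4], [2, 0, 1, 3, 4]))   # -> 1
```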


Annals of Applied Probability | 2012

The rate of the convergence of the mean score in random sequence comparison

Jüri Lember; Heinrich Matzinger; Felipe Torres

We consider a general class of super-additive scores measuring the similarity of two independent sequences of …


Archive | 2011

Theory of Segmentation

Jüri Lember; Kristi Kuljus; Alexey Koloydenko



Archive | 2013

Proportion of Gaps and Fluctuations of the Optimal Score in Random Sequence Comparison

Jüri Lember; Heinrich Matzinger; Felipe Torres

… i.i.d. letters from a finite alphabet. Our object of interest is the mean score by letter …

Collaboration


Dive into Jüri Lember's collaborations.

Top Co-Authors

Heinrich Matzinger, Georgia Institute of Technology
Kristi Kuljus, Swedish University of Agricultural Sciences
Christian Houdré, Georgia Institute of Technology
Subhashis Ghosal, North Carolina State University