Publications


Featured research published by Yevgeny Seldin.


IEEE Transactions on Information Theory | 2012

PAC-Bayesian Inequalities for Martingales

Yevgeny Seldin; François Laviolette; Nicolò Cesa-Bianchi; John Shawe-Taylor; Peter Auer

We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian (probably approximately correct) analysis in learning theory from the i.i.d. setting to martingales, opening the way for its application to importance-weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in probability theory and statistics where martingales are encountered. We also present a comparison inequality that bounds the expectation of a convex function of a martingale difference sequence shifted to the [0, 1] interval by the expectation of the same function of independent Bernoulli random variables. This inequality is applied to derive a tighter analog of the Hoeffding-Azuma inequality.
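For reference, the classical Hoeffding-Azuma inequality whose analog the paper tightens has the following standard textbook form (stated here for context, not quoted from the paper):

```latex
% Hoeffding-Azuma inequality (standard form). Let M_0, M_1, ..., M_n be a
% martingale with bounded differences |M_i - M_{i-1}| <= c_i. Then for t > 0:
\[
  \Pr\bigl( M_n - M_0 \ge t \bigr)
  \;\le\;
  \exp\!\left( -\frac{t^2}{2\sum_{i=1}^{n} c_i^2} \right).
\]
```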


Bioinformatics | 2001

Markovian domain fingerprinting: statistical segmentation of protein sequences

Gill Bejerano; Yevgeny Seldin; Hanah Margalit; Naftali Tishby

MOTIVATION: Characterization of a protein family by its distinct sequence domains is crucial for functional annotation and correct classification of newly discovered proteins. Conventional Multiple Sequence Alignment (MSA) based methods encounter difficulties when faced with heterogeneous groups of proteins. However, even many families of proteins that do share a common domain contain instances of several other domains, without any common underlying linear ordering. Ignoring this modularity may lead to poor or even false classification results. An automated method that can decompose a group of proteins into the sequence domains they contain is therefore highly desirable.

RESULTS: We apply a novel method to the problem of protein domain detection. The method takes as input an unaligned group of protein sequences. It segments them and clusters the segments into groups sharing the same underlying statistics. A Variable Memory Markov (VMM) model is built using a Prediction Suffix Tree (PST) data structure for each group of segments. Refinement is achieved by letting the PSTs compete over the segments, and a deterministic annealing framework infers the number of underlying PST models while avoiding many inferior solutions. We show that regions of similar statistics correlate well with protein sequence domains, by matching a unique signature to each domain. This is done in a fully automated manner and does not require or attempt an MSA. Several representative cases are analyzed. We identify a protein fusion event, refine an HMM superfamily classification into the underlying families the HMM cannot separate, and detect all 12 instances of a short domain in a group of 396 sequences.

CONTACT: [email protected]; [email protected]
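As a rough illustration of the variable memory Markov idea mentioned above, the sketch below stores context counts up to a fixed maximum depth and predicts from the longest context seen in training. It is a minimal dict-based stand-in for a Prediction Suffix Tree, with hypothetical parameters, and is not the paper's implementation:

```python
from collections import defaultdict

class SimpleVMM:
    """Minimal variable memory Markov model: a dict-based stand-in for a
    Prediction Suffix Tree. Illustrative only; not the paper's code."""

    def __init__(self, max_depth=3, alphabet="ACDEFGHIKLMNPQRSTVWY"):
        self.max_depth = max_depth
        self.alphabet = alphabet
        # counts[context][symbol] = times `symbol` followed `context` in training
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for i, symbol in enumerate(sequence):
            for d in range(self.max_depth + 1):
                if i - d < 0:
                    break
                self.counts[sequence[i - d:i]][symbol] += 1

    def prob(self, context, symbol):
        """P(symbol | longest trained suffix of `context`),
        with add-one smoothing over the alphabet."""
        for d in range(min(self.max_depth, len(context)), -1, -1):
            suffix = context[len(context) - d:]
            if suffix in self.counts:
                total = sum(self.counts[suffix].values())
                return (self.counts[suffix].get(symbol, 0) + 1) / (total + len(self.alphabet))
        return 1.0 / len(self.alphabet)

model = SimpleVMM(max_depth=2)
model.train("MKVLAAGKVLAA")           # hypothetical toy protein fragment
print(model.prob("LA", "A"))          # P('A' | context "LA")
```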


International Conference on Machine Learning | 2008

Multi-classification by categorical features via clustering

Yevgeny Seldin; Naftali Tishby

We derive a generalization bound for multi-classification schemes based on grid clustering in categorical parameter product spaces. Grid clustering partitions the parameter space in the form of a Cartesian product of partitions for each of the parameters. The derived bound provides a means to evaluate clustering solutions in terms of the generalization power of a classifier built on them. For classification based on a single feature, the bound serves to find a globally optimal classification rule. Comparison of the generalization power of individual features can then be used for feature ranking. Our experiments show that in this role the bound is much more precise than mutual information or normalized correlation indices.
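The abstract compares the bound against mutual information as a feature-ranking criterion. Below is a minimal sketch of that mutual-information baseline on hypothetical toy data; the generalization bound itself is not reproduced here:

```python
import numpy as np
from collections import Counter

def mutual_information(feature, labels):
    """Empirical mutual information I(X; Y) in nats between a categorical
    feature and class labels; the baseline ranking criterion the abstract
    compares against."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    px = Counter(feature)
    py = Counter(labels)
    mi = 0.0
    for (x, y), nxy in joint.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts nxy, px[x], py[y]
        mi += (nxy / n) * np.log(nxy * n / (px[x] * py[y]))
    return mi

# Hypothetical toy data: rank two categorical features by MI with the label.
labels = ["spam", "ham", "spam", "ham", "spam", "ham"]
feat_a = ["red", "blue", "red", "blue", "red", "blue"]  # perfectly informative
feat_b = ["x", "x", "y", "y", "x", "y"]                 # weakly informative
print(mutual_information(feat_a, labels), mutual_information(feat_b, labels))
```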


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2016

An Improved Multileaving Algorithm for Online Ranker Evaluation

Brian Brost; Ingemar J. Cox; Yevgeny Seldin; Christina Lioma

Online ranker evaluation is a key challenge in information retrieval. An important task in the online evaluation of rankers is using implicit user feedback to infer preferences between rankers. Interleaving methods have been found to be efficient and sensitive, i.e., they can quickly detect even small differences in quality. It has recently been shown that multileaving methods exhibit similar sensitivity but can be more efficient than interleaving methods. This paper presents empirical results demonstrating that existing multileaving methods either do not scale well with the number of rankers or, more problematically, can produce results that differ substantially from those of evaluation measures such as NDCG. The latter problem is caused by the fact that they do not correctly account for the similarities that can occur between the rankers being multileaved. We propose a new multileaving method that handles this problem and demonstrate that it substantially outperforms existing methods, in some cases reducing errors by as much as 50%.
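For intuition about multileaving in general, here is a sketch of a generic team-draft style multileave (illustrative only, not the improved method proposed in the paper):

```python
import random

def team_draft_multileave(rankings, k):
    """Team-draft style multileaving: rankers take turns, in random order
    each round, contributing their highest-ranked document not yet in the
    combined list. Returns the combined list and the ranker credited with
    each slot; clicks on a slot would then count as wins for that ranker."""
    combined, credit = [], []
    while len(combined) < k:
        progressed = False
        order = list(range(len(rankings)))
        random.shuffle(order)            # random turn order per round
        for r in order:
            if len(combined) >= k:
                break
            for doc in rankings[r]:      # ranker r's best not-yet-used document
                if doc not in combined:
                    combined.append(doc)
                    credit.append(r)
                    progressed = True
                    break
        if not progressed:
            break                        # all rankers exhausted
    return combined, credit

# Hypothetical result lists from three rankers:
rankers = [["d1", "d2", "d3"], ["d2", "d4", "d1"], ["d3", "d1", "d5"]]
print(team_draft_multileave(rankers, k=4))
```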


Conference on Information and Knowledge Management | 2016

Multi-Dueling Bandits and Their Application to Online Ranker Evaluation

Brian Brost; Yevgeny Seldin; Ingemar J. Cox; Christina Lioma

Online ranker evaluation focuses on the challenge of efficiently determining, from implicit user feedback, which ranker out of a finite set of rankers is the best. It can be modeled by dueling bandits, a mathematical model for online learning under limited feedback from pairwise comparisons. Comparison of pairs of rankers is performed by interleaving their result sets and examining which documents users click on. The dueling bandits model addresses the key issue of which pair of rankers to compare at each iteration. Methods for simultaneously comparing more than two rankers have recently been developed. However, the question of which rankers to compare at each iteration was left open. We address this question by proposing a generalization of the dueling bandits model that uses simultaneous comparisons of an unrestricted number of rankers. We evaluate our algorithm on standard large-scale online ranker evaluation datasets. Our experiments show that the algorithm yields orders-of-magnitude gains in performance compared to state-of-the-art dueling bandit algorithms.
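A schematic simulation of the multi-dueling setting, with a naive placeholder subset-selection rule rather than the paper's algorithm: in each round a subset of rankers is compared at once and pairwise win counts are accumulated:

```python
import random
import numpy as np

n_rankers = 5
true_quality = np.linspace(0.2, 0.8, n_rankers)  # hypothetical ground truth
wins = np.zeros((n_rankers, n_rankers))

for t in range(5000):
    # Placeholder subset-selection rule: a uniformly random subset of size 3.
    subset = random.sample(range(n_rankers), 3)
    # Stand-in for a multileaved comparison: click counts drawn so that
    # better rankers tend to collect more clicks.
    clicks = {r: np.random.binomial(10, true_quality[r]) for r in subset}
    for a in subset:
        for b in subset:
            if a != b and clicks[a] > clicks[b]:
                wins[a, b] += 1

# Empirical pairwise preference matrix; pairs with no data default to 0.5.
totals = wins + wins.T
pref = np.divide(wins, totals, out=np.full_like(wins, 0.5), where=totals > 0)
print("estimated best ranker:", int(np.argmax(pref.sum(axis=1))))  # Borda-style
```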


Science & Engineering Faculty | 2013

On the Relations and Differences Between Popper Dimension, Exclusion Dimension and VC-Dimension

Yevgeny Seldin; Bernhard Schölkopf

A high-level relation between Karl Popper's ideas on "falsifiability of scientific theories" and the notion of "overfitting" in statistical learning theory can be easily traced. However, it was pointed out that at the level of technical details the two concepts are significantly different. One possible explanation that we suggest is that the process of falsification is an active process, whereas statistical learning theory is mainly concerned with supervised learning, which is a passive process of learning from examples arriving from a stationary distribution. We show that concepts that are closer (although still distant) to Karl Popper's definitions of falsifiability can be found in the domain of learning with membership queries, and derive relations between Popper's dimension, exclusion dimension, and the VC-dimension.
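For reference, the standard definition of the VC-dimension that the chapter relates to Popper's dimension and the exclusion dimension (textbook formulation, not quoted from the chapter):

```latex
% VC-dimension (standard definition for binary hypothesis classes).
% A class H over a domain X shatters a set S \subseteq X if every
% labeling of S is realized by some h in H.
\[
  \mathrm{VCdim}(\mathcal{H})
  \;=\;
  \max \bigl\{\, |S| \;:\; S \subseteq \mathcal{X},\;
  \{\, (h(x))_{x \in S} : h \in \mathcal{H} \,\} = \{0,1\}^{|S|} \,\bigr\}.
\]
```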


Bayesian Inference and Maximum Entropy Methods in Science and Engineering: Proceedings of the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering | 2011

Reinforcement Learning with Bounded Information Loss

Jan Peters; Katharina Mülling; Yevgeny Seldin; Yasemin Altun

Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information, and the approach has hence been marred by premature convergence and implausible solutions. As first suggested in the context of covariant or natural policy gradients, many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest two reinforcement learning methods, a model-based and a model-free algorithm, that bound the loss in relative entropy while maximizing their return. The resulting methods differ significantly from previous policy gradient approaches and yield an exact update step. They work well on typical reinforcement learning benchmark problems as well as on novel evaluations in robotics. We also show a Bayesian bound motivation of this new approach [8].
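The bounded-information-loss idea can be written schematically as a relative-entropy-constrained optimization of roughly the following form (a simplified formulation consistent with the abstract, not copied from the paper):

```latex
% Schematic relative-entropy-constrained policy update: maximize expected
% return while staying close to the old state-action distribution q.
\[
  \max_{p} \; \mathbb{E}_{(s,a) \sim p}\,[\, r(s,a) \,]
  \quad \text{s.t.} \quad
  D_{\mathrm{KL}}\!\bigl( p \,\|\, q \bigr) \;\le\; \epsilon ,
\]
% whose solution reweights the old distribution exponentially,
\[
  p(s,a) \;\propto\; q(s,a)\, \exp\!\bigl( r(s,a) / \eta \bigr),
\]
% with \eta the Lagrange multiplier of the KL constraint.
```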


Journal of Machine Learning Research | 2010

PAC-Bayesian Analysis of Co-clustering and Beyond

Yevgeny Seldin; Naftali Tishby


Neural Information Processing Systems | 2011

PAC-Bayesian Analysis of Contextual Bandits

Yevgeny Seldin; Peter Auer; John Shawe-Taylor; Ronald Ortner; François Laviolette


Neural Information Processing Systems | 2006

Information Bottleneck for Non Co-Occurrence Data

Yevgeny Seldin; Noam Slonim; Naftali Tishby

Collaboration


Dive into Yevgeny Seldin's collaborations.

Top Co-Authors

Naftali Tishby

Hebrew University of Jerusalem


Koby Crammer

Technion – Israel Institute of Technology


Yasin Abbasi-Yadkori

Queensland University of Technology
