Publication


Featured research published by Amir Navot.


International Conference on Machine Learning | 2004

Margin based feature selection - theory and algorithms

Ran Gilad-Bachrach; Amir Navot; Naftali Tishby

Feature selection is the task of choosing a small subset of a given set of features that captures the relevant properties of the data. In the context of supervised classification problems, relevance is determined by the labels on the training data. A good choice of features is key to building compact and accurate classifiers. In this paper we introduce a margin-based feature selection criterion and apply it to measure the quality of sets of features. Using margins, we devise novel selection algorithms for multi-class classification problems and provide a theoretical generalization bound. We also study the well-known Relief algorithm and show that it resembles a gradient ascent over our margin criterion. We apply our new algorithm to various datasets and show that our new Simba algorithm, which directly optimizes the margin, outperforms Relief.
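The link between Relief and the margin criterion can be illustrated with a minimal sketch (not the authors' code; the single-pass variant, L1 distances, and the toy data are assumptions for illustration). Relief accumulates, per feature, the distance to the nearest miss minus the distance to the nearest hit, which is a per-sample estimate of the hypothesis margin:

```python
import numpy as np

def relief_weights(X, y):
    """One pass of a Relief-style update: for each sample, add
    |x - nearest_miss| - |x - nearest_hit| per feature, so features
    that separate the classes accumulate positive weight."""
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(axis=1)   # L1 distances to all points
        d[i] = np.inf                      # exclude the point itself
        same = (y == y[i])
        hit = X[np.where(same)[0][np.argmin(d[same])]]
        miss = X[np.where(~same)[0][np.argmin(d[~same])]]
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w
```

Ranking features by `w` and keeping the top ones is the basic Relief scheme; per the abstract, Simba instead performs a gradient ascent that directly optimizes this margin.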


Conference on Learning Theory | 2003

An Information theoretic tradeoff between complexity and accuracy

Ran Gilad-Bachrach; Amir Navot; Naftali Tishby

A fundamental question in learning theory is the quantification of the basic tradeoff between the complexity of a model and its predictive accuracy. One valid way of quantifying this tradeoff, known as the “Information Bottleneck”, is to measure both the complexity of the model and its prediction accuracy using Shannon’s mutual information. In this paper we show that the Information Bottleneck framework answers a well-defined and known coding problem and, at the same time, provides a general relationship between complexity and prediction accuracy, measured by mutual information. We study the nature of this complexity-accuracy tradeoff and discuss some of its theoretical properties. Furthermore, we present relations to classical information-theoretic problems, such as rate-distortion theory, the cost-capacity tradeoff, and source coding with side information.
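The two quantities being traded off can be written down directly. A minimal sketch (the discrete distributions and encoder below are hypothetical, not from the paper): I(X;T) measures complexity, I(T;Y) measures prediction accuracy, and the standard Information Bottleneck Lagrangian combines them as I(X;T) - beta * I(T;Y):

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information in nats from a 2-D joint distribution array."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / (px @ py)[mask])).sum())

def ib_objective(p_xy, p_t_given_x, beta):
    """Information Bottleneck Lagrangian I(X;T) - beta * I(T;Y)
    for an encoder p(t|x) applied to a joint distribution p(x,y)."""
    px = p_xy.sum(axis=1)
    p_xt = p_t_given_x * px[:, None]   # joint p(x,t)
    p_ty = p_t_given_x.T @ p_xy        # joint p(t,y) = sum_x p(t|x) p(x,y)
    return mutual_information(p_xt) - beta * mutual_information(p_ty)
```

For a perfectly correlated p(x,y), the identity encoder pays log 2 nats of complexity and gains log 2 nats of accuracy, so at beta = 1 the objective is exactly zero; larger beta favors keeping the information.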


SLSFS'05: Proceedings of the 2005 International Conference on Subspace, Latent Structure and Feature Selection | 2005

Is feature selection still necessary?

Amir Navot; Ran Gilad-Bachrach; Yiftah Navot; Naftali Tishby

Feature selection is usually motivated by improved computational complexity, economy, and problem understanding, but in many cases it can also improve classification accuracy. In this paper we investigate the relationship between the optimal number of features and the training set size. We present a new and simple analysis of the well-studied two-Gaussian setting. We explicitly find the optimal number of features as a function of the training set size for a few special cases, and show that accuracy declines dramatically when too many features are added. We then show empirically that the Support Vector Machine (SVM), which was designed to work in the presence of a large number of features, produces the same qualitative result for these examples. This suggests that good feature selection is still an important component of accurate classification.
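The qualitative effect is easy to reproduce in a toy sketch (a hypothetical setup, not the paper's analysis): one informative Gaussian feature plus pure-noise features, a nearest-class-mean classifier trained on a small fixed sample, and test accuracy measured as more (useless) features are included:

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy_vs_features(n_train=20, n_test=2000, max_feats=200):
    """Two-Gaussian toy: feature 0 carries the signal (class means at
    -1 and +1), all other features are pure noise. Train a
    nearest-class-mean classifier on n_train samples and measure test
    accuracy using the first d features, for growing d."""
    def sample(n, d):
        y = rng.integers(0, 2, n)
        X = rng.normal(size=(n, d))
        X[:, 0] += 2 * y - 1          # signal lives only in feature 0
        return X, y
    Xtr, ytr = sample(n_train, max_feats)
    Xte, yte = sample(n_test, max_feats)
    accs = {}
    for d in (1, 10, max_feats):
        mu0 = Xtr[ytr == 0, :d].mean(axis=0)
        mu1 = Xtr[ytr == 1, :d].mean(axis=0)
        pred = (np.linalg.norm(Xte[:, :d] - mu1, axis=1)
                < np.linalg.norm(Xte[:, :d] - mu0, axis=1)).astype(int)
        accs[d] = (pred == yte).mean()
    return accs
```

With only the informative feature, accuracy sits near the Bayes rate; with 200 features the 199 noise dimensions swamp the estimated class means and accuracy drops sharply, matching the paper's qualitative claim.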


Archive | 2006

Large Margin Principles for Feature Selection

Ran Gilad-Bachrach; Amir Navot; Naftali Tishby

In this paper we introduce a margin-based feature selection criterion and apply it to measure the quality of sets of features. Using margins, we devise novel selection algorithms for multi-class categorization problems and provide a theoretical generalization bound. We also study the well-known Relief algorithm and show that it resembles a gradient ascent over our margin criterion. We report promising results on various datasets.


Conference on Learning Theory | 2004

Bayes and Tukey Meet at the Center Point

Ran Gilad-Bachrach; Amir Navot; Naftali Tishby

The Bayes classifier achieves the minimal error rate by constructing a weighted majority over all concepts in the concept class. The Bayes Point [1] uses the single concept in the class which has the minimal error. This way, the Bayes Point avoids some of the deficiencies of the Bayes classifier. We prove a bound on the generalization error for Bayes Point Machines when learning linear classifiers, and show that it is at most ~1.71 times the generalization error of the Bayes classifier, independent of the input dimension and the length of training. We show that when learning linear classifiers, the Bayes Point is almost identical to the Tukey Median [2] and Center Point [3]. We extend these definitions beyond linear classifiers and define the Bayes Depth of a classifier. We prove a generalization bound in terms of this new definition. Finally, we provide a new concentration of measure inequality for multivariate random variables and apply it to the Tukey Median.


Archive | 2006

Margin Based Feature Selection and Infogain with Standard Classifiers

Ran Gilad-Bachrach; Amir Navot

The decision to devote a week or two to playing with the feature selection challenge (FSC) turned into a major effort that took up most of our time for a few months. In most cases we used standard algorithms, with obvious modifications for the balanced error measure. Surprisingly enough, the naive methods we used turned out to be among the best submissions to the FSC.


Neural Information Processing Systems | 2002

Margin Analysis of the LVQ Algorithm

Koby Crammer; Ran Gilad-Bachrach; Amir Navot; Naftali Tishby


Neural Information Processing Systems | 2005

Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity

Amir Navot; Lavi Shpigelman; Naftali Tishby; Eilon Vaadia


Neural Information Processing Systems | 2005

Query by Committee Made Real

Ran Gilad-Bachrach; Amir Navot; Naftali Tishby


Journal of Machine Learning Research | 2008

Learning to Select Features using their Properties

Eyal Krupka; Amir Navot; Naftali Tishby

Collaboration


Dive into Amir Navot's collaborations.

Top Co-Authors

Naftali Tishby (Hebrew University of Jerusalem)
Ran Gilad-Bachrach (Hebrew University of Jerusalem)
Eilon Vaadia (Hebrew University of Jerusalem)
Koby Crammer (Technion – Israel Institute of Technology)
Yiftah Navot (Hebrew University of Jerusalem)