Michael Fink
Hebrew University of Jerusalem
Publication
Featured research published by Michael Fink.
International Conference on Machine Learning | 2007
Yonatan Amit; Michael Fink; Nathan Srebro; Shimon Ullman
This paper suggests a method for multiclass learning with many classes by simultaneously learning shared characteristics common to the classes, and predictors for the classes in terms of these characteristics. We cast this as a convex optimization problem using trace-norm regularization, and study gradient-based optimization both in the linear case and in the kernelized setting.
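To make the optimization concrete, here is a minimal sketch (not the authors' implementation) of trace-norm regularized multiclass learning via proximal gradient descent, assuming a linear model W of shape (n_classes, n_features) and a multinomial logistic loss; the proximal step soft-thresholds the singular values of W, and the resulting low-rank structure is what plays the role of characteristics shared across classes.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def prox_trace_norm(W, t):
    # Proximal operator of t * ||W||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def fit_trace_norm(X, y, n_classes, lam=0.1, lr=0.1, n_iter=200):
    n, d = X.shape
    W = np.zeros((n_classes, d))
    Y = np.eye(n_classes)[y]                      # one-hot labels
    for _ in range(n_iter):
        P = softmax(X @ W.T)                      # class probabilities, shape (n, n_classes)
        grad = (P - Y).T @ X / n                  # gradient of the logistic loss
        W = prox_trace_norm(W - lr * grad, lr * lam)
    return W
```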
International Conference on Machine Learning | 2006
Michael Fink; Shai Shalev-Shwartz; Yoram Singer; Shimon Ullman
We describe a general framework for online multiclass learning based on the notion of hypothesis sharing. In our framework, sets of classes are associated with hypotheses; thus, all classes within a given set share the same hypothesis. This framework includes as special cases commonly used constructions for multiclass categorization, such as allocating a unique hypothesis for each class and allocating a single common hypothesis for all classes. We generalize the multiclass Perceptron to our framework and derive a unifying mistake bound analysis. Our construction naturally extends to settings where the number of classes is not known in advance but is instead revealed as the online learning process proceeds. We demonstrate the merits of our approach by comparing it to previous methods on both synthetic and natural datasets.
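As an illustration, the following is a minimal sketch (not the paper's exact algorithm) of a Perceptron-style learner in which each class is mapped to a weight vector ("hypothesis") and several classes may share one; mapping every class to its own hypothesis recovers the standard multiclass Perceptron, and ties among classes that share a hypothesis are broken arbitrarily here.

```python
import numpy as np

class SharedHypothesisPerceptron:
    def __init__(self, dim, class_to_hypothesis):
        # class_to_hypothesis: dict mapping each class label to an integer
        # hypothesis id in 0..K-1; several classes may share the same id.
        self.h_of = dict(class_to_hypothesis)
        n_hyp = max(self.h_of.values()) + 1
        self.W = np.zeros((n_hyp, dim))           # one weight vector per hypothesis

    def predict(self, x):
        scores = {c: self.W[h] @ x for c, h in self.h_of.items()}
        return max(scores, key=scores.get)        # ties broken arbitrarily

    def update(self, x, y_true):
        y_hat = self.predict(x)
        if y_hat != y_true:                       # mistake-driven Perceptron update
            self.W[self.h_of[y_true]] += x        # promote the correct class's hypothesis
            if self.h_of[y_hat] != self.h_of[y_true]:
                self.W[self.h_of[y_hat]] -= x     # demote the predicted class's hypothesis
        return y_hat
```

A class revealed later in the online process can simply be added to the class-to-hypothesis map, either with a fresh weight vector or attached to an existing shared one.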
Computer Vision and Pattern Recognition | 2004
Kobi Levi; Michael Fink; Yair Weiss
In the last few years, object detection techniques have progressed immensely. Impressive detection results have been achieved for many objects such as faces [11, 14, 9] and cars [11]. The robustness of these systems emerges from a training stage utilizing thousands of positive examples. One approach to enable learning from a small set of training examples is to find an efficient set of features that accurately represent the target object. Unfortunately, automatically selecting such a feature set is a difficult task in itself. In this paper we present a novel feature selection method that is based on the notion of object categories. We assume that when learning to recognize a new object (like an apple) we also know a category it belongs to (fruit). We further assume that features that are useful for learning other objects in the same category (e.g. pear or orange) will also be useful for learning the novel object. This leads to a simple criterion for selecting features and building classifiers. We show that our method gives significant improvement in detection performance in challenging domains.
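The category-transfer idea can be illustrated with a minimal sketch (hypothetical feature scoring, not the paper's detector): features are ranked by how well they separate other objects of the same category from background, and only the top-ranked features are used when training on the few examples of the novel object.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_features(X_related, y_related, k):
    # Rank features by absolute correlation with labels of related,
    # same-category objects (e.g. pears/oranges when the target is an apple).
    y = y_related - y_related.mean()
    scores = np.abs((X_related - X_related.mean(axis=0)).T @ y)
    return np.argsort(scores)[-k:]

def train_novel_detector(X_related, y_related, X_novel, y_novel, k=50):
    idx = select_features(X_related, y_related, k)   # features that helped the category
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_novel[:, idx], y_novel)                # only a few novel-object examples
    return clf, idx
```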
Multimedia Tools and Applications | 2008
Michael Fink; Michele Covell; Shumeet Baluja
This paper describes mass personalization, a framework for combining mass media with a highly personalized Web-based experience. We introduce four applications for mass personalization: personalized content layers, ad hoc social communities, real-time popularity ratings and virtual media library services. Using the ambient audio originating from a television, the four applications are available with no more effort than simple television channel surfing. Our audio identification system does not use dedicated interactive TV hardware and does not compromise the user’s privacy. Feasibility tests of the proposed applications are provided both with controlled conversational interference and with “living-room” evaluations.
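The matching step can be sketched as follows (illustrative only, not the deployed system): each short ambient-audio snippet is reduced to a binary spectrogram fingerprint and compared by Hamming distance against fingerprints of candidate broadcast channels; query and references are assumed to be time-aligned here, whereas a real system would search over time offsets.

```python
import numpy as np

def fingerprint(samples, n_bands=16, frame=2048, hop=1024):
    # Binary fingerprint: signs of band-energy differences across frames and bands.
    frames = np.array([samples[i:i + frame]
                       for i in range(0, len(samples) - frame, hop)])
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    bands = np.array_split(spectra, n_bands, axis=1)
    energy = np.stack([b.sum(axis=1) for b in bands], axis=1)   # (frames, bands)
    diff = np.diff(energy, axis=0)[:, :-1] - np.diff(energy, axis=1)[1:, :]
    return diff > 0

def best_match(query_fp, reference_fps):
    # Smallest mean Hamming distance wins; references must be at least as long as the query.
    dists = {name: np.mean(query_fp != fp[:len(query_fp)])
             for name, fp in reference_fps.items()}
    return min(dists, key=dists.get)
```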
Proceedings of the Eighth Neural Computation and Psychology Workshop | 2004
Michael Fink; Gershon Ben-Shakhar; D. Horn
This study is aimed at detecting factors influencing perceptual feature creation. By teaching several new perceptual categories, we demonstrate the emergence of new internal representations. We focus on contrasting the role of two basic factors that govern feature creation. The first is the feature-set’s discriminative value and the second is the feature-set’s degree of parsimony. Several methods of exploring the structure of internal features are developed using an artificial neural network. These methods were empirically implemented in two experiments, both demonstrating a preference for parsimonious internal representations, even at the expense of feature informative value. Our results suggest that feature parsimony is maintained not only to optimize the perceptual system’s current resource management but also to aid future category learning.
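A minimal sketch of the probing idea (hypothetical setup, not the study's network or stimuli): train a small feed-forward network on new category labels, then inspect the hidden-layer activations, using the fraction of hidden units that are ever active as a crude proxy for the parsimony of the learned feature set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def hidden_activations(mlp, X):
    # Forward pass through the first (hidden) layer of a fitted MLPClassifier.
    z = X @ mlp.coefs_[0] + mlp.intercepts_[0]
    return np.maximum(z, 0.0)                      # ReLU activations

X = np.random.rand(200, 20)                        # toy stimuli
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)       # toy category rule
mlp = MLPClassifier(hidden_layer_sizes=(10,), activation="relu",
                    max_iter=2000, random_state=0).fit(X, y)

H = hidden_activations(mlp, X)
used = np.mean(H.max(axis=0) > 1e-3)               # fraction of hidden units ever active
print("accuracy:", mlp.score(X, y), "fraction of active hidden units:", used)
```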
Neural Information Processing Systems | 2003
Michael Fink; Pietro Perona
IEEE Computer | 2006
Michele Covell; Shumeet Baluja; Michael Fink
Archive | 2004
Michael Fink; Kobi Levi
International Conference on Artificial Intelligence and Statistics | 2007
Michael Fink
Archive | 2005
Michael Fink; Gershon Ben-Shakhar