Evgeniy Bart
Weizmann Institute of Science
Publications
Featured research published by Evgeniy Bart.
computer vision and pattern recognition | 2005
Evgeniy Bart; Shimon Ullman
We develop an object classification method that can learn a novel class from a single training example. In this method, experience with already learned classes is used to facilitate the learning of novel classes. Our classification scheme employs features that discriminate between class and non-class images. For a novel class, new features are derived by selecting features that proved useful for already learned classification tasks, and adapting these features to the new classification task. This adaptation is performed by replacing the features from already learned classes with similar features taken from the novel class. A single example of a novel class is sufficient to perform feature adaptation and achieve useful classification performance. Experiments demonstrate that the proposed algorithm can learn a novel class from a single training example, using 10 additional familiar classes. The performance is significantly improved compared to using no feature adaptation. The robustness of the proposed feature adaptation concept is demonstrated by similar performance gains across 107 widely varying object categories.
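The adaptation step described above can be sketched as follows. This is a minimal illustration, assuming features are raw grayscale patches compared by normalized cross-correlation; the function and its parameters are hypothetical, not the paper's implementation.

```python
import numpy as np

def adapt_features(familiar_features, novel_image, patch_size):
    """For each feature learned on familiar classes, find the most similar
    patch in the single novel-class example and use it as the adapted
    feature (illustrative normalized-cross-correlation criterion)."""
    h, w = patch_size
    H, W = novel_image.shape
    # enumerate all candidate patches in the single novel example
    patches = [novel_image[i:i + h, j:j + w]
               for i in range(H - h + 1) for j in range(W - w + 1)]
    adapted = []
    for f in familiar_features:
        def ncc(p):
            a, b = f - f.mean(), p - p.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-9
            return float((a * b).sum() / denom)
        # replace the familiar feature with its best match in the novel class
        adapted.append(max(patches, key=ncc))
    return adapted
```

The adapted patches would then serve as class-specific features for the novel class, reusing the discriminative structure learned from the familiar classes.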
computer vision and pattern recognition | 2008
Evgeniy Bart; Ian Porteous; Pietro Perona; Max Welling
As more images and categories become available, organizing them becomes crucial. We present a novel statistical method for organizing a collection of images into a tree-shaped hierarchy. The method employs a non-parametric Bayesian model and is completely unsupervised. Each image is associated with a path through a tree. Similar images share initial segments of their paths and therefore have a smaller distance from each other. Each internal node in the hierarchy represents information that is common to images whose paths pass through that node, thus providing a compact image representation. Our experiments show that a disorganized collection of images will be organized into an intuitive taxonomy. Furthermore, we find that the taxonomy allows good image categorization and, in this respect, is superior to the popular LDA model.
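The role of shared path prefixes as an image distance can be illustrated with a small sketch; the node ids and the edge-count distance below are illustrative simplifications of the paper's nonparametric Bayesian model.

```python
def path_distance(path_a, path_b):
    """Tree distance between two images, each represented by its
    root-to-leaf path of node ids: the longer the shared initial
    segment, the smaller the distance."""
    shared = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        shared += 1
    # edges from each leaf up to the deepest common ancestor
    return (len(path_a) - shared) + (len(path_b) - shared)
```

Images whose paths diverge only near the leaves are near neighbors; images that split at the root are maximally distant.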
european conference on computer vision | 2004
Evgeniy Bart; Evgeny Byvatov; Shimon Ullman
We develop a novel approach to view-invariant recognition and apply it to the task of recognizing face images under widely separated viewing directions. Our main contribution is a novel object representation scheme using ‘extended fragments’ that enables us to achieve a high level of recognition performance and generalization across a wide range of viewing conditions. Extended fragments are equivalence classes of image fragments that represent informative object parts under different viewing conditions. They are extracted automatically from short video sequences during learning. Using this representation, the scheme is unique in its ability to generalize from a single view of a novel object and compensate for a significant change in viewing direction without using 3D information. As a result, novel objects can be recognized from viewing directions from which they were not seen in the past. Experiments demonstrate that the scheme achieves significantly better generalization and recognition performance than previously used methods.
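At recognition time, an extended fragment can be used roughly as follows: the part counts as detected if any view-specific member of the equivalence class matches somewhere in the image. This brute-force sketch assumes patches compared by normalized cross-correlation, with an illustrative detection threshold.

```python
import numpy as np

def detect_extended_fragment(fragment_views, image, threshold=0.9):
    """Return True if ANY member of the equivalence class (one patch per
    viewing condition) matches the image above the threshold."""
    h, w = fragment_views[0].shape
    H, W = image.shape
    best = -1.0
    for f in fragment_views:
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                p = image[i:i + h, j:j + w]
                a, b = f - f.mean(), p - p.mean()
                denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-9
                best = max(best, float((a * b).sum() / denom))
    return best >= threshold
```

Because the class holds one appearance per viewing condition, a part learned from video can be found in a view from which the novel object itself was never seen.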
british machine vision conference | 2005
Evgeniy Bart; Shimon Ullman
We describe an object classification method that can learn from a single training example. In this method, a novel class is characterized by its similarity to a number of previously learned, familiar classes. We demonstrate that this similarity is well-preserved across different class instances. As a result, it generalizes well to new instances of the novel class. A simple comparison of the similarity patterns is therefore sufficient to obtain useful classification performance from a single training example. The similarity between the novel class and the familiar classes in the proposed method can be evaluated using a wide variety of existing classification schemes. It can therefore combine the merits of many different classification methods. Experiments on a database of 107 widely varying object classes demonstrate that the proposed method significantly improves the performance of the baseline algorithm.
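The core comparison can be sketched as a correlation between similarity patterns. The vectors of similarities to the familiar classes are assumed to come from some existing classification scheme, as the abstract notes; the correlation score below is an illustrative choice.

```python
import numpy as np

def pattern_similarity(example_sims, test_sims):
    """Correlation between the similarity pattern of the single training
    example and that of a test image; a high score suggests the test
    image belongs to the novel class."""
    a = example_sims - example_sims.mean()
    b = test_sims - test_sims.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```

Classification then reduces to thresholding this score, so any scheme that yields per-class similarities can be plugged in.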
Neural Networks | 2004
Shimon Ullman; Evgeniy Bart
In performing recognition, the visual system shows a remarkable capacity to distinguish between significant and immaterial image changes, to learn from examples to recognize new classes of objects, and to generalize from known to novel objects. Here we focus on one aspect of this problem, the ability to recognize novel objects from different viewing directions. This problem of view-invariant recognition is difficult because the image of an object seen from a novel viewing direction can be substantially different from all previously seen images of the same object. We describe an approach to view-invariant recognition that uses extended features to generalize across changes in viewing directions. Extended features are equivalence classes of informative image fragments, which represent object parts under different viewing conditions. This representation is extracted during learning from images of moving objects, and it allows the visual system to generalize from a single view of a novel object, and to compensate for large changes in the viewing direction, without using three-dimensional information. We describe the model, its implementation and performance on natural face images, compare it to alternative approaches, discuss its biological plausibility, and its extension to other aspects of visual recognition. The results of the study suggest that the capacity of the recognition system to generalize to novel conditions in an efficient and flexible manner depends on the ongoing extraction of different families of informative features, acquired for different tasks and different object classes.
Journal of Computational Neuroscience | 2005
Evgeniy Bart; Shaowen Bao; David Holcman
We present a rate model of spontaneous activity in the auditory cortex, based on synaptic depression. A stochastic integro-differential system of equations is derived, and the analysis reveals two main regimes. The first corresponds to normal activity; the second corresponds to epileptic spiking. A detailed analysis of each regime is presented, and we prove in particular that synaptic depression stabilizes the global cortical dynamics. The transition between the two regimes is induced by a change in synaptic connectivity: when the overall connectivity is strong enough, epileptic activity is spontaneously generated. Numerical simulations confirm the predictions of the theoretical analysis. In particular, our results explain the transition from the normal to the epileptic regime that can be induced in the rat auditory cortex following a specific pairing protocol. A change in the cortical maps reorganizes the synaptic connectivity, and this transition between regimes is accounted for by our model. We used data from recording experiments to fit synaptic weight distributions; simulations with the fitted distributions are qualitatively similar to the real EEG recorded in vivo during the experiments. We conclude that changes in the synaptic weight function in our model, which affect the organization of excitatory synapses and reproduce the changes in cortical map connectivity, can be understood as the main mechanism explaining the transition of the EEG from the normal to the epileptic regime in the auditory cortex.
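A much-simplified, deterministic one-population sketch of the depression mechanism follows. It is not the paper's stochastic integro-differential system, and all parameter values are illustrative rather than the fitted ones.

```python
def simulate(J, ext=1.2, steps=4000, dt=0.5, tau=10.0, tau_rec=500.0, u=0.5):
    """Minimal rate model with synaptic depression, Euler-integrated.
    r: population firing rate; x: fraction of available synaptic
    resources; J: recurrent coupling strength; ext: external drive."""
    r, x = 0.1, 1.0
    trace = []
    for _ in range(steps):
        drive = J * x * r + ext                 # depressed recurrent input
        gain = max(drive - 1.0, 0.0)            # threshold-linear transfer
        r += dt * (-r + gain) / tau
        x += dt * ((1.0 - x) / tau_rec - u * x * r)
        x = min(max(x, 0.0), 1.0)               # keep resources in [0, 1]
        trace.append(r)
    return trace
```

With weak coupling the rate settles at a low level, while strong coupling produces a high-amplitude epileptic-like discharge before depression reins the activity back in, mirroring the stabilizing role of synaptic depression described above.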
computer vision and pattern recognition | 2004
Evgeniy Bart; Shimon Ullman
We develop a novel technique for class-based matching of object parts across large changes in viewing conditions. Given a set of images of objects from a given class under different viewing conditions, the algorithm identifies corresponding regions depicting the same object part in different images. The technique is based on using the equivalence of corresponding features in different viewing conditions. This equivalence-based matching scheme is not restricted to planar components or affine transformations. As a result, it identifies corresponding parts more accurately and under more general conditions than previous methods. The scheme is general and works for a variety of natural object classes. We demonstrate that using the proposed methods, a dense set of accurate correspondences can be obtained. Experimental comparisons to several known techniques are presented. An application to the problem of invariant object recognition is shown, and additional applications to wide-baseline stereo are discussed.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008
Evgeniy Bart; Shimon Ullman
We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only in similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations. The approach also does not require examples of correct matches. We show that by using the proposed method, a dense set of accurate correspondences can be obtained. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition.
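The consistency requirement can be sketched with sets: each feature is represented by the subset of training objects that share it, and candidate matches are ranked by the overlap of those subsets, so appearance is never compared across the two viewing conditions. The feature names and the Jaccard score below are illustrative.

```python
def match_features(features_A, candidates_B):
    """Match each condition-A feature (name -> set of object ids sharing
    it) to the condition-B candidate whose object set overlaps most."""
    def jaccard(s, t):
        return len(s & t) / len(s | t) if (s | t) else 0.0
    return {fa: max(candidates_B, key=lambda fb: jaccard(objs, candidates_B[fb]))
            for fa, objs in features_A.items()}
```

Because only object sets are compared, the criterion is indifferent to how drastically the feature's appearance changes between conditions.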
british machine vision conference | 2004
Evgeniy Bart; Shimon Ullman
Image normalization refers to eliminating image variations (such as noise, illumination, or occlusion) that are related to conditions of image acquisition and are irrelevant to object identity. Image normalization can be used as a preprocessing stage to assist computer or human object perception. In this paper, a class-based image normalization method is proposed. Objects in this method are represented in the PCA basis, and mutual information is used to identify irrelevant principal components. These components are then discarded to obtain a normalized image which is not affected by the specific conditions of image acquisition. The method is demonstrated to produce visually pleasing results and to significantly improve the accuracy of known recognition algorithms. The use of mutual information is a significant advantage over the standard method of discarding components according to the eigenvalues, since eigenvalues correspond to variance and have no direct relation to the relevance of components to representation. An additional advantage of the proposed algorithm is that many types of image variations are handled in a unified framework.
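A minimal sketch of this pipeline follows, assuming identity labels are available for the training set; the histogram MI estimator, its binning, and the 0.5-bit threshold are illustrative choices, not the paper's.

```python
import numpy as np

def mutual_info_bits(c, labels, bins=4):
    """Histogram estimate (in bits) of the mutual information between a
    PCA coefficient and discrete labels."""
    edges = np.quantile(c, np.linspace(0.0, 1.0, bins + 1))
    q = np.clip(np.searchsorted(edges, c, side="right") - 1, 0, bins - 1)
    mi = 0.0
    for qi in np.unique(q):
        for li in np.unique(labels):
            pxy = np.mean((q == qi) & (labels == li))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(q == qi) * np.mean(labels == li)))
    return mi

def normalize_images(X, identity_labels, min_bits=0.5):
    """Project images (rows of X) onto the PCA basis, keep only the
    components whose coefficients carry mutual information about object
    identity, discard the rest, and reconstruct."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows = PCA basis
    coeffs = Xc @ Vt.T
    keep = [k for k in range(Vt.shape[0])
            if mutual_info_bits(coeffs[:, k], identity_labels) >= min_bits]
    V = Vt[keep]
    return (Xc @ V.T) @ V + mu
```

Note that selection is by relevance to identity, not by eigenvalue, which is exactly the advantage the abstract claims over variance-based truncation.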
Frontiers in Neuroscience | 2018
Jay Hegdé; Evgeniy Bart
In everyday life, we rely on human experts to make a variety of complex decisions, such as medical diagnoses. These decisions are typically made through some form of weakly guided learning, a form of learning in which decision expertise is gained through labeled examples rather than explicit instructions. Expert decisions can significantly affect people other than the decision-maker (for example, teammates, clients, or patients), but may seem cryptic and mysterious to them. It is therefore desirable for the decision-maker to explain the rationale behind these decisions to others. This, however, can be difficult to do. Often, the expert has a “gut feeling” for what the correct decision is, but may have difficulty giving an objective set of criteria for arriving at it. Explainability of human expert decisions, i.e., the extent to which experts can make their decisions understandable to others, has not been studied systematically. Here, we characterize the explainability of human decision-making, using binary categorical decisions about visual objects as an illustrative example. We trained a group of “expert” subjects to categorize novel, naturalistic 3-D objects called “digital embryos” into one of two hitherto unknown categories, using a weakly guided learning paradigm. We then asked the expert subjects to provide a written explanation for each binary decision they made. These experiments generated several intriguing findings. First, the experts’ explanations modestly improved the categorization performance of naïve users (paired t-tests, p < 0.05). Second, this improvement differed significantly between explanations. In particular, explanations that pointed to a spatially localized region of the object improved the users’ performance much more than explanations that referred to global features. Third, neither experts nor naïve subjects were able to reliably predict the degree of improvement for a given explanation.
Finally, significant bias effects were observed: naïve subjects rated an explanation significantly higher when told it came from an expert than when told the same explanation came from another non-expert, suggesting a variant of the Asch conformity effect. Together, our results characterize, for the first time, the various issues, both methodological and conceptual, underlying the explainability of human decisions.