Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daphna Weinshall is active.

Publication


Featured research published by Daphna Weinshall.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Flexible syntactic matching of curves and its application to automatic hierarchical classification of silhouettes

Yoram Gdalyahu; Daphna Weinshall

Curve matching is one instance of the fundamental correspondence problem. Our flexible algorithm is designed to match curves under substantial deformations and arbitrarily large scaling and rigid transformations. A syntactic representation is constructed for both curves, and an edit transformation which maps one curve to the other is found using dynamic programming. We present extensive experiments where we apply the algorithm to silhouette matching. In these experiments, we examine partial occlusion, viewpoint variation, articulation, and class matching (where silhouettes of similar objects are matched). Based on the qualitative syntactic matching, we define a dissimilarity measure and we compute it for every pair of images in a database of 121 images. We use this experiment to objectively evaluate our algorithm. First, we compare our results to those reported by others. Second, we use the dissimilarity values in order to organize the image database into shape categories. The veridical hierarchical organization stands as evidence of the quality of our matching and similarity estimation.
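The syntactic matching step reduces to an edit-distance computation between symbol sequences describing the two curves. A minimal dynamic-programming sketch, using plain characters in place of the paper's attributed curve segments (the equality-based cost function is a stand-in, not the paper's segment comparison):

```python
def edit_distance(a, b, sub_cost=lambda x, y: 0 if x == y else 1):
    """Minimal edit distance between sequences a and b via dynamic programming.

    In syntactic curve matching the symbols would be curve segments with
    attributes (length, orientation change) and sub_cost would compare them;
    plain equality is used here as a simplification.
    """
    m, n = len(a), len(b)
    # d[i][j] = cost of transforming a[:i] into b[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i                  # i deletions
    for j in range(1, n + 1):
        d[0][j] = j                  # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                 # delete a[i-1]
                d[i][j - 1] + 1,                                 # insert b[j-1]
                d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]),  # substitute
            )
    return d[m][n]
```

The resulting edit cost plays the role of the dissimilarity value computed for every image pair in the database.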


European Conference on Computer Vision | 2002

Adjustment Learning and Relevant Component Analysis

Noam Shental; Tomer Hertz; Daphna Weinshall; Misha Pavel

We propose a new learning approach for image retrieval, which we call adjustment learning, and demonstrate its use for face recognition and color matching. Our approach is motivated by a frequently encountered problem, namely, that variability in the original data representation which is not relevant to the task may interfere with retrieval and make it very difficult. Our key observation is that in real applications of image retrieval, data sometimes comes in small chunks - small subsets of images that come from the same (but unknown) class. This is the case, for example, when a query is presented via a short video clip. We call these groups chunklets, and we call the paradigm which uses chunklets for unsupervised learning adjustment learning. Within this paradigm we propose a linear scheme, which we call Relevant Component Analysis; this scheme uses the information in such chunklets to reduce irrelevant variability in the data while amplifying relevant variability. We provide results using our method on two problems: face recognition (using a database publicly available on the web), and visual surveillance (using our own data). In the latter application chunklets are obtained automatically from the data without the need for supervision.
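The chunklet idea can be illustrated with a toy diagonal simplification of Relevant Component Analysis (the full method applies the inverse square root of the within-chunklet covariance matrix; this sketch keeps only its diagonal, and the data layout is my own assumption):

```python
def chunklet_rescale(chunklets):
    """Rescale features by within-chunklet standard deviation.

    chunklets: list of chunklets, each a list of feature vectors (lists of
    floats) assumed to come from the same unknown class.  Variability seen
    inside chunklets is task-irrelevant by assumption, so dividing each
    feature by its within-chunklet std shrinks exactly that variability.
    """
    dim = len(chunklets[0][0])
    var = [0.0] * dim
    count = 0
    for chunk in chunklets:
        mean = [sum(x[k] for x in chunk) / len(chunk) for k in range(dim)]
        for x in chunk:
            for k in range(dim):
                var[k] += (x[k] - mean[k]) ** 2
            count += 1
    scale = [(var[k] / count) ** 0.5 or 1.0 for k in range(dim)]  # guard zero std
    return [[[x[k] / scale[k] for k in range(dim)] for x in chunk]
            for chunk in chunklets]
```

After rescaling, distances between images are dominated by the between-chunklet (relevant) variability.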


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Classification with nonmetric distances: image retrieval and class representation

David W. Jacobs; Daphna Weinshall; Yoram Gdalyahu

A key problem in appearance-based vision is understanding how to use a set of labeled images to classify new images. Systems that model human performance, or that use robust image matching methods, often use nonmetric similarity judgments; but when the triangle inequality is not obeyed, most pattern recognition techniques are not applicable. Exemplar-based (nearest-neighbor) methods can be applied to a wide class of nonmetric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques are ill-suited to deal with nonmetric data spaces. We develop techniques for solving this problem, emphasizing two points: First, we show that the distance between images is not a good measure of how well one image can represent another in nonmetric spaces. Instead, we use the vector correlation between the distances from each image to other previously seen images. Second, we show that in nonmetric spaces, boundary points are less significant for capturing the structure of a class than in Euclidean spaces. We suggest that atypical points may be more important in describing classes. We demonstrate the importance of these ideas to learning that generalizes from experience by improving performance. We also suggest ways of applying parametric techniques to supervised learning problems that involve specific nonmetric distance functions, showing how to generalize the idea of linear discriminant functions in a way that may be more useful in nonmetric spaces.
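The first point can be made concrete: rather than asking whether d(a, b) is small, compare how a and b relate to previously seen images, i.e. correlate their distance profiles. A sketch using Pearson correlation as a stand-in for the paper's vector correlation:

```python
def profile_correlation(dist_a, dist_b):
    """Correlation between two images' distance profiles.

    dist_a[i] and dist_b[i] are the (possibly nonmetric) distances from
    images a and b to the i-th previously seen image.  A high correlation
    suggests a can represent b even when d(a, b) itself is unreliable.
    """
    n = len(dist_a)
    ma = sum(dist_a) / n
    mb = sum(dist_b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(dist_a, dist_b))
    sa = sum((x - ma) ** 2 for x in dist_a) ** 0.5
    sb = sum((y - mb) ** 2 for y in dist_b) ** 0.5
    return cov / (sa * sb)
```

Two images that rank all previously seen images similarly get a correlation near 1, even if the raw distance between them is large.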


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Mosaicing new views: the Crossed-Slits projection

Assaf Zomet; Doron Feldman; Shmuel Peleg; Daphna Weinshall

We introduce a new kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of an X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions.
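The projection rule can be sketched directly: the ray through a 3D point that meets both slits is the intersection of the plane spanned by the point and slit 1 with the plane spanned by the point and slit 2. A minimal sketch for a planar imaging surface z = const (the slit parameters and the planar-surface choice are my own illustration):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return (u[0]-v[0], u[1]-v[1], u[2]-v[2])

def xslits_project(p, slit1, slit2, image_z):
    """Project 3D point p through two slits onto the plane z = image_z.

    Each slit is (point, direction).  The projection ray lies in the plane
    through p containing slit 1 and in the plane through p containing slit 2,
    so its direction is the cross product of the two plane normals.
    Assumes a generic configuration (ray not parallel to the image plane).
    """
    q1, d1 = slit1
    q2, d2 = slit2
    n1 = cross(d1, sub(p, q1))       # normal of plane(p, slit 1)
    n2 = cross(d2, sub(p, q2))       # normal of plane(p, slit 2)
    r = cross(n1, n2)                # ray direction through p meeting both slits
    t = (image_z - p[2]) / r[2]      # intersect the ray with the image plane
    return (p[0] + t*r[0], p[1] + t*r[1], image_z)
```

For two non-coplanar slits this pair of plane constraints pins down a unique projection ray per 3D point, which is what makes the model well defined without any 3D reconstruction.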


IEEE Symposium on Security and Privacy | 2006

Cognitive authentication schemes safe against spyware

Daphna Weinshall

Can we secure user authentication against eavesdropping adversaries, relying on human cognitive functions alone, unassisted by any external computational device? To accomplish this goal, we propose challenge response protocols that rely on a shared secret set of pictures. Under the considered brute-force attack the protocols are safe against eavesdropping, in that a modestly powered adversary who fully records a series of successful interactions cannot compute the user's secret. Moreover, the protocols can be tuned to any desired level of security against random guessing, where security can be traded off against authentication time. The proposed protocols have two drawbacks: First, training is required to familiarize the user with the secret set of pictures. Second, depending on the level of security required, entry time can be significantly longer than with alternative methods. We describe user studies showing that people can use these protocols successfully, and quantify the time it takes for training and for successful authentication. We show evidence that the secret can be maintained for a long time (up to a year) with relatively low loss.
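A toy round in this spirit (not the paper's actual protocol, just the flavor of challenge-response over a shared picture set): the server shows a random panel of picture IDs and the user answers with a low-information function of the secret pictures it contains, so each round leaks little and a random guesser succeeds with small probability.

```python
import random

def respond(panel, secret):
    """User side: count secret pictures appearing in the panel, modulo 4."""
    return sum(1 for pic in panel if pic in secret) % 4

def authenticate(answer_fn, secret, universe=100, panel=20, rounds=10, seed=0):
    """Server side: issue several random panels; accept only if every
    answer matches the expected response for the shared secret."""
    rng = random.Random(seed)
    for _ in range(rounds):
        challenge = rng.sample(range(universe), panel)
        if answer_fn(challenge) != respond(challenge, secret):
            return False
    return True
```

With this toy rule a guesser passes each round with probability about 1/4, so ten rounds drive the false-accept rate down while a recorded transcript reveals only coarse counts, not which pictures are secret.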


Neural Information Processing Systems | 1989

A self-organizing multiple-view representation of 3D objects

Daphna Weinshall; Shimon Edelman; H. H. Bülthoff

We explore representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects.
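A minimal flavor of the architecture: thresholded summation units with a Hebbian update that strengthens weights between co-active input and output units (the sizes, threshold, and learning rate here are my own toy choices):

```python
def step(x, threshold=0.5):
    """Thresholded summation unit: fire iff input reaches the threshold."""
    return 1 if x >= threshold else 0

def forward(view, weights, threshold=0.5):
    """Second-layer unit activations for a binary input view."""
    return [step(sum(w_i * v_i for w_i, v_i in zip(w, view)), threshold)
            for w in weights]

def hebbian_update(view, weights, rate=0.1):
    """Hebbian rule: strengthen connections from active inputs to the
    output units that fired for this view."""
    out = forward(view, weights)
    for j, w in enumerate(weights):
        if out[j]:
            for i, v in enumerate(view):
                w[i] += rate * v
    return weights
```

Repeated presentation of views of the same object makes some second-layer unit increasingly selective for them, which is the mechanism behind the compact view representations described above.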


Computer Vision and Pattern Recognition | 2004

Learning distance functions for image retrieval

Tomer Hertz; Aharon Bar-Hillel; Daphna Weinshall

Image retrieval critically relies on the distance function used to compare a query image to images in the database. We suggest learning such distance functions by training binary classifiers with margins, where the classifiers are defined over the product space of pairs of images. The classifiers are trained to distinguish between pairs in which the images are from the same class and pairs which contain images from different classes. The signed margin is used as a distance function. We explore several variants of this idea, based on using SVM and boosting algorithms as product space classifiers. Our main contribution is a distance learning method, which combines boosting hypotheses over the product space with a weak learner based on partitioning the original feature space. The weak learner used is a Gaussian mixture model computed using a constrained EM algorithm, where the constraints are equivalence constraints on pairs of data points. This approach allows us to incorporate unlabeled data into the training process. Using some benchmark databases from the UCI repository, we show that our margin based methods significantly outperform existing metric learning methods, which are based on learning a Mahalanobis distance. We then show comparative results of image retrieval in a distributed learning paradigm, using two databases: a large database of facial images (YaleB), and a database of natural images taken from a commercial CD. In both cases our GMM based boosting method outperforms all other methods, and its generalization to unseen classes is superior.
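The core idea, train a classifier on pairs and read the signed margin as a distance, can be sketched with a one-feature perceptron standing in for the paper's SVM/boosting learners (the 1-D data and the |x - y| pair feature are toy assumptions):

```python
def pair_feature(x, y):
    """Product-space representation of a pair; here just |x - y| in 1-D."""
    return abs(x - y)

def train_pair_classifier(same_pairs, diff_pairs, epochs=50, rate=0.1):
    """Perceptron over pair features: label +1 for same-class pairs,
    -1 for different-class pairs."""
    w, b = 0.0, 0.0
    data = ([(pair_feature(*p), 1) for p in same_pairs]
            + [(pair_feature(*p), -1) for p in diff_pairs])
    for _ in range(epochs):
        for f, label in data:
            if label * (w * f + b) <= 0:   # misclassified: perceptron update
                w += rate * label * f
                b += rate * label
    return w, b

def distance(x, y, w, b):
    """Negated signed margin: large for 'different', small for 'same'."""
    return -(w * pair_feature(x, y) + b)
```

The learned margin need not satisfy the triangle inequality; it is simply whatever separation the pair classifier found, which is exactly why it can outperform a Mahalanobis metric on such data.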


International Journal of Computer Vision | 1993

Model-based invariants for 3-D vision

Daphna Weinshall

Invariance under a group of 3-D transformations seems a desirable component of an efficient 3-D shape representation. We propose representations which are invariant under weak perspective to either rigid or linear 3-D transformations, and we show how they can be computed efficiently from a sequence of images with a linear and incremental algorithm. We show simulated results with perspective projection and noise, and the results of model acquisition from a real sequence of images. The use of linear computation, together with the integration through time of invariant representations, offers improved robustness and stability. Using these invariant representations, we derive model-based projective invariant functions of general 3-D objects. We discuss the use of the model-based invariants with existing recognition strategies: alignment without transformation, and constant time indexing from 2-D images of general 3-D objects.
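The flavor of such invariants is easiest to see in 2-D: the affine coordinates of a fourth point in the basis of three reference points are unchanged by any invertible affine map applied to all four points, the planar analogue of the model-based quantities above (the point configuration in the example is my own):

```python
def affine_coords(p, a, b, c):
    """Coordinates (s, t) with p = a + s*(b - a) + t*(c - a).

    Solved by Cramer's rule on the 2x2 linear system; (s, t) is invariant
    under any invertible affine transformation of all four points.
    """
    u = (b[0] - a[0], b[1] - a[1])
    v = (c[0] - a[0], c[1] - a[1])
    w = (p[0] - a[0], p[1] - a[1])
    det = u[0] * v[1] - u[1] * v[0]        # assumes a, b, c not collinear
    s = (w[0] * v[1] - w[1] * v[0]) / det
    t = (u[0] * w[1] - u[1] * w[0]) / det
    return s, t
```

Because (s, t) survives the transformation, it can index a model directly from image measurements, which is the role the model-based invariants play for recognition of 3-D objects.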


International Conference on Machine Learning | 2004

Boosting margin based distance functions for clustering

Tomer Hertz; Aharon Bar-Hillel; Daphna Weinshall

The performance of graph based clustering methods critically depends on the quality of the distance function used to compute similarities between pairs of neighboring nodes. In this paper we learn distance functions by training binary classifiers with margins. The classifiers are defined over the product space of pairs of points and are trained to distinguish whether two points come from the same class or not. The signed margin is used as the distance value. Our main contribution is a distance learning method (DistBoost), which combines boosting hypotheses over the product space with a weak learner based on partitioning the original feature space. Each weak hypothesis is a Gaussian mixture model computed using a semi-supervised constrained EM algorithm, which is trained using both unlabeled and labeled data. We also consider SVM and decision tree boosting as margin based classifiers in the product space. We experimentally compare the margin based distance functions with other existing metric learning methods, and with existing techniques for the direct incorporation of constraints into various clustering algorithms. Clustering performance is measured on some benchmark databases from the UCI repository, a sample from the MNIST database, and a data set of color images of animals. In most cases the DistBoost algorithm significantly and robustly outperformed its competitors.


International Conference on Computer Vision | 2007

Exploiting Object Hierarchy: Combining Models from Different Category Levels

Alon Zweig; Daphna Weinshall

We investigate the computational properties of the natural object hierarchy in the context of constellation object class models, and its utility for object class recognition. We first observe an interesting computational property of the object hierarchy: when recognition rates are compared across models at different levels, the higher, more inclusive levels (e.g., closed-frame vehicles or vehicles) exhibit higher recall but lower precision than the class-specific level (e.g., bus). These inherent differences suggest that combining object classifiers from different hierarchical levels into a single classifier may improve classification, as these models appear to capture different aspects of the object. We describe a method to combine these classifiers, and analyze the conditions under which improvement can be guaranteed. Given a small sample of a new object class, we describe a method to transfer knowledge across the tree hierarchy, between related objects. Finally, we describe extensive experiments using object hierarchies obtained from publicly available datasets, and show that the combined classifiers significantly improve recognition results.

Collaboration


Dive into Daphna Weinshall's collaborations.

Top Co-Authors


Michael Werman

Hebrew University of Jerusalem


Tomer Hertz

Fred Hutchinson Cancer Research Center


Yoram Gdalyahu

Hebrew University of Jerusalem


Aharon Bar-Hillel

Hebrew University of Jerusalem


Doron Feldman

Hebrew University of Jerusalem


Noam Shental

Open University of Israel


Shmuel Peleg

Hebrew University of Jerusalem


Yehezkel S. Resheff

Hebrew University of Jerusalem


Shaul Hochstein

Hebrew University of Jerusalem
