Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Tal Hassner is active.

Publication


Featured research published by Tal Hassner.


Computer Vision and Pattern Recognition (CVPR) | 2011

Face recognition in unconstrained videos with matched background similarity

Lior Wolf; Tal Hassner; Itay Maoz

Recognizing faces in unconstrained videos is a task of mounting importance. While obviously related to face recognition in still images, it has its own unique characteristics and algorithmic requirements. Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study. However, there is a sizable gap between the actual application needs and the current state of the art. In this paper we make the following contributions. (a) We present a comprehensive database of labeled videos of faces in challenging, uncontrolled conditions (i.e., ‘in the wild’), the ‘YouTube Faces’ database, along with benchmark pair-matching tests. (b) We employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques. Finally, (c) we describe a novel set-to-set similarity measure, the Matched Background Similarity (MBGS). This similarity is shown to considerably improve performance on the benchmark tests.
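
To make the set-to-set idea concrete, below is a rough sketch in the spirit of the Matched Background Similarity: given two sets of per-frame descriptors and a pool of background descriptors from unrelated identities, pick the background samples most similar to one set, train a classifier separating that set from them, and average the classifier's scores on the other set. The cosine-based matching, the value of k, and the use of a linear SVM are assumptions of this sketch, not the paper's exact configuration.

```python
# Minimal sketch of a matched-background, set-to-set similarity in the
# spirit of MBGS. Assumptions: frame descriptors are fixed-length vectors,
# `background` is a pool of descriptors from unrelated identities, and a
# linear SVM is the underlying classifier.
import numpy as np
from sklearn.svm import LinearSVC


def matched_background_similarity(set_a, set_b, background, k=20):
    """One-sided score: how 'set_a-like' the frames of set_b look,
    measured against the background samples nearest to set_a."""
    # Pick the k background samples closest (by cosine) to the mean of set_a.
    mean_a = set_a.mean(axis=0)
    sims = background @ mean_a / (
        np.linalg.norm(background, axis=1) * np.linalg.norm(mean_a) + 1e-8)
    matched = background[np.argsort(-sims)[:k]]

    # Train a linear classifier separating set_a from its matched background.
    X = np.vstack([set_a, matched])
    y = np.r_[np.ones(len(set_a)), np.zeros(len(matched))]
    clf = LinearSVC(C=1.0).fit(X, y)

    # Score the frames of set_b with that classifier and average.
    return clf.decision_function(set_b).mean()


def mbgs(set_a, set_b, background, k=20):
    # Symmetrize by averaging the two one-sided scores.
    return 0.5 * (matched_background_similarity(set_a, set_b, background, k)
                  + matched_background_similarity(set_b, set_a, background, k))
```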


Asian Conference on Computer Vision (ACCV) | 2009

Similarity scores based on background samples

Lior Wolf; Tal Hassner; Yaniv Taigman

Evaluating the similarity of images and their descriptors by employing discriminative learners has proven itself to be an effective face recognition paradigm. In this paper we show how “background samples”, that is, examples which do not belong to any of the classes being learned, may provide a significant performance boost to such face recognition systems. In particular, we make the following contributions. First, we define and evaluate the “Two-Shot Similarity” (TSS) score as an extension to the recently proposed “One-Shot Similarity” (OSS) measure. Both these measures utilize background samples to facilitate better recognition rates. Second, we examine the ranking of images most similar to a query image and employ these as a descriptor for that image. Finally, we provide results underscoring the importance of proper face alignment in automatic face recognition systems. These contributions in concert allow us to obtain a success rate of 86.83% on the Labeled Faces in the Wild (LFW) benchmark, outperforming current state-of-the-art results.
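
A minimal sketch of how background samples enter such a similarity score, using the One-Shot Similarity as the example: a model is learned to separate one vector from the background set and is then used to score the other vector, and the two directions are averaged. The linear SVM here stands in for the discriminative learners used in the paper, and the Two-Shot extension is omitted.

```python
# Rough sketch of a One-Shot Similarity score with a linear SVM in place of
# the learners used in the papers; `background` is assumed to be a set of
# descriptors from identities outside the benchmark.
import numpy as np
from sklearn.svm import LinearSVC


def one_sided_oss(x, y, background, C=1.0):
    # Learn a model separating the single example y from the background set,
    # then report how strongly that model accepts x.
    X = np.vstack([y[None, :], background])
    labels = np.r_[1, np.zeros(len(background))]
    clf = LinearSVC(C=C).fit(X, labels)
    return clf.decision_function(x[None, :])[0]


def one_shot_similarity(x, y, background):
    # Symmetrize: score x against the model of y and vice versa.
    return 0.5 * (one_sided_oss(x, y, background)
                  + one_sided_oss(y, x, background))
```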


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Effective Unconstrained Face Recognition by Combining Multiple Descriptors and Learned Background Statistics

Lior Wolf; Tal Hassner; Yaniv Taigman

Computer vision systems have demonstrated considerable improvement in recognizing and verifying faces in digital images. Still, recognizing faces appearing in unconstrained, natural conditions remains a challenging task. In this paper, we present a face-image, pair-matching approach primarily developed and tested on the “Labeled Faces in the Wild” (LFW) benchmark that reflects the challenges of face recognition from unconstrained images. The approach we propose makes the following contributions. 1) We present a family of novel face-image descriptors designed to capture statistics of local patch similarities. 2) We demonstrate how unlabeled background samples may be used to better evaluate image similarities. To this end, we describe a number of novel, effective similarity measures. 3) We show how labeled background samples, when available, may further improve classification performance, by employing a unique pair-matching pipeline. We present state-of-the-art results on the LFW pair-matching benchmarks. In addition, we show our system to be well suited for the multilabel face classification (recognition) problem, on both the LFW images and on images from the laboratory-controlled Multi-PIE database.
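
As an illustration of “statistics of local patch similarities” (a simplified stand-in, not the paper's descriptor family): for every pixel, record which of the surrounding patches is most similar to the patch centred on it, then histogram these codes over a grid of cells. The patch size, ring radius, and grid size below are arbitrary choices.

```python
# Illustrative patch-similarity descriptor (a simplified sketch, not the
# paper's exact descriptors): per-pixel codes name the most similar
# neighbouring patch; codes are histogrammed over a grid of cells.
import numpy as np


def patch_similarity_descriptor(img, patch=3, radius=4, cells=4):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    half = patch // 2
    offsets = [(radius, 0), (0, radius), (-radius, 0), (0, -radius),
               (radius, radius), (radius, -radius),
               (-radius, radius), (-radius, -radius)]
    codes = np.zeros((h, w), dtype=np.int64)   # border pixels keep code 0
    margin = radius + half
    for yy in range(margin, h - margin):
        for xx in range(margin, w - margin):
            centre = img[yy - half:yy + half + 1, xx - half:xx + half + 1]
            dists = [np.sum((img[yy + dy - half:yy + dy + half + 1,
                                 xx + dx - half:xx + dx + half + 1]
                             - centre) ** 2)
                     for dy, dx in offsets]
            codes[yy, xx] = int(np.argmin(dists))  # most similar surrounding patch

    # Histogram the codes over a cells x cells grid and concatenate.
    hists = []
    for cy in np.array_split(np.arange(h), cells):
        for cx in np.array_split(np.arange(w), cells):
            block = codes[np.ix_(cy, cx)].ravel()
            hists.append(np.bincount(block, minlength=len(offsets)))
    d = np.concatenate(hists).astype(float)
    return d / (np.linalg.norm(d) + 1e-8)      # L2-normalized descriptor
```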


Computer Vision and Pattern Recognition (CVPR) | 2015

Effective face frontalization in unconstrained images

Tal Hassner; Shai Harel; Eran Paz; Roee Enbar

“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints into the easier problem of recognizing faces in constrained, forward-facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy-to-implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.
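
A hedged sketch of the single-reference-surface idea: estimate the query camera pose of one fixed 3D face model from detected 2D landmarks, then transfer query pixel colours to the model's frontal projection. The generic model, its landmark correspondences, and the camera intrinsics K are assumed inputs of this sketch, and occlusion handling is omitted.

```python
# Sketch of generic-model frontalization: one unmodified 3D face model is
# posed to the query via 2D landmarks, then colours are copied from the
# query view to the model's frontal projection. `model_landmarks_3d`,
# `model_vertices_3d`, and K are assumptions (a real generic model and a
# landmark detector are needed); no occlusion handling.
import cv2
import numpy as np


def frontalize(query_img, query_landmarks_2d, model_landmarks_3d,
               model_vertices_3d, K, out_size=(256, 256)):
    # Camera pose of the generic model in the query view.
    ok, rvec, tvec = cv2.solvePnP(
        model_landmarks_3d.astype(np.float64),
        query_landmarks_2d.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    assert ok, "pose estimation failed"

    verts = model_vertices_3d.astype(np.float64)

    # Where each model vertex lands in the query image...
    query_pts, _ = cv2.projectPoints(verts, rvec, tvec, K, None)
    query_pts = query_pts.reshape(-1, 2)

    # ...and where it lands in a canonical frontal view (identity rotation).
    frontal_pts, _ = cv2.projectPoints(verts, np.zeros(3),
                                       np.array([0.0, 0.0, 500.0]), K, None)
    frontal_pts = frontal_pts.reshape(-1, 2)

    out = np.zeros((out_size[1], out_size[0], 3), dtype=query_img.dtype)
    h, w = query_img.shape[:2]
    for (qx, qy), (fx, fy) in zip(query_pts, frontal_pts):
        qx, qy, fx, fy = int(qx), int(qy), int(fx), int(fy)
        if 0 <= qx < w and 0 <= qy < h and 0 <= fx < out_size[0] and 0 <= fy < out_size[1]:
            out[fy, fx] = query_img[qy, qx]   # naive colour transfer
    return out
```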


British Machine Vision Conference (BMVC) | 2009

Multiple One-Shots for Utilizing Class Label Information

Yaniv Taigman; Lior Wolf; Tal Hassner

The One-Shot Similarity (OSS) kernel [3, 4] has recently been introduced as a means of boosting the performance of face recognition systems. Given two vectors, their One-Shot Similarity score reflects the likelihood of each vector belonging to the same class as the other vector and not to a class defined by a fixed set of “negative” examples. Although computing the OSS score does not require class labels, in this paper we explore how it may nevertheless benefit from the availability of such labels. Specifically: (a) we present a system utilizing identity and pose information to improve facial image pair-matching performance using multiple One-Shot scores; (b) we show how separating pose and identity may lead to better face recognition rates in unconstrained, “wild” facial images; (c) we explore how far we can get using a single descriptor with different similarity tests, as opposed to the popular multiple-descriptor approaches; and (d) we demonstrate the benefit of learned metrics for improved One-Shot performance.
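
A loose sketch of the multiple-One-Shot-scores idea: compute a One-Shot-style score for a face pair against several pose-specific negative sets and let a second classifier combine the resulting score vector. The pose bucketing and the use of a linear SVM in place of LDA-style learners are assumptions of this sketch.

```python
# Sketch: one One-Shot-style score per pose-specific negative set, stacked
# into a feature vector for a final pair-matching classifier.
import numpy as np
from sklearn.svm import LinearSVC


def oss(x, y, negatives):
    clf = LinearSVC(C=1.0).fit(np.vstack([y[None, :], negatives]),
                               np.r_[1, np.zeros(len(negatives))])
    return clf.decision_function(x[None, :])[0]


def pair_feature(x, y, negatives_by_pose):
    # One symmetric score per pose bucket.
    return np.array([0.5 * (oss(x, y, neg) + oss(y, x, neg))
                     for neg in negatives_by_pose])


def train_pair_matcher(pairs, labels, negatives_by_pose):
    # `pairs` is a list of (x, y) descriptor tuples; labels: 1 = same person.
    feats = np.vstack([pair_feature(x, y, negatives_by_pose) for x, y in pairs])
    return LinearSVC(C=1.0).fit(feats, labels)
```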


IEEE Transactions on Information Forensics and Security | 2014

Age and Gender Estimation of Unfiltered Faces

Eran Eidinger; Roee Enbar; Tal Hassner

This paper concerns the estimation of facial attributes, namely age and gender, from images of faces acquired in challenging, in-the-wild conditions. This problem has received far less attention than the related problem of face recognition and, in particular, has not enjoyed the same dramatic improvement in capabilities demonstrated by contemporary face recognition systems. Here, we address this problem by making the following contributions. First, in answer to one of the key problems of age estimation research, the absence of data, we offer a unique data set of face images, labeled for age and gender, acquired by smartphones and other mobile devices and uploaded without manual filtering to online image repositories. We show the images in our collection to be more challenging than those offered by other face-photo benchmarks. Second, we describe the dropout-support vector machine (SVM) approach used by our system for face attribute estimation, designed to avoid over-fitting. This method, inspired by the dropout learning techniques now popular with deep belief networks, is applied here, to the best of our knowledge, for the first time to training support vector machines. Finally, we present a robust face alignment technique which explicitly considers the uncertainties of facial feature detectors. We report extensive tests analyzing both the difficulty levels of contemporary benchmarks and the capabilities of our own system. These show our method to outperform the state of the art by a wide margin.
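
A loose sketch of SVM training with feature dropout in the spirit described above: features are randomly zeroed while fitting a hinge-loss linear model, so the classifier cannot lean on any single feature. The drop rate, epoch count, and use of scikit-learn's SGDClassifier are assumptions of this sketch, not the paper's exact training procedure.

```python
# Feature-dropout training for a hinge-loss linear classifier (a sketch of
# the dropout-SVM idea under the stated assumptions).
import numpy as np
from sklearn.linear_model import SGDClassifier


def train_dropout_svm(X, y, drop_rate=0.5, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    clf = SGDClassifier(loss="hinge", alpha=1e-4)
    classes = np.unique(y)
    for _ in range(epochs):
        order = rng.permutation(len(X))
        mask = rng.random(X.shape) > drop_rate      # keep each feature w.p. 1 - drop_rate
        X_dropped = X * mask / (1.0 - drop_rate)    # rescale so the expected value is unchanged
        clf.partial_fit(X_dropped[order], y[order], classes=classes)
    return clf  # at test time, score the full (undropped) features
```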


Computer Vision and Pattern Recognition (CVPR) | 2012

Violent flows: Real-time detection of violent crowd behavior

Tal Hassner; Yossi Itcher; Orit Kliper-Gross

Although surveillance video cameras are now widely used, their effectiveness is questionable. Here, we focus on the challenging task of monitoring crowded events for outbreaks of violence. Such scenes require a human surveyor to monitor multiple video screens, presenting crowds of people in a constantly changing sea of activity, and to identify signs of breaking violence early enough to alert help. With this in mind, we propose the following contributions: (1) We describe a novel approach to real-time detection of breaking violence in crowded scenes. Our method considers statistics of how flow-vector magnitudes change over time. These statistics, collected for short frame sequences, are represented using the VIolent Flows (ViF) descriptor. ViF descriptors are then classified as either violent or non-violent using a linear SVM. (2) We present a unique data set of real-world surveillance videos, along with standard benchmarks designed to test both violent/non-violent classification and real-time detection accuracy. Finally, (3) we provide empirical tests, comparing our method to state-of-the-art techniques, and demonstrating its effectiveness.
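
A hedged sketch of a ViF-style pipeline: per pixel, flag whether the optical-flow magnitude changed significantly between consecutive frames, average these flags over the clip, pool them over a grid of cells, and classify the resulting vector with a linear SVM. The thresholds, the grid size, and the use of cell means rather than per-cell histograms are simplifications of this sketch.

```python
# ViF-style clip descriptor (a simplified sketch). `frames` is assumed to be
# a list of grayscale uint8 frames (at least three).
import cv2
import numpy as np
from sklearn.svm import LinearSVC


def vif_like_descriptor(frames, grid=4, change_thresh=0.1):
    mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2))     # per-pixel flow magnitude

    changed = []
    for m0, m1 in zip(mags[:-1], mags[1:]):
        # 1 where the magnitude changed by more than a fraction of its value.
        changed.append(np.abs(m1 - m0) > change_thresh * (m0 + 1e-6))
    mean_change = np.mean(changed, axis=0)            # frequency of significant change

    # Pool the change frequencies over a grid x grid layout of cells.
    h, w = mean_change.shape
    cells = [mean_change[np.ix_(ys, xs)].mean()
             for ys in np.array_split(np.arange(h), grid)
             for xs in np.array_split(np.arange(w), grid)]
    return np.array(cells)


# Usage sketch: descriptors from labelled clips feed a linear SVM.
# X = np.vstack([vif_like_descriptor(clip) for clip in clips])
# clf = LinearSVC().fit(X, labels)   # labels: 1 = violent, 0 = non-violent
```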


International Conference on Computer Vision (ICCV) | 2009

The One-Shot similarity kernel

Lior Wolf; Tal Hassner; Yaniv Taigman

The One-Shot similarity measure has recently been introduced in the context of face recognition, where it was used to produce state-of-the-art results. Given two vectors, their One-Shot similarity score reflects the likelihood of each vector belonging to the same class as the other vector and not to a class defined by a fixed set of “negative” examples. The potential of this approach has thus far been largely unexplored. In this paper we analyze the One-Shot score and show that: (1) when using a version of LDA as the underlying classifier, this score is a Conditionally Positive Definite kernel and may be used within kernel methods (e.g., SVM), (2) it can be efficiently computed, and (3) it is effective as an underlying mechanism for image representation. We further demonstrate the effectiveness of the One-Shot similarity score in a number of applications, including multiclass identification and descriptor generation.
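
One common reading of the LDA-based One-Shot score, sketched with NumPy: the background set supplies a mean and a within-class covariance, the model learned from one vector versus the background is a free-scale LDA direction, and the other vector is scored by its signed distance from the midpoint threshold, with the two directions averaged. The exact closed form and the use of a pseudo-inverse are assumptions of this sketch rather than a verbatim transcription of the paper.

```python
# One-Shot Similarity with a free-scale-LDA-style learner (a sketch under
# the stated assumptions); `background` is a matrix of negative examples.
import numpy as np


def oss_free_scale_lda(x, y, background):
    mu = background.mean(axis=0)
    Sw = np.cov(background, rowvar=False)      # scatter of the negative set
    Sw_pinv = np.linalg.pinv(Sw)

    def one_sided(a, b):
        # Model learned from b vs. the background, evaluated on a.
        w = Sw_pinv @ (b - mu)                 # LDA-style projection direction
        return w @ (a - (b + mu) / 2.0)        # signed distance from the midpoint
    return 0.5 * (one_sided(x, y) + one_sided(y, x))
```

The abstract's kernel claim is what makes this symmetric score usable directly inside kernel methods such as an SVM.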


Computer Vision and Pattern Recognition (CVPR) | 2006

Example Based 3D Reconstruction from Single 2D Images

Tal Hassner; Ronen Basri

We present a novel solution to the problem of depth reconstruction from a single image. Single view 3D reconstruction is an ill-posed problem. We address this problem by using an example-based synthesis approach. Our method uses a database of objects from a single class (e.g. hands, human figures) containing example patches of feasible mappings from the appearance to the depth of each object. Given an image of a novel object, we combine the known depths of patches from similar objects to produce a plausible depth estimate. This is achieved by optimizing a global target function representing the likelihood of the candidate depth. We demonstrate how the variability of 3D shapes and their poses can be handled by updating the example database on-the-fly. In addition, we show how we can employ our method for the novel task of recovering an estimate for the occluded backside of the imaged objects. Finally, we present results on a variety of object classes and a range of imaging conditions.
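
A very reduced sketch of example-based depth transfer: for every patch of the query image, find the most similar appearance patch in a database of (appearance, depth) examples and paste its depth, averaging overlapping estimates. The global likelihood optimization and the on-the-fly database updates described above are omitted; the patch size and stride are arbitrary.

```python
# Nearest-neighbour depth transfer from an example database (a reduced
# sketch of the example-based idea; no global optimization).
import numpy as np


def transfer_depth(query, db_appearance, db_depth, patch=8, stride=4):
    # db_appearance: (N, patch, patch) example patches; db_depth: matching depth patches.
    flat_db = db_appearance.reshape(len(db_appearance), -1).astype(float)
    h, w = query.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            q = query[y:y + patch, x:x + patch].astype(float).ravel()
            idx = np.argmin(((flat_db - q) ** 2).sum(axis=1))   # most similar appearance patch
            acc[y:y + patch, x:x + patch] += db_depth[idx]
            cnt[y:y + patch, x:x + patch] += 1
    return acc / np.maximum(cnt, 1)                             # averaged depth estimate
```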


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

The Action Similarity Labeling Challenge

Orit Kliper-Gross; Tal Hassner; Lior Wolf

Recognizing actions in videos is rapidly becoming a topic of much research. To facilitate the development of methods for action recognition, several video collections, along with benchmark protocols, have previously been proposed. In this paper, we present a novel video database, the “Action Similarity LAbeliNg” (ASLAN) database, along with benchmark protocols. The ASLAN set includes thousands of videos collected from the web, in over 400 complex action classes. Our benchmark protocols focus on action similarity (same/not-same), rather than action classification, and testing is performed on never-before-seen actions. We propose this data set and benchmark as a means for gaining a more principled understanding of what makes actions different or similar, rather than learning the properties of particular action classes. We present baseline results on our benchmark, and compare them to human performance. To promote further study of action similarity techniques, we make the ASLAN database, benchmarks, and descriptor encodings publicly available to the research community.
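
A small sketch of a same/not-same evaluation in the style described above: each test item is a pair of videos, the label says whether they show the same action, and accuracy is averaged over cross-validation folds whose test actions never appear in training. The similarity features and the fold structure are assumed to be precomputed.

```python
# Same/not-same pair evaluation over precomputed folds (a sketch of the
# benchmark protocol described above, under the stated assumptions).
import numpy as np
from sklearn.svm import LinearSVC


def evaluate_pairs(pair_features, labels, fold_ids):
    # pair_features: one feature vector per video pair,
    # labels: 1 = same action, 0 = not same,
    # fold_ids: fold index per pair (test actions disjoint from training).
    accs = []
    for fold in np.unique(fold_ids):
        train, test = fold_ids != fold, fold_ids == fold
        clf = LinearSVC().fit(pair_features[train], labels[train])
        accs.append((clf.predict(pair_features[test]) == labels[test]).mean())
    return float(np.mean(accs)), float(np.std(accs))
```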

Collaboration


Dive into Tal Hassner's collaborations.

Top Co-Authors

Ronen Basri, Weizmann Institute of Science
Gérard G. Medioni, University of Southern California
Iacopo Masi, University of Florence
Lihi Zelnik-Manor, Technion – Israel Institute of Technology
Feng-Ju Chang, University of Southern California
Gil Levi, Open University of Israel
Shai Harel, Open University of Israel
Viki Mayzels, Technion – Israel Institute of Technology