Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Iacopo Masi is active.

Publication


Featured research published by Iacopo Masi.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Person Re-Identification by Iterative Re-Weighted Sparse Ranking

Giuseppe Lisanti; Iacopo Masi; Andrew D. Bagdanov; Alberto Del Bimbo

In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft and hard re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation demonstrates that our approach achieves state-of-the-art performance in single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 percentage points on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person, our approach improves the state of the art at rank-1 by 17 percentage points on i-LIDS and by 72 percentage points on CAVIAR4REID. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second.
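To make the ranking idea concrete, here is a minimal sketch of sparse-basis ranking with iterative re-weighting, using scikit-learn's Lasso as the sparse solver. The descriptor, the solver, and the exact soft/hard re-weighting rules are simplified placeholders, not the paper's formulation.

```python
# Minimal sketch of sparse-basis ranking with iterative soft/hard re-weighting.
# The descriptor, solver (scikit-learn Lasso) and re-weighting rules are
# simplified placeholders, not the paper's exact formulation.
import numpy as np
from sklearn.linear_model import Lasso

def rank_gallery(probe, gallery, gallery_ids, n_iters=3, alpha=0.01):
    """Rank gallery identities for one probe descriptor.

    probe:       (d,) probe descriptor
    gallery:     (d, n) column-stacked gallery descriptors
    gallery_ids: (n,) identity label of each gallery column
    """
    weights = np.ones(gallery.shape[1])
    active = np.ones(gallery.shape[1], dtype=bool)
    ranking = []
    for _ in range(n_iters):
        if not active.any():
            break
        # Sparse expansion of the probe on the (re-weighted) gallery basis.
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(gallery * (weights * active), probe)
        coef = np.abs(coder.coef_)
        # Score each remaining identity by the energy assigned to its columns.
        scores = {sid: coef[gallery_ids == sid].sum()
                  for sid in np.unique(gallery_ids[active])}
        best = max(scores, key=scores.get)
        ranking.append(best)
        # Hard re-weighting: remove the ranked identity from the basis.
        active &= (gallery_ids != best)
        # Soft re-weighting: push energy toward the most relevant columns.
        weights = 0.5 * weights + 0.5 * coef / (coef.max() + 1e-8)
    return ranking
```

In a real system the gallery columns would hold the paper's pose- and illumination-robust descriptors rather than generic feature vectors.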


European Conference on Computer Vision | 2016

Do We Really Need to Collect Millions of Faces for Effective Face Recognition?

Iacopo Masi; Anh Tuấn Trần; Tal Hassner; Jatuporn Toy Leksut; Gérard G. Medioni

Face recognition capabilities have recently made extraordinary leaps. Though this progress is at least partially due to ballooning training set sizes – huge numbers of face images downloaded and labeled for identity – it is not clear if the formidable task of collecting so many images is truly necessary. We propose a far more accessible means of increasing training data sizes for face recognition systems: domain-specific data augmentation. We describe novel methods of enriching an existing dataset with important facial appearance variations by manipulating the faces it contains. This synthesis is also used when matching query images represented by standard convolutional neural networks. The effect of training and testing with synthesized images is tested on the LFW and IJB-A (verification and identification) benchmarks and on Janus CS2. The performance obtained by our approach matches state-of-the-art results reported by systems trained on millions of downloaded images.
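The paper's augmentation synthesizes new poses, shapes and expressions by rendering 3D face models; that pipeline is beyond a short snippet, so the sketch below uses crude 2D stand-ins (a mirror flip and affine "yaw-like" squeezes) purely to illustrate the idea of enriching a dataset by manipulating the faces it already contains.

```python
# Crude 2D stand-in for the paper's domain-specific augmentation, which in the
# original work renders 3D face models to synthesize pose/shape/expression
# variations. Here: a mirror flip and affine "yaw-like" squeezes only.
import cv2
import numpy as np

def augment_face(img):
    """Return simple appearance variations of one aligned face crop."""
    h, w = img.shape[:2]
    variants = [img, cv2.flip(img, 1)]                 # original + mirrored face
    for squeeze in (0.85, 0.7):                        # rough pose-like warps
        M = np.float32([[squeeze, 0.0, (1.0 - squeeze) * w / 2.0],
                        [0.0, 1.0, 0.0]])
        variants.append(cv2.warpAffine(img, M, (w, h)))
    return variants
```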


International Conference on Distributed Smart Cameras | 2014

Matching People across Camera Views using Kernel Canonical Correlation Analysis

Giuseppe Lisanti; Iacopo Masi; Alberto Del Bimbo

Matching people across views is still an open problem in computer vision and in video surveillance systems. In this paper we address the problem of person re-identification across disjoint cameras by proposing an efficient but robust kernel descriptor to encode the appearance of a person. The matching is then improved by applying a learning technique based on Kernel Canonical Correlation Analysis (KCCA) which finds a common subspace between the proposed descriptors extracted from disjoint cameras, projecting them into a new description space. This common description space is then used to identify a person from one camera to another with a standard nearest-neighbor voting method. We evaluate our approach on two publicly available datasets for re-identification (VIPeR and PRID), demonstrating that our method yields state-of-the-art performance with respect to recent techniques proposed for the re-identification task.
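As a rough illustration of the matching stage, here is a minimal kernel CCA sketch in NumPy, assuming RBF kernels and the standard regularized dual formulation; the paper's kernel descriptor, kernel choice, and kernel centering are omitted for brevity.

```python
# Minimal kernel CCA sketch for cross-camera matching. The RBF kernel, the
# regularisation value and the omission of kernel centering are simplifications;
# the paper's kernel descriptor is replaced by generic feature vectors.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kcca_fit(Xa, Xb, n_components=10, gamma=1e-3, reg=1e-2):
    """Learn dual weights from paired samples of camera A (Xa) and B (Xb)."""
    Ka = rbf_kernel(Xa, gamma=gamma)
    Kb = rbf_kernel(Xb, gamma=gamma)
    n = Ka.shape[0]
    Ra = np.linalg.inv(Ka + reg * np.eye(n))
    Rb = np.linalg.inv(Kb + reg * np.eye(n))
    # alpha solves (Ka+regI)^-1 Kb (Kb+regI)^-1 Ka alpha = rho^2 alpha
    vals, vecs = np.linalg.eig(Ra @ Kb @ Rb @ Ka)
    order = np.argsort(-vals.real)[:n_components]
    alpha = vecs[:, order].real
    beta = Rb @ Ka @ alpha            # paired projection directions for camera B
    return alpha, beta

def kcca_project(X_train, X_new, W, gamma=1e-3):
    """Project new samples into the common subspace via the training kernel."""
    return rbf_kernel(X_new, X_train, gamma=gamma) @ W

def nearest_neighbour(probe_proj, gallery_proj):
    """Cosine nearest-neighbour matching in the learned common subspace."""
    a = probe_proj / np.linalg.norm(probe_proj, axis=1, keepdims=True)
    b = gallery_proj / np.linalg.norm(gallery_proj, axis=1, keepdims=True)
    return np.argmax(a @ b.T, axis=1)
```

Probes from camera A would be projected with kcca_project(Xa, probes, alpha) and gallery samples from camera B with kcca_project(Xb, gallery, beta) before nearest-neighbour matching.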


Computer Vision and Pattern Recognition | 2016

Pose-Aware Face Recognition in the Wild

Iacopo Masi; Stephen Rawls; Gérard G. Medioni; Prem Natarajan

We propose a method to push the frontiers of unconstrained face recognition in the wild, focusing on the problem of extreme pose variations. As opposed to current techniques, which either expect a single model to learn pose invariance through massive amounts of training data or which normalize images to a single frontal pose, our method explicitly tackles pose variation by using multiple pose-specific models and rendered face images. We leverage deep Convolutional Neural Networks (CNNs) to learn discriminative representations we call Pose-Aware Models (PAMs) using 500K images from the CASIA WebFace dataset. We present a comparative evaluation on the new IARPA Janus Benchmark A (IJB-A) and PIPA datasets. On these datasets PAMs achieve remarkably better performance than commercial products and, surprisingly, also outperform methods that are specifically fine-tuned on the target dataset.
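A minimal sketch of the pose-aware inference idea follows: one feature extractor per pose bucket and a fused embedding. The backbones below are generic torchvision ResNets and the pose buckets are assumptions; they stand in for the paper's trained Pose-Aware Models and its rendering step.

```python
# Sketch of pose-aware inference: each pose bucket has its own CNN, the face
# is rendered/aligned per bucket (handled elsewhere), and features are fused.
# The backbones are generic torchvision ResNets, not the paper's trained PAMs.
import torch
import torch.nn.functional as F
import torchvision.models as models

POSE_BUCKETS = ("frontal", "half_profile", "profile")   # assumed bucketing

def build_pose_models(dim=512):
    """One feature extractor per pose bucket (placeholder backbones)."""
    pams = {}
    for bucket in POSE_BUCKETS:
        net = models.resnet18(weights=None)
        net.fc = torch.nn.Linear(net.fc.in_features, dim)
        net.eval()
        pams[bucket] = net
    return pams

@torch.no_grad()
def pam_embedding(face_by_pose, pams):
    """face_by_pose: dict bucket -> (1,3,224,224) tensor of the face rendered
    to that pose. Returns the concatenated, L2-normalised PAM embedding."""
    feats = []
    for bucket, net in pams.items():
        f = net(face_by_pose[bucket])
        feats.append(F.normalize(f, dim=1))
    return torch.cat(feats, dim=1)
```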


Computer Vision and Pattern Recognition | 2017

Regressing Robust and Discriminative 3D Morphable Models with a Very Deep Neural Network

Anh Tuan Tran; Tal Hassner; Iacopo Masi; Gérard G. Medioni

The 3D shapes of faces are well known to be discriminative. Yet despite this, they are rarely used for face recognition and always under controlled viewing conditions. We claim that this is a symptom of a serious but often overlooked problem with existing methods for single view 3D face reconstruction: when applied in the wild, their 3D estimates are either unstable and change for different photos of the same subject or they are over-regularized and generic. In response, we describe a robust method for regressing discriminative 3D morphable face models (3DMM). We use a convolutional neural network (CNN) to regress 3DMM shape and texture parameters directly from an input photo. We overcome the shortage of training data required for this purpose by offering a method for generating huge numbers of labeled examples. The 3D estimates produced by our CNN surpass state of the art accuracy on the MICC data set. Coupled with a 3D-3D face matching pipeline, we show the first competitive face recognition results on the LFW, YTF and IJB-A benchmarks using 3D face shapes as representations, rather than the opaque deep feature vectors used by other modern systems.
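A sketch of the regression network the abstract describes: a deep backbone with a single linear head predicting a vector of 3DMM shape and texture coefficients. The backbone choice and the coefficient sizes below (99 + 99) are assumptions for illustration, and the loss and training loop are omitted.

```python
# Sketch of a CNN that regresses 3DMM shape and texture coefficients directly
# from an image. Backbone and coefficient sizes (99 shape + 99 texture) are
# assumptions, not necessarily the paper's exact settings.
import torch
import torch.nn as nn
import torchvision.models as models

class MorphableModelRegressor(nn.Module):
    def __init__(self, n_shape=99, n_texture=99):
        super().__init__()
        backbone = models.resnet101(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                  # keep the pooled feature
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, n_shape + n_texture)
        self.n_shape = n_shape

    def forward(self, x):
        """x: (B,3,224,224) face crop -> (shape_coeffs, texture_coeffs)."""
        params = self.head(self.backbone(x))
        return params[:, :self.n_shape], params[:, self.n_shape:]

# Training would minimise a weighted regression loss between predicted and
# target 3DMM parameters produced by the paper's labeling procedure; omitted.
```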


Workshop on Applications of Computer Vision | 2016

Face recognition using deep multi-pose representations

Yue Wu; Stephen Rawls; Shai Harel; Tal Hassner; Iacopo Masi; Jongmoo Choi; Jatuporn Toy Leksut; Jungyeon Kim; Prem Natarajan; Ram Nevatia; Gérard G. Medioni

We introduce our method and system for face recognition using multiple pose-aware deep learning models. In our representation, a face image is processed by several pose-specific deep convolutional neural network (CNN) models to generate multiple pose-specific features. 3D rendering is used to generate multiple face poses from the input image. Sensitivity of the recognition system to pose variations is reduced since we use an ensemble of pose-specific CNN features. The paper presents extensive experimental results on the effect of landmark detection, CNN layer selection and pose model selection on the performance of the recognition pipeline. Our novel representation achieves better results than the state of the art on IARPA's CS2 and NIST's IJB-A in both verification and identification (i.e., search) tasks.
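A small sketch of how an ensemble of pose-specific features might be fused at match time; the fusion rule here (mean of per-pose cosine similarities over the poses both faces share) is an illustrative assumption, not necessarily the paper's.

```python
# Sketch of fusing similarities from multiple pose-specific CNN embeddings.
# Averaging cosine similarities over shared poses is an illustrative choice.
import numpy as np

def fused_similarity(probe_feats, gallery_feats):
    """probe_feats / gallery_feats: dict pose -> L2-normalised (d,) vector."""
    shared = [p for p in probe_feats if p in gallery_feats]
    scores = [float(probe_feats[p] @ gallery_feats[p]) for p in shared]
    return sum(scores) / len(scores)

def identify(probe_feats, gallery):
    """gallery: dict subject_id -> pose-indexed features. Returns the best id."""
    return max(gallery, key=lambda sid: fused_similarity(probe_feats, gallery[sid]))
```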


Proceedings of the 2011 Joint ACM Workshop on Human Gesture and Behavior Understanding | 2011

The Florence 2D/3D hybrid face dataset

Andrew D. Bagdanov; Alberto Del Bimbo; Iacopo Masi

This article describes a new dataset under construction at the Media Integration and Communication Center of the University of Florence. The dataset consists of high-resolution 3D scans of human faces along with several video sequences of varying resolution and zoom level. Each subject is recorded under various scenarios, settings and conditions. This dataset is being constructed specifically to support research on techniques that bridge the gap between 2D, appearance-based recognition techniques and fully 3D approaches. It is designed to simulate, in a controlled fashion, realistic surveillance conditions and to probe the efficacy of exploiting 3D models in real scenarios.


IEEE MultiMedia | 2012

Posterity Logging of Face Imagery for Video Surveillance

Andrew D. Bagdanov; A. Del Bimbo; Fabrizio Dini; Giuseppe Lisanti; Iacopo Masi

A real-time posterity logging system detects and tracks multiple targets in video streams, grabbing face images and retaining only the best quality for each detected target.
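A minimal sketch of the "keep only the best face per target" logic, assuming detections already carry a track id and using Laplacian variance as a stand-in quality score; the system's actual quality measure is not specified here.

```python
# Sketch of posterity logging: for each tracked target, keep only the
# sharpest face crop seen so far. Laplacian variance as the quality proxy
# is an assumption; the system's real quality measure may differ.
import cv2

best_faces = {}   # track_id -> (quality, face_crop)

def log_face(track_id, face_crop):
    """Update the stored best-quality face crop for one tracked target."""
    gray = cv2.cvtColor(face_crop, cv2.COLOR_BGR2GRAY)
    quality = cv2.Laplacian(gray, cv2.CV_64F).var()
    if track_id not in best_faces or quality > best_faces[track_id][0]:
        best_faces[track_id] = (quality, face_crop.copy())
```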


Computer Vision and Pattern Recognition | 2016

Pooling Faces: Template Based Face Recognition with Pooled Face Images

Tal Hassner; Iacopo Masi; Jungyeon Kim; Jongmoo Choi; Shai Harel; Prem Natarajan; Gérard G. Medioni

We propose a novel approach to template-based face recognition. Our dual goal is to both increase recognition accuracy and reduce the computational and storage costs of template matching. To do this, we leverage an approach which was proven effective in many other domains but, to our knowledge, never fully explored for face images: average pooling of face photos. We show how (and why!) the space of a template's images can be partitioned and then pooled based on image quality and head pose, and the effect this has on accuracy and template size. We perform extensive tests on the IJB-A and Janus CS2 template-based face identification and verification benchmarks. These show that not only does our approach outperform the published state of the art despite requiring far fewer cross-template comparisons, but also, surprisingly, that image pooling performs on par with deep feature pooling.
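A small sketch of template pooling under simple assumptions: each aligned face image comes with a yaw estimate and a quality score, images are binned by both, and each bin is pixel-averaged into one pooled face. The binning rule is illustrative, not the paper's exact partition.

```python
# Sketch of template pooling: bin a template's aligned face images by head
# pose and image quality, then pixel-average each bin into one "pooled face".
# The binning rule (yaw sign x sharpness threshold) is illustrative only.
import numpy as np

def pool_template(images, yaws, qualities, q_thresh=100.0):
    """images: list of aligned HxWx3 float arrays; yaws/qualities: per image."""
    bins = {}
    for img, yaw, q in zip(images, yaws, qualities):
        key = ("left" if yaw < 0 else "right",
               "sharp" if q > q_thresh else "blurry")
        bins.setdefault(key, []).append(img)
    # One pooled (average) face image per bin replaces the full image set.
    return {key: np.mean(np.stack(imgs), axis=0) for key, imgs in bins.items()}
```

Matching then compares the few pooled faces of two templates instead of every cross-template image pair, which is where the storage and comparison savings come from.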


Computer Vision and Pattern Recognition | 2013

Using 3D Models to Recognize 2D Faces in the Wild

Iacopo Masi; Giuseppe Lisanti; Andrew D. Bagdanov; Pietro Pala; Alberto Del Bimbo

In this paper we consider the problem of face recognition in imagery captured in uncooperative environments using PTZ cameras. For each subject enrolled in the gallery, we acquire a high-resolution 3D model from which we generate a series of rendered face images of varying viewpoint. The result of regularly sampling face pose for all subjects is a redundant basis that over-represents each target. To recognize an unknown probe image, we perform a sparse reconstruction of SIFT features extracted from the probe using a basis of SIFT features from the gallery. While directly collecting images over varying pose for all enrolled subjects is prohibitive at enrollment, the use of high-speed 3D acquisition systems allows our face recognition system to quickly acquire a single model and generate synthetic views offline. Finally, we show, using two publicly available datasets, how our approach performs when using rendered gallery images to recognize 2D rendered probe images and 2D probe images acquired using PTZ cameras.
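A compact sketch of the sparse-reconstruction step in the spirit of sparse-representation classification, assuming descriptors are already extracted from the rendered gallery views and the probe; Lasso and a per-subject residual rule stand in for the paper's exact solver.

```python
# Sketch of recognising a probe by sparse reconstruction over a gallery basis,
# in the spirit of sparse-representation classification (SRC). Feature
# extraction (SIFT from rendered views) is omitted; descriptors arrive ready.
import numpy as np
from sklearn.linear_model import Lasso

def src_identify(probe, gallery, gallery_ids, alpha=0.01):
    """probe: (d,), gallery: (d, n) columns, gallery_ids: (n,) subject labels."""
    coder = Lasso(alpha=alpha, max_iter=5000)
    coder.fit(gallery, probe)
    coef = coder.coef_
    # Pick the subject whose columns reconstruct the probe with lowest residual.
    residuals = {}
    for sid in np.unique(gallery_ids):
        mask = gallery_ids == sid
        recon = gallery[:, mask] @ coef[mask]
        residuals[sid] = np.linalg.norm(probe - recon)
    return min(residuals, key=residuals.get)
```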

Collaboration


Dive into Iacopo Masi's collaborations.

Top Co-Authors

Gérard G. Medioni, University of Southern California
Tal Hassner, Open University of Israel
Prem Natarajan, University of Southern California
Feng-Ju Chang, University of Southern California
Jongmoo Choi, University of Southern California
Jungyeon Kim, University of Southern California