
Publications


Featured research published by Jean-Luc Dugelay.


Multimedia Tools and Applications | 2011

Bag of soft biometrics for person identification

Antitza Dantcheva; Carmelo Velardo; Angela D'Angelo; Jean-Luc Dugelay

In this work we seek to provide insight into the general topic of soft biometrics. We first present a refined definition of soft biometrics, emphasizing the aspect of human compliance, and then identify candidate traits that satisfy this definition. We then address relations between traits and discuss their associated benefits and limitations. We also consider two novel soft biometric traits, namely weight and color of clothes, and analyze their reliability, providing promising performance results. Finally, we consider a new application, human identification carried out solely by a bag of facial, body and accessory soft biometric traits, and as evidence of its practicality we provide promising preliminary results.
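The final application, identification from a bag of categorical soft biometric traits, can be sketched as a simple trait-matching search over a gallery. This is a minimal sketch; the trait names and values below are illustrative placeholders, not the paper's actual feature set or matching rule.

```python
def match_score(query, candidate):
    """Count the observed traits on which the two profiles agree."""
    shared = set(query) & set(candidate)
    return sum(query[t] == candidate[t] for t in shared)

def identify(query, gallery):
    """Return the gallery identity whose trait bag best matches the query."""
    return max(gallery, key=lambda name: match_score(query, gallery[name]))

# Hypothetical gallery; trait names and values are illustrative only.
gallery = {
    "alice": {"hair": "blond", "glasses": True,  "clothes": "red"},
    "bob":   {"hair": "dark",  "glasses": False, "clothes": "blue"},
}
query = {"hair": "dark", "glasses": False, "clothes": "blue"}
identify(query, gallery)  # -> "bob"
```

Because every trait is a coarse categorical label, individual traits are weak on their own; it is the combination (the "bag") that becomes discriminative.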


Face and Gesture 2011 | 2011

Improving the recognition of faces occluded by facial accessories

Rui Min; Abdenour Hadid; Jean-Luc Dugelay

Facial occlusions, due for example to sunglasses, hats, scarves or beards, can significantly affect the performance of any face recognition system. Unfortunately, facial occlusions are quite common in real-world applications, especially when the individuals are not cooperative with the system, as in video surveillance scenarios. While there has been an enormous amount of research on face recognition under pose/illumination changes and image degradations, the problems caused by occlusions are mostly overlooked. The focus of this paper is thus on facial occlusions, and particularly on how to improve the recognition of faces occluded by sunglasses and scarves. We propose an efficient approach which consists of first detecting the presence of a scarf or sunglasses and then processing only the non-occluded facial regions. The occlusion detection problem is approached using Gabor wavelets, PCA and support vector machines (SVM), while the recognition of the non-occluded facial part is performed using block-based local binary patterns. Experiments on the AR face database show that the proposed method yields significant performance improvements over existing works for recognizing both partially occluded and non-occluded faces. Furthermore, the performance of the proposed approach is also assessed under illumination and extreme facial expression changes, demonstrating interesting results.
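The block-based local binary pattern descriptor used on the non-occluded regions can be sketched in a few lines of NumPy. This is a minimal 8-neighbour LBP with a uniform square block grid, an assumption for illustration; the paper's exact operator and block layout may differ.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit   # one bit per neighbour
    return codes

def block_histograms(codes, grid=(4, 4)):
    """Concatenate 256-bin LBP histograms over a grid of image blocks."""
    h, w = codes.shape
    bh, bw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hists.append(np.bincount(block.ravel(), minlength=256))
    return np.concatenate(hists)
```

Faces are then compared by a distance between their concatenated block histograms; skipping the blocks flagged as occluded is what makes the representation robust to sunglasses and scarves.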


International Conference on Biometrics | 2012

Moving face spoofing detection via 3D projective invariants

Maria De Marsico; Michele Nappi; Daniel Riccio; Jean-Luc Dugelay

Face recognition provides many advantages compared with other available biometrics, but it is particularly subject to spoofing. The most accurate methods in the literature addressing this problem rely on estimating the three-dimensionality of faces, which heavily increases the overall cost of the system. This paper proposes an effective and efficient solution to the problem of face spoofing. Starting from a set of automatically located facial points, we exploit geometric invariants for detecting replay attacks. The presented results demonstrate the effectiveness and efficiency of the proposed indices.
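A classic projective invariant of the kind this line of work exploits is the cross-ratio of four collinear points, which is preserved under any projective transformation of the plane. The sketch below is illustrative only, not the paper's exact indices: it computes the cross-ratio and checks its invariance under a homography, the transformation a flat replayed photo undergoes under camera motion.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear 2D points, a classic projective invariant."""
    def dist(p, q):
        return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))

def apply_homography(H, p):
    """Map a 2D point through a 3x3 homography in homogeneous coordinates."""
    x = H @ np.array([p[0], p[1], 1.0])
    return x[:2] / x[2]
```

Intuitively, facial points on a flat spoof stay coplanar, so such planar invariants stay constant across frames, whereas the points of a genuine 3D face do not obey a single planar homography as the head moves.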


International Conference on Image Processing | 2011

Semi-supervised face recognition with LDA self-training

Xuran Zhao; Nicholas W. D. Evans; Jean-Luc Dugelay

Face recognition algorithms based on linear discriminant analysis (LDA) generally give satisfactory performance but tend to require a relatively high number of samples in order to learn reliable projections. In many practical applications of face recognition only a small number of labelled face images is available, and in this case LDA-based algorithms generally perform poorly. The contributions of this paper relate to a new semi-supervised, self-training LDA-based algorithm which is used to augment a manually labelled training set with new data from an unlabelled auxiliary set and hence improve recognition performance. Without the cost of manual labelling, such auxiliary data is often easily acquired but is not normally useful for learning. We report face recognition experiments on three independent databases which demonstrate a consistent improvement over our baseline, supervised LDA system. Our algorithm is also shown to significantly outperform other semi-supervised learning algorithms.
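The self-training loop can be sketched with a small Fisher LDA written directly in NumPy: train on the labelled pool, pseudo-label the most confident unlabelled samples, move them into the training set, and refit. The nearest-class-mean confidence heuristic and the fixed per-round batch size are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def fit_lda(X, y):
    """Fisher LDA: projection maximizing between- vs. within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - overall_mean).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(np.real(vals))[::-1]
    W = np.real(vecs[:, order[: len(classes) - 1]])
    means = {c: (X[y == c] @ W).mean(axis=0) for c in classes}
    return W, means

def predict(X, W, means):
    """Nearest-class-mean labels in the projected space, plus a confidence."""
    Z = X @ W
    labels = list(means)
    dists = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in labels])
    return np.array(labels)[np.argmin(dists, axis=0)], dists.min(axis=0)

def self_train(X_l, y_l, X_u, rounds=3, k=5):
    """Each round, absorb the k most confident unlabelled samples and refit."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        W, means = fit_lda(X_l, y_l)
        pred, conf = predict(X_u, W, means)
        pick = np.argsort(conf)[:k]          # smallest distance = most confident
        X_l = np.vstack([X_l, X_u[pick]])
        y_l = np.concatenate([y_l, pred[pick]])
        X_u = np.delete(X_u, pick, axis=0)
    return fit_lda(X_l, y_l)
```

The risk of any self-training scheme is label drift: a wrong pseudo-label gets reinforced on the next refit, which is why only the most confident samples are absorbed each round.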


International Conference on Image Processing | 2007

Video Face Recognition: A Physiological and Behavioural Multimodal Approach

Federico Matta; Jean-Luc Dugelay

In this article we present a multimodal system for person recognition that integrates two complementary approaches working with video data. The first module exploits behavioural information: it is based on statistical features computed from the displacement signals of the head. The second exploits physiological information: it is a probabilistic extension of the classic eigenface approach. For a consistent fusion, both systems share the same probabilistic classification framework: a Gaussian mixture model (GMM) approximation and a Bayesian classifier. We assess the performance of the multimodal system by implementing two fusion strategies and analyse their evolution in the presence of artificial noise.
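The shared probabilistic framework can be illustrated with a log-domain score fusion: each modality scores the observation under a per-person generative model, and the two log-likelihoods are combined with a weight. Single diagonal Gaussians stand in for the paper's GMMs here, and all names and numbers are illustrative assumptions.

```python
import numpy as np

def gauss_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian (a 1-component 'GMM')."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def fused_decision(obs, models, weight=0.5):
    """Weighted sum of per-modality log-likelihoods (log-domain product rule);
    return the enrolled identity with the highest fused score."""
    scores = {}
    for person, model in models.items():
        s_behav = gauss_loglik(obs["behaviour"], *model["behaviour"])
        s_physio = gauss_loglik(obs["appearance"], *model["appearance"])
        scores[person] = weight * s_behav + (1 - weight) * s_physio
    return max(scores, key=scores.get)

# Hypothetical enrolled models: (mean, variance) per modality, illustrative numbers.
models = {
    "A": {"behaviour": ([0.0, 0.0], [1.0, 1.0]), "appearance": ([5.0], [1.0])},
    "B": {"behaviour": ([3.0, 3.0], [1.0, 1.0]), "appearance": ([0.0], [1.0])},
}
obs = {"behaviour": [0.2, -0.1], "appearance": [4.5]}
```

Summing log-likelihoods corresponds to assuming the two modalities are conditionally independent given the identity; the weight lets one modality dominate when the other is noisy.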


Proceedings of SPIE | 2011

Automatic extraction of facial interest points based on 2D and 3D data

Nesli Erdogmus; Jean-Luc Dugelay

Facial feature points are one of the most important clues for many computer vision applications such as face normalization, registration and model-based human face coding. Hence, automating the extraction of these points would have a wide range of uses. In this paper, we aim to automatically detect a subset of the Facial Definition Parameters (FDPs) defined in MPEG-4 by utilizing both 2D and 3D face data. The main assumption in this work is that the 2D images and the corresponding 3D scans are taken of frontal faces with neutral expressions. This limitation is realistic with respect to our scenario, in which enrollment is done in a controlled environment and the detected FDP points are to be used for the warping and animation of the enrolled faces [1], where the choice of MPEG-4 FDPs is justified. For the extraction of each point, 2D data, 3D data or both are used according to the distinctive information they carry in that particular facial region. As a result, a total of 29 interest points are detected. The method is tested on the neutral set of the Bosphorus database, which includes 105 subjects with registered 3D scans and color images.
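As one example of choosing the modality that is most distinctive for a region, the nose tip on a frontal, neutral scan is often taken from the 3D data alone. The toy heuristic below (the vertex closest to the sensor, assuming +z points toward the camera) is an illustrative assumption, not the paper's actual detector.

```python
import numpy as np

def nose_tip(points):
    """For a frontal neutral scan with +z toward the camera, take the vertex
    closest to the sensor as the nose tip (a common, very simple heuristic)."""
    points = np.asarray(points, float)
    return points[np.argmax(points[:, 2])]
```

Points such as eye corners, by contrast, are far easier to locate in the 2D color image, which is why a hybrid 2D/3D strategy is used per region.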


International Conference on Acoustics, Speech, and Signal Processing | 2008

Tomofaces: Eigenfaces extended to videos of speakers

Federico Matta; Jean-Luc Dugelay

In this article we propose a novel spatio-temporal approach to person recognition using video information. By applying discrete video tomography, our algorithm summarises the head and facial dynamics of a sequence into a single image (called a video X-ray image), which is subsequently analysed by an extended version of the eigenface approach. In the experimental part, we assess the discriminative power of our system and compare it with an analogous one working on traditional facial appearance. Finally, we integrate the X-ray information with appearance in a multimodal system, which improves the recognition rates of the standalone frameworks.
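The idea of collapsing a clip's dynamics into a single image can be sketched as a projection along the temporal axis. The variant below sums absolute inter-frame differences, which is an assumption for illustration; the paper's discrete video tomography may use a different temporal projection.

```python
import numpy as np

def video_xray(frames):
    """Collapse a (T, H, W) clip along time by summing absolute inter-frame
    differences: static regions cancel out, moving regions accumulate."""
    frames = np.asarray(frames, float)
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)
```

The resulting single image can then be fed to any still-image subspace method (eigenface-style PCA, in the paper's case), which is what makes the approach "eigenfaces extended to videos".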


International Conference on Image Processing | 2001

Eye state tracking for face cloning

A.C.A. dal Valle; Jean-Luc Dugelay

This article presents an efficient approach to eye movement estimation that combines color- and energy-based image analysis algorithms. The movement is first analyzed and then described in terms of action units. A temporal state diagram is used to control the behavior of the analysis over time, so that the movements of the eye can be synthesized from this description after translating it into face animation parameters.
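The temporal state diagram can be sketched as a small finite-state machine driven by a per-frame eye-openness measurement. The states, transitions and threshold below are illustrative assumptions, not the paper's exact diagram.

```python
# Hypothetical eye-state diagram: unknown (state, input) pairs keep the state.
TRANSITIONS = {
    ("open", "low"): "closing",
    ("closing", "low"): "closed",
    ("closing", "high"): "open",
    ("closed", "high"): "opening",
    ("opening", "high"): "open",
    ("opening", "low"): "closed",
}

def track(openness_values, threshold=0.3, state="open"):
    """Feed per-frame eye-openness measurements through the state diagram."""
    states = []
    for v in openness_values:
        level = "high" if v >= threshold else "low"
        state = TRANSITIONS.get((state, level), state)
        states.append(state)
    return states
```

Constraining transitions this way (open can only reach closed via closing, and vice versa) suppresses single-frame measurement glitches that a frame-by-frame threshold would pass through.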


ACM Multimedia | 2010

BIOFACE: a biometric face demonstrator

Mourad Ouaret; Antitza Dantcheva; Rui Min; Lionel Daniel; Jean-Luc Dugelay

In this paper, a demonstrator called BIOFACE incorporating several facial biometric techniques is described. It includes the well-established Eigenfaces and the recently published Tomofaces techniques, which perform face recognition based on facial appearance and dynamics, respectively. Both techniques are based on dimensionality reduction of the face space, and enrollment requires the projection of several positive face samples into the reduced space. Alternatively, BIOFACE also performs face recognition based on the matching of Scale Invariant Feature Transform (SIFT) features.

Moreover, BIOFACE extracts a facial soft biometric profile, which consists of a bag of facial soft biometric traits such as skin, hair and eye color, and the presence of glasses, a beard or a moustache. The fast and efficient detection of the facial soft biometrics is performed as a pre-processing step and employed to prune the search for the facial recognition module.

Finally, the demonstrator also detects facial events such as blinking, yawning and looking away. The car driver scenario is a good example of the importance of such traits for detecting fatigue.

The BIOFACE demonstrator is an attempt to show the potential and the performance of such facial processing techniques in a real-life scenario. The demonstrator is built using the C/C++ programming language, which is suitable for implementing image and video processing techniques due to its fast execution. On top of that, the Open Source Computer Vision Library (OpenCV), which is optimized for Intel processors, is used to implement the image processing algorithms.
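The pruning step can be sketched as filtering the gallery for consistency with the probe's soft biometric profile before running the costlier face matcher. This is a minimal sketch in Python rather than the demonstrator's C/C++; the trait names are illustrative.

```python
def prune_gallery(gallery, profile):
    """Keep only gallery subjects whose stored soft biometric traits agree
    with every trait observed in the probe's profile."""
    def consistent(stored):
        return all(stored.get(t) == v for t, v in profile.items() if t in stored)
    return {name: traits for name, traits in gallery.items() if consistent(traits)}
```

Since soft traits are cheap to detect, discarding inconsistent candidates up front shrinks the set of identities the appearance-based recognizer has to score.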


Proceedings of SPIE | 2010

Realistic and Animatable Face Models for Expression Simulations in 3D

Nesli Erdogmus; Remy Etheve; Jean-Luc Dugelay

In the face recognition problem, one of the most critical sources of variation is facial expression. This paper presents a system to overcome this issue by utilizing facial expression simulations on realistic and animatable face models that comply with the MPEG-4 specifications. In our system, 3D frontal face scans of the users, in a neutral expression and with a closed mouth, are first taken for one-time enrollment. These rigid face models are then converted into animatable models by warping a generic animatable model using the Thin Plate Spline method. The warping is based on facial feature points, and both 2D color and 3D shape data are exploited to automate their extraction. The obtained user models can then be animated by a facial animation engine. This new capability enables us to bring the whole database into the same expression state as detected in a test image, eliminating the disadvantage of expression variations and yielding better recognition results.
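The Thin Plate Spline warp that fits the generic model to each scan can be sketched in NumPy for the 2D case: solve for radial-basis plus affine weights from corresponding control points, then evaluate the warp anywhere. This is a textbook TPS sketch under that 2D simplification, not the paper's 3D implementation.

```python
import numpy as np

def _U(r):
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    r = np.where(r == 0, 1.0, r)   # log(1) = 0, so zero distances give U = 0
    return r ** 2 * np.log(r)

def tps_fit(src, dst):
    """Solve for TPS weights mapping src control points exactly onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    K = _U(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])          # side conditions: P.T w = 0
    return src, np.linalg.solve(A, b)

def tps_apply(model, pts):
    """Evaluate the fitted warp at arbitrary 2D points."""
    src, w = model
    pts = np.asarray(pts, float)
    K = _U(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ w[:len(src)] + P @ w[len(src):]
```

The warp interpolates the control points exactly while minimizing bending energy elsewhere, which is why a sparse set of facial feature correspondences suffices to deform the whole generic mesh.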
