Publication


Featured research published by Hassen Drira.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

3D Face Recognition under Expressions, Occlusions, and Pose Variations

Hassen Drira; Boulbaba Ben Amor; Anuj Srivastava; Mohamed Daoudi; Rim Slama

We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
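As an illustration of the curve representation described above, here is a minimal Python sketch (not the authors' code) of extracting radial curves from a 3D facial scan. It assumes the scan is an (N, 3) NumPy point cloud with a known nose tip, and approximates each radial curve by a thin angular sector around the tip.

import numpy as np

def extract_radial_curves(points, nose_tip, n_curves=40, n_samples=50, width_deg=2.0):
    """Group points into thin angular sectors around the nose tip and resample
    each sector by distance from the tip to form one radial curve."""
    centered = points - nose_tip                      # work in nose-tip coordinates
    angles = np.degrees(np.arctan2(centered[:, 1], centered[:, 0])) % 360.0
    radii = np.linalg.norm(centered[:, :2], axis=1)   # in-plane distance from the tip
    curves = []
    for k in range(n_curves):
        center_angle = 360.0 * k / n_curves
        diff = np.abs((angles - center_angle + 180.0) % 360.0 - 180.0)
        mask = diff < width_deg
        sector = centered[mask]                       # points in this thin sector
        if len(sector) < 2:
            curves.append(None)                       # occluded or missing sector
            continue
        sector = sector[np.argsort(radii[mask])]      # sort outward from the tip
        # resample to a fixed number of points so curves are comparable
        idx = np.linspace(0, len(sector) - 1, n_samples).round().astype(int)
        curves.append(sector[idx])
    return curves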


British Machine Vision Conference | 2010

Pose and Expression-Invariant 3D Face Recognition using Elastic Radial Curves

Hassen Drira; Boulbaba Ben Amor; Mohamed Daoudi; Anuj Srivastava

In this paper we explore the use of shapes of elastic radial curves to model 3D facial deformations caused by changes in facial expressions. We represent facial surfaces by indexed collections of radial curves on them, emanating from the nose tips, and compare facial shapes by comparing the shapes of their corresponding curves. Building on past work on elastic shape analysis of curves, we obtain an algorithm for comparing facial surfaces. We also introduce a quality control module which makes our approach robust to pose variation and missing data. Comparative evaluation using a common experimental setup on the GAVAB dataset, considered the most expression-rich and noise-prone 3D face dataset, shows that our approach outperforms other state-of-the-art approaches.
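A simplified Python sketch of comparing two radial curves in the square-root velocity (SRV) representation commonly used in elastic shape analysis follows. It assumes both curves are already resampled to the same number of points; a full elastic match would additionally optimize over rotation and re-parameterization (e.g. via dynamic programming), which is omitted here.

import numpy as np

def srv(curve):
    """Square-root velocity function of an (n, 3) curve, scale-normalized."""
    vel = np.gradient(curve, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    q = vel / np.sqrt(np.maximum(speed, 1e-12))
    return q / np.linalg.norm(q)             # unit L2 norm removes scale

def elastic_like_distance(curve_a, curve_b):
    """Approximate shape distance: L2 distance between SRV representations."""
    return float(np.linalg.norm(srv(curve_a) - srv(curve_b)))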


International Conference on Computer Vision | 2009

A Riemannian analysis of 3D nose shapes for partial human biometrics

Hassen Drira; Boulbaba Ben Amor; Anuj Srivastava; Mohamed Daoudi

In this paper we explore the use of shapes of noses for performing partial human biometrics. The basic idea is to represent nasal surfaces using indexed collections of iso-curves, and to analyze shapes of noses by comparing their corresponding curves. We extend past work in Riemannian analysis of shapes of closed curves in ℝ³ to obtain a similar Riemannian analysis for nasal surfaces. In particular, we obtain algorithms for computing geodesics, computing statistical means, and stochastic clustering. We demonstrate these ideas in two application contexts: authentication and identification. We evaluate performance on a large database involving 2,000 scans from the FRGC v2 database, and present a hierarchical organization of nose databases to allow for efficient searches.
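The hierarchical organization mentioned above could look roughly like the following hypothetical Python sketch: pairwise shape distances between nose scans feed an agglomerative clustering, so a query can be compared against coarse clusters before individual scans. The function nose_distance is only a stand-in for the paper's Riemannian geodesic distance.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def nose_distance(curves_a, curves_b):
    """Placeholder distance: mean L2 distance between corresponding curves."""
    return float(np.mean([np.linalg.norm(a - b) for a, b in zip(curves_a, curves_b)]))

def organize_gallery(gallery, n_clusters=4):
    """gallery: list of nose scans, each a list of (n, 3) resampled curves."""
    m = len(gallery)
    dist = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            dist[i, j] = dist[j, i] = nose_distance(gallery[i], gallery[j])
    tree = linkage(squareform(dist), method="average")       # agglomerative clustering
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    return tree, labels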


IEEE Transactions on Systems, Man, and Cybernetics | 2014

4-D Facial Expression Recognition by Learning Geometric Deformations

Boulbaba Ben Amor; Hassen Drira; Stefano Berretti; Mohamed Daoudi; Anuj Srivastava

In this paper, we present an automatic approach for facial expression recognition from 3-D video sequences. In the proposed solution, the 3-D faces are represented by collections of radial curves, and a Riemannian shape analysis is applied to effectively quantify the deformations induced by the facial expressions in a given subsequence of 3-D frames. This is obtained from the dense scalar field, which denotes the shooting directions of the geodesic paths constructed between pairs of corresponding radial curves of two faces. As the resulting dense scalar fields have high dimensionality, a Linear Discriminant Analysis (LDA) transformation is applied to the dense feature space. Two methods are then used for classification: 1) 3-D motion extraction with a temporal Hidden Markov Model (HMM) and 2) mean deformation capturing with a random forest. While a dynamic HMM on the features is trained in the first approach, the second one computes mean deformations over a temporal window and applies a multiclass random forest. Both of the proposed classification schemes on the scalar fields showed comparable results and outperformed earlier studies on facial expression recognition from 3-D video sequences.
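A minimal Python sketch of the classification stage only is shown below, assuming each sample is a flattened dense-scalar-field feature vector already averaged over a temporal window: LDA reduces dimensionality, then a multiclass random forest predicts the expression label (the HMM branch would instead model per-frame features over time). The data here is synthetic, purely to make the pipeline runnable.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2000))       # stand-in for dense scalar field features
y = rng.integers(0, 6, size=300)       # six expression classes

clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=5),   # at most n_classes - 1 components
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(clf, X, y, cv=3).mean())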


Pattern Recognition | 2015

Combining face averageness and symmetry for 3D-based gender classification

Baiqiang Xia; Boulbaba Ben Amor; Hassen Drira; Mohamed Daoudi; Lahoucine Ballihi

Although human face averageness and symmetry are valuable cues in social perception (such as attractiveness, masculinity/femininity, and healthy/sick), little consideration has been given to them in the facial attribute recognition literature. In this work, we propose to study the morphological differences between male and female faces by analyzing the averageness and symmetry of their 3D shapes. In particular, we address the following questions: (i) is there any relationship between gender and face averageness/symmetry? and (ii) if this relationship exists, which specific areas of the face are involved? To this end, we first densely capture both face shape averageness (AVE) and symmetry (SYM) using our Dense Scalar Field (DSF), which denotes the shooting directions of geodesics between facial shapes. Then, we explore these representations using classical machine learning techniques, namely Feature Selection (FS) methods and the Random Forest (RF) classification algorithm. Experiments conducted on the FRGCv2 dataset show that a significant relationship exists between gender and facial averageness/symmetry, with a classification rate of 93.7% on the 466 earliest scans of subjects (mainly neutral) and 92.4% on the whole FRGCv2 dataset (including facial expressions). Highlights: new Dense Scalar Fields grounded in Riemannian geometry for 3D facial shape analysis; new averageness and symmetry descriptors for gender classification; combining averageness and symmetry for better gender classification; classification results competitive with the state-of-the-art.
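A hedged Python sketch of the fusion step described above: averageness (AVE) and symmetry (SYM) descriptors are concatenated per face, a univariate feature selection keeps the most discriminative entries, and a random forest predicts gender. The feature arrays below are synthetic placeholders, not real DSF values.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
ave = rng.normal(size=(466, 2000))          # per-face averageness DSF (placeholder)
sym = rng.normal(size=(466, 2000))          # per-face symmetry DSF (placeholder)
gender = rng.integers(0, 2, size=466)

X = np.hstack([ave, sym])                   # simple feature-level fusion
clf = make_pipeline(
    SelectKBest(f_classif, k=500),          # keep the 500 most discriminative features
    RandomForestClassifier(n_estimators=300, random_state=0),
)
print(cross_val_score(clf, X, gender, cv=5).mean())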


International Conference on Biometrics | 2009

Nasal Region Contribution in 3D Face Biometrics Using Shape Analysis Framework

Hassen Drira; Boulbaba Ben Amor; Mohamed Daoudi; Anuj Srivastava

The main goal of this paper is to illustrate a geometric analysis of 3D facial shapes in the presence of varying facial expressions using the nose region. This approach consists of the following two main steps: (i) Each nasal surface is automatically denoised and preprocessed to result in an indexed collection of nasal curves. During this step one detects the tip of the nose and defines a surface distance function with that tip as the reference point. The level curves of this distance function are the desired nasal curves. (ii) Comparisons between noses are based on optimal deformations from one to another. This, in turn, is based on optimal deformations of the corresponding nasal curves across surfaces under an elastic metric. The experimental results, generated using a subset of the FRGC v2 dataset, demonstrate the success of the proposed framework in recognizing people under different facial expressions. The recognition rates obtained here exceed those for a baseline ICP algorithm on the same dataset.
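Since the paper benchmarks against ICP, here is a minimal point-to-point ICP sketch in Python (nearest neighbours plus a best rigid transform via SVD), assuming two roughly pre-aligned (N, 3) nose point clouds; the residual after convergence can serve as a simple dissimilarity score for the baseline.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp_residual(probe, gallery, n_iters=30):
    """Iteratively align the probe scan to the gallery scan; return mean residual."""
    probe = probe.copy()
    tree = cKDTree(gallery)
    for _ in range(n_iters):
        _, idx = tree.query(probe)           # closest gallery point per probe point
        R, t = best_rigid_transform(probe, gallery[idx])
        probe = probe @ R.T + t
    return float(np.mean(tree.query(probe)[0]))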


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Human-object interaction recognition by learning the distances between the object and the skeleton joints

Meng Meng; Hassen Drira; Mohamed Daoudi; Jacques Boonaert

In this paper we present a fully automatic approach for human-object interaction recognition from depth sensors. Towards that goal, we extract relevant frame-level features, such as inter-joint distances and joint-object distances, that are suitable for real-time action recognition. These features are insensitive to position and pose variation. Experiments conducted on the ORGBD dataset following state-of-the-art settings show the effectiveness of the proposed approach.
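A minimal Python sketch of the frame-level features described above: pairwise inter-joint distances plus the distance from every joint to the object centre, assuming per-frame skeleton joints as a (J, 3) array and an object centroid as a 3-vector.

import numpy as np

def frame_features(joints, object_center):
    """joints: (J, 3) skeleton positions; object_center: (3,) point."""
    diff = joints[:, None, :] - joints[None, :, :]
    inter_joint = np.linalg.norm(diff, axis=-1)            # (J, J) distance matrix
    iu = np.triu_indices(len(joints), k=1)
    inter_joint = inter_joint[iu]                          # keep each pair once
    joint_object = np.linalg.norm(joints - object_center, axis=1)
    return np.concatenate([inter_joint, joint_object])     # one descriptor per frame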


Annales des Télécommunications | 2009

An experimental illustration of 3D facial shape analysis under facial expressions

Boulbaba Ben Amor; Hassen Drira; Lahoucine Ballihi; Anuj Srivastava; Mohamed Daoudi

The main goal of this paper is to illustrate a geometric analysis of 3D facial shapes in the presence of varying facial expressions. This approach consists of the following two main steps: (1) Each facial surface is automatically denoised and preprocessed to result in an indexed collection of facial curves. During this step, one detects the tip of the nose and defines a surface distance function with that tip as the reference point. The level curves of this distance function are the desired facial curves. (2) Comparisons between faces are based on optimal deformations from one to another. This, in turn, is based on optimal deformations of the corresponding facial curves across surfaces under an elastic metric. The experimental results, generated using a subset of the Face Recognition Grand Challenge v2 data set, demonstrate the success of the proposed framework in recognizing people under different facial expressions. The recognition rates obtained here exceed those for a baseline ICP algorithm on the same data set.
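A rough Python sketch of step (1) above: facial curves as level sets of a distance function from the nose tip. Here Euclidean distance stands in for the surface (geodesic) distance used in the paper, which keeps the example short but is only an approximation.

import numpy as np

def level_curves(points, nose_tip, levels=(20.0, 40.0, 60.0, 80.0), band=1.0):
    """Return, for each radius level (in the scan's units, e.g. mm), the points
    whose distance to the nose tip falls within a thin band around that level."""
    d = np.linalg.norm(points - nose_tip, axis=1)
    return [points[np.abs(d - r) < band] for r in levels]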


Multimedia Tools and Applications | 2017

Combining shape analysis and texture pattern for palmprint identification

Raouia Mokni; Hassen Drira; Monji Kherallah

We propose an efficient method for principal line extraction from the palmprint and a geometric framework for analyzing the lines' shapes. This representation, along with the elastic Riemannian metric, seems natural for measuring principal line deformations and is robust to challenges such as orientation variation and re-parameterization, due to pose variation and missing parts, respectively. The palmprint texture is investigated using fractal analysis, and the resulting features are fused with the principal line features. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of empirical evaluation, our results match or improve upon state-of-the-art methods on three prominent palmprint datasets: PolyU, CASIA, and IIT-Delhi, each posing a different type of challenge. From a theoretical perspective, this framework allows fusing texture analysis and shape analysis.
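For the texture branch, a hedged Python sketch of a classic box-counting fractal dimension estimate on a binarized palmprint texture image follows; the exact fractal analysis used in the paper may differ, this only illustrates the general idea.

import numpy as np

def box_counting_dimension(binary_img):
    """binary_img: 2D boolean array; returns the slope of log N(s) vs log(1/s)."""
    side = min(binary_img.shape)
    sizes = [s for s in (2, 4, 8, 16, 32, 64) if s < side]
    counts = []
    for s in sizes:
        h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        n_boxes = np.count_nonzero(blocks.any(axis=(1, 3)))   # boxes containing texture
        counts.append(max(n_boxes, 1))                        # guard against log(0)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope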


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Gender and 3D facial symmetry: What's the relationship?

Baiqiang Xia; Boulbaba Ben Amor; Hassen Drira; Mohamed Daoudi; Lahoucine Ballihi

Although it is well known that human faces are approximately symmetric, little consideration has been given in the facial attribute recognition literature to the relationship between facial asymmetry and attributes such as gender, age, and ethnicity. In this paper we present a new approach based on bilateral facial asymmetry for gender classification. We propose to first capture facial asymmetry using a Deformation Scalar Field (DSF) computed on each 3D face, and then to train several classifiers on these representations (DSFs), including Random Forest, AdaBoost, and SVM after a PCA-based feature space transformation. Experiments conducted on the FRGCv2 dataset show that a significant relationship exists between gender and facial symmetry, with a 90.99% correct classification rate on the 466 earliest scans of subjects (mainly neutral) and 88.12% on the whole FRGCv2 dataset (including facial expressions).
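A simplified Python sketch of a bilateral-asymmetry descriptor: mirror the face about its sagittal plane (x = 0 after nose-tip centring) and measure, for each point, the distance to the closest point of the mirrored face. This Euclidean proxy only illustrates the idea; the paper's DSF is defined via Riemannian geodesics between corresponding radial curves.

import numpy as np
from scipy.spatial import cKDTree

def asymmetry_field(points, nose_tip):
    """points: (N, 3) facial scan; returns one asymmetry value per point."""
    centered = points - nose_tip
    mirrored = centered * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0
    dists, _ = cKDTree(mirrored).query(centered)       # closest mirrored point
    return dists                                       # dense per-point asymmetry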

Collaboration


Dive into Hassen Drira's collaborations.

Top Co-Authors
Meng Meng

Institut Mines-Télécom
