Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Boulbaba Ben Amor is active.

Publication


Featured research published by Boulbaba Ben Amor.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

3D Face Recognition under Expressions, Occlusions, and Pose Variations

Hassen Drira; Boulbaba Ben Amor; Anuj Srivastava; Mohamed Daoudi; Rim Slama

We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
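The radial-curve comparison at the heart of this framework can be sketched compactly. Below is a simplified, illustrative Python version based on the square-root velocity function (SRVF) that underlies this line of elastic shape analysis; the optimization over rotations and re-parameterizations in the full framework is omitted, and all function names are our own.

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function of a sampled curve (n_points x dim).
    A simplified discretization; the full framework also optimizes over
    rotations and re-parameterizations, omitted here."""
    vel = np.gradient(curve, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(speed, 1e-8))

def elastic_distance(curve_a, curve_b):
    """Approximate elastic (pre-)shape distance between two radial
    curves: the L2 distance between their SRVF representations."""
    qa, qb = srvf(curve_a), srvf(curve_b)
    return np.sqrt(np.sum((qa - qb) ** 2) / len(curve_a))

def face_distance(curves_a, curves_b):
    """Compare two faces, each represented by an indexed collection of
    radial curves, by summing per-curve elastic distances."""
    return sum(elastic_distance(a, b) for a, b in zip(curves_a, curves_b))
```

In the paper's setting, each face contributes a fixed indexed set of radial curves emanating from the nose tip, so corresponding curves can be compared pairwise as above.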


International Conference on Pattern Recognition | 2010

A Set of Selected SIFT Features for 3D Facial Expression Recognition

Stefano Berretti; Alberto Del Bimbo; Pietro Pala; Boulbaba Ben Amor; Mohamed Daoudi

In this paper, the problem of person-independent facial expression recognition is addressed on 3D shapes. To this end, an original approach is proposed that computes SIFT descriptors on a set of facial landmarks of depth images, and then selects the subset of most relevant features. Using SVM classification of the selected features, an average recognition rate of 77.5% on the BU-3DFE database has been obtained. Comparative evaluation on a common experimental setup shows that our solution is able to obtain state-of-the-art results.
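The select-then-classify structure described above can be sketched with scikit-learn. This is our illustration, not the authors' code: random features with a class-dependent shift stand in for the SIFT descriptors computed around facial landmarks, and ANOVA-based selection stands in for the paper's relevance-based selection.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-ins for SIFT descriptors on depth images: random
# features where only the first 10 dimensions carry class information.
rng = np.random.default_rng(0)
y = rng.integers(0, 6, 120)            # six prototypical expressions
X = rng.normal(size=(120, 256))
X[:, :10] += y[:, None] * 1.0          # only the first 10 features matter

# Select the most relevant feature subset, then classify with an SVM.
clf = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="linear"))
clf.fit(X, y)
```

Wrapping selection and classification in one pipeline keeps the feature subset learned on training data only, which avoids selection bias when the pipeline is later cross-validated.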


The Visual Computer | 2011

3D facial expression recognition using SIFT descriptors of automatically detected keypoints

Stefano Berretti; Boulbaba Ben Amor; Mohamed Daoudi; Alberto Del Bimbo

Methods to recognize humans’ facial expressions have been proposed mainly focusing on 2D still images and videos. In this paper, the problem of person-independent facial expression recognition is addressed using the 3D geometry information extracted from the 3D shape of the face. To this end, a completely automatic approach is proposed that relies on identifying a set of facial keypoints, computing SIFT feature descriptors of depth images of the face around sample points defined starting from the facial keypoints, and selecting the subset of features with maximum relevance. Training a Support Vector Machine (SVM) for each facial expression to be recognized, and combining them to form a multi-class classifier, an average recognition rate of 78.43% on the BU-3DFE database has been obtained. Comparison with competitor approaches using a common experimental setting on the BU-3DFE database shows that our solution is capable of obtaining state-of-the-art results. The same 3D face representation framework and testing database have also been used to perform 3D facial expression retrieval (i.e., retrieve 3D scans with the same facial expression as shown by a target subject), with results proving the viability of the proposed solution.
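Training one SVM per expression and combining them into a multi-class classifier corresponds to a one-vs-rest scheme, which scikit-learn implements directly. A minimal sketch (our illustration; synthetic features stand in for the selected SIFT descriptors):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Toy stand-in data: six expression classes, linearly separable.
rng = np.random.default_rng(1)
y = rng.integers(0, 6, 150)            # six prototypical expressions
X = rng.normal(size=(150, 64))
X[:, :8] += y[:, None] * 1.5           # make the toy data separable

# One binary SVM per expression, combined one-vs-rest.
ovr = OneVsRestClassifier(SVC(kernel="linear"))
ovr.fit(X, y)
```

After fitting, `ovr.estimators_` holds the six per-expression SVMs; prediction picks the expression whose binary classifier gives the largest decision value.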


Pattern Recognition | 2011

Shape analysis of local facial patches for 3D facial expression recognition

Ahmed Maalej; Boulbaba Ben Amor; Mohamed Daoudi; Anuj Srivastava; Stefano Berretti

In this paper we address the problem of 3D facial expression recognition. We propose a local geometric shape analysis of facial surfaces coupled with machine learning techniques for expression classification. Computing the length of the geodesic path between corresponding patches in a shape space, using a Riemannian framework, provides quantitative information about their similarity. These measures are then used as inputs to several classification methods. The experimental results demonstrate the effectiveness of the proposed approach. Using multiboosting and support vector machine (SVM) classifiers, we achieved average recognition rates of 98.81% and 97.75%, respectively, for the six prototypical facial expressions on the BU-3DFE database. A comparative study using the same experimental setting shows that the suggested approach outperforms previous work.


IEEE Transactions on Information Forensics and Security | 2012

Boosting 3-D-Geometric Features for Efficient Face Recognition and Gender Classification

Lahoucine Ballihi; Boulbaba Ben Amor; Mohamed Daoudi; Anuj Srivastava; Driss Aboutajdine

We utilize ideas from two growing but disparate areas of computer vision, namely shape analysis using tools from differential geometry and feature selection using machine learning, to select and highlight the salient geometric facial features that contribute most to 3-D face recognition and gender classification. First, a large set of geometric curve features is extracted using level sets (circular curves) and streamlines (radial curves) of the Euclidean distance functions of the facial surface; together they approximate facial surfaces with arbitrarily high accuracy. Then, we use the well-known AdaBoost algorithm for feature selection from this large set and derive a composite classifier that achieves high performance with a minimal set of features. This greatly reduced set, consisting of some level curves on the nose and some radial curves in the forehead and cheek regions, provides a very compact signature of a 3-D face and a fast classification algorithm for face recognition and gender classification. It is also efficient in terms of data storage and transmission costs. Experimental results, carried out using the FRGCv2 dataset, yield a rank-1 face recognition rate of 98% and a gender classification rate of 86%.
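The boosting-as-feature-selection step can be sketched as follows (our illustration with synthetic stand-in features, not the authors' code): each depth-1 tree ("stump") fitted by AdaBoost splits on a single feature, so the features used by the stumps form the small salient subset that is retained.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic curve-feature matrix: 100 candidate features, of which only
# columns 3, 7, and 42 carry class information.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200)                 # e.g. binary gender labels
X = rng.normal(size=(200, 100))
X[:, [3, 7, 42]] += y[:, None] * 1.5        # a few informative "curves"

# Default base learner is a depth-1 decision stump, one feature each.
booster = AdaBoostClassifier(n_estimators=30)
booster.fit(X, y)

# The features split on by the stumps are the selected subset.
selected = sorted({int(t.tree_.feature[0]) for t in booster.estimators_})
```

The selected subset is what yields the compact face signature: only the curve features the stumps actually use need to be stored and transmitted.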


International Conference on Pattern Recognition | 2006

New Experiments on ICP-Based 3D Face Recognition and Authentication

Boulbaba Ben Amor; Mohsen Ardabilian; Liming Chen

In this paper, we discuss new experiments on face recognition and authentication based on three-dimensional surface matching. While most existing methods use facial intensity images, the newest ones introduce depth information to overcome classical face recognition problems such as pose, illumination, and facial expression variations. The presented matching algorithm is based on ICP (Iterative Closest Point), which accurately recovers the pose of the presented probe. The similarity metric is given by the spatial deviation between the overlapping parts of the matched surfaces. The general paradigm consists in building a full 3D face gallery using a laser-based scanner (the offline phase). In the online phase, identification or verification, a single captured 2.5D face model is matched against the whole set of 3D faces in the gallery, or compared to the 3D face model of the claimed identity, respectively. This probe model can be acquired from an arbitrary viewpoint, with arbitrary facial expressions, and under arbitrary lighting conditions. A new multi-view registered 3D face database including these variations was developed within the BioSecure Workshop 2005 in order to perform significant experiments.
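The ICP matching loop can be sketched as follows. This is a generic textbook ICP with an SVD-based rigid alignment, not the authors' exact implementation: coarse initialization and outlier rejection are omitted, and the returned mean deviation plays the role of the similarity score between probe and gallery face.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(probe, gallery):
    """One ICP iteration: match each probe point to its closest gallery
    point, then compute the rigid transform (R, t) minimizing the matched
    distances via the SVD (Kabsch) solution."""
    dist, idx = cKDTree(gallery).query(probe)
    matched = gallery[idx]
    mu_p, mu_m = probe.mean(axis=0), matched.mean(axis=0)
    H = (probe - mu_p).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_p
    return probe @ R.T + t, dist.mean()    # aligned probe, mean deviation

def icp(probe, gallery, iters=30):
    """Iterate alignment; the final mean deviation serves as the score
    between the 2.5D probe and a 3D gallery face."""
    score = np.inf
    for _ in range(iters):
        probe, score = icp_step(probe, gallery)
    return probe, score
```

For identification, this score would be computed against every gallery face and the smallest deviation would win; for verification, it would be thresholded against the claimed identity's model.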


British Machine Vision Conference | 2010

Pose and Expression-Invariant 3D Face Recognition using Elastic Radial Curves

Hassen Drira; Boulbaba Ben Amor; Mohamed Daoudi; Anuj Srivastava

In this paper we explore the use of shapes of elastic radial curves to model 3D facial deformations caused by changes in facial expression. We represent facial surfaces by indexed collections of radial curves on them, emanating from the nose tips, and compare facial shapes by comparing the shapes of their corresponding curves. Building on a previous approach to elastic shape analysis of curves, we obtain an algorithm for comparing facial surfaces. We also introduce a quality-control module that makes our approach robust to pose variation and missing data. Comparative evaluation using a common experimental setup on the GAVAB dataset, considered the most expression-rich and noise-prone 3D face dataset, shows that our approach outperforms other state-of-the-art approaches.


International Conference on Computer Vision | 2009

A Riemannian analysis of 3D nose shapes for partial human biometrics

Hassen Drira; Boulbaba Ben Amor; Anuj Srivastava; Mohamed Daoudi

In this paper we explore the use of nose shapes for performing partial human biometrics. The basic idea is to represent nasal surfaces using indexed collections of iso-curves, and to analyze nose shapes by comparing their corresponding curves. We extend past work on the Riemannian analysis of shapes of closed curves in ℝ³ to obtain a similar Riemannian analysis for nasal surfaces. In particular, we obtain algorithms for computing geodesics, computing statistical means, and stochastic clustering. We demonstrate these ideas in two application contexts: authentication and identification. We evaluate performance on a large database involving 2000 scans from the FRGCv2 database, and present a hierarchical organization of nose databases to allow for efficient searches.


International Conference on Pattern Recognition | 2010

Local 3D Shape Analysis for Facial Expression Recognition

Ahmed Maalej; Boulbaba Ben Amor; Mohamed Daoudi; Anuj Srivastava; Stefano Berretti

We investigate the problem of facial expression recognition using 3D face data. Our approach is based on local shape analysis of several relevant regions of a given face scan. These regions, or patches, of the facial surface are extracted and represented by sets of closed curves. A Riemannian framework is used to derive the shape analysis of the extracted patches. The framework makes it possible to compute similarity (or dissimilarity) distances between patches, and to compute the optimal deformation between them. Once calculated, these measures are employed as inputs to commonly used classification techniques such as AdaBoost and Support Vector Machines (SVM). A quantitative evaluation of our novel approach is conducted on a subset of the publicly available BU-3DFE database.


IEEE Transactions on Systems, Man, and Cybernetics | 2014

4-D Facial Expression Recognition by Learning Geometric Deformations

Boulbaba Ben Amor; Hassen Drira; Stefano Berretti; Mohamed Daoudi; Anuj Srivastava

In this paper, we present an automatic approach for facial expression recognition from 3-D video sequences. In the proposed solution, the 3-D faces are represented by collections of radial curves, and a Riemannian shape analysis is applied to effectively quantify the deformations induced by the facial expressions in a given subsequence of 3-D frames. This is obtained from the dense scalar field that encodes the shooting directions of the geodesic paths constructed between pairs of corresponding radial curves of two faces. As the resulting dense scalar fields are high-dimensional, a Linear Discriminant Analysis (LDA) transformation is applied to the dense feature space. Two methods are then used for classification: 1) 3-D motion extraction with a temporal Hidden Markov Model (HMM) and 2) mean-deformation capturing with a random forest. While the first approach trains a dynamic HMM on the features, the second computes mean deformations within a window and applies a multiclass random forest. Both proposed classification schemes on the scalar fields showed comparable results and outperformed earlier studies on facial expression recognition from 3-D video sequences.
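The second classification scheme, LDA reduction followed by a multiclass random forest, can be sketched with scikit-learn. This is our illustration only: random features with a class-dependent shift stand in for the dense scalar fields, and the temporal windowing is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the high-dimensional dense scalar fields.
rng = np.random.default_rng(4)
y = rng.integers(0, 6, 180)                # six expression classes
X = rng.normal(size=(180, 300))
X[:, :15] += y[:, None] * 0.7

# LDA reduces to at most n_classes - 1 = 5 discriminant dimensions,
# then a multiclass random forest classifies the reduced features.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=5),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X, y)
```

The LDA step addresses the dimensionality problem the abstract mentions: with six classes, at most five discriminant directions carry between-class information, so the forest operates on a very compact representation.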

Collaboration


Dive into Boulbaba Ben Amor's collaborations.

Top Co-Authors

Hassen Drira
Institut Mines-Télécom

Liming Chen
École centrale de Lyon

Lahoucine Ballihi
Centre national de la recherche scientifique

Karima Ouji
École centrale de Lyon