Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mohamed Daoudi is active.

Publication


Featured research published by Mohamed Daoudi.


IEEE Transactions on Systems, Man, and Cybernetics | 2015

3-D Human Action Recognition by Shape Analysis of Motion Trajectories on Riemannian Manifold

Maxime Devanne; Hazem Wannous; Stefano Berretti; Pietro Pala; Mohamed Daoudi; Alberto Del Bimbo

Recognizing human actions in 3-D video sequences is an important open problem that is currently at the heart of many research domains, including surveillance, natural interfaces and rehabilitation. However, the design and development of models for action recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, clothing and appearance. In this paper, we propose a new framework to extract a compact representation of a human action captured through a depth sensor, and enable accurate action recognition. The proposed solution builds on fitting a human skeleton model to the acquired data so as to represent the 3-D coordinates of the joints and their change over time as a trajectory in a suitable action space. Thanks to such a 3-D joint-based framework, the proposed solution is capable of capturing both the shape and the dynamics of the human body simultaneously. The action recognition problem is then formulated as the problem of computing the similarity between the shape of trajectories in a Riemannian manifold. Classification using k-nearest neighbors is finally performed on this manifold, taking advantage of Riemannian geometry in the open curve shape space. Experiments are carried out on four representative benchmarks to demonstrate the potential of the proposed solution in terms of accuracy and latency for low-latency action recognition. Comparative results with state-of-the-art methods are reported.
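
As a rough illustration of the trajectory-based k-NN idea described in this abstract, the sketch below treats each action as a resampled trajectory of 3-D joint coordinates and uses a plain Euclidean distance after translation/scale normalization as a simplified stand-in for the elastic shape metric on the open-curve manifold; the function names and the distance itself are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: actions as trajectories + k-NN on a simplified distance.
import numpy as np

def resample_trajectory(traj, n_samples=50):
    """Resample a (T, D) trajectory to n_samples frames by linear interpolation."""
    T, D = traj.shape
    old_t = np.linspace(0.0, 1.0, T)
    new_t = np.linspace(0.0, 1.0, n_samples)
    return np.stack([np.interp(new_t, old_t, traj[:, d]) for d in range(D)], axis=1)

def trajectory_distance(a, b):
    """Euclidean stand-in for the elastic shape distance between two trajectories."""
    a, b = resample_trajectory(a), resample_trajectory(b)
    # remove translation and scale so only shape/dynamics remain
    a = (a - a.mean(0)) / (np.linalg.norm(a) + 1e-8)
    b = (b - b.mean(0)) / (np.linalg.norm(b) + 1e-8)
    return np.linalg.norm(a - b)

def knn_classify(query, train_trajs, train_labels, k=3):
    """Majority vote over the k nearest training trajectories."""
    dists = [trajectory_distance(query, t) for t in train_trajs]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```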


Computer Graphics Forum | 2009

Partial 3D Shape Retrieval by Reeb Pattern Unfolding

Julien Tierny; Jean-Philippe Vandeborre; Mohamed Daoudi

This paper presents a novel approach for fast and efficient partial shape retrieval on a collection of 3D shapes. Each shape is represented by a Reeb graph associated with geometrical signatures. Partial similarity between two shapes is evaluated by computing a variant of their maximum common subgraph.
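
The following sketch illustrates one simplified way to score partial similarity between two graphs whose nodes carry geometric signatures; it uses a Hungarian assignment over node-signature distances as a crude stand-in for the maximum-common-subgraph variant used in the paper, and the similarity formula is an assumption for illustration only.

```python
# Rough stand-in for signature-based partial graph matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def partial_similarity(sig_a, sig_b):
    """sig_a: (Na, d), sig_b: (Nb, d) node-signature matrices."""
    # pairwise signature distances between nodes of the two graphs
    cost = np.linalg.norm(sig_a[:, None, :] - sig_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    matched_cost = cost[rows, cols].sum()
    # higher similarity when matched nodes have close signatures
    return 1.0 / (1.0 + matched_cost / max(len(rows), 1))
```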


Pattern Recognition | 2015

Accurate 3D action recognition using learning on the Grassmann manifold

Rim Slama; Hazem Wannous; Mohamed Daoudi; Anuj Srivastava

In this paper we address the problem of modeling and analyzing human motion by focusing on 3D body skeletons. In particular, our intent is to represent skeletal motion in a geometric and efficient way, leading to an accurate action-recognition system. Here an action is represented by a dynamical system whose observability matrix is characterized as an element of a Grassmann manifold. To formulate our learning algorithm, we propose two distinct ideas: (1) in the first, we perform classification using a Truncated Wrapped Gaussian model, one for each class in its own tangent space; (2) in the second, we propose a novel learning algorithm that uses a vector representation formed by concatenating local coordinates in tangent spaces associated with different classes and training a linear SVM. We evaluate our approaches on three public 3D action datasets: MSR-Action 3D, UT-Kinect and UCF-Kinect; these datasets represent different kinds of challenges and together provide an exhaustive evaluation. The results show that our approaches either match or exceed state-of-the-art performance, reaching 91.21% on MSR-Action 3D, 97.91% on UCF-Kinect, and 88.5% on UT-Kinect. Finally, we evaluate the latency, i.e., the ability to recognize an action before its termination, of our approach and demonstrate improvements relative to other published approaches.

Highlights:
- A human action recognition approach that represents a skeletal sequence as a point on the Grassmann manifold.
- A new learning algorithm is introduced for learning human actions.
- Experiments are performed on three public datasets.
- Promising success rates are achieved, showing good accuracy and improved latency performance.
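
The sketch below illustrates, under simplifying assumptions, how a skeletal sequence might be mapped to a point on a Grassmann manifold via the observability matrix of a fitted linear dynamical system, with the subspace distance computed from principal angles; the PCA-plus-least-squares system identification and all parameter values are assumptions, not the authors' exact procedure.

```python
# Hedged sketch: sequence -> crude linear dynamical system -> observability
# subspace (a point on a Grassmann manifold) -> principal-angle distance.
import numpy as np

def grassmann_point(sequence, state_dim=5, horizon=3):
    """sequence: (T, D) stacked joint coordinates per frame."""
    X = sequence - sequence.mean(axis=0)
    # simplified system fit: C maps states to observations,
    # A is the least-squares state transition
    U, S, Vt = np.linalg.svd(X.T, full_matrices=False)
    C = U[:, :state_dim]                                   # observation matrix (D, n)
    states = (np.diag(S[:state_dim]) @ Vt[:state_dim]).T   # (T, n) state trajectory
    A, _, _, _ = np.linalg.lstsq(states[:-1], states[1:], rcond=None)
    A = A.T
    # observability matrix [C; CA; CA^2; ...], orthonormalized -> subspace basis
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(horizon)])
    Q, _ = np.linalg.qr(O)
    return Q

def grassmann_distance(Q1, Q2):
    """Geodesic distance from principal angles between two subspaces."""
    s = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))
```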


international conference on image analysis and processing | 2013

Space-Time Pose Representation for 3D Human Action Recognition

Maxime Devanne; Hazem Wannous; Stefano Berretti; Pietro Pala; Mohamed Daoudi; Alberto Del Bimbo

3D human action recognition is an important current challenge at the heart of many research areas relating to the modeling of spatio-temporal information. In this paper, we propose representing human actions using spatio-temporal motion trajectories. In the proposed approach, each trajectory consists of one motion channel corresponding to the evolution of the 3D positions of all joint coordinates across the frames of the action sequence. Action recognition is achieved through a shape trajectory representation learnt by a k-NN classifier, which takes advantage of Riemannian geometry in an open curve shape space. Experiments on the MSR Action 3D and UTKinect human action datasets show that, in comparison to state-of-the-art methods, the proposed approach obtains promising results that demonstrate its potential.
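
A tiny sketch of the "one motion channel" representation mentioned above: each frame's joint coordinates are flattened into a single vector so the action becomes a curve in R^{3J}; the root-joint centering is a common normalization assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def action_as_curve(skeleton_frames, root_joint=0):
    """skeleton_frames: (T, J, 3) joint positions over T frames.
    Returns a (T, 3*J) trajectory, i.e. one point per frame in R^{3J},
    with each frame expressed relative to an assumed root joint."""
    centered = skeleton_frames - skeleton_frames[:, root_joint:root_joint + 1, :]
    T, J, _ = centered.shape
    return centered.reshape(T, 3 * J)
```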


IEEE Transactions on Systems, Man, and Cybernetics | 2014

4-D Facial Expression Recognition by Learning Geometric Deformations

Boulbaba Ben Amor; Hassen Drira; Stefano Berretti; Mohamed Daoudi; Anuj Srivastava

In this paper, we present an automatic approach for facial expression recognition from 3-D video sequences. In the proposed solution, the 3-D faces are represented by collections of radial curves, and a Riemannian shape analysis is applied to effectively quantify the deformations induced by the facial expressions in a given subsequence of 3-D frames. This is obtained from the dense scalar field, which denotes the shooting directions of the geodesic paths constructed between pairs of corresponding radial curves of two faces. As the resulting dense scalar fields are high dimensional, a Linear Discriminant Analysis (LDA) transformation is applied to the dense feature space. Two methods are then used for classification: 1) 3-D motion extraction with a temporal Hidden Markov Model (HMM) and 2) mean deformation capturing with a random forest. While a dynamic HMM is trained on the features in the first approach, the second one computes mean deformations over a window and applies a multiclass random forest. Both of the proposed classification schemes on the scalar fields showed comparable results and outperformed earlier studies on facial expression recognition from 3-D video sequences.
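
A hedged sketch of the second classification route (mean deformations plus random forest): LDA reduces precomputed per-window deformation features, and a random forest classifies them; the feature arrays below are synthetic placeholders and the pipeline is illustrative, not the authors' code.

```python
# LDA reduction followed by a random forest on per-window deformation features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def train_expression_classifier(windows, labels, n_trees=100):
    """windows: (n_windows, n_features) mean deformation features per window."""
    model = make_pipeline(
        LinearDiscriminantAnalysis(),                              # project to at most C-1 dims
        RandomForestClassifier(n_estimators=n_trees, random_state=0),
    )
    model.fit(windows, labels)
    return model

# usage on synthetic data, purely to show the shapes involved
X = np.random.rand(200, 500)        # 200 windows, 500-dim deformation features
y = np.random.randint(0, 6, 200)    # 6 expression classes (placeholder labels)
clf = train_expression_classifier(X, y)
print(clf.predict(X[:5]))
```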


international conference on pattern recognition | 2008

Fast and precise kinematic skeleton extraction of 3D dynamic meshes

Julien Tierny; Jean-Philippe Vandeborre; Mohamed Daoudi

Shape skeleton extraction is a fundamental pre-processing task in shape-based pattern recognition. This paper presents a new algorithm for fast and precise extraction of kinematic skeletons of 3D dynamic surface meshes. Unlike previous approaches, surface motions are characterized by the mesh edge-length deviation induced by its transformation through time. A static skeleton extraction algorithm based on Reeb graphs then exploits this information to extract the kinematic skeleton. This hybrid static and dynamic shape analysis enables the precise detection of objects' articulations as well as of shape topological transitions corresponding to possibly-articulated immobile objects' features. Experiments show that the proposed algorithm is faster than previous techniques while achieving better accuracy.
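
The sketch below implements the motion-characterization step described above in a minimal form: for each mesh edge, the deviation of its length across frames is measured, so edges near articulations score high while rigid regions stay near zero; the normalization by mean length is an assumption, and the subsequent Reeb-graph step is not shown.

```python
import numpy as np

def edge_length_deviation(vertex_frames, edges):
    """vertex_frames: (T, V, 3) vertex positions over T frames.
    edges: (E, 2) integer vertex index pairs.
    Returns (E,) relative deviation of each edge's length across time."""
    a = vertex_frames[:, edges[:, 0], :]       # (T, E, 3)
    b = vertex_frames[:, edges[:, 1], :]
    lengths = np.linalg.norm(a - b, axis=-1)   # (T, E) edge lengths per frame
    return lengths.std(axis=0) / (lengths.mean(axis=0) + 1e-8)
```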


Pattern Recognition | 2015

Combining face averageness and symmetry for 3D-based gender classification

Baiqiang Xia; Boulbaba Ben Amor; Hassen Drira; Mohamed Daoudi; Lahoucine Ballihi

Although human face averageness and symmetry are valuable clues in social perception (such as attractiveness, masculinity/femininity, and healthy/sick), little consideration has been given to them in the literature on facial attribute recognition. In this work, we propose to study the morphological differences between male and female faces by analyzing the averageness and symmetry of their 3D shapes. In particular, we address the following questions: (i) is there any relationship between gender and face averageness/symmetry? and (ii) if this relationship exists, which specific areas of the face are involved? To this end, we propose first to capture densely both the face shape averageness (AVE) and symmetry (SYM) using our Dense Scalar Field (DSF), which denotes the shooting directions of geodesics between facial shapes. Then, we explore these representations using classical machine learning techniques: Feature Selection (FS) methods and the Random Forest (RF) classification algorithm. Experiments conducted on the FRGCv2 dataset show that a significant relationship exists between gender and facial averageness/symmetry, achieving a classification rate of 93.7% on the 466 earliest scans of subjects (mainly neutral) and 92.4% on the whole FRGCv2 dataset (including facial expressions).

Highlights:
- New Dense Scalar Fields grounded in Riemannian geometry for 3D facial shape analysis.
- New averageness and symmetry descriptors for gender classification.
- Combining averageness and symmetry for better gender classification.
- Classification results competitive with the state of the art.
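
A minimal sketch of the classification stage, assuming the averageness/symmetry scalar fields have already been computed elsewhere: univariate feature selection followed by a random forest, evaluated with cross-validation on synthetic placeholder data; feature dimensions and parameter values are illustrative.

```python
# Feature selection + random forest on precomputed AVE/SYM features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def gender_classifier(n_features=200, n_trees=200):
    return make_pipeline(
        SelectKBest(score_func=f_classif, k=n_features),
        RandomForestClassifier(n_estimators=n_trees, random_state=0),
    )

# synthetic stand-in for concatenated averageness + symmetry features
X = np.random.rand(300, 1000)
y = np.random.randint(0, 2, 300)   # binary gender labels (arbitrary coding)
scores = cross_val_score(gender_classifier(), X, y, cv=5)
print(scores.mean())
```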


Image and Vision Computing | 2014

3D human motion analysis framework for shape similarity and retrieval

Rim Slama; Hazem Wannous; Mohamed Daoudi

3D shape similarity from video is a challenging problem lying at the heart of many primary research areas in computer graphics and computer vision. In this paper, we address within a new framework the problem of 3D shape representation and shape similarity in human video sequences. Our shape representation is formulated using an extremal human curve (EHC) descriptor extracted from the body surface. It takes advantage of Riemannian geometry in the open curve shape space and therefore allows computing statistics on it. It also allows subject pose comparison regardless of geometrical transformations and elastic surface changes. Shape similarity is computed by an efficient method which exploits the compact EHC representation in open curve shape space together with an elastic distance measure. Thanks to these assets, several important applications of human action analysis are addressed: shape similarity computation, video sequence comparison, video segmentation, video clustering, summarization and motion retrieval. Experiments on both synthetic and real 3D human video sequences show that our approach provides accurate static and temporal shape similarity for pose retrieval in video, compared with state-of-the-art approaches. Moreover, local 3D video retrieval is performed using motion segmentation and a dynamic time warping (DTW) algorithm in the feature vector space. The obtained results are promising and show the potential of this approach.
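
The sketch below is a textbook dynamic time warping routine of the kind used for the motion-retrieval step mentioned above, aligning two sequences of per-frame feature vectors; it is a generic implementation, not the paper's code, and the Euclidean frame cost is an assumption.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """seq_a: (Ta, d), seq_b: (Tb, d) sequences of per-frame feature vectors.
    Returns the accumulated alignment cost."""
    Ta, Tb = len(seq_a), len(seq_b)
    acc = np.full((Ta + 1, Tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[Ta, Tb]
```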


acm multimedia | 2010

Local visual patch for 3d shape retrieval

Hedi Tabia; Mohamed Daoudi; Jean Philippe Vandeborre; Olivier Colot

We present a novel method for 3D-object retrieval using Bag of Features (BoF) approaches [8]. The method starts by selecting and then describing a set of points from the 3D object. The proposed descriptor is an indexed collection of closed curves in R^3 on the 3D surface. Such a descriptor has the advantage of being invariant to the different transformations that a shape can undergo. Based on vector quantization, we cluster those descriptors to form a shape vocabulary. Then, each point selected in the object is associated with a cluster (word) in that vocabulary. Finally, a BoF histogram counting the occurrences of every word is computed. In order to assess our method, we used shapes from the TOSCA and Sumner datasets. The results clearly demonstrate that the method is robust to many kinds of transformations and produces higher precision compared with some state-of-the-art methods.
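
A hedged sketch of the bag-of-features pipeline just described, assuming the local curve descriptors have already been extracted: k-means builds the shape vocabulary, and each shape becomes an L1-normalized histogram of word occurrences; cluster counts and helper names are illustrative.

```python
# Vocabulary building and BoF histogram computation from local descriptors.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, n_words=64):
    """all_descriptors: (N, d) descriptors pooled from the training shapes."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_descriptors)

def bof_histogram(shape_descriptors, vocabulary):
    """L1-normalized histogram of visual-word occurrences for one shape."""
    words = vocabulary.predict(shape_descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)
```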


ieee international conference on automatic face gesture recognition | 2015

Human-object interaction recognition by learning the distances between the object and the skeleton joints

Meng Meng; Hassen Drira; Mohamed Daoudi; Jacques Boonaert

In this paper we present a fully automatic approach for human-object interaction recognition from depth sensors. Towards that goal, we extract relevant frame-level features, such as inter-joint distances and joint-object distances, that are suitable for real-time action recognition. These features are insensitive to position and pose variation. Experiments conducted on the ORGBD dataset, following state-of-the-art settings, show the effectiveness of the proposed approach.
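
A small sketch of the frame-level features named above: pairwise inter-joint distances plus the distance from each joint to the object position; the array shapes and the per-frame function are assumptions for illustration.

```python
import numpy as np

def frame_features(joints, object_pos):
    """joints: (J, 3) joint positions for one frame; object_pos: (3,) object center.
    Returns the concatenation of inter-joint and joint-object distances."""
    J = len(joints)
    iu = np.triu_indices(J, k=1)                               # all joint pairs
    inter_joint = np.linalg.norm(joints[iu[0]] - joints[iu[1]], axis=1)
    joint_object = np.linalg.norm(joints - object_pos[None, :], axis=1)
    return np.concatenate([inter_joint, joint_object])
```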

Collaboration


Dive into Mohamed Daoudi's collaborations.

Top Co-Authors

Hassen Drira

Institut Mines-Télécom

Lahoucine Ballihi

Centre national de la recherche scientifique

Hazem Wannous

Laboratoire d'Informatique Fondamentale de Lille

Julien Tierny

Laboratoire d'Informatique Fondamentale de Lille
