Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Saida Bouakaz is active.

Publication


Featured research published by Saida Bouakaz.


Pattern Recognition Letters | 2013

Framework for reliable, real-time facial expression recognition for low resolution images

Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saida Bouakaz

Automatic recognition of facial expressions is a challenging problem, especially for low spatial resolution facial images. It has many potential applications in human-computer interaction, social robots, deceit detection, interactive video and behavior monitoring. In this study we present a novel framework that can recognize facial expressions very efficiently and with high accuracy even for very low resolution facial images. The proposed framework is memory and time efficient, as it extracts texture features in a pyramidal fashion only from the perceptually salient regions of the face. We tested the framework on different databases, which include the Cohn-Kanade (CK+) posed facial expression database, the spontaneous expressions of the MMI facial expression database and the FG-NET facial expressions and emotions database (FEED), and obtained very good results. Moreover, our proposed framework outperforms state-of-the-art methods for expression recognition on low resolution images.
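As an illustration of the kind of pyramidal texture extraction described above, here is a minimal Python sketch that pools uniform-LBP histograms from a few salient face regions over an image pyramid. The region boxes, pyramid depth, and LBP parameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: pyramidal LBP histograms from assumed salient face regions.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

# Hypothetical salient regions (x, y, w, h) in a normalized 128x128 grayscale face crop.
SALIENT_REGIONS = {"eyes": (14, 30, 100, 30), "mouth": (34, 84, 60, 30)}

def lbp_histogram(patch, points=8, radius=1):
    """Uniform LBP histogram of a grayscale patch."""
    lbp = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def pyramidal_features(face, levels=3):
    """Concatenate LBP histograms of each salient region over an image pyramid."""
    feats = []
    for level in range(levels):
        scale = 1.0 / (2 ** level)
        scaled = cv2.resize(face, None, fx=scale, fy=scale)
        for x, y, w, h in SALIENT_REGIONS.values():
            xs, ys, ws, hs = (int(v * scale) for v in (x, y, w, h))
            feats.append(lbp_histogram(scaled[ys:ys + hs, xs:xs + ws]))
    return np.concatenate(feats)
```

Restricting the pyramid to a few small regions is what keeps both the extraction time and the feature dimensionality low.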


International Conference on Image Processing | 2012

Human vision inspired framework for facial expressions recognition

Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saida Bouakaz

We present a novel human vision inspired framework that can recognize facial expressions very efficiently and accurately. We propose to computationally process small, salient regions of the face to extract features, as happens in human vision. To determine which facial region(s) are perceptually salient for a particular expression, we conducted a psycho-visual experimental study with an eye-tracker. A novel feature space conducive to the recognition task is proposed, created by extracting Pyramid Histogram of Orientation Gradients features only from the salient facial regions. By processing only salient regions, the proposed framework achieves two goals: (a) a reduction in computational time for feature extraction, and (b) a reduction in feature vector dimensionality. The proposed framework achieved an automatic expression recognition accuracy of 95.3% on the extended Cohn-Kanade (CK+) facial expression database for the six universal facial expressions.
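A minimal sketch of a Pyramid Histogram of Orientation Gradients (PHOG) descriptor for one salient region, assuming 8 unsigned orientation bins and a 3-level spatial pyramid; the parameter choices are illustrative and not taken from the paper.

```python
# Minimal PHOG sketch for a single grayscale region (e.g. a mouth crop).
import numpy as np
import cv2

def phog(patch, bins=8, levels=3):
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    ang = ang % 180.0                      # unsigned orientations
    feats = []
    h, w = patch.shape
    for level in range(levels):
        cells = 2 ** level                 # cells per side at this pyramid level
        ch, cw = h // cells, w // cells
        for i in range(cells):
            for j in range(cells):
                m = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
                a = ang[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
                hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
                feats.append(hist)
    feats = np.concatenate(feats)
    return feats / (np.linalg.norm(feats) + 1e-8)
```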


International Conference on Computer Vision | 2007

Real-Time Marker-free Motion Capture from multiple cameras

Brice Michoud; Erwan Guillou; Héctor M. Briceño; Saida Bouakaz

We present a fully automated method for real-time, marker-free 3D human motion capture. The system computes the 3D shape of the filmed person from a synchronized camera set. We obtain a robust, real-time system by combining a fast 3D shape analysis with a skin segmentation algorithm for human tracking. A skeleton-based approach facilitates the shape analysis. We are able to track fast and complex human motion in very difficult cases, such as self-occlusion. Results on long video sequences with rapid and complex movements demonstrate the robustness of our approach.
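The skin segmentation step such a pipeline relies on can be sketched as a simple color-space threshold. The YCrCb bounds below are common heuristic values, not the ones used by the authors.

```python
# Minimal skin-segmentation sketch: threshold in YCrCb, then clean the mask.
import cv2
import numpy as np

def skin_mask(bgr_frame):
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before using the mask to track hands and face.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```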


International Conference on Multimedia and Expo | 2013

Pain detection through shape and appearance features

Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saida Bouakaz

In this paper we propose a novel computer vision system that can recognize expressions of pain in videos by analyzing facial features. Usually pain is reported and recorded manually and thus carries a lot of subjectivity. Manual monitoring of pain makes it difficult for medical practitioners to respond quickly in critical situations. Thus, it is desirable to design a system that can automate this task. With our proposed model, pain monitoring can be done in real time without any human intervention. We propose to extract shape information using the pyramid histogram of orientation gradients (PHOG) and appearance information using the pyramid local binary pattern (PLBP) in order to get a discriminative representation of the face. We tested our proposed model on the UNBC-McMaster Shoulder Pain Expression Archive Database and obtained results that exceed the state of the art.
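A minimal sketch of how shape (PHOG) and appearance (PLBP) descriptors could be combined and fed to a classifier. The per-frame extractors are assumed to exist (e.g. the PHOG and pyramidal-LBP sketches above), and the SVM choice is an illustrative assumption, not necessarily the classifier used in the paper.

```python
# Minimal sketch: concatenate shape + appearance features, train a classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def pain_descriptor(face, phog_fn, plbp_fn):
    """Concatenate shape and appearance features for one face crop."""
    return np.concatenate([phog_fn(face), plbp_fn(face)])

def train_pain_classifier(descriptors, labels):
    """descriptors: (n_frames, d) array; labels: 1 = pain, 0 = no pain."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(descriptors, labels)
    return clf
```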


Workshop on Human Motion | 2007

Real-time and markerless 3D human motion capture using multiple views

Brice Michoud; Erwan Guillou; Saida Bouakaz

We present a fully automated system for real-time markerless 3D human motion capture. Our approach, based on fast algorithms, uses simple techniques and requires low-cost devices. Using input from multiple calibrated webcams, an extended Shape-From-Silhouette algorithm reconstructs the person in real time. Fast 3D shape and 3D skin-part analyses provide a robust, real-time system for full-body human tracking. An animation skeleton and simple morphological constraints ease the motion capture process. Thanks to fast, simple algorithms and low-cost cameras, our system is well suited to home entertainment devices. Results on real video sequences with complicated motions demonstrate the robustness of the approach.
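A minimal sketch of the basic Shape-From-Silhouette (visual hull) idea: carve a voxel grid by projecting each voxel into every calibrated view and keeping only the voxels whose projections fall inside all silhouettes. The data layout is an assumption; the paper's extended algorithm adds more than this.

```python
# Minimal visual-hull carving sketch over a precomputed voxel grid.
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """
    silhouettes: list of binary (H, W) masks, one per camera
    projections: list of 3x4 camera projection matrices
    grid:        (N, 3) array of voxel centers in world coordinates
    Returns a boolean array marking voxels inside the visual hull.
    """
    inside = np.ones(len(grid), dtype=bool)
    homog = np.hstack([grid, np.ones((len(grid), 1))])   # (N, 4) homogeneous points
    for mask, P in zip(silhouettes, projections):
        proj = homog @ P.T                               # (N, 3) image-plane points
        u = (proj[:, 0] / proj[:, 2]).astype(int)
        v = (proj[:, 1] / proj[:, 2]).astype(int)
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]] > 0
        inside &= hit                                    # voxel must be seen by every camera
    return inside
```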


Computer Vision and Pattern Recognition | 2012

Exploring human visual system: Study to aid the development of automatic facial expression recognition framework

Rizwan Ahmed Khan; Alexandre Meyer; Hubert Konik; Saida Bouakaz

This paper focuses on understanding the human visual system as it decodes or recognizes facial expressions. The presented results can be exploited by the computer vision research community to develop robust descriptors for facial expression recognition based on the human visual system. We conducted a psycho-visual experimental study to find which facial region is perceptually more attractive or salient for a particular expression. Eye movements of 15 observers were recorded with an eye-tracker in free viewing conditions as they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database, showing the six universal facial expressions. The results of the study show that for some facial expressions only one facial region is perceptually more attractive than the others, while other cases show the attractiveness of two to three facial regions. This paper also proposes a novel framework for automatic recognition of expressions that is based on this psycho-visual study.
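A minimal sketch of the kind of analysis behind such a study: aggregating eye-tracker fixations over predefined facial regions to rank their saliency for a given expression. The region boxes and data layout are illustrative assumptions.

```python
# Minimal sketch: fraction of fixations landing in each assumed facial region.
import numpy as np

REGIONS = {"eyes": (14, 30, 100, 30), "nose": (44, 55, 40, 30), "mouth": (34, 84, 60, 30)}

def fixation_share(fixations):
    """fixations: (n, 2) array of (x, y) gaze points on a normalized face crop."""
    counts = {}
    for name, (x, y, w, h) in REGIONS.items():
        inside = ((fixations[:, 0] >= x) & (fixations[:, 0] < x + w) &
                  (fixations[:, 1] >= y) & (fixations[:, 1] < y + h))
        counts[name] = int(inside.sum())
    total = max(sum(counts.values()), 1)
    return {name: c / total for name, c in counts.items()}
```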


2011 Workshop on Digital Media and Digital Content Management | 2011

3D Hand Model Animation with a New Data-Driven Method

Ouissem Ben Henia; Saida Bouakaz

This paper presents a data-driven method to track hand gestures and animate a 3D hand model. The proposed method uses a new generation of active camera based on the time-of-flight principle (the Swissranger4000). To achieve the tracking, the presented method exploits a database of hand gestures represented as 3D point clouds acquired from the Swissranger4000 video camera. In order to track a large number of hand poses with a database as small as possible, we classify the hand gestures using Principal Component Analysis (PCA). Applied to each point cloud, the PCA produces a new representation of the hand pose that is independent of its position and orientation in 3D space. To explore the database quickly and efficiently, we use a comparison function based on the 3D distance transform. Experimental results on real data demonstrate the potential of the method.
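A minimal sketch of how PCA can make a 3D point cloud invariant to position and orientation, as the database-indexing step described above suggests; the implementation details are assumptions, not the authors' code.

```python
# Minimal sketch: express a depth-camera point cloud in its PCA frame.
import numpy as np

def pca_align(points):
    """points: (n, 3) cloud from a depth camera. Returns a canonical-frame cloud."""
    centered = points - points.mean(axis=0)      # remove translation
    cov = np.cov(centered.T)                     # 3x3 covariance of the cloud
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # principal axes first
    return centered @ eigvecs[:, order]          # remove rotation
```

Clouds aligned this way can then be compared with a distance-transform-based cost, as the paper describes, without worrying about where the hand sits in the sensor frame.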


Pattern Recognition Letters | 2013

A real-time system for motion retrieval and interpretation

Mathieu Barnachon; Saida Bouakaz; Boubakeur Boufama; Erwan Guillou

This paper proposes a new exemplar-based method for real-time human motion recognition using Motion Capture (MoCap) data. We have formalized streamed recognizable actions, coming from an online MoCap engine, into a motion graph that is similar to an animation motion graph. This graph is used as an automaton to recognize known actions as well as to add new ones. We have defined and used a spatio-temporal metric for similarity measurements to achieve more accurate feedback on classification. The proposed method has the advantage of being linear and incremental, making the recognition process very fast and the addition of a new action straightforward. Furthermore, actions can be recognized with a score even before they are fully completed. Thanks to the use of a skeleton-centric coordinate system, our recognition method is view-invariant. We have successfully tested our action recognition method on both synthetic and real data, and compared our results with four state-of-the-art methods using three well-known datasets for human action recognition. In particular, the comparisons clearly show the advantage of our method through better recognition rates.
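A minimal sketch of the skeleton-centric normalization that gives view invariance: express every joint relative to the hip and rotate the pose so the shoulder line is aligned with a fixed direction. The joint indices and the z-up convention are illustrative assumptions about the MoCap skeleton layout.

```python
# Minimal sketch: hip-centered, yaw-normalized pose for one MoCap frame.
import numpy as np

HIP, L_SHOULDER, R_SHOULDER = 0, 1, 2   # hypothetical joint indices

def to_skeleton_frame(joints):
    """joints: (n_joints, 3) world-space positions, assuming a z-up world."""
    local = joints - joints[HIP]                        # remove global translation
    axis = local[R_SHOULDER] - local[L_SHOULDER]
    angle = np.arctan2(axis[1], axis[0])                # yaw of the shoulder line
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # undo yaw about the vertical axis
    return local @ rot.T
```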


Computer Animation and Virtual Worlds | 2013

Procedural locomotion of multilegged characters in dynamic environments

Ahmad Abdul Karim; Thibaut Gaudin; Alexandre Meyer; Axel Buendia; Saida Bouakaz

We present a fully procedural method capable of generating, in real time, a wide range of locomotion for multilegged characters in a dynamic environment, without using any motion data. The system consists of several independent blocks: a Character Controller, a Gait/Tempo Manager, a three-dimensional (3D) Path Constructor, and a Footprints Planner. The four modules work cooperatively to compute in real time the footprints and the 3D trajectories of the feet and the pelvis. Our system can animate dozens of creatures using dedicated level-of-detail techniques and is fully controllable, allowing the user to design a multitude of locomotion styles through a user-friendly interface. The result is a complete lower-body animation that is sufficient for most of the chosen multilegged characters: arachnids, insects, imaginary n-legged robots, and so on.
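A minimal gait-scheduling sketch in the spirit of the Gait/Tempo Manager: each leg gets a phase offset, and a leg is in swing when its phase falls inside a swing window. The phase offsets and duty factor are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: phase-offset gait scheduling for an n-legged character.
def leg_phases(t, period, offsets, duty_factor=0.75):
    """Return, for each leg, (phase in [0, 1), True if the leg is in swing)."""
    phases = [((t / period) + off) % 1.0 for off in offsets]
    return [(p, p >= duty_factor) for p in phases]

# Example: a quadruped walk where legs lift one at a time.
print(leg_phases(t=0.3, period=1.2, offsets=[0.0, 0.5, 0.25, 0.75]))
```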


Digital Information and Communication Technology and Its Applications | 2011

Automatic Adaptive Facial Feature Extraction Using CDF Analysis

Sushil Kumar Paul; Saida Bouakaz; Mohammad Shorif Uddin

This paper proposes a novel adaptive algorithm, based on a histogram cumulative distribution function (CDF) approach, to automatically extract facial feature points such as eye corners, nostrils, the nose tip, and mouth corners in frontal-view faces. First, the method adopts the Viola-Jones face detector to locate the face, and the four relevant regions (right eye, left eye, nose, and mouth) are cropped from the face image. Then the histogram of each cropped region is computed, and its CDF is used with varying threshold values to create a new filtered image in an adaptive way. The connected component of the area of interest in each filtered image indicates the respective feature region. A simple linear search and a contour algorithm are then applied to extract the desired corner points automatically. The method was tested on the large BioID face database, and the experiments achieved an average success rate of 95.56%.
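A minimal sketch of the detection-plus-CDF-thresholding steps using OpenCV: detect the face with a Viola-Jones cascade, then binarize a cropped region at the grey level where its cumulative histogram reaches a chosen fraction. The 20% fraction and the use of the default frontal-face cascade are illustrative assumptions.

```python
# Minimal sketch: Viola-Jones face detection + CDF-based adaptive threshold.
import cv2
import numpy as np

def detect_face(gray):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None          # (x, y, w, h) or None

def cdf_threshold(region, fraction=0.2):
    """Binarize a grayscale region at the intensity where its CDF reaches `fraction`."""
    hist = cv2.calcHist([region], [0], None, [256], [0, 256]).ravel()
    cdf = np.cumsum(hist) / hist.sum()
    level = int(np.searchsorted(cdf, fraction))
    # Keep the darkest pixels, where eye corners, nostrils and mouth corners tend to lie.
    return (region <= level).astype(np.uint8) * 255
```

Connected-component analysis (e.g. cv2.connectedComponents) on the resulting mask then isolates the feature region before the corner search.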

Collaboration


Dive into Saida Bouakaz's collaborations.

Top Co-Authors


Michel Vacher

Centre national de la recherche scientifique
