Mohamed Dahmane
Université de Montréal
Publications
Featured research published by Mohamed Dahmane.
Face and Gesture 2011 | 2011
Mohamed Dahmane; Jean Meunier
Automatic facial expression analysis is the most commonly studied aspect of behavior understanding and human-computer interaction. The main difficulty for facial emotion recognition systems is building general expression models: the same facial expression varies across individuals, and even for the same person when the expression is displayed in different contexts. These factors make the recognition task significantly challenging. The method we applied, reminiscent of the “baseline method”, combines dynamic dense appearance descriptors with statistical machine learning. Histograms of oriented gradients (HoG) extract the appearance features by accumulating gradient magnitudes for a set of orientations into 1-D histograms defined over a size-adaptive dense grid, and Support Vector Machines with radial basis function (RBF) kernels serve as the base learners of emotions. The overall classification performance of the emotion detector reached 70%, better than the 56% accuracy achieved by the “baseline method” presented by the challenge organizers.
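The descriptor described above can be illustrated with a minimal NumPy sketch: gradient magnitudes are accumulated into per-cell 1-D orientation histograms. This is not the authors' implementation; the fixed cell size and bin count are illustrative stand-ins for the size-adaptive grid.

```python
import numpy as np

def hog_cells(img, cell=8, n_bins=9):
    """Simplified HoG: accumulate gradient magnitudes into 1-D orientation
    histograms over a dense grid of cells, then concatenate them."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, n_bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hists[i, j] = np.bincount(bins[sl].ravel(),
                                      weights=mag[sl].ravel(),
                                      minlength=n_bins)
    return hists.ravel()

face = np.random.default_rng(0).random((64, 64))
desc = hog_cells(face)
print(desc.shape)  # (576,) = 8 x 8 cells x 9 bins
```

In the paper such descriptors would then be fed to an RBF-kernel SVM; any multiclass SVM implementation could play that role.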
canadian conference on computer and robot vision | 2005
Mohamed Dahmane; Jean Meunier
In this paper, we present an approach for video surveillance involving (a) moving object detection, (b) tracking and (c) normal/abnormal event recognition. The detection step uses an adaptive background subtraction technique with a shadow elimination model based on the color constancy principle. The target tracking involves a direct and inverse matrix matching process. The novelty of the paper lies mainly in the recognition stage, where we consider local motion properties (flow vectors) together with more global ones expressed by elliptic Fourier descriptors. From these temporal trajectory characterizations, two Kohonen maps make it possible to distinguish normal behaviors from abnormal or suspicious ones. The classification results show a 94.6% correct recognition rate on video sequences taken by a low-cost webcam. Finally, the algorithm runs fully in real time.
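The detection step can be sketched as a running-average background model with simple thresholding. This is a minimal stand-in for the paper's adaptive background subtraction; the color-constancy shadow elimination model is omitted, and the learning rate and threshold are illustrative.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Adaptive background model: exponential running average of frames."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    """Pixels that deviate enough from the background are foreground."""
    return np.abs(frame - bg) > thresh

bg = np.full((4, 4), 100.0)          # toy grayscale background
frame = bg.copy()
frame[1:3, 1:3] = 200.0              # a small moving object
mask = foreground_mask(bg, frame)
print(mask.sum())                    # 4 foreground pixels
bg = update_background(bg, frame)    # background slowly absorbs the scene
```

In a real pipeline the mask would feed the tracking stage, and the background would be updated only on non-foreground pixels.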
affective computing and intelligent interaction | 2011
Mohamed Dahmane; Jean Meunier
Automatic facial expression analysis systems try to build a mapping between the continuous emotion space and a set of discrete expression categories (e.g. happiness, sadness). In this paper, we present a method to recognize emotions in terms of latent dimensions (e.g. arousal, valence, power). The method we applied uses Gabor energy texture descriptors to model facial appearance deformations, and a multiclass SVM as the base learner of emotions. To deal with more naturalistic behavior, the SEMAINE database of naturalistic dialogues was used.
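A Gabor energy response can be sketched as the magnitude of a complex Gabor filter applied to an image patch. The kernel size, wavelength, and orientation bank below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid
    oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(1j * 2 * np.pi * xr / lam)

def gabor_energy(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Energy (magnitude of the complex response) of one patch for a small
    bank of orientations -- one texture value per filter."""
    return np.array([abs(np.vdot(gabor_kernel(theta=t), patch))
                     for t in thetas])
```

A horizontal grating whose wavelength matches the filter responds most strongly to the matching orientation, which is the property the texture descriptor exploits.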
IEEE Transactions on Multimedia | 2014
Mohamed Dahmane; Jean Meunier
Automatic facial expression analysis systems aim to apply computer vision techniques to human-computer interaction, emotion analysis, and even medical care, via a mapping between the continuous emotion space and a set of discrete expression categories. The main difficulty with these systems is the inherent problem of facial alignment due to person-specific appearance. Beyond the facial representation problem, the same displayed facial expression varies across individuals, and even for the same person in different contexts. To cope with these variable factors, we introduce a prototype-based model as anchor modeling through SIFT-flow registration. A set of prototype facial expression models is generated as a reference space of emotions, onto which face images are projected to produce a set of registered faces. To characterize the facial expression appearance, oriented gradients are computed on each registered image. We obtained our best results, 87%, with the person-independent evaluation strategy on the JAFFE dataset (7-class expression recognition problem), and 83% on the more complex setting of the GEMEP-FERA database (5-class problem).
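Once faces are registered against the prototype reference space, classification can be as simple as matching a face descriptor to its closest prototype. The sketch below is a toy stand-in for that idea: the SIFT-flow registration step is assumed to have already been applied, and the two 2-D "descriptors" and labels are entirely hypothetical.

```python
import numpy as np

def nearest_prototype(desc, prototypes, labels):
    """Assign an expression label by finding the closest prototype model
    in descriptor space (Euclidean distance)."""
    dists = np.linalg.norm(prototypes - desc, axis=1)
    return labels[int(dists.argmin())]

prototypes = np.array([[0.0, 1.0],    # hypothetical "happy" prototype
                       [1.0, 0.0]])   # hypothetical "sad" prototype
labels = ["happy", "sad"]
print(nearest_prototype(np.array([0.1, 0.9]), prototypes, labels))  # happy
```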
international geoscience and remote sensing symposium | 2016
Mohamed Dahmane; Samuel Foucher; Mario Beaulieu; F. Riendeau; Yacine Bouroubi; Mathieu Benoit
Extracting and identifying objects in very high resolution imagery has been a popular research topic in remote sensing. Since the beginning of this decade, deep learning techniques have revolutionized computer vision, providing significant performance gains over traditional “shallow” techniques on various challenging vision problems. Training deep neural networks usually requires very large training datasets. The advantage of using deep features is to exploit already trained Convolutional Neural Networks (CNN) to produce high-level features without the burden of training a CNN from scratch. In this paper, we investigate the use of deep features for the detection of small objects (cars and individual trees) in high-resolution Pleiades imagery. Preliminary results show good detection performance and are very encouraging for future applications.
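The "deep features without retraining" idea can be sketched as follows: a pretrained CNN is used only as a fixed feature extractor, and every spatial position of its activation map is scored against an exemplar object's feature vector. The feature map here is assumed to come from some pretrained network; the cosine-similarity scoring and threshold are illustrative, not the paper's exact detector.

```python
import numpy as np

def detect_by_features(feat_map, exemplar, thresh=0.9):
    """Score each (row, col) of a CNN feature map against an exemplar
    feature vector by cosine similarity; return positions above threshold.
    feat_map: (H, W, C) activations from a pretrained CNN (assumed given)."""
    norms = np.linalg.norm(feat_map, axis=2) * np.linalg.norm(exemplar)
    scores = feat_map @ exemplar / np.maximum(norms, 1e-9)
    hits = np.argwhere(scores > thresh)
    return hits, scores
```

Since only inference through a frozen network is needed, no large labeled training set is required, which is the point made in the abstract.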
information sciences, signal processing and their applications | 2012
Mohamed Dahmane; Jean Meunier
Within the affective computing research field, researchers still face a major challenge in establishing techniques to recognize human emotions from video sequences, since human affective behavior is subtle and multimodal. Automated systems try to build a mapping between the continuous emotion space and a set of discrete expression categories (e.g. happiness, sadness). Since facial expressions vary across individuals, designing a general expression model is the most challenging problem.
advanced concepts for intelligent vision systems | 2008
Mohamed Dahmane; Jean Meunier
This work presents a technique for automatic personalized facial feature localization and tracking. The approach uses a set of subgraphs corresponding to the deformable parts of the face, attached to a main subgraph whose nodes consist of more stable features; some of these nodes serve as anchor points for the more deformable subgraphs. At the node level, accurate positions are obtained with a Gabor phase-based disparity estimation technique. We used a modified formulation in which we introduce a conditional disparity estimation procedure and a confidence measure, a similarity function that includes a phase difference term. A collection of trained graphs, captured from different face deformations, is employed to correct subgraph node tracking errors. Experimental results show that the facial feature points can be tracked with sufficient precision by establishing an effective self-correcting mechanism.
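The core of phase-based disparity estimation is that a small displacement shows up as a phase shift of the Gabor response, so dividing the (wrapped) phase difference by the filter's spatial frequency recovers the displacement. This is a simplified 1-D version of the estimator, not the paper's full conditional formulation.

```python
import numpy as np

def phase_disparity(phase_a, phase_b, freq):
    """Estimate displacement between two feature points from the Gabor phase
    difference: wrap the difference to (-pi, pi], then divide by the
    filter's spatial frequency (radians per pixel)."""
    dphi = np.angle(np.exp(1j * (phase_a - phase_b)))   # wrapped difference
    return dphi / freq
```

The wrapping step matters: a raw difference of 6.0 rad at frequency 1.0 rad/px corresponds to a small negative shift, not a 6-pixel one.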
international conference on image analysis and recognition | 2011
Mohamed Dahmane; Jean Meunier
Automatic facial expression analysis is the most commonly studied aspect of behavior understanding and human-computer interaction. Most facial expression recognition systems are implemented with general expression models. However, the same facial expression varies across individuals, and even for the same person when the expression is displayed in different contexts. These factors present a significant challenge for recognition. To cope with this problem, we present a personalized facial action recognition framework intended for a clinical setting with familiar faces, where a high accuracy level is required. The graph fitting method that we use offers a constrained tracking approach on both shape (using a Procrustes transformation) and appearance (using a weighted Gabor wavelet similarity measure). The tracking process is based on a modified Gabor phase-based disparity estimation technique. Experimental results show that the facial feature points can be tracked with sufficient precision, leading to high facial expression recognition performance.
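The Procrustes shape constraint mentioned above can be sketched with the standard similarity-transform alignment: given corresponding landmark sets, translation, scale, and rotation are removed in closed form via an SVD. This is the textbook orthogonal Procrustes solution, offered as a sketch of the shape-constraint idea rather than the paper's exact fitting procedure.

```python
import numpy as np

def procrustes_align(X, Y):
    """Return the similarity transform (scale, rotation, translation) that
    best maps landmark set Y onto X in the least-squares sense.
    X, Y: (n, 2) arrays of corresponding points."""
    muX, muY = X.mean(0), Y.mean(0)
    X0, Y0 = X - muX, Y - muY
    U, S, Vt = np.linalg.svd(X0.T @ Y0)      # orthogonal Procrustes
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # forbid reflections
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (Y0**2).sum()              # optimal scale
    return lambda P: s * (P - muY) @ R.T + muX
```

Applying the returned transform to a tracked shape projects it back into the model's normalized shape space before appearance matching.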
international conference on computer vision | 2011
Mohamed Dahmane; Jean Meunier
Visual object recognition is a hard computer vision problem. In this paper, we investigate the issue of representative features for object detection and propose novel discriminative feature sets extracted by accumulating magnitudes for a set of specific Gabor wave vectors into 1-D histograms defined over a uniformly spaced grid. A case study is presented using radial-basis-function kernel SVMs as base learners of human head poses, in which we point out the effectiveness of the proposed descriptors relative to related approaches. The average performance reached 65% for yaw and 73.3% for pitch, better than the 40.7% and 59.0% accuracy achieved by calibrated people. A substantial performance gain, as high as 1.18% for yaw and 1.27% for pitch, is achievable with the proposed feature sets.
canadian conference on computer and robot vision | 2007
Mohamed Dahmane; Jean Meunier
The aim of this study is to elaborate and validate a methodology to automatically assess head orientation with respect to a camera in a video sequence. The proposed method uses relatively stable facial features (upper points of the eyebrows, upper nasolabial-furrow corners and nasal root) with symmetric properties to recover the face slant and tilt angles. These fiducial points are characterized by a bank of steerable filters. Working in the frequency domain, we present an elegant formulation to linearly decompose a Gaussian steerable filter into a set of x, y separable basis Gaussian kernels. A practical scheme to estimate the position of the occasionally occluded nasolabial-furrow feature is also proposed. Results show that head motion can be estimated with sufficient precision to obtain the gaze direction, without requiring camera calibration or any other particular settings.
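The steerable-filter decomposition can be illustrated with the classic first-order Gaussian derivative: the response at any orientation is an exact linear combination (with cos/sin weights) of two x/y separable basis kernels. The kernel size and sigma below are illustrative; the paper's frequency-domain derivation is not reproduced here.

```python
import numpy as np

def gauss_deriv_kernels(size=9, sigma=1.5):
    """Separable basis kernels for the first-order Gaussian derivative:
    each is an outer product of a 1-D Gaussian and its 1-D derivative."""
    half = size // 2
    t = np.arange(-half, half + 1)
    g = np.exp(-t**2 / (2 * sigma**2))      # 1-D Gaussian
    dg = -t / sigma**2 * g                  # its derivative
    Gx = np.outer(g, dg)                    # smooth in y, differentiate in x
    Gy = np.outer(dg, g)                    # smooth in x, differentiate in y
    return Gx, Gy

def steered(theta, Gx, Gy):
    """Steering property: the derivative filter at angle theta is a linear
    combination of the two separable basis kernels."""
    return np.cos(theta) * Gx + np.sin(theta) * Gy
```

Because each basis kernel is separable, filtering an image at any orientation costs only two 1-D convolution passes per basis, which is the practical payoff of the decomposition.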