Zakia Hammal
Carnegie Mellon University
Publication
Featured research published by Zakia Hammal.
international conference on multimodal interfaces | 2012
Zakia Hammal; Jeffrey F. Cohn
Previous efforts suggest that occurrence of pain can be detected from the face. Can intensity of pain be detected as well? The Prkachin and Solomon Pain Intensity (PSPI) metric was used to classify four levels of pain intensity (none, trace, weak, and strong) in 25 participants with previous shoulder injury (McMaster-UNBC Pain Archive). Participants were recorded while they completed a series of movements of their affected and unaffected shoulders. From the video recordings, the canonical normalized appearance of the face (CAPP) was extracted using active appearance modeling. To control for variation in face size, all CAPP were rescaled to 96×96 pixels. CAPP was then passed through a set of Log-Normal filters consisting of 7 frequencies and 15 orientations to extract 9216 features. To detect pain level, 4 support vector machines (SVMs) were separately trained for the automatic measurement of pain intensity on a frame-by-frame level using both 5-fold cross-validation and leave-one-subject-out cross-validation. F1 for each level of pain intensity ranged from 91% to 96% and from 40% to 67% for 5-fold and leave-one-subject-out cross-validation, respectively. Intra-class correlation, which assesses the consistency of continuous pain intensity between manual and automatic PSPI, was 0.85 and 0.55 for 5-fold and leave-one-subject-out cross-validation, respectively, which suggests moderate to high consistency. These findings show that pain intensity can be reliably measured from facial expression in participants with orthopedic injury.
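As a rough illustration of this evaluation protocol (not the authors' code), the sketch below trains a multiclass SVM on precomputed frame-level features and scores it with leave-one-subject-out cross-validation; the file names and array shapes are hypothetical, and the 9216 Log-Normal filter responses are assumed to be extracted beforehand.

```python
# Minimal sketch: frame-level pain-intensity classification with
# leave-one-subject-out cross-validation (hypothetical input files).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import f1_score

X = np.load("capp_lognormal_features.npy")   # hypothetical: (n_frames, 9216)
y = np.load("pspi_levels.npy")               # hypothetical: 0=none, 1=trace, 2=weak, 3=strong
subjects = np.load("subject_ids.npy")        # hypothetical: one subject id per frame

# One-vs-rest linear SVMs over the four intensity levels,
# evaluated on each held-out subject in turn.
clf = SVC(kernel="linear", decision_function_shape="ovr")
y_pred = cross_val_predict(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f1_score(y, y_pred, average=None))     # per-level F1, as reported in the paper
```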
Image and Vision Computing | 2014
Jeffrey M. Girard; Jeffrey F. Cohn; Mohammad H. Mahoor; S. Mohammad Mavadati; Zakia Hammal; Dean P. Rosenwald
The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.
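A standard way to quantify the reported consistency between manual FACS coding and automatic measurement is an intraclass correlation; the sketch below shows one such computation using the pingouin package on a small made-up table (the package, column names, and values are assumptions, not the study's pipeline).

```python
# Minimal sketch: intraclass correlation between manual and automatic AU measurements.
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per frame and coding system,
# with the measured intensity of AU 12 (lip-corner puller).
df = pd.DataFrame({
    "frame":  [0, 0, 1, 1, 2, 2, 3, 3],
    "system": ["manual", "auto"] * 4,
    "au12":   [0.0, 0.2, 1.1, 1.0, 2.4, 2.1, 0.3, 0.0],
})

icc = pg.intraclass_corr(data=df, targets="frame", raters="system", ratings="au12")
print(icc[["Type", "ICC"]])  # e.g. ICC3 assesses consistency between the two systems
```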
International Journal of Approximate Reasoning | 2007
Zakia Hammal; Laurent Couvreur; Alice Caplier; Michèle Rombaut
A method for the classification of facial expressions from the analysis of facial deformations is presented. This classification process is based on the transferable belief model (TBM) framework. Facial expressions are related to the six universal emotions, namely Joy, Surprise, Disgust, Sadness, Anger, Fear, as well as Neutral. The proposed classifier relies on data coming from a contour segmentation technique, which extracts an expression skeleton of facial features (mouth, eyes and eyebrows) and derives simple distance coefficients from every face image of a video sequence. The characteristic distances are fed to a rule-based decision system that relies on the TBM and data fusion in order to assign a facial expression to every face image. In the proposed work, we first demonstrate the feasibility of facial expression classification with simple data (only five facial distances are considered). We also demonstrate the efficiency of the TBM for the purpose of emotion classification. The TBM-based classifier was compared with a Bayesian classifier working on the same data. Both classifiers were tested on three different databases.
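To give a flavor of the TBM fusion step (an illustration only, not the paper's actual rule base), the sketch below combines evidence from two characteristic facial distances with the unnormalized conjunctive rule; the distance states and their masses are invented for the example.

```python
# Minimal sketch: conjunctive combination of basic belief assignments in the TBM.
from itertools import product

EXPRESSIONS = frozenset({"Joy", "Surprise", "Disgust", "Sadness", "Anger", "Fear", "Neutral"})

def combine(m1, m2):
    """Conjunctive combination of two basic belief assignments.

    Masses are dicts mapping frozensets of expressions to belief in [0, 1];
    mass on the empty set represents conflict (kept, per the TBM)."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

# Hypothetical masses induced by the observed states of two distances
# (e.g. mouth opening and eyebrow raise), as a rule base might produce.
m_mouth = {frozenset({"Joy", "Surprise"}): 0.7, EXPRESSIONS: 0.3}
m_brow  = {frozenset({"Surprise", "Fear"}): 0.6, EXPRESSIONS: 0.4}

for subset, mass in sorted(combine(m_mouth, m_brow).items(), key=lambda kv: -kv[1]):
    print(set(subset) or "{conflict}", round(mass, 2))
```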
Signal Processing | 2006
Zakia Hammal; Nicolas Eveno; Alice Caplier; Pierre-Yves Coulon
In this paper, we address the problem of facial feature segmentation (mouth, eyes and eyebrows). A specific parametric model is defined for each deformable feature, each model being able to take into account all the possible deformations. In order to initialize each model, characteristic points are extracted from each image to be processed (for example, eye corners, mouth corners and eyebrow corners). In order to fit the model to the contours to be extracted, a gradient flow (of luminance or chrominance) through the estimated contour is maximized, because at each point of the sought contour the gradient (of luminance or chrominance) is normal to the contour. The definition of a model associated with each feature makes it possible to introduce a regularization constraint, while the chosen models remain flexible enough to produce realistic contours for the mouth, the eyes and the eyebrows. This facial feature segmentation is the first step for a range of multimedia applications.
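The sketch below illustrates the idea of scoring a candidate parametric contour by the luminance-gradient flow through its normals, which the segmentation then maximizes over the model parameters; the parabola model, image, and parameter ranges are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: gradient flow through a parametric contour.
import numpy as np

def gradient_flow(luminance, xs, ys):
    """Sum of image-gradient components along the contour normals.

    luminance: 2-D grayscale image; xs, ys: sampled contour coordinates."""
    gy, gx = np.gradient(luminance.astype(float))
    # Tangents from finite differences, normals by a 90-degree rotation.
    tx, ty = np.gradient(xs), np.gradient(ys)
    norm = np.hypot(tx, ty) + 1e-9
    nx, ny = -ty / norm, tx / norm
    xi, yi = np.round(xs).astype(int), np.round(ys).astype(int)
    return float(np.sum(gx[yi, xi] * nx + gy[yi, xi] * ny))

# Hypothetical use: a parabola joining two detected corner points, with its
# curvature c chosen to maximize the flow (coarse grid search for clarity).
def parabola(x0, x1, y0, c, n=100):
    xs = np.linspace(x0, x1, n)
    ys = y0 + c * (xs - x0) * (xs - x1)
    return xs, ys

image = np.random.rand(96, 96)               # placeholder luminance image
best_c = max(np.linspace(-0.02, 0.02, 21),
             key=lambda c: gradient_flow(image, *parabola(10, 80, 48, c)))
print(f"best curvature: {best_c:.3f}")
```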
Pattern Recognition | 2012
Zakia Hammal; Miriam Kunz
The current paper presents an automatic and context-sensitive system for the dynamic recognition of pain expression among the six basic facial expressions and neutral on acted and spontaneous sequences. A machine learning approach based on the Transferable Belief Model, successfully used previously to categorize the six basic facial expressions in static images [2,61], is extended in the current paper to the automatic and dynamic recognition of pain expression from video sequences in a hospital context application. The originality of the proposed method lies in the use of dynamic information for the recognition of pain expression and in the combination of different sensors (permanent facial feature behavior, transient feature behavior, and the context of the study) within the same fusion model. Experimental results, on 2-alternative forced choices and, for the first time, on 8-alternative forced choices (i.e., pain expression is classified among seven other facial expressions), show good classification rates even in the case of spontaneous pain sequences. The mean classification rates on acted and spontaneous data reach 81.2% and 84.5% for the 2-alternative and 8-alternative forced choices, respectively. Moreover, the system's performance compares favorably to human observer rates (76%), and it leads to the same doubt states in the case of blended expressions.
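A standard final step in TBM-based recognition is to turn the combined mass function into a decision; the sketch below applies the pignistic transformation over the 8 alternatives (pain plus the seven other expressions). The combined masses shown are invented, and this is an assumed decision stage rather than the paper's exact one.

```python
# Minimal sketch: pignistic decision over 8 alternatives from a combined TBM mass.
from collections import defaultdict

CLASSES = ("Pain", "Joy", "Surprise", "Disgust", "Sadness", "Anger", "Fear", "Neutral")

def pignistic(mass):
    """Spread each subset's mass evenly over its members (pignistic probability)."""
    betp = defaultdict(float)
    for subset, w in mass.items():
        if subset:                      # mass on the empty set (conflict) is ignored here
            for c in subset:
                betp[c] += w / len(subset)
    return dict(betp)

# Hypothetical combined mass after fusing facial-feature and context sensors.
mass = {frozenset({"Pain"}): 0.5,
        frozenset({"Pain", "Sadness"}): 0.3,
        frozenset(CLASSES): 0.2}

betp = pignistic(mass)
print(max(betp, key=betp.get), round(max(betp.values()), 3))   # decided class
```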
Cognitive Computation | 2015
Alessandro Vinciarelli; Anna Esposito; Elisabeth André; Francesca Bonin; Mohamed Chetouani; Jeffrey F. Cohn; Marco Cristani; Ferdinand Fuhrmann; Elmer Gilmartin; Zakia Hammal; Dirk Heylen; Rene Kaiser; Maria Koutsombogera; Alexandros Potamianos; Steve Renals; Giuseppe Riccardi; Albert Ali Salah
Modelling, analysis and synthesis of behaviour are the subject of major efforts in computing science, especially when it comes to technologies that make sense of human–human and human–machine interactions. This article outlines some of the most important issues that still need to be addressed to ensure substantial progress in the field, namely (1) development and adoption of virtuous data collection and sharing practices, (2) a shift in the focus of interest from individuals to dyads and groups, (3) endowment of artificial agents with internal representations of users and context, (4) modelling of the cognitive and semantic processes underlying social behaviour, and (5) identification of application domains and strategies for moving from laboratory settings to real-world products.
international conference on image analysis and processing | 2005
Zakia Hammal; Laurent Couvreur; Alice Caplier; Michèle Rombaut
This paper presents a system for classifying facial expressions based on a data fusion process relying on the Belief Theory (BeT). Four expressions are considered: joy, surprise, disgust, as well as neutral. The proposed system is able to take into account intrinsic doubt about emotion in the recognition process and to handle the fact that each person has his/her own maximal intensity when displaying a particular facial expression. To demonstrate the suitability of our approach for facial expression classification, we compare it with two other standard approaches: the Bayesian Theory (BaT) and Hidden Markov Models (HMM). The three classification systems use characteristic distances measuring the deformations of facial skeletons. These skeletons result from a contour segmentation of permanent facial features (mouth, eyes and eyebrows). The performance of the classification systems is tested on the Hammal-Caplier database [1], and it is shown that the BeT classifier outperforms both the BaT and HMM classifiers for the considered application.
international conference on information fusion | 2005
Zakia Hammal; Alice Caplier; Michèle Rombaut
This paper presents a system for facial expression classification based on a data fusion process using the belief theory. The considered expressions correspond to the six universal emotions (joy, surprise, disgust, sadness, anger, fear) as well as to the neutral expression. Since some of the six basic emotions are difficult for non-actors to simulate, the performance of the classification system is evaluated only for four expressions (joy, surprise, disgust, and neutral). The proposed algorithm is based on the analysis of characteristic distances measuring the deformations of facial features, which are computed on expression skeletons. The skeletons are the result of a contour segmentation process of permanent facial features (mouth, eyes and eyebrows). The considered distances are used to develop an expert system for classification. The performance and the limits of the recognition system, and its ability to deal with different databases, are highlighted through the analysis of a large number of results on three different databases: the Hammal-Caplier database, the Cohn-Kanade database and the Cottrel database.
international conference on multimodal interfaces | 2014
Stefan Scherer; Zakia Hammal; Ying Yang; Louis-Philippe Morency; Jeffrey F. Cohn
Previous literature suggests that depression impacts the vocal timing of both participants and clinical interviewers, but findings are mixed with respect to acoustic features. To investigate further, 57 middle-aged adults (men and women) with Major Depressive Disorder and their clinical interviewers (all women) were studied. Participants were interviewed for depression severity on up to four occasions over a 21-week period using the Hamilton Rating Scale for Depression (HRSD), which is a criterion measure for depression severity in clinical trials. Acoustic features were extracted for both participants and interviewers using the COVAREP toolbox. Missing data occurred due to missed appointments, technical problems, or insufficient vocal samples. Data from 36 participants and their interviewers met criteria and were included in the analysis comparing high and low depression severity. Participants' acoustic features varied between men and women as expected, but failed to vary with depression severity. For interviewers, acoustic characteristics varied strongly with the severity of the interviewee's depression. Accommodation between interviewers and interviewees, that is, the tendency of interactants to adapt their communicative behavior to each other, was inversely related to depression severity. These findings suggest that interviewers modify their acoustic features in response to depression severity, and that depression severity strongly impacts interpersonal accommodation.
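One simple way to operationalize vocal accommodation (an assumption for illustration, not necessarily the study's exact metric) is the correlation between interviewer and participant summaries of an acoustic feature across successive speaking turns, as sketched below with made-up per-turn f0 values.

```python
# Minimal sketch: turn-level vocal accommodation as a correlation.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-turn mean f0 values (Hz), e.g. as extracted with COVAREP.
f0_participant = np.array([182.0, 176.5, 190.2, 171.8, 168.4, 185.0])
f0_interviewer = np.array([201.3, 198.7, 209.9, 194.2, 192.5, 205.6])

r, p = pearsonr(f0_participant, f0_interviewer)
print(f"turn-level accommodation r = {r:.2f} (p = {p:.3f})")
# Under the paper's finding, such accommodation would be weaker (lower r)
# in sessions with higher depression severity.
```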
international conference on multimodal interfaces | 2015
Hamdi Dibeklioglu; Zakia Hammal; Ying Yang; Jeffrey F. Cohn
Current methods for depression assessment depend almost entirely on clinical interview or self-report ratings. Such measures lack systematic and efficient ways of incorporating behavioral observations that are strong indicators of psychological disorder. We compared a clinical interview measure of depression severity with automatic measurement in 48 participants undergoing treatment for depression. Interviews were obtained at 7-week intervals on up to four occasions. Following standard cut-offs, participants at each session were classified as remitted, intermediate, or depressed. Logistic regression classifiers using leave-one-out validation were compared for facial movement dynamics, head movement dynamics, and vocal prosody, individually and in combination. Accuracy (remitted versus depressed) for facial movement dynamics was higher than that for head movement dynamics, and each was substantially higher than that for vocal prosody. Accuracy for all three modalities together reached 88.93%, exceeding that of any single modality or pair of modalities. These findings suggest that automatic detection of depression from behavioral indicators is feasible and that multimodal measures afford the most powerful detection.
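The evaluation design described above can be sketched as follows (hypothetical feature files and labels, not the authors' code): per-modality and fused logistic-regression classifiers are compared under leave-one-out cross-validation.

```python
# Minimal sketch: single-modality vs. fused classification of remitted vs. depressed sessions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

face  = np.load("face_dynamics.npy")    # hypothetical: (n_sessions, n_face_feats)
head  = np.load("head_dynamics.npy")    # hypothetical: (n_sessions, n_head_feats)
voice = np.load("vocal_prosody.npy")    # hypothetical: (n_sessions, n_voice_feats)
y     = np.load("labels.npy")           # hypothetical: 1 = depressed, 0 = remitted

for name, X in [("face", face), ("head", head), ("voice", voice),
                ("all", np.hstack([face, head, voice]))]:
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name:>5}: accuracy = {acc:.3f}")
```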