Publication


Featured research published by Zara Ambadar.


Computer Vision and Pattern Recognition | 2010

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

Patrick Lucey; Jeffrey F. Cohn; Takeo Kanade; Jason M. Saragih; Zara Ambadar; Iain A. Matthews

In 2000, the Cohn-Kanade (CK) database was released for the purpose of promoting research into automatically detecting individual facial expressions. Since then, the CK database has become one of the most widely used test-beds for algorithm development and evaluation. During this period, three limitations have become apparent: 1) While AU codes are well validated, emotion labels are not, as they refer to what was requested rather than what was actually performed, 2) The lack of a common performance metric against which to evaluate new algorithms, and 3) Standard protocols for common databases have not emerged. As a consequence, the CK database has been used for both AU and emotion detection (even though labels for the latter have not been validated), comparison with benchmark algorithms is missing, and use of random subsets of the original database makes meta-analyses difficult. To address these and other concerns, we present the Extended Cohn-Kanade (CK+) database. The number of sequences is increased by 22% and the number of subjects by 27%. The target expression for each sequence is fully FACS coded and emotion labels have been revised and validated. In addition to this, non-posed sequences for several types of smiles and their associated metadata have been added. We present baseline results using Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier using a leave-one-out subject cross-validation for both AU and emotion detection for the posed data. The emotion and AU labels, along with the extended image data and tracked landmarks will be made available July 2010.
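
The baseline protocol above (AAM-derived features classified with a linear SVM under leave-one-subject-out cross-validation) can be sketched roughly as follows. This is a minimal illustration, not the released CK+ pipeline: the features, labels, and subject IDs are random placeholders, and LinearSVC is just one reasonable SVM choice.

```python
# Rough sketch of leave-one-subject-out evaluation with a linear SVM,
# in the spirit of the CK+ baseline. Features stand in for AAM-derived
# shape/appearance vectors; all data here are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 68 * 2))        # hypothetical AAM shape features (68 landmarks)
y = rng.integers(0, 7, size=120)          # hypothetical emotion labels (7 classes)
subjects = rng.integers(0, 20, size=120)  # subject ID for each sequence

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"Mean leave-one-subject-out accuracy: {np.mean(accuracies):.3f}")
```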


Psychological Science | 2005

Deciphering the Enigmatic Face: The Importance of Facial Dynamics in Interpreting Subtle Facial Expressions

Zara Ambadar; Jonathan W. Schooler; Jeffrey F. Cohn

Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.


Image and Vision Computing | 2009

The painful face - Pain expression recognition using active appearance models

Ahmed Bilal Ashraf; Simon Lucey; Jeffrey F. Cohn; Tsuhan Chen; Zara Ambadar; Kenneth M. Prkachin; Patricia Solomon

Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?
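
The frame-level ground truth described above is computed from pain-related facial action units. A minimal sketch, assuming a Prkachin and Solomon style pain intensity score derived from per-frame FACS intensities (the paper's exact formulation may differ, and the input format below is a hypothetical choice):

```python
# Hedged sketch: a frame-level pain score from FACS action unit intensities,
# in the spirit of the Prkachin-Solomon pain intensity metric.
# Input is a hypothetical dict mapping AU number -> intensity for one frame.

def frame_pain_score(au: dict) -> int:
    """Pain intensity for one frame (AU intensities on a 0-5 scale; AU43 is 0/1)."""
    return (au.get(4, 0)                      # brow lowering
            + max(au.get(6, 0), au.get(7, 0))  # cheek raiser / lid tightener
            + max(au.get(9, 0), au.get(10, 0))  # nose wrinkler / upper lip raiser
            + au.get(43, 0))                   # eyes closed

# Example: AU4 at intensity 2, AU6 at intensity 3, eyes closed (AU43)
print(frame_pain_score({4: 2, 6: 3, 43: 1}))  # -> 6
```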


International Conference on Multimodal Interfaces | 2007

The painful face: pain expression recognition using active appearance models

Ahmed Bilal Ashraf; Simon Lucey; Jeffrey F. Cohn; Tsuhan Chen; Zara Ambadar; Kenneth M. Prkachin; Patty Solomon; Barry-John Theobald

Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or not even possible, as in young children or the severely ill. Behavioral scientists have identified reliable and valid facial indicators of pain. Until now they required manual measurement by highly skilled observers. We developed an approach that automatically recognizes acute pain. Adult patients with rotator cuff injury were video-recorded while a physiotherapist manipulated their affected and unaffected shoulder. Skilled observers rated pain expression from the video on a 5-point Likert-type scale. From these ratings, sequences were categorized as no-pain (rating of 0), pain (rating of 3, 4, or 5), and indeterminate (rating of 1 or 2). We explored machine learning approaches for pain-no pain classification. Active Appearance Models (AAM) were used to decouple shape and appearance parameters from the digitized face images. Support vector machines (SVM) were used with several representations from the AAM. Using a leave-one-out procedure, we achieved an equal error rate of 19% (hit rate = 81%) using canonical appearance and shape features. These findings suggest the feasibility of automatic pain detection from video.
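
The equal error rate reported above is the operating point at which the false-accept and false-reject rates coincide. A minimal sketch of estimating it from classifier decision scores, using random placeholder scores and labels rather than the study's SVM outputs:

```python
# Hedged sketch: equal error rate (EER) from decision scores.
# Scores and labels below are synthetic placeholders.
import numpy as np

def equal_error_rate(scores, labels):
    """Return the EER by scanning thresholds for the point where FPR and FNR meet."""
    best_gap, best_eer = 1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        fpr = np.mean(pred[labels == 0])   # no-pain frames flagged as pain
        fnr = np.mean(~pred[labels == 1])  # pain frames missed
        if abs(fpr - fnr) < best_gap:
            best_gap, best_eer = abs(fpr - fnr), (fpr + fnr) / 2
    return best_eer

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
scores = labels + rng.normal(scale=0.8, size=200)  # noisy scores correlated with labels
print(f"EER ~ {equal_error_rate(scores, labels):.2f}")
```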


International Conference on Pattern Recognition | 2002

Automatic recognition of eye blinking in spontaneously occurring behavior

Tsuyoshi Moriyama; Takeo Kanade; Jeffrey F. Cohn; Jing Xiao; Zara Ambadar; Jiang Gao; Hiroki Imamura

Previous research in automatic facial expression recognition has been limited to recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). We have developed a system that detects a discrete and important facial action (e.g., eye blinking) in spontaneously occurring facial behavior that has been measured with a nonfrontal pose, moderate out-of-plane head motion, and occlusion. The system recovers three-dimensional motion parameters, stabilizes facial regions, extracts motion and appearance information, and recognizes discrete facial actions in spontaneous facial behavior. We tested the system on video data from a two-person interview. The 10 subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology and represent a substantial challenge to automated action unit recognition. In analysis of blinks, the system achieved 98% accuracy.


Systems, Man and Cybernetics | 2004

Automatic analysis and recognition of brow actions and head motion in spontaneous facial behavior

Jeffrey F. Cohn; Lawrence Ian Reed; Zara Ambadar; Jing Xiao; Tsuyoshi Moriyama

Previous efforts in automatic facial expression recognition have been limited to posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). The CMU/Pitt automated facial image analysis system (AFA) accommodates varied pose, moderate out-of-plane head motion, and occlusion. AFA was tested on video of two-person interviews that were originally collected to answer substantive questions in psychology and represent a substantial challenge to automatic recognition of facial expression. This report focuses on two action units, brow raising and brow lowering, because of their importance to emotion expression and paralinguistic communication. For two-state recognition, AFA achieved 89% accuracy. For three-state recognition (brow raising, brow lowering, and no brow action), accuracy was 76%. Brow and head motion were temporally coordinated. These findings demonstrate the feasibility of action unit recognition in spontaneous facial behavior.


IEEE International Conference on Automatic Face and Gesture Recognition | 2004

Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles

Jeffrey F. Cohn; Lawrence Ian Reed; Tsuyoshi Moriyama; Jing Xiao; Karen L. Schmidt; Zara Ambadar

Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed an automatic facial expression analysis system that quantifies the timing of facial actions as well as head and eye motion during spontaneous facial expression. To assess coherence among these modalities, we recorded and analyzed spontaneous smiles in 62 young women of varied ethnicity ranging in age from 18 to 35 years. Spontaneous smiles occurred following directed facial action tasks, a situation likely to elicit spontaneous smiles of embarrassment. Smiles (AU 12) were manually FACS coded by certified FACS coders. The 3D head motion was recovered using a cylindrical head model. The motion vectors for lip-corner displacement were measured using feature-point tracking. The eye closure and the horizontal and vertical eye motion (from which to infer direction of gaze or visual regard) were measured by the generative model fitting approach. The mean correlation within subjects between lip-corner displacement, head motion, and eye motion ranged in absolute value from 0.36 to 0.50, which suggests moderate coherence among these features. Lip-corner displacement and head pitch were negatively correlated, as predicted for smiles of embarrassment. These findings are consistent with recent research in psychology suggesting that facial actions are embedded within coordinated motor structures. They suggest that the direction of correlation among features may discriminate between facial actions with similar morphology but different communicative meaning, inform automatic facial expression recognition, and provide normative data for animating computer avatars.
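
The coherence analysis above amounts to correlating per-frame motion signals within each participant and then averaging across participants. A rough sketch with synthetic signals (not the study's tracked measurements):

```python
# Hedged sketch: within-subject correlation between two per-frame motion signals
# (e.g., lip-corner displacement and head pitch), averaged across participants.
# The signals below are synthetic stand-ins with a built-in negative coupling.
import numpy as np

rng = np.random.default_rng(2)
per_subject_r = []
for _ in range(62):                      # 62 participants, as in the study
    n_frames = 150
    lip = np.cumsum(rng.normal(size=n_frames))                  # synthetic lip-corner displacement
    pitch = -0.4 * lip + rng.normal(scale=2.0, size=n_frames)   # negatively coupled head pitch
    per_subject_r.append(np.corrcoef(lip, pitch)[0, 1])

print(f"Mean within-subject correlation: {np.mean(per_subject_r):+.2f}")
```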


International Conference on Computer Vision | 2007

Temporal Segmentation of Facial Behavior

F. De la Torre; J. Campoy; Zara Ambadar; J.F. Cohn

Temporal segmentation of facial gestures in spontaneous facial behavior recorded in real-world settings is an important, unsolved, and relatively unexplored problem in facial image analysis. Several issues contribute to the challenge of this task. These include non-frontal pose, moderate to large out-of-plane head motion, large variability in the temporal scale of facial gestures, and the exponential nature of possible facial action combinations. To address these challenges, we propose a two-step approach to temporally segment facial behavior. The first step uses spectral graph techniques to cluster shape and appearance features invariant to some geometric transformations. The second step groups the clusters into temporally coherent facial gestures. We evaluated this method in facial behavior recorded during face-to-face interactions. The video data were originally collected to answer substantive questions in psychology without concern for algorithm development. The method achieved moderate convergent validity with manual FACS (Facial Action Coding System) annotation. Further, when used to preprocess video for manual FACS annotation, the method significantly improves productivity, thus addressing the need for ground-truth data for facial image analysis. Moreover, we were also able to detect unusual facial behavior.
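
The two-step approach can be illustrated roughly as: cluster per-frame shape/appearance features, then merge temporally contiguous frames that share a cluster into segments. The sketch below uses scikit-learn's spectral clustering on random placeholder features; the paper's actual clustering and grouping steps are more elaborate.

```python
# Hedged sketch of the two-step idea: spectral clustering of per-frame features,
# then merging temporally adjacent frames with the same cluster label into segments.
# Features are random placeholders, not shape/appearance features from video.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)
frames = rng.normal(size=(300, 40))   # hypothetical per-frame feature vectors

# Step 1: cluster frames by similarity (the paper uses spectral graph techniques).
labels = SpectralClustering(n_clusters=6, random_state=0).fit_predict(frames)

# Step 2: group temporally contiguous frames with the same cluster label.
segments, start = [], 0
for i in range(1, len(labels) + 1):
    if i == len(labels) or labels[i] != labels[start]:
        segments.append((start, i - 1, int(labels[start])))
        start = i

print(f"{len(segments)} temporal segments, e.g. {segments[:3]}")
```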


Psychological Science | 2007

The Reality of Recovered Memories: Corroborating Continuous and Discontinuous Memories of Childhood Sexual Abuse

Elke Geraerts; Jonathan W. Schooler; Harald Merckelbach; Marko Jelicic; Beatrijs J. A. Hauer; Zara Ambadar

Although controversy surrounds the relative authenticity of discontinuous versus continuous memories of childhood sexual abuse (CSA), little is known about whether such memories differ in their likelihood of corroborative evidence. Individuals reporting CSA memories were interviewed, and two independent raters attempted to find corroborative information for the allegations. Continuous CSA memories and discontinuous memories that were unexpectedly recalled outside therapy were more likely to be corroborated than anticipated discontinuous memories recovered in therapy. Evidence that suggestion during therapy possibly mediates these differences comes from the additional finding that individuals who recalled the memories outside therapy were markedly more surprised at the existence of their memories than were individuals who initially recalled the memories in therapy. These results indicate that discontinuous CSA memories spontaneously retrieved outside of therapy may be accurate, while implicating expectations arising from suggestions during therapy in producing false CSA memories.


Emotion | 2005

Body sensations associated with emotions in Rarámuri Indians, rural Javanese, and three student samples

Seger M. Breugelmans; Ype H. Poortinga; Zara Ambadar; Bernadette Setiadi; Jesús B Vaca; Priyo Widiyanto; Pierre Philippot

Cultural variations in the associations of 12 body sensations with 7 emotions were studied in 2 rural samples from northern Mexico (n = 61) and Java, Indonesia (n = 99), with low exposure to Western influences and in 3 university student samples from Belgium (n = 75), Indonesia (n = 85), and Mexico (n = 123). Both parametric and nonparametric analyses suggest that findings from previous studies with only student samples (K. R. Scherer & H. G. Wallbott, 1994) were generalizable to the 2 rural samples. Some notable cultural deviations from common profiles were also identified. Implications of the findings for explanations of body sensations experienced with emotions and the cross-cultural study of emotions are discussed.

Collaboration


Dive into Zara Ambadar's collaborations.

Top Co-Authors

Jing Xiao, Carnegie Mellon University
Simon Lucey, Carnegie Mellon University
Takeo Kanade, Carnegie Mellon University