Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeffrey M. Girard is active.

Publication


Featured research published by Jeffrey M. Girard.


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

FERA 2015 - Second Facial Expression Recognition and Analysis Challenge

Michel F. Valstar; Timur R. Almaev; Jeffrey M. Girard; Marc Mehu; Lijun Yin; Maja Pantic; Jeffrey F. Cohn

Despite efforts towards evaluation standards in facial expression analysis (e.g. FERA 2011), there is a need for up-to-date standardised evaluation procedures, focusing in particular on current challenges in the field. One of the challenges that is actively being addressed is the automatic estimation of expression intensities. To continue to provide a standardisation platform and to help the field progress beyond its current limitations, the FG 2015 Facial Expression Recognition and Analysis challenge (FERA 2015) will challenge participants to estimate FACS Action Unit (AU) intensity as well as AU occurrence on a common benchmark dataset with reliable manual annotations. Evaluation will be done using a clear and well-defined protocol. In this paper we present the second such challenge in automatic recognition of facial expressions, to be held in conjunction with the 11th IEEE conference on Face and Gesture Recognition, May 2015, in Ljubljana, Slovenia. Three sub-challenges are defined: the detection of AU occurrence, the estimation of AU intensity for pre-segmented data, and fully automatic AU intensity estimation. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for the three sub-challenges.
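
As a rough illustration of how such a protocol can be scored (the metric choices and the toy labels below are assumptions for illustration, not the official FERA code), AU occurrence can be summarized with F1 and AU intensity with an intraclass correlation:

    import numpy as np
    from sklearn.metrics import f1_score

    def icc_3_1(pred, truth):
        # Two-rater ICC(3,1) (consistency) between predicted and true intensities.
        x = np.column_stack([pred, truth]).astype(float)  # n frames x 2 "raters"
        n, k = x.shape
        grand = x.mean()
        ss_total = ((x - grand) ** 2).sum()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between frames
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
        ms_rows = ss_rows / (n - 1)
        ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

    # Hypothetical frame-level labels for a single AU.
    occ_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
    occ_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])
    int_true = np.array([0, 2, 3, 1, 4, 0, 5, 2])
    int_pred = np.array([0, 1, 3, 1, 5, 0, 4, 2])

    print("occurrence F1:", f1_score(occ_true, occ_pred))
    print("intensity ICC(3,1):", icc_3_1(int_pred, int_true))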


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Continuous AU intensity estimation using localized, sparse facial feature space

László A. Jeni; Jeffrey M. Girard; Jeffrey F. Cohn; Fernando De la Torre

Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet, the meaning and function of facial actions often depend in part on their intensity. We propose a part-based, sparse representation for automated measurement of continuous variation in AU intensity. We evaluated its effectiveness in two publicly available databases, CK+ and the soon-to-be-released Binghamton high-resolution spontaneous 3D dyadic facial expression database. The former consists of posed facial expressions and ordinal-level intensity (absent, low, and high). The latter consists of spontaneous facial expression in response to diverse, well-validated emotion inductions, and 6 ordinal levels of AU intensity. In a preliminary test, we started from discrete emotion labels and ordinal-scale intensity annotation in the CK+ dataset. The algorithm achieved state-of-the-art performance. These preliminary results supported the utility of the part-based, sparse representation. Second, we applied the algorithm to the more demanding task of continuous AU intensity estimation in spontaneous facial behavior in the Binghamton database. Manual 6-point ordinal coding and continuous measurement were highly consistent. Visual analysis of the overlay of continuous measurement by the algorithm and manual ordinal coding strongly supported the representational power of the proposed method to smoothly interpolate across the full range of AU intensity.
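
As a rough sketch of the general recipe (not the authors' implementation, and with simulated features standing in for the part-based facial descriptors), one can learn a sparse code over local appearance features and then regress continuous AU intensity from the codes:

    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))     # stand-in for part-based appearance features
    y = rng.uniform(0, 5, size=200)    # stand-in for continuous AU intensity (0-5)

    # Learn a dictionary and encode each frame sparsely.
    dico = DictionaryLearning(n_components=16, transform_algorithm="lasso_lars",
                              transform_alpha=0.1, random_state=0)
    codes = dico.fit(X).transform(X)

    # Map the sparse codes to a continuous intensity estimate.
    reg = SVR(kernel="rbf").fit(codes, y)
    print("predicted intensity for the first frame:", reg.predict(codes[:1])[0])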


Image and Vision Computing | 2014

Nonverbal social withdrawal in depression: Evidence from manual and automatic analyses

Jeffrey M. Girard; Jeffrey F. Cohn; Mohammad H. Mahoor; S. Mohammad Mavadati; Zakia Hammal; Dean P. Rosenwald

The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Social risk and depression: Evidence from manual and automatic facial expression analysis

Jeffrey M. Girard; Jeffrey F. Cohn; Mohammad H. Mahoor; Seyed Mohammad Mavadati; Dean P. Rosenwald

We investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.


Behavior Research Methods | 2015

Spontaneous facial expression in unscripted social interactions can be measured automatically

Jeffrey M. Girard; Jeffrey F. Cohn; László A. Jeni; Michael A. Sayette; Fernando De la Torre

Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
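
The two reliability criteria mentioned above can be illustrated with a small, made-up example (the codes below are simulated, not the study's data): frame-by-frame agreement via the Matthews correlation coefficient, and session-level agreement on the proportion of frames in which the AU occurred:

    import numpy as np
    from sklearn.metrics import matthews_corrcoef

    rng = np.random.default_rng(1)
    manual = rng.integers(0, 2, size=1000)    # hypothetical manual FACS codes (0/1)
    flip = rng.random(1000) < 0.10            # automatic coder disagrees ~10% of the time
    auto = np.where(flip, 1 - manual, manual)

    print("frame-by-frame MCC:", matthews_corrcoef(manual, auto))
    print("proportion of AU frames (manual vs. automatic):", manual.mean(), auto.mean())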


Computer Vision and Pattern Recognition | 2016

Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis

Zheng Zhang; Jeffrey M. Girard; Yue Wu; Xing Zhang; Peng Liu; Umur A. Ciftci; Shaun J. Canavan; Michael Reale; Andrew Horowitz; Huiyuan Yang; Jeffrey F. Cohn; Qiang Ji; Lijun Yin

Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.


Pattern Recognition Letters | 2015

Estimating smile intensity

Jeffrey M. Girard; Jeffrey F. Cohn; Fernando De la Torre

Both the occurrence and intensity of facial expressions are critical to what the face reveals. While much progress has been made towards the automatic detection of facial expression occurrence, controversy exists about how to estimate expression intensity. The most straightforward approach is to train multiclass or regression models using intensity ground truth. However, collecting intensity ground truth is even more time consuming and expensive than collecting binary ground truth. As a shortcut, some researchers have proposed using the decision values of binary-trained maximum margin classifiers as a proxy for expression intensity. We provide empirical evidence that this heuristic is flawed in practice as well as in theory. Unfortunately, there are no shortcuts when it comes to estimating smile intensity: researchers must take the time to collect and train on intensity ground truth. However, if they do so, high reliability with expert human coders can be achieved. Intensity-trained multiclass and regression models outperformed binary-trained classifier decision values on smile intensity estimation across multiple databases and methods for feature extraction and dimensionality reduction. Multiclass models even outperformed binary-trained classifiers on smile occurrence detection.
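
A hedged sketch of how such a comparison might be set up (with simulated features and labels, not the paper's data): correlate a binary-trained SVM's decision values with true intensity on held-out samples, and compare against a regressor trained directly on intensity labels:

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(2)
    X = rng.normal(size=(600, 20))
    intensity = np.clip(1.5 * X[:, 0] + rng.normal(scale=0.5, size=600), 0, None)
    occurrence = (intensity > 0.5).astype(int)    # binary labels derived from intensity

    X_tr, X_te = X[:400], X[400:]
    proxy = SVC(kernel="linear").fit(X_tr, occurrence[:400]).decision_function(X_te)
    direct = SVR(kernel="linear").fit(X_tr, intensity[:400]).predict(X_te)

    print("decision-value proxy vs. true intensity: r =", pearsonr(proxy, intensity[400:])[0])
    print("intensity-trained regression vs. true intensity: r =", pearsonr(direct, intensity[400:])[0])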


Journal of Open Research Software | 2014

CARMA: Software for Continuous Affect Rating and Media Annotation

Jeffrey M. Girard

CARMA is a media annotation program that collects continuous ratings while displaying audio and video files. It is designed to be highly user-friendly and easily customizable. Based on Gottman and Levenson's affect rating dial, CARMA enables researchers and study participants to provide moment-by-moment ratings of multimedia files using a computer mouse or keyboard. The rating scale can be configured on a number of parameters including the labels for its upper and lower bounds, its numerical range, and its visual representation. Annotations can be displayed alongside the multimedia file and saved for easy import into statistical analysis software. CARMA provides a tool for researchers in affective computing, human-computer interaction, and the social sciences who need to capture the unfolding of subjective experience and observable behavior over time.
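
CARMA itself is a GUI application; purely as a schematic of the configurable scale and moment-by-moment output described above (the file name and field names below are illustrative assumptions, not CARMA's code), the data it produces might look like this:

    import csv
    from dataclasses import dataclass

    @dataclass
    class RatingScale:
        lower_label: str = "Very negative"   # label shown at the scale's lower bound
        upper_label: str = "Very positive"   # label shown at the scale's upper bound
        minimum: int = -100                  # numerical range of the scale
        maximum: int = 100

    def save_ratings(path, timestamps, ratings, scale):
        # Write moment-by-moment ratings, clipped to the configured scale,
        # as CSV for easy import into statistical software.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time_s", "rating"])
            for t, r in zip(timestamps, ratings):
                writer.writerow([t, max(scale.minimum, min(scale.maximum, r))])

    save_ratings("ratings.csv", [0.0, 0.5, 1.0], [0, 12, 37], RatingScale())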


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

How much training data for facial action unit detection?

Jeffrey M. Girard; Jeffrey F. Cohn; László A. Jeni; Simon Lucey; Fernando De la Torre

By systematically varying the number of subjects and the number of frames per subject, we explored the influence of training set size on appearance and shape-based approaches to facial action unit (AU) detection. Digital video and expert coding of spontaneous facial activity from 80 subjects (over 350,000 frames) were used to train and test support vector machine classifiers. Appearance features were shape-normalized SIFT descriptors and shape features were 66 facial landmarks. Ten-fold cross-validation was used in all evaluations. The number of subjects and the number of frames per subject differentially affected appearance and shape-based classifiers. For appearance features, which are high-dimensional, increasing the number of training subjects from 8 to 64 incrementally improved performance, regardless of the number of frames taken from each subject (ranging from 450 through 3600). In contrast, for shape features, increases in the number of training subjects and frames were associated with mixed results. In summary, maximal performance was attained using appearance features from large numbers of subjects with as few as 450 frames per subject. These findings suggest that variation in the number of subjects rather than the number of frames per subject yields the most efficient performance.
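
The experimental design can be sketched as a loop over training-set sizes. The features, labels, and scaled-down frame counts below are simulated stand-ins, not the study's SIFT and landmark features or its subject-wise partitioning:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    def simulate_subjects(n_subjects, frames_per_subject, dim=50):
        # Toy features and binary AU labels for n_subjects x frames_per_subject frames.
        X = rng.normal(size=(n_subjects * frames_per_subject, dim))
        y = (X[:, 0] + rng.normal(size=len(X)) > 0).astype(int)
        return X, y

    for n_subjects in (8, 16, 32, 64):
        for frames in (45, 90):                # scaled-down stand-ins for 450/900 frames
            X, y = simulate_subjects(n_subjects, frames)
            acc = cross_val_score(LinearSVC(dual=False), X, y, cv=10).mean()
            print(f"{n_subjects:2d} subjects x {frames} frames: accuracy = {acc:.3f}")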


IEEE International Conference on Automatic Face and Gesture Recognition | 2017

FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

Michel F. Valstar; Enrique Sánchez-Lozano; Jeffrey F. Cohn; László A. Jeni; Jeffrey M. Girard; Zheng Zhang; Lijun Yin; Maja Pantic

The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit (AU) occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.
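
A minimal sketch of the multi-view flavor of this evaluation (the view count, labels, and metric choice below are illustrative assumptions, not the official protocol code): score AU occurrence within each camera view and then average across views:

    import numpy as np
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(4)
    n_views = 9                                    # assumed number of camera views
    views = rng.integers(0, n_views, size=2000)    # which view each frame came from
    truth = rng.integers(0, 2, size=2000)          # hypothetical AU occurrence labels
    pred = np.where(rng.random(2000) < 0.85, truth, 1 - truth)

    per_view = [f1_score(truth[views == v], pred[views == v]) for v in range(n_views)]
    print("F1 per view:", np.round(per_view, 3))
    print("mean F1 across views:", float(np.mean(per_view)))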

Collaboration


Dive into Jeffrey M. Girard's collaborations.

Top Co-Authors

László A. Jeni (Carnegie Mellon University)
Lijun Yin (Binghamton University)
Joseph E. Beeney (Pennsylvania State University)
Lori N. Scott (University of Pittsburgh)