Publication


Featured research published by Jeffrey F. Cohn.


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Comprehensive database for facial expression analysis

Takeo Kanade; Jeffrey F. Cohn; Yingli Tian

Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity, image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis.
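
For readers who want to work with an AU-coded corpus like this one, the sketch below shows one minimal way a coded image sequence could be represented in code. The class, field names, and file paths are hypothetical illustrations of the structure described above, not the database's actual layout.

```python
from dataclasses import dataclass

@dataclass
class AUCodedSequence:
    """One digitized image sequence with its FACS annotation."""
    subject_id: str          # anonymized subject identifier (hypothetical format)
    frame_paths: list[str]   # ordered frames, neutral onset through peak
    action_units: set[int]   # FACS AU numbers coded for the sequence

# Example: a sequence coded as AU1 + AU2 (inner and outer brow raiser)
seq = AUCodedSequence(
    subject_id="S052",
    frame_paths=[f"S052/001/frame_{i:03d}.png" for i in range(1, 21)],
    action_units={1, 2},
)
print(len(seq.frame_paths), "frames, AUs:", sorted(seq.action_units))
```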


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Recognizing action units for facial expression analysis

Yingli Tian; Takeo Kanade; Jeffrey F. Cohn

Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.
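
As a schematic illustration of the final classification stage described above (detailed parametric feature descriptions in, AU classes out), the sketch below trains a small neural-network classifier on placeholder data. It is not the authors' AFA implementation; the feature dimensions, class count, and random inputs are stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Random placeholders standing in for the parametric descriptions the
# tracker extracts (e.g. lip height/width, eye opening, brow displacement,
# furrow presence). Real features would come from the tracking stage.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))      # 8 geometric parameters per frame
y_train = rng.integers(0, 7, size=200)   # neutral + six upper-face AU classes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

x_new = rng.normal(size=(1, 8))          # parameters from a new frame
print("predicted class:", clf.predict(x_new)[0])
```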


Computer Vision and Pattern Recognition | 2010

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

Patrick Lucey; Jeffrey F. Cohn; Takeo Kanade; Jason M. Saragih; Zara Ambadar; Iain A. Matthews

In 2000, the Cohn-Kanade (CK) database was released for the purpose of promoting research into automatically detecting individual facial expressions. Since then, the CK database has become one of the most widely used test-beds for algorithm development and evaluation. During this period, three limitations have become apparent: 1) While AU codes are well validated, emotion labels are not, as they refer to what was requested rather than what was actually performed, 2) The lack of a common performance metric against which to evaluate new algorithms, and 3) Standard protocols for common databases have not emerged. As a consequence, the CK database has been used for both AU and emotion detection (even though labels for the latter have not been validated), comparison with benchmark algorithms is missing, and use of random subsets of the original database makes meta-analyses difficult. To address these and other concerns, we present the Extended Cohn-Kanade (CK+) database. The number of sequences is increased by 22% and the number of subjects by 27%. The target expression for each sequence is fully FACS coded and emotion labels have been revised and validated. In addition to this, non-posed sequences for several types of smiles and their associated metadata have been added. We present baseline results using Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier using a leave-one-out subject cross-validation for both AU and emotion detection for the posed data. The emotion and AU labels, along with the extended image data and tracked landmarks will be made available July 2010.
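
The evaluation protocol named above (a linear SVM with leave-one-out subject cross-validation) maps directly onto standard tooling. The sketch below shows the idea with scikit-learn's LeaveOneGroupOut, where each subject is a group; the random features and labels are placeholders for the AAM features and CK+ annotations, not the paper's data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

# Placeholder stand-ins for AAM shape/appearance features and labels;
# `groups` holds one subject id per sequence, so no subject ever appears
# in both the training and test folds.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))            # one feature vector per sequence
y = rng.integers(0, 7, size=120)          # emotion label per sequence
groups = rng.integers(0, 20, size=120)    # subject id per sequence

scores = cross_val_score(LinearSVC(), X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"mean leave-one-subject-out accuracy: {scores.mean():.3f}")
```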


Image and Vision Computing | 2010

Multi-PIE

Ralph Gross; Iain A. Matthews; Jeffrey F. Cohn; Takeo Kanade; Simon Baker

A close relationship exists between the advancement of face recognition algorithms and the availability of face databases that vary, in a controlled manner, the factors affecting facial appearance. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session, and only a few expressions captured. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 viewpoints and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
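
The PCA and LDA baselines mentioned above can be sketched in a few lines with scikit-learn. The pipeline below is a minimal illustration on random placeholder data, not the paper's experimental setup; the image size, subject count, and lack of a train/test split are assumptions made for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Placeholder face images flattened to vectors; labels are subject ids.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32 * 32))     # 300 images, 32x32 pixels each
y = rng.integers(0, 30, size=300)       # 30 subjects

# PCA reduces dimensionality ("eigenfaces"); LDA then separates subjects
# ("Fisherfaces"). Together they form a classic recognition baseline.
baseline = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
baseline.fit(X, y)
print("training accuracy:", baseline.score(X, y))
```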


International Journal of Computer Vision | 2011

Deformable Model Fitting by Regularized Landmark Mean-Shift

Jason M. Saragih; Simon Lucey; Jeffrey F. Cohn

Deformable model fitting has been actively pursued in the computer vision community for over a decade. As a result, numerous approaches have been proposed with varying degrees of success. A class of approaches that has shown substantial promise is one that makes independent predictions regarding locations of the model’s landmarks, which are combined by enforcing a prior over their joint motion. A common theme in innovations to this approach is the replacement of the distribution of probable landmark locations, obtained from each local detector, with simpler parametric forms. In this work, a principled optimization strategy is proposed where nonparametric representations of these likelihoods are maximized within a hierarchy of smoothed estimates. The resulting update equations are reminiscent of mean-shift over the landmarks but with regularization imposed through a global prior over their joint motion. Extensions to handle partial occlusions and reduce computational complexity are also presented. Through numerical experiments, this approach is shown to outperform some common existing methods on the task of generic face fitting.
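
To make the update concrete, the sketch below implements a single-landmark version of the idea: a kernel-weighted (mean-shift) step toward the mode of a nonparametric response, blended with a location favored by a prior. It is an illustrative simplification; the paper's actual formulation couples all landmarks through a global prior over their joint motion, and the function name, parameters, and toy data here are assumptions.

```python
import numpy as np

def regularized_mean_shift_step(x, candidates, weights, x_prior, sigma=2.0, lam=0.5):
    """One regularized mean-shift update for a single landmark.

    x:          current (2,) landmark estimate
    candidates: (N, 2) candidate locations from the local detector
    weights:    (N,) detector responses at those locations
    x_prior:    (2,) location favored by the shape prior
    """
    # Gaussian kernel around the current estimate gives a smoothed,
    # nonparametric likelihood over candidate locations.
    d2 = np.sum((candidates - x) ** 2, axis=1)
    k = weights * np.exp(-d2 / (2.0 * sigma ** 2))
    mean_shift = k @ candidates / k.sum()        # KDE mode-seeking step
    # Blend the nonparametric update with the prior (regularization).
    return (1.0 - lam) * mean_shift + lam * x_prior

# Toy usage with a hypothetical detector response
rng = np.random.default_rng(0)
cands = rng.normal(loc=[10.0, 5.0], scale=3.0, size=(50, 2))
resp = rng.random(50)
x_new = regularized_mean_shift_step(np.array([8.0, 4.0]), cands, resp, np.array([9.0, 5.0]))
print(np.round(x_new, 3))
```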


Developmental Psychology | 1990

Face-to-Face Interactions of Postpartum Depressed and Nondepressed Mother-Infant Pairs at 2 Months

Jeffrey F. Cohn; Susan B. Campbell; Reinaldo Matias; Joyce Hopkins

Depression's influence on mother-infant interactions at 2 months postpartum was studied in 24 depressed and 22 nondepressed mother-infant dyads. Depression was diagnosed using the SADS-L and RDC. In Ss' homes, structured interactions of 3 min duration were videotaped and later coded using behavioral descriptors and a 1-s time base. Unstructured interactions were described using rating scales. During structured interactions, depressed mothers were more negative and their babies were less positive than were nondepressed dyads. The reduced positivity of depressed dyads was achieved through contingent responsiveness. Ratings from unstructured interactions were consistent with these findings. Results support the hypothesis that depression negatively influences mother-infant behavior, but indicate that influence may vary with development, chronicity, and presence of other risk factors.


Child Development | 1983

Three-Month-Old Infants' Reaction to Simulated Maternal Depression.

Jeffrey F. Cohn; Edward Z. Tronick

To investigate the nature of the young infant's social competence, the effect of depressed maternal expression during face-to-face interaction was examined using an experimental analogue of maternal depression. Subjects were 12 female and 12 male infants, ages 96-110 days, and their mothers. Two counterbalanced experimental treatments consisted of 3 min of normal maternal interaction and 3 min of simulated depressed interaction. A control treatment consisted of two 3-min epochs of normal maternal interaction. Interactions were videotaped and infant behavior described on a 5-sec time base that maintained order of occurrence. Infants in the depressed condition structured their behavior differently and were more negative than infants in the normal condition. Infants in the depressed condition produced higher proportions of protest, wary, and brief positive. Infants in the depressed condition cycled among protest, wary, and look away. Infants in the normal condition cycled among monitor, brief positive, and play. In addition, differences in negativity were likely to continue briefly after mothers switched from depressed to normal interaction. The data indicate that infants have a specific, appropriate, negative reaction to simulated depression in their mothers. These results question formulations based on alternate hypotheses and suggest that the infant has communicative intent in its interactions.


Developmental Psychology | 1995

Depression in first-time mothers: Mother-infant interaction and depression chronicity.

Susan B. Campbell; Jeffrey F. Cohn; Teri Meyers

Married, middle-class women who met diagnostic criteria for depression and a comparable group of nondepressed women were videotaped interacting with their infants at home at 2, 4, and 6 months. When depression was defined in terms of 2-month diagnosis, there were no differences between depressed and comparison mothers or babies in positive interaction during feeding, face-to-face interaction, or toy play. However, women whose depressions lasted through 6 months were less positive with their infants across these 3 contexts than women whose depressions were more short-lived, and their babies were less positive during face-to-face interaction. These data highlight the need to distinguish between transient and protracted depression effects on the mother-infant relationship and infant outcome.


Psychological Science | 2005

Deciphering the Enigmatic Face: The Importance of Facial Dynamics in Interpreting Subtle Facial Expressions

Zara Ambadar; Jonathan W. Schooler; Jeffrey F. Cohn

Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.


International Conference on Computer Vision | 2009

Face alignment through subspace constrained mean-shifts

Jason M. Saragih; Simon Lucey; Jeffrey F. Cohn

Deformable model fitting has been actively pursued in the computer vision community for over a decade. As a result, numerous approaches have been proposed with varying degrees of success. A class of approaches that has shown substantial promise is one that makes independent predictions regarding locations of the model's landmarks, which are combined by enforcing a prior over their joint motion. A common theme in innovations to this approach is the replacement of the distribution of probable landmark locations, obtained from each local detector, with simpler parametric forms. This simplification substitutes the true objective with a smoothed version of itself, reducing sensitivity to local minima and outlying detections. In this work, a principled optimization strategy is proposed where a nonparametric representation of the landmark distributions is maximized within a hierarchy of smoothed estimates. The resulting update equations are reminiscent of mean-shift but with a subspace constraint placed on the shape's variability. This approach is shown to outperform other existing methods on the task of generic face fitting.
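
The subspace constraint itself is easy to illustrate: after a mean-shift step proposes new landmark locations, the shape is projected back onto a linear shape basis so that only plausible shape variation survives. The sketch below shows that projection on toy data; the function name, the rank-2 basis, and the random shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_to_shape_subspace(proposed, mean_shape, basis):
    """Snap a proposed shape to the nearest shape in a linear subspace.

    proposed, mean_shape: (2n,) stacked landmark coordinates
    basis: (2n, k) orthonormal shape basis (e.g. PCA of training shapes)
    """
    coeffs = basis.T @ (proposed - mean_shape)   # least-squares coefficients
    return mean_shape + basis @ coeffs           # nearest expressible shape

# Toy example: 4 landmarks (8 stacked coordinates), a rank-2 shape basis.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=8)
basis, _ = np.linalg.qr(rng.normal(size=(8, 2)))  # orthonormal columns
proposed = mean_shape + rng.normal(scale=0.5, size=8)
print(np.round(project_to_shape_subspace(proposed, mean_shape, basis), 3))
```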

Collaboration


Dive into Jeffrey F. Cohn's collaborations.

Top Co-Authors

Takeo Kanade, Carnegie Mellon University
Simon Lucey, Carnegie Mellon University
Zakia Hammal, Carnegie Mellon University
László A. Jeni, Carnegie Mellon University
Zara Ambadar, University of Pittsburgh