Publication


Featured research published by Gwen Littlewort.


Computer Vision and Pattern Recognition | 2005

Recognizing facial expression: machine learning and application to spontaneous behavior

Marian Stewart Bartlett; Gwen Littlewort; Mark G. Frank; Claudia Lainscsek; Ian R. Fasel; Javier R. Movellan

We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, and linear discriminant analysis. We also explored feature selection techniques, including the use of AdaBoost for feature selection prior to classification by SVM or LDA. Best results were obtained by selecting a subset of Gabor filters using AdaBoost followed by classification with support vector machines. The system operates in real time and obtained 93% correct generalization to novel subjects for a 7-way forced choice on the Cohn-Kanade expression dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics. We applied the system to fully automated recognition of facial actions from the Facial Action Coding System (FACS). The present system classifies 17 action units, whether they occur singly or in combination with other actions, with a mean accuracy of 94.8%. We present preliminary results for applying this system to spontaneous facial expressions.
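
As a rough illustration of the winning pipeline above (not the authors' code), the sketch below uses scikit-learn: depth-1 AdaBoost rounds act as a feature selector over Gabor filter outputs, and a linear SVM is then trained on the selected features. The feature matrix X_train, labels y_train, and the subset size n_features are assumed for illustration.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def adaboost_select_then_svm(X_train, y_train, n_features=200):
    # Each boosting round fits a depth-1 stump, i.e. picks one feature,
    # so accumulated importances rank individual Gabor outputs.
    booster = AdaBoostClassifier(n_estimators=n_features)
    booster.fit(X_train, y_train)
    selected = np.argsort(booster.feature_importances_)[::-1][:n_features]
    # Train the SVM only on the AdaBoost-selected subset of features.
    svm = SVC(kernel="linear").fit(X_train[:, selected], y_train)
    return svm, selected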


Computer Vision and Pattern Recognition | 2003

Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction

Marian Stewart Bartlett; Gwen Littlewort; Ian R. Fasel; Javier R. Movellan

Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a time scale on the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory-rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of AdaBoost and SVMs enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6]. The generalization performance to new subjects for a 7-way forced choice was 93% correct. Most interestingly, the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms, including Sony's AIBO pet robot, ATR's RoboVie, and CU Animator, and is currently being evaluated for applications including automatic reading tutors and assessment of human-robot interaction.
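
A minimal sketch of the face-finding stage, assuming OpenCV and its stock boosted Haar cascade as a stand-in for the paper's detector; the located patches would then feed the Gabor/SVM expression stage.

import cv2

# Stock frontal-face cascade shipped with opencv-python (illustrative stand-in).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        patch = gray[y:y + h, x:x + w]  # patch handed to the expression recognizer
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()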


Journal of Multimedia | 2006

Automatic recognition of facial actions in spontaneous expressions

Marian Stewart Bartlett; Gwen Littlewort; Mark G. Frank; Claudia Lainscsek; Ian R. Fasel; Javier R. Movellan

Spontaneous facial expressions differ from posed expressions both in which muscles are moved and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. The approach applies machine learning methods, such as support vector machines and AdaBoost, to texture-based image representations. The output margin of the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics that were previously intractable by human coding.
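
The margin-as-intensity idea reads naturally as taking the signed distance to the SVM hyperplane per frame. A hedged sketch, assuming one binary classifier per action unit and a hypothetical per-frame feature matrix X_frames:

from sklearn.svm import LinearSVC

def au_intensity_over_time(au_svm: LinearSVC, X_frames):
    # decision_function gives the signed distance to the separating
    # hyperplane; larger positive margins are read as stronger evidence,
    # i.e. higher action unit intensity, for each frame.
    return au_svm.decision_function(X_frames)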


Face and Gesture 2011 | 2011

The computer expression recognition toolbox (CERT)

Gwen Littlewort; Jacob Whitehill; Tingfan Wu; Ian R. Fasel; Mark G. Frank; Javier R. Movellan; Marian Stewart Bartlett

We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
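
The 2AFC score quoted for CERT is the probability that a randomly chosen positive example outscores a randomly chosen negative one, which coincides with the ROC area. A small sketch computing it directly from classifier scores (array names are hypothetical):

import numpy as np

def two_afc(scores_pos, scores_neg):
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    # Every positive/negative pair is one forced choice; ties count half.
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()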


Image and Vision Computing | 2006

Dynamics of facial expression extracted automatically from video

Gwen Littlewort; Marian Stewart Bartlett; Ian R. Fasel; Joshua Susskind; Javier R. Movellan

We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions, including AdaBoost, support vector machines, and linear discriminant analysis. Each video frame is first scanned in real time to detect approximately upright-frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing spatial frequency ranges, feature selection techniques, and recognition engines. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for a 7-way forced choice was 93% or more correct on two publicly available datasets, the best performance reported so far on these datasets. Surprisingly, registration of internal facial features was not necessary, even though the face detector does not provide precisely registered images. The outputs of the classifier change smoothly as a function of time and thus can be used for unobtrusive motion capture. We developed an end-to-end system that provides facial expression codes at 24 frames per second and animates a computer-generated character. In real time, this expression mirror operates down to resolutions of 16 pixels from eye to eye. We also applied the system to fully automated facial action coding.
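
A hedged sketch of the Gabor energy representation, assuming OpenCV: each scale/orientation uses a quadrature pair (cosine and sine phase) whose responses are combined into an energy map. The wavelengths, kernel size, and bandwidth below are illustrative, not the paper's values.

import cv2
import numpy as np

def gabor_energy_features(patch, ksize=21, wavelengths=(4, 8, 16), n_orient=8):
    feats = []
    img = patch.astype(np.float32)
    for lam in wavelengths:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            # Quadrature pair: psi=0 (even/cosine) and psi=pi/2 (odd/sine).
            even = cv2.getGaborKernel((ksize, ksize), 0.56 * lam, theta, lam, 0.5, 0)
            odd = cv2.getGaborKernel((ksize, ksize), 0.56 * lam, theta, lam, 0.5, np.pi / 2)
            re = cv2.filter2D(img, -1, even)
            im = cv2.filter2D(img, -1, odd)
            feats.append(np.sqrt(re ** 2 + im ** 2).ravel())  # Gabor energy
    return np.concatenate(feats)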


International Conference on Automatic Face and Gesture Recognition | 2006

Fully Automatic Facial Action Recognition in Spontaneous Behavior

Marian Stewart Bartlett; Gwen Littlewort; Mark G. Frank; Claudia Lainscsek; Ian R. Fasel; Javier R. Movellan

We present results on a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. We present preliminary results on a task of facial action detection in spontaneous expressions during discourse. Support vector machines and AdaBoost classifiers are compared. For both classifiers, the output margin predicts action unit intensity.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Toward Practical Smile Detection

Jacob Whitehill; Gwen Littlewort; Ian R. Fasel; Marian Stewart Bartlett; Javier R. Movellan

Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions on a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. We present a new database, GENKI, which contains pictures photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.


Computer Vision and Pattern Recognition | 2004

Dynamics of Facial Expression Extracted Automatically from Video

Gwen Littlewort; Marian Stewart Bartlett; Ian R. Fasel; Joshua Susskind; Javier R. Movellan

We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions, including AdaBoost, support vector machines, and linear discriminant analysis. Each video frame is first scanned in real time to detect approximately upright-frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing spatial frequency ranges, feature selection techniques, and recognition engines. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for a 7-way forced choice was 93% or more correct on two publicly available datasets, the best performance reported so far on these datasets. Surprisingly, registration of internal facial features was not necessary, even though the face detector does not provide precisely registered images. The outputs of the classifier change smoothly as a function of time and thus can be used for unobtrusive motion capture. We developed an end-to-end system that provides facial expression codes at 24 frames per second and animates a computer-generated character. In real time, this expression mirror operates down to resolutions of 16 pixels from eye to eye. We also applied the system to fully automated facial action coding.


Image and Vision Computing | 2009

Automatic coding of facial expressions displayed during posed and genuine pain

Gwen Littlewort; Marian Stewart Bartlett; Kang Lee

We present initial results from the application of an automated facial expression recognition system to spontaneous facial expressions of pain. In this study, 26 participants were videotaped under three experimental conditions: baseline, posed pain, and real pain. The real pain condition consisted of cold pressor pain induced by submerging the arm in ice water. Our goal was to (1) assess whether the automated measurements were consistent with expression measurements obtained by human experts, and (2) develop a classifier to automatically differentiate real from faked pain in a subject-independent manner from the automated measurements. We employed a machine learning approach in a two-stage system. In the first stage, a set of 20 detectors for facial actions from the Facial Action Coding System operated on the continuous video stream. These data were then passed to a second machine learning stage, in which a classifier was trained to detect the difference between expressions of real pain and fake pain. Naive human subjects tested on the same videos were at chance for differentiating faked from real pain, obtaining only 49% accuracy. The automated system was successfully able to differentiate faked from real pain. In an analysis of 26 subjects with faked pain before real pain, the system obtained 88% correct for subject independent discrimination of real versus fake pain on a 2-alternative forced choice. Moreover, the most discriminative facial actions in the automated system were consistent with findings using human expert FACS codes.
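
A minimal sketch of the two-stage design, under the assumption that stage one yields an (n_frames × 20) array of action-unit outputs per video; stage two here summarizes each stream with simple temporal statistics and trains an SVM on them. The names and statistics are hypothetical, not the paper's exact features.

import numpy as np
from sklearn.svm import SVC

def summarize_au_stream(au_outputs):
    # Collapse frame-by-frame AU signals (n_frames x 20) into one vector.
    return np.concatenate([au_outputs.mean(axis=0),
                           au_outputs.max(axis=0),
                           au_outputs.std(axis=0)])

def train_pain_classifier(videos, labels):
    # videos: list of per-video AU output arrays; labels: 1 = real pain, 0 = faked.
    X = np.stack([summarize_au_stream(v) for v in videos])
    return SVC(kernel="linear").fit(X, labels)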


Systems, Man and Cybernetics | 2004

Machine learning methods for fully automatic recognition of facial expressions and facial actions

Marian Stewart Bartlett; Gwen Littlewort; Claudia Lainscsek; Ian R. Fasel; Javier R. Movellan

We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We explored recognition of facial actions from the Facial Action Coding System (FACS), as well as recognition of full facial expressions. Each video frame is first scanned in real time to detect approximately upright-frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, and linear discriminant analysis, as well as feature selection techniques. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for recognition of full facial expressions in a 7-way forced choice was 93% correct, the best performance reported so far on the Cohn-Kanade FACS-coded expression dataset. We also applied the system to fully automated facial action coding. The present system classifies 18 action units, whether they occur singly or in combination with other actions, with a mean agreement rate of 94.5% with human FACS codes in the Cohn-Kanade dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics.
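
The 7-way forced choice can be read as an argmax over seven per-expression classifier outputs; a small sketch under that assumption:

import numpy as np

EXPRESSIONS = ["neutral", "anger", "disgust", "fear",
               "joy", "sadness", "surprise"]

def forced_choice(margins):
    # margins: seven real-valued classifier outputs for a single frame;
    # the forced choice picks the expression with the largest margin.
    return EXPRESSIONS[int(np.argmax(margins))]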

Collaboration


Dive into Gwen Littlewort's collaborations.

Top Co-Authors

Mark G. Frank
State University of New York System

Kang Lee
University of Toronto

Tingfan Wu
University of California

Claudia Lainscsek
Salk Institute for Biological Studies