Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter W. McOwan is active.

Publication


Featured research published by Peter W. McOwan.


Image and Vision Computing | 2009

Facial expression recognition based on Local Binary Patterns: A comprehensive study

Caifeng Shan; Shaogang Gong; Peter W. McOwan

Automatic facial expression analysis is an interesting and challenging problem, and impacts important applications in many areas such as human-computer interaction and data-driven animation. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. In this paper, we empirically evaluate facial representation based on statistical local features, Local Binary Patterns, for person-independent facial expression recognition. Different machine learning methods are systematically examined on several databases. Extensive experiments illustrate that LBP features are effective and efficient for facial expression recognition. We further formulate Boosted-LBP to extract the most discriminant LBP features, and the best recognition performance is obtained by using Support Vector Machine classifiers with Boosted-LBP features. Moreover, we investigate LBP features for low-resolution facial expression recognition, which is a critical problem but seldom addressed in the existing work. We observe in our experiments that LBP features perform stably and robustly over a useful range of low resolutions of face images, and yield promising performance in compressed low-resolution video sequences captured in real-world environments.
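
As a rough illustration of the pipeline this abstract describes, the sketch below extracts region-wise uniform-LBP histograms and trains an SVM using scikit-image and scikit-learn. The grid size, LBP parameters, and helper names are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of the LBP + SVM pipeline, assuming scikit-image and
# scikit-learn. Grid size and LBP parameters are illustrative, not the
# paper's exact configuration.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram_features(face, grid=(7, 6), P=8, R=1):
    """Split a grayscale face into a grid of regions and concatenate
    one uniform-LBP histogram per region."""
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform codes plus one bin for non-uniform codes
    h, w = face.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def train_expression_svm(faces, labels):
    """faces: aligned grayscale face images; labels: expression classes."""
    X = np.stack([lbp_histogram_features(f) for f in faces])
    return SVC(kernel="rbf").fit(X, labels)
```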


International Conference on Image Processing | 2005

Robust facial expression recognition using local binary patterns

Caifeng Shan; Shaogang Gong; Peter W. McOwan

A novel low-computation discriminative feature space is introduced for facial expression recognition capable of robust performance over a range of image resolutions. Our approach is based on the simple local binary patterns (LBP) for representing salient micro-patterns of face images. Compared to Gabor wavelets, the LBP features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space, whilst still retaining facial information efficiently. Template matching with a weighted chi-square statistic and support vector machines are adopted to classify facial expressions. Extensive experiments on the Cohn-Kanade Database illustrate that the LBP features are effective and efficient for facial expression discrimination. Additionally, experiments on face images with different resolutions show that the LBP features are robust to low-resolution images, which is critical in real-world applications where only low-resolution video input is available.
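
The weighted chi-square template matching mentioned above might look like the sketch below. The per-region weighting is an assumption about the mechanism; the paper assigns higher weights to expressive regions such as the eyes and mouth.

```python
# A minimal sketch of template matching with a weighted chi-square
# statistic over concatenated region histograms. The weighting scheme
# and helper names are illustrative assumptions.
import numpy as np

def weighted_chi_square(hist_a, hist_b, region_weights, eps=1e-10):
    """Chi-square distance between two concatenated region histograms,
    with one weight per region block."""
    d = (hist_a - hist_b) ** 2 / (hist_a + hist_b + eps)
    # Expand one weight per region into one weight per histogram bin.
    bins_per_region = hist_a.size // region_weights.size
    w = np.repeat(region_weights, bins_per_region)
    return float(np.sum(w * d))

def match_template(sample, templates, region_weights):
    """templates: dict mapping expression label -> mean training
    histogram. Returns the nearest expression label."""
    return min(templates, key=lambda lbl: weighted_chi_square(
        sample, templates[lbl], region_weights))
```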


Systems, Man and Cybernetics | 2006

A real-time automated system for the recognition of human facial expressions

Keith Anderson; Peter W. McOwan

A fully automated, multistage system for real-time recognition of facial expression is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The motion signatures produced are then classified using Support Vector Machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real-time to the facial expressions of the user, thereby providing the potential for improvements in the interaction between a computer user and technology.
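
A rough sketch of the motion-signature stage described above, using OpenCV's Farneback flow as a stand-in for the paper's robust gradient model; the region layout and the choice of reference region are assumptions.

```python
# A minimal sketch: average optical flow over named face regions and
# take ratios against a whole-face estimate to cancel rigid head
# motion. Farneback flow substitutes for the paper's gradient model.
import cv2
import numpy as np

def motion_signature(prev_gray, next_gray, regions):
    """regions: dict name -> (y0, y1, x0, x1) in image coordinates,
    including an assumed 'whole_face' reference region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    means = {name: flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)
             for name, (y0, y1, x0, x1) in regions.items()}
    ref = means["whole_face"]  # crude rigid-motion estimate
    eps = 1e-6
    # Ratios of averaged motion, per the cancellation idea above.
    return {name: v / (ref + eps) for name, v in means.items()
            if name != "whole_face"}
```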


Proceedings of the Royal Society B: Biological Sciences, 250 (1329), pp. 297-306 | 1992

A computational model of the analysis of some first-order and second-order motion patterns by simple and complex cells

Alan Johnston; Peter W. McOwan; Hilary Buxton

Although spatio-temporal gradient schemes are widely used in the computation of image motion, such algorithms are ill-conditioned for particular classes of input. This paper addresses this problem. Motion is computed as the space-time direction in which the difference in image illuminance from the local mean is conserved. This method can reliably detect motion in first-order and some second-order motion stimuli. Components of the model can be identified with directionally asymmetric and directionally selective simple cells. A stage in which we compute spatial and temporal derivatives of the difference between image illuminance and the local mean illuminance using a truncated Taylor series gives rise to a phase-invariant output reminiscent of the response of complex cells.
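
In one spatial dimension, the conservation principle described above reduces to a gradient constraint on the mean-subtracted illuminance. The following is a simplified reading, not the paper's full space-time, multi-derivative model:

```latex
% One-dimensional sketch of the conservation constraint (an assumed
% simplification of the paper's scheme). With
% $\Delta(x,t) = I(x,t) - \bar{I}(x,t)$, conservation along the
% motion direction gives
\frac{\partial \Delta}{\partial t} + v\,\frac{\partial \Delta}{\partial x} = 0
\quad\Longrightarrow\quad
v = -\,\frac{\partial \Delta/\partial t}{\partial \Delta/\partial x}
```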


Human-Robot Interaction | 2011

Automatic analysis of affective postures and body motion to detect engagement with a game companion

Jyotirmay Sanghvi; Ginevra Castellano; Iolanda Leite; André Pereira; Peter W. McOwan; Ana Paiva

The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario as the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real world scenario.
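
Two expressive postural features of the kind this abstract refers to can be sketched as follows, assuming binary body silhouettes extracted from the lateral camera view; the specific feature definitions are illustrative assumptions, not the paper's exact feature set.

```python
# A rough sketch of simple postural features for engagement detection,
# assuming a sequence of binary body silhouettes. Illustrative only.
import numpy as np

def quantity_of_motion(silhouettes):
    """Mean fraction of silhouette pixels that change between
    consecutive frames: a simple overall body-motion measure."""
    return float(np.mean([np.mean(a != b)
                          for a, b in zip(silhouettes, silhouettes[1:])]))

def bounding_box_fill(silhouette):
    """Silhouette area over bounding-box area: a crude posture
    compactness proxy (e.g. leaning toward or away from the board)."""
    ys, xs = np.nonzero(silhouette)
    box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return float(silhouette.sum()) / box
```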


International Conference on Multimodal Interfaces | 2009

Detecting user engagement with a robot companion using task and social interaction-based features

Ginevra Castellano; André Pereira; Iolanda Leite; Ana Paiva; Peter W. McOwan

Affect sensitivity is of the utmost importance for a robot companion to be able to display socially intelligent behaviour, a key requirement for sustaining long-term interactions with humans. This paper explores a naturalistic scenario in which children play chess with the iCat, a robot companion. A person-independent, Bayesian approach to detect the user's engagement with the iCat robot is presented. Our framework models both causes and effects of engagement: features related to the user's non-verbal behaviour, the task and the companion's affective reactions are identified to predict the children's level of engagement. An experiment was carried out to train and validate our model. Results show that our approach based on multimodal integration of task and social interaction-based features outperforms those based solely on non-verbal behaviour or contextual information (94.79% vs. 93.75% and 78.13%).
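
A minimal sketch of the multimodal integration step is given below, with a Gaussian naive Bayes classifier standing in for the paper's Bayesian framework; the feature groupings and names are illustrative assumptions.

```python
# A minimal sketch: fuse task and social interaction-based feature
# groups by concatenation and fit a simple Bayesian classifier.
# GaussianNB is a stand-in, not the paper's exact model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fuse(nonverbal, task, companion_affect):
    """Concatenate per-sample features from the three sources."""
    return np.hstack([nonverbal, task, companion_affect])

def train_engagement_model(X_nonverbal, X_task, X_affect, y):
    """X_*: (n_samples, n_features) arrays; y: engagement labels."""
    return GaussianNB().fit(fuse(X_nonverbal, X_task, X_affect), y)
```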


International Conference on Computer Vision | 2005

Appearance manifold of facial expression

Caifeng Shan; Shaogang Gong; Peter W. McOwan

This paper investigates the appearance manifold of facial expression: embedding image sequences of facial expression from the high dimensional appearance feature space to a low dimensional manifold. We explore Locality Preserving Projections (LPP) to learn expression manifolds from two kinds of feature space: raw image data and Local Binary Patterns (LBP). For manifolds of different subjects, we propose a novel alignment algorithm to define a global coordinate space, and align them on one generalized manifold. Extensive experiments on 96 subjects from the Cohn-Kanade database illustrate the effectiveness of the alignment algorithm. The proposed generalized appearance manifold provides a unified framework for automatic facial expression analysis.
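
The LPP step can be sketched as the generalized eigenproblem below; the neighbourhood size, binary adjacency weights, and regularization are assumptions, and the paper's per-subject manifold alignment is not shown.

```python
# A minimal sketch of Locality Preserving Projections (LPP). In
# practice high-dimensional inputs (e.g. raw images) are usually
# reduced with PCA first; that preprocessing is omitted here.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=3, n_neighbors=5):
    """X: (n_samples, n_features). Returns a projection matrix A so
    that X @ A embeds the data on a locality-preserving manifold."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity")
    W = 0.5 * (W + W.T)            # symmetrize the adjacency graph
    W = W.toarray()
    D = np.diag(W.sum(axis=1))     # degree matrix
    L = D - W                      # graph Laplacian
    # Generalized eigenproblem: X^T L X a = lambda X^T D X a
    M1 = X.T @ L @ X
    M2 = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularize
    vals, vecs = eigh(M1, M2)
    return vecs[:, :n_components]  # smallest eigenvalues preserve locality
```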


Proceedings of the Royal Society of London B: Biological Sciences | 1999

Robust velocity computation from a biologically motivated model of motion perception

Alan Johnston; Peter W. McOwan; Christopher P. Benton

Current computational models of motion processing in the primate motion pathway do not cope well with image sequences in which a moving pattern is superimposed upon a static texture. The use of non-linear operations and the need for contrast normalization in motion models mean that the separation of the influences of moving and static patterns on the motion computation is not trivial. Therefore, the response to the superposition of static and moving patterns provides an important means of testing various computational strategies. Here we describe a computational model of motion processing in the visual cortex, one of the advantages of which is that it is highly resistant to interference from static patterns.


British Machine Vision Conference | 2007

Beyond Facial Expressions: Learning Human Emotion from Body Gestures

Caifeng Shan; Shaogang Gong; Peter W. McOwan

Vision-based human affect analysis is an interesting and challenging problem, impacting important applications in many areas. In this paper, beyond facial expressions, we investigate affective body gesture analysis in video sequences, a relatively understudied problem. Spatial-temporal features are exploited for modeling of body gestures. Moreover, we propose fusing facial expression and body gesture at the feature level using Canonical Correlation Analysis (CCA). By establishing the relationship between the two modalities, CCA derives a semantic “affect” space. Experimental results demonstrate the effectiveness of our approaches.
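
Feature-level fusion with CCA, as described above, might be sketched with scikit-learn as follows; concatenating the two projections as the fused representation is an assumption about the exact fusion rule.

```python
# A minimal sketch of CCA-based feature-level fusion of face and body
# features. The fusion-by-concatenation rule is an assumption.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_fuse(face_feats, body_feats, n_components=10):
    """Project both modalities into the shared, maximally correlated
    'affect' space and concatenate the projections."""
    cca = CCA(n_components=n_components)
    face_c, body_c = cca.fit_transform(face_feats, body_feats)
    return np.hstack([face_c, body_c]), cca
```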


British Machine Vision Conference | 2005

Conditional Mutual Information Based Boosting for Facial Expression Recognition

Caifeng Shan; Shaogang Gong; Peter W. McOwan

This paper proposes a novel approach for facial expression recognition by boosting Local Binary Patterns (LBP) based classifiers. Low-cost LBP features are introduced to effectively describe local features of face images. A novel learning procedure, Conditional Mutual Information based Boosting (CMIB), is proposed. CMIB learns a sequence of weak classifiers that maximize their mutual information about a candidate class, conditional on the response of any weak classifier already selected; a strong classifier is constructed by combining the learned weak classifiers using Naive Bayes. Extensive experiments on the Cohn-Kanade database illustrate that LBP features are effective for expression analysis, and that CMIB enables much faster training than AdaBoost and yields a classifier of improved classification performance.
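
The greedy conditional-selection step of CMIB might be sketched as follows; the discrete-output assumption and the pairwise-minimum approximation of the conditional criterion are simplifications for illustration.

```python
# A minimal sketch of CMIB-style selection: at each round, pick the
# weak classifier whose output is most informative about the class,
# conditional on every classifier already chosen (approximated here by
# the minimum pairwise conditional mutual information).
import numpy as np

def cond_mutual_info(f, y, g):
    """I(f; y | g) for discrete arrays, from empirical joint counts."""
    cmi = 0.0
    for gv in np.unique(g):
        m = g == gv
        p_g = m.mean()
        fg, yg = f[m], y[m]
        for fv in np.unique(fg):
            for yv in np.unique(yg):
                p_fy = np.mean((fg == fv) & (yg == yv))
                p_f, p_y = np.mean(fg == fv), np.mean(yg == yv)
                if p_fy > 0:
                    cmi += p_g * p_fy * np.log(p_fy / (p_f * p_y))
    return cmi

def cmib_select(outputs, y, n_select):
    """outputs: (n_weak, n_samples) discrete weak-classifier outputs.
    Returns the indices of the selected weak classifiers."""
    # First pick: plain mutual information (conditioning on a constant).
    selected = [int(np.argmax([cond_mutual_info(f, y, np.zeros_like(y))
                               for f in outputs]))]
    while len(selected) < n_select:
        def score(k):
            return min(cond_mutual_info(outputs[k], y, outputs[s])
                       for s in selected)
        rest = [k for k in range(len(outputs)) if k not in selected]
        selected.append(max(rest, key=score))
    return selected
```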

Collaboration


Dive into Peter W. McOwan's collaboration.

Top Co-Authors

Paul Curzon, Queen Mary University of London
Alan Johnston, University of Nottingham
Caifeng Shan, Queen Mary University of London
Shaogang Gong, Queen Mary University of London
Ana Paiva, Instituto Superior Técnico
André Pereira, Technical University of Lisbon
Jonathan Black, Queen Mary University of London