Publication


Featured research published by Shichuan Du.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Compound facial expressions of emotion

Shichuan Du; Yong Tao; Aleix M. Martinez

Significance: Though people regularly recognize many distinct emotions, for the most part research studies have been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. The reason is grounded in the assumption that only these six categories are differentially represented by our cognitive and social systems. The results reported herein suggest otherwise: a larger number of categories is used by humans. Understanding the different categories of facial expressions of emotion that humans regularly use is essential for gaining insight into human cognition and affect, as well as for the design of computational models and perceptual interfaces.

Abstract: Past research on facial expressions of emotion has focused on the study of six basic categories: happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of such expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones; for instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows that the production of these 21 categories is different from, but consistent with, the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.
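The paper's construction of compound categories from basic ones lends itself to a small illustration. The sketch below approximates a compound category's action units (AUs) as the union of its components' AUs; the AU prototypes are common FACS-based approximations, not the production data reported in the paper, and the Jaccard overlap is just one convenient way to show that two compounds sharing a component can remain distinguishable.

```python
# Minimal sketch (not the paper's actual coding): approximate a compound
# emotion category's action units (AUs) as the union of its basic
# components' AUs. The AU prototypes below are common FACS-based
# approximations, not the data reported by Du, Tao & Martinez (2014).

BASIC_AUS = {
    "happiness": {6, 12},
    "surprise": {1, 2, 5, 26},
    "sadness": {1, 4, 15},
    "anger": {4, 5, 7, 23},
    "fear": {1, 2, 4, 5, 7, 20, 26},
    "disgust": {9, 15, 16},
}

def compound_aus(*components: str) -> set[int]:
    """Naive union of the component categories' AUs."""
    aus: set[int] = set()
    for c in components:
        aus |= BASIC_AUS[c]
    return aus

def jaccard(a: set[int], b: set[int]) -> float:
    """Overlap between two AU sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    happy_surp = compound_aus("happiness", "surprise")
    angry_surp = compound_aus("anger", "surprise")
    # Compounds built on a shared component still differ in their AU sets.
    print("happily surprised AUs:", sorted(happy_surp))
    print("angrily surprised AUs:", sorted(angry_surp))
    print("overlap:", round(jaccard(happy_surp, angry_surp), 3))
```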


Journal of Machine Learning Research | 2012

A model of the perception of facial expressions of emotion by humans: research overview and perspectives

Aleix M. Martinez; Shichuan Du

In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprised face are perceived as either happy or surprised, but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good at explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task in the classification of facial expressions of emotion is the precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expressions of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can aid in studies of human perception, social interactions, and disorders.
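As a rough illustration of the "C distinct continuous spaces" idea, the sketch below models each category's space as a Gaussian over configural features. The feature choice, the Gaussian scoring, and the equal-weight combination rule for compounds are all assumptions of this sketch, not the authors' exact formulation.

```python
# Minimal sketch, assuming each of the C face spaces is summarized by a
# Gaussian over configural features (e.g., normalized landmark distances).
# The scoring and the equal-weight compound rule are illustrative choices.

import numpy as np

class FaceSpace:
    """One continuous space, tuned to a single emotion category."""

    def __init__(self, samples: np.ndarray):
        # samples: (n_faces, n_features) configural feature vectors
        self.mean = samples.mean(axis=0)
        self.inv_cov = np.linalg.pinv(np.cov(samples, rowvar=False))

    def response(self, x: np.ndarray) -> float:
        d = x - self.mean
        # Closer to the category prototype => larger response.
        return float(np.exp(-0.5 * d @ self.inv_cov @ d))

def compound_response(x, spaces, components, weights=None):
    """Linear combination of component-space responses, as the model
    proposes for compound categories such as 'happily surprised'."""
    weights = weights or [1.0 / len(components)] * len(components)
    return sum(w * spaces[c].response(x) for w, c in zip(weights, components))
```

In this reading, a basic category wins when a single space responds strongly, while a compound is recognized when a weighted sum of its component spaces' responses dominates; the heavy lifting happens in the landmark detection that produces the feature vector x.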


Journal of Vision | 2011

The resolution of facial expressions of emotion

Shichuan Du; Aleix M. Martinez

Much is known about how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet little is known about how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., the number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused with a small set of alternatives and that the confusion is not symmetric; i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.
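Two of the analyses above, the resolution sweep and the confusion-asymmetry check, are easy to sketch. The nearest-neighbor subsampling and the mean off-diagonal difference used below are illustrative stand-ins, not the paper's exact procedures.

```python
# Sketch of two analyses: (1) reducing a stimulus to a target resolution
# such as 20 x 30 pixels, and (2) quantifying confusion asymmetry, i.e.,
# P(a misread as b) != P(b misread as a). Both implementation choices are
# illustrative (nearest-neighbor subsampling, mean off-diagonal |C - C^T|).

import numpy as np

def downsample(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor reduction of a grayscale image."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[np.ix_(rows, cols)]

def confusion_asymmetry(conf: np.ndarray) -> float:
    """Mean |conf - conf.T| over off-diagonal cells; 0 = symmetric."""
    off_diag = ~np.eye(conf.shape[0], dtype=bool)
    return float(np.abs(conf - conf.T)[off_diag].mean())
```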


Journal of Vision | 2013

Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

Shichuan Du; Aleix M. Martinez

Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10-20 ms), even at low resolutions. Fear and anger are recognized the slowest (100-250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70-200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models.
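The reported link between minimum exposure time and low-resolution recognizability is a simple correlation, sketched below. The exposure times are midpoints of the ranges quoted in the abstract; the low-resolution accuracies are made-up placeholders, so the numbers illustrate only the computation, not the paper's result.

```python
# Sketch of the correlation analysis described above. Exposure times are
# midpoints of the ranges quoted in the abstract; the low-resolution
# accuracies are illustrative placeholders, not the paper's data.

import numpy as np

emotions = ["happiness", "surprise", "sadness", "disgust", "anger", "fear"]
min_exposure_ms = np.array([15, 15, 135, 135, 175, 175])           # abstract ranges
low_res_accuracy = np.array([0.95, 0.90, 0.70, 0.65, 0.55, 0.50])  # placeholder

r = np.corrcoef(min_exposure_ms, low_res_accuracy)[0, 1]
print(f"Pearson r between exposure time and low-res accuracy: {r:.2f}")
# A strongly negative r would mirror the reported pattern: expressions that
# need longer exposures are also those recognized worst at low resolution.
```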


Dialogues in Clinical Neuroscience | 2015

Compound facial expressions of emotion: from basic research to clinical applications

Shichuan Du; Aleix M. Martinez


Journal of Vision | 2010

How fast can we recognize facial expressions of emotion

Aleix M. Martinez; Shichuan Du


Gesture Recognition | 2017

A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives

Aleix M. Martinez; Shichuan Du


Journal of Vision | 2014

Recognition of complex and realistic facial expressions of emotion

Shichuan Du; Pamela Pallett; Aleix M. Martinez


Journal of Vision | 2014

Brain Networks for the Categorization of Facial Expressions of Emotion

Aleix M. Martinez; Shichuan Du; Dirk Walther


Journal of Vision | 2013

Mostly Categorical but also Continuous Representation of Emotions in the Brain: An fMRI Study

Shichuan Du; Dirk Walther; Aleix M. Martinez

Collaboration


Dive into Shichuan Du's collaborations.

Top Co-Authors

Yong Tao

Ohio State University
