Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Wen-Jing Yan is active.

Publication


Featured research published by Wen-Jing Yan.


Neural Processing Letters | 2014

Face Recognition and Micro-expression Recognition Based on Discriminant Tensor Subspace Analysis Plus Extreme Learning Machine

Su-Jing Wang; Huiling Chen; Wen-Jing Yan; Yu-Hsin Chen; Xiaolan Fu

In this paper, a novel recognition algorithm based on discriminant tensor subspace analysis (DTSA) and the extreme learning machine (ELM) is introduced. DTSA treats a gray-scale facial image as a second-order tensor and adopts two-sided transformations to reduce dimensionality. One of the many advantages of DTSA is its ability to preserve the spatial structure information of the images. In order to deal with micro-expression video clips, we extend DTSA to high-order tensors. Discriminative features are generated using DTSA to further enhance the classification performance of the ELM classifier. Another notable contribution of the proposed method is a significant improvement in face and micro-expression recognition accuracy. The experimental results on the ORL, Yale, and YaleB face databases and the CASME micro-expression database show the effectiveness of the proposed method.
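The DTSA features feed an extreme learning machine, a single-hidden-layer network whose input weights are random and whose output weights are solved in closed form. Below is a minimal NumPy sketch of such a classifier; the hidden-layer size, the sigmoid activation, and plain pseudoinverse training are illustrative assumptions, not details taken from the paper.

```python
# Minimal ELM classifier sketch (assumptions: sigmoid hidden layer,
# 500 hidden units, pseudoinverse solution for the output weights).
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random input weights and biases stay fixed; only the output
        # weights are learned, by least squares against one-hot targets.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # hidden activations
        T = np.eye(int(y.max()) + 1)[y]                     # y holds labels 0..K-1
        self.beta = np.linalg.pinv(H) @ T                   # output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return np.argmax(H @ self.beta, axis=1)
```

Here X would hold the (flattened) DTSA feature vectors, one row per face or clip.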


European Conference on Computer Vision | 2014

Micro-Expression Recognition Using Robust Principal Component Analysis and Local Spatiotemporal Directional Features

Su-Jing Wang; Wen-Jing Yan; Guoying Zhao; Xiaolan Fu; Chun-Guang Zhou

One of the important cues for deception detection is the micro-expression. It has three characteristics: short duration, low intensity, and usually local movements. These characteristics imply that micro-expressions are sparse. In this paper, we use the sparse part of Robust PCA (RPCA) to extract the subtle motion information of micro-expressions. The local texture features of this information are extracted by Local Spatiotemporal Directional Features (LSTD). In order to extract more effective local features, 16 Regions of Interest (ROIs) are assigned based on the Facial Action Coding System (FACS). The experimental results on two micro-expression databases show that the proposed method achieves better performance. Moreover, the proposed method may further be used to extract other subtle motion information (such as lip-reading, the human pulse, and micro-gestures) from video.
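Robust PCA splits a data matrix M into a low-rank part L (the mostly static face) and a sparse part S (the subtle motion that is then described by LSTD). A minimal inexact-ALM solver, with each vectorised frame of a clip as one column of M, might look like the sketch below; the constants (lambda, rho, tolerance) are common defaults, not values reported in the paper.

```python
# Minimal Robust PCA via inexact augmented Lagrange multipliers (IALM).
# Assumption: M has one vectorised grayscale frame per column; S then
# carries the sparse, subtle motion component.
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_fro = np.linalg.norm(M, "fro")
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual variable
    mu, rho = 1.25 / norm_two, 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Singular-value thresholding for the low-rank component.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Soft thresholding for the sparse component.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, "fro") / norm_fro < tol:
            break
    return L, S
```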


Neurocomputing | 2014

For micro-expression recognition: Database and suggestions

Wen-Jing Yan; Su-Jing Wang; Yong-Jin Liu; Qi Wu; Xiaolan Fu

Micro-expression is gaining more attention in both the scientific field and the mass media. It represents genuine emotions that people try to conceal, thus making it a promising cue for lie detection. Since micro-expressions are considered almost imperceptible to the naked eye, researchers have sought to automatically detect and recognize these fleeting facial expressions to help people make use of such deception cues. However, the lack of well-established micro-expression databases might be the biggest obstacle. Although several databases have been developed, problems may exist either in the way micro-expressions were elicited or in the labeling. We built a spontaneous micro-expression database with rigorous frame spotting, AU coding, and micro-expression labeling. This paper introduces how the micro-expressions were elicited in a laboratory setting and how the database was built under the guidance of psychology. In addition, this paper discusses issues that may help researchers use micro-expression databases effectively and improve micro-expression recognition.


PLOS ONE | 2013

Amygdala Volume Predicts Inter-Individual Differences in Fearful Face Recognition

Ke Zhao; Wen-Jing Yan; Yu-Hsin Chen; Xi-Nian Zuo; Xiaolan Fu

The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited, and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical region, nor did such a relationship exist between left amygdala volume and performance in recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli.
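The core analysis is an ordinary linear regression of recognition performance on amygdala volume. A toy illustration with synthetic numbers (not the study's data; the units and effect size are made up and only mirror the reported negative direction) is shown below.

```python
# Toy regression in the spirit of the reported analysis: does a
# (hypothetical) left amygdala volume predict fear-recognition accuracy?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
volume = rng.normal(1.6, 0.15, size=30)                        # hypothetical volumes
accuracy = 0.9 - 0.2 * volume + rng.normal(0, 0.03, size=30)   # hypothetical accuracies

result = stats.linregress(volume, accuracy)
print(f"slope={result.slope:.3f}  r={result.rvalue:.3f}  p={result.pvalue:.4g}")
```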


European Conference on Computer Vision | 2014

Quantifying Micro-expressions with Constraint Local Model and Local Binary Pattern

Wen-Jing Yan; Su-Jing Wang; Yu-Hsin Chen; Guoying Zhao; Xiaolan Fu

Micro-expressions may reveal genuine emotions that people try to conceal; however, they are difficult to measure. We selected two feature extraction methods to analyze micro-expressions by assessing their dynamic information. The Constraint Local Model (CLM) algorithm is employed to detect faces and track feature points. Based on these points, Regions of Interest (ROIs) on the face are drawn for further analysis. In addition, the Local Binary Pattern (LBP) algorithm is employed to extract texture information from the ROIs and measure the differences between frames. The results from the proposed methods are compared with manual coding. Both proposed methods show good performance in terms of sensitivity and reliability. This is a pilot study on quantifying micro-expression movement for psychological research purposes. These methods would assist behavioral researchers in measuring facial movements across various facets and at a deeper level.
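The LBP step reduces each ROI to a texture histogram, and frame-to-frame change is then a distance between histograms. A minimal sketch is given below; the LBP parameters (P=8, R=1, uniform patterns) and the chi-square distance are common defaults assumed for illustration, not the paper's exact settings.

```python
# Sketch: frame-to-frame texture change in one facial ROI via uniform LBP
# histograms (assumed parameters P=8, R=1) and a chi-square distance.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(roi, P=8, R=1.0):
    """Normalized uniform-LBP histogram of a grayscale ROI (2-D array)."""
    codes = local_binary_pattern(roi, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def frame_difference(roi_a, roi_b):
    """Chi-square distance between LBP histograms of the same ROI in two frames."""
    ha, hb = lbp_hist(roi_a), lbp_hist(roi_b)
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-10))
```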


PLOS ONE | 2013

To Bind or Not to Bind? Different Temporal Binding Effects from Voluntary Pressing and Releasing Actions

Ke Zhao; Yu-Hsin Chen; Wen-Jing Yan; Xiaolan Fu

The binding effect refers to the perceptual attraction between an action and its outcome, leading to a subjective compression of time. Most studies investigating binding effects exclusively employ a “pressing” action without exploring other types of action. The present study addresses this issue by introducing another action, a releasing action (the voluntary lifting of the finger/wrist), to investigate the differences between voluntary pressing and releasing actions. Results reveal that releasing actions led to robust yet short-lived temporal binding effects, whereas the pressing condition had steady temporal binding effects up to supra-second intervals. The two actions also differ in sensitivity to changes in temporal contiguity and contingency, which could be attributed to a difference in awareness of action. Extending current models of “willed action,” our results provide insights from a temporal point of view and support the concept of a dual system consisting of predictive motor control and top-down mechanisms.


Journal of Health Psychology | 2017

Perfectionism and adolescent sleep quality: The mediating role of repetitive negative thinking

Rong-Mao Lin; Shan-Shan Xie; You-Wei Yan; Yu-Hsin Chen; Wen-Jing Yan

This study explores the mediating effects of repetitive negative thinking in the relationship between perfectionism and adolescent sleep quality. A sample of 1664 Chinese adolescents with a mean age of 15.0 years was recruited, and they completed four measures relating to perfectionism, sleep quality, worry, and rumination. The results showed that maladaptive perfectionism was positively correlated with poor sleep quality in adolescents, which was mediated by both worry and rumination. However, adaptive perfectionism was not significantly associated with adolescent sleep quality, and this relationship was suppressed by rumination (but not worry). The implications of these results are also discussed.
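The mediation claim boils down to two regressions: perfectionism predicting repetitive negative thinking (path a), and repetitive negative thinking predicting sleep quality while controlling for perfectionism (path b), with the indirect effect a*b. A toy sketch with synthetic data and assumed variable names follows; it is not the study's model or data.

```python
# Toy simple-mediation sketch: perfectionism -> rumination -> sleep problems.
# All data are synthetic; variable names and effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 1664
perfectionism = rng.normal(size=n)
rumination = 0.5 * perfectionism + rng.normal(size=n)                          # path a
sleep_problems = 0.4 * rumination + 0.1 * perfectionism + rng.normal(size=n)   # b and c'

def ols(y, *xs):
    """Coefficients of y regressed on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(rumination, perfectionism)[0]
b, c_prime = ols(sleep_problems, rumination, perfectionism)
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```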


Fuzzy Systems and Knowledge Discovery | 2015

Recognizing fleeting facial expressions with different viewpoints

Xunbing Shen; Wen-Jing Yan; Xiaolan Fu

Most research on facial expression recognition has used static, front-view, long-lasting expression stimuli. A paucity of research exists concerning the recognition of fleeting expressions from different viewpoints. To investigate how duration and viewpoint together influence expression recognition, we employed expressions with two different viewpoints (three-quarter and profile views) and presented them to participants transiently. The duration of each expression was one of the following: 20, 40, 80, 120, 160, 200, 240, or 280 ms. In experiment 1, we used static facial expressions; in experiment 2, we added dynamic information by placing two neutral expressions before and after the emotional expression. The results showed an interaction effect between viewpoint and duration on expression recognition. Furthermore, we found that happiness is the easiest expression to recognize, even under conditions of fleeting presentation and side views. This study provides human data that can inform automatic expression recognition under conditions of short duration and varied viewpoints.


Cognition & Emotion | 2018

Effects of task-irrelevant emotional information on deception

Jing Liang; Yu-Hsin Chen; Wen-Jing Yan; Fangbing Qu; Xiaolan Fu

Deception has been reported to be influenced by task-relevant emotional information from an external stimulus. However, it remains unclear how task-irrelevant emotional information influences deception. In the present study, facial expressions of different valence and emotional intensity were presented to participants, who were asked to make either truthful or deceptive gender judgments according to preceding cues. We observed an influence of facial expression intensity on individuals’ cognitive cost of deceiving (the mean difference between individuals’ truthful and deceptive response times): a larger cost was observed for high-intensity faces than for low-intensity faces. These results provide insights into how the automatic attraction of attention evoked by task-irrelevant emotional information in facial expressions influences individuals’ cognitive cost of deceiving.


PLOS ONE | 2014

CASME II: An Improved Spontaneous Micro-Expression Database and the Baseline Evaluation

Wen-Jing Yan; Xiaobai Li; Su-Jing Wang; Guoying Zhao; Yong-Jin Liu; Yu-Hsin Chen; Xiaolan Fu

Collaboration


An overview of Wen-Jing Yan's collaborations.

Top Co-Authors

Xiaolan Fu (Chinese Academy of Sciences)
Yu-Hsin Chen (Chinese Academy of Sciences)
Su-Jing Wang (Chinese Academy of Sciences)
Ke Zhao (Chinese Academy of Sciences)
Qi Wu (Hunan Normal University)
Fangbing Qu (Chinese Academy of Sciences)
Jing Liang (Chinese Academy of Sciences)