Publication


Featured research published by Yachen Zhu.


Multimedia Tools and Applications | 2014

Hybrid video emotional tagging using users' EEG and video content

Shangfei Wang; Yachen Zhu; Guobing Wu; Qiang Ji

In this paper, we propose novel hybrid approaches to annotate videos in the valence and arousal spaces using users' electroencephalogram (EEG) signals and video content. Firstly, several audio and visual features are extracted from video clips, and five frequency features are extracted from each channel of the EEG signals. Secondly, statistical analyses are conducted to explore the relationships among emotional tags, EEG features and video features. Thirdly, three Bayesian networks are constructed to annotate videos by combining the video and EEG features through independent feature-level fusion, decision-level fusion and dependent feature-level fusion. To evaluate the effectiveness of our approaches, we designed and conducted a psychophysiological experiment to collect data, including emotion-inducing video clips, users' EEG responses while watching the selected video clips, and emotional video tags collected through participants' self-reports after watching each clip. The experimental results show that the proposed fusion methods outperform conventional emotional tagging methods that use either video or EEG features alone in both the valence and arousal spaces. Moreover, the EEG features help narrow the semantic gap between the low-level video features and the users' high-level emotional tags.
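To make the pipeline concrete, here is a minimal sketch of EEG band-power extraction plus decision-level fusion, the simplest of the three fusion schemes since each modality keeps its own classifier and only the posteriors are combined. The five band edges, the sampling rate and the logistic-regression classifiers are illustrative assumptions, not the paper's exact setup.

```python
# Sketch only: band edges, sampling rate and classifiers are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # hypothetical band edges

def eeg_band_power(eeg, fs=256):
    """eeg: (n_channels, n_samples) -> (5 * n_channels,) band-power vector."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power per channel
    return np.concatenate(feats)

def decision_level_fusion(Xv_tr, Xe_tr, y_tr, Xv_te, Xe_te, w=0.5):
    """Train one classifier per modality, then average their posteriors."""
    clf_v = LogisticRegression(max_iter=1000).fit(Xv_tr, y_tr)  # video features
    clf_e = LogisticRegression(max_iter=1000).fit(Xe_tr, y_tr)  # EEG features
    p = w * clf_v.predict_proba(Xv_te) + (1 - w) * clf_e.predict_proba(Xe_te)
    return clf_v.classes_[p.argmax(axis=1)]
```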


International Conference on Pattern Recognition | 2014

Multiple-Facial Action Unit Recognition by Shared Feature Learning and Semantic Relation Modeling

Yachen Zhu; Shangfei Wang; Lihua Yue; Qiang Ji

In this paper, we propose a multiple facial action unit recognition method that models the relations among action units in both the features and the target labels. First, a multi-task feature learning method is adopted to divide the action unit recognition tasks into several groups and learn shared features for each group. Second, a Bayesian network is used to model the co-existent and mutually exclusive semantic relations among action units from the target labels of facial images. After that, the learned Bayesian network takes the recognition results of the multi-task learning as evidence and performs multiple facial action unit recognition by probabilistic inference. Experiments on the extended Cohn-Kanade database and the Denver Intensity of Spontaneous Facial Actions database demonstrate the effectiveness of our approach.
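As a rough illustration of the label-relation idea, the sketch below scores pairwise co-existence and mutual exclusion of action units from a binary annotation matrix using the phi coefficient. This is an assumption-level stand-in for the Bayesian network the paper actually learns over the labels, shown only to make "co-existent and mutually exclusive relations" tangible.

```python
# Toy sketch: the phi coefficient is my substitute for the paper's
# Bayesian network over action-unit labels.
import numpy as np

def au_label_correlations(Y):
    """Y: (n_samples, n_aus) binary AU labels -> (n_aus, n_aus) phi matrix.

    Positive phi suggests two AUs tend to co-exist; negative phi suggests
    they tend to be mutually exclusive. Assumes every AU appears at least
    once, so no column has zero variance."""
    Y = np.asarray(Y, dtype=float)
    Yc = Y - Y.mean(axis=0)              # center each AU column
    cov = Yc.T @ Yc / len(Y)             # biased covariance
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)      # Pearson correlation = phi for 0/1 data
```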


Multimedia Tools and Applications | 2015

Implicit video emotion tagging from audiences' facial expression

Shangfei Wang; Zhilei Liu; Yachen Zhu; Menghua He; Xiaoping Chen; Qiang Ji

In this paper, we propose a novel implicit video emotion tagging approach that explores the relationships among videos' common emotions, subjects' individualized emotions and subjects' outer facial expressions. First, head motion and face appearance features are extracted. Then, the spontaneous facial expressions of subjects are recognized by Bayesian networks. After that, the relationships among the outer facial expressions, the inner individualized emotions and the videos' common emotions are captured by another Bayesian network, which can be used to infer the emotional tags of videos. To validate the effectiveness of our approach, an emotion tagging experiment is conducted on the NVIE database. The experimental results show that head motion features improve the performance of both facial expression recognition and emotion tagging, and that the captured relations among the outer facial expressions, the inner individualized emotions and the common emotions improve the performance of common and individualized emotion tagging.
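As a toy illustration of the final inference step, the sketch below estimates a video's common emotion tag from viewers' recognized expressions with Bayes' rule over discrete counts. The variables, the Laplace smoothing and the viewer-independence assumption are all mine; the paper performs this inference inside a richer Bayesian network that also includes individualized emotions.

```python
# Sketch only: discrete Bayes-rule tagging, a simplification of the
# paper's Bayesian network.
import numpy as np

def fit_tag_model(expr, tags, n_expr, n_tags, alpha=1.0):
    """expr, tags: integer arrays of recognized expressions / video tags."""
    prior = np.bincount(tags, minlength=n_tags) + alpha      # P(tag), smoothed
    cond = np.full((n_tags, n_expr), alpha)                  # P(expr | tag)
    np.add.at(cond, (tags, expr), 1.0)                       # accumulate counts
    return prior / prior.sum(), cond / cond.sum(axis=1, keepdims=True)

def infer_tag(prior, cond, observed_exprs):
    """Combine several viewers' expressions, assuming independence given the tag."""
    log_post = np.log(prior) + np.log(cond[:, observed_exprs]).sum(axis=1)
    return int(log_post.argmax())
```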


Frontiers of Computer Science in China | 2015

Learning with privileged information using Bayesian networks

Shangfei Wang; Menghua He; Yachen Zhu; Shan He; Yue Liu; Qiang Ji

For many supervised learning applications, additional information, besides the labels, is often available during training but not during testing. Such additional information, referred to as privileged information, can be exploited during training to construct a better classifier. In this paper, we propose a Bayesian network (BN) approach for learning with privileged information. We propose to incorporate the privileged information through a three-node BN. We further mathematically evaluate different topologies of the three-node BN and identify those structures through which the privileged information can benefit the classification. Experimental results on handwritten digit recognition, spontaneous versus posed expression recognition, and gender recognition demonstrate the effectiveness of our approach.
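A discrete toy sketch of the idea follows, assuming the chain topology label Y → privileged feature Z → regular feature X, which is one possible arrangement of a three-node BN and not necessarily one the paper endorses: Z is counted during training and marginalized out at test time, so the test-time classifier needs only X.

```python
# Sketch only: one illustrative three-node topology, Y -> Z -> X.
import numpy as np

def fit_lupi_bn(y, z, x, ny, nz, nx, alpha=1.0):
    """y: labels, z: privileged feature (training only), x: regular feature."""
    p_y = np.bincount(y, minlength=ny) + alpha
    p_z_y = np.full((ny, nz), alpha)            # P(Z | Y)
    p_x_z = np.full((nz, nx), alpha)            # P(X | Z)
    np.add.at(p_z_y, (y, z), 1.0)
    np.add.at(p_x_z, (z, x), 1.0)
    return (p_y / p_y.sum(),
            p_z_y / p_z_y.sum(axis=1, keepdims=True),
            p_x_z / p_x_z.sum(axis=1, keepdims=True))

def predict(p_y, p_z_y, p_x_z, x_obs):
    """At test time Z is unobserved, so marginalize it out:
    P(y | x) proportional to P(y) * sum_z P(z | y) P(x | z)."""
    scores = p_y * (p_z_y @ p_x_z[:, x_obs])
    return int(scores.argmax())
```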


IAS (2) | 2013

Analysis of Affective Effects on Steady-State Visual Evoked Potential Responses

Shangfei Wang; Guobing Wu; Yachen Zhu

This paper aims to investigate the effect of different emotional states on healthy subjects' steady-state visual evoked potential responses. First, affective steady-state visual evoked response experiments are designed and implemented. Emotion-eliciting pictures selected from the International Affective Picture System are flickered at the four sides of the screen at frequencies of 10 Hz, 11 Hz, 12 Hz and 15 Hz, respectively. Subjects' EEG signals are simultaneously recorded by a Quik-cap. After that, spectral density analysis and canonical correlation analysis are conducted across trials to extract features. Then a one-way analysis of variance is performed to evaluate the effect of different emotional states on subjects' steady-state visual evoked potential responses. The results show significant differences among steady-state visual evoked potential responses under different emotional states. Both positive and negative emotions enhance subjects' steady-state visual evoked potential responses. It is therefore easier to detect subjects' responses under positive and negative emotional states than under the neutral state.
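For reference, the canonical correlation analysis step commonly used for SSVEP detection can be sketched as below: correlate the multichannel EEG with sine/cosine reference signals at each flicker frequency and pick the frequency with the largest canonical correlation. The sampling rate and harmonic count are assumptions, and the paper's exact feature extraction may differ.

```python
# Sketch of standard CCA-based SSVEP detection; fs and n_harmonics are
# assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_score(eeg, freq, fs=250, n_harmonics=2):
    """eeg: (n_samples, n_channels). Returns the top canonical correlation."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * h * freq * t)     # sin/cos references
                           for h in range(1, n_harmonics + 1)
                           for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def detect_frequency(eeg, freqs=(10, 11, 12, 15)):
    """Pick the stimulus frequency whose references best match the EEG."""
    return max(freqs, key=lambda f: ssvep_cca_score(eeg, f))
```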


International Conference on Pattern Recognition | 2016

Employing subjects' information as privileged information for emotion recognition from EEG signals

Shan Wu; Shangfei Wang; Yachen Zhu; Zhen Gao; Lihua Yue; Qiang Ji

Current research on emotion recognition from electroencephalogram (EEG) signals rarely considers both the common patterns shared across multiple subjects and the individual patterns of each subject. Therefore, in this paper, we propose a novel emotion recognition approach that uses subjects or subject groups as privileged information, which is only available during training. First, five frequency features are extracted from each channel of the EEG signals, and features are selected by statistical tests. Then, we propose two three-node Bayesian networks to capture the joint probability distribution of emotion labels, EEG features, and subjects or subject groups during training. Through the learned joint probability distribution, the Bayesian networks model both common and individual emotion patterns simultaneously. During testing, emotion labels can be estimated from EEG features alone by marginalizing over the privileged information, i.e. the subjects or subject groups. Experimental results on three benchmark databases, i.e. the MAHNOB-HCI database, the DEAP database and the USTC-ERVS database, demonstrate that our approach, incorporating subjects or subject clusters, achieves better emotion recognition performance than training a classifier for each subject, as well as training a classifier on the whole dataset without subject information.
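A rough sketch of the test-time marginalization follows, under my own assumption of per-(emotion, subject) Gaussian feature models rather than the paper's Bayesian networks: the subject is privileged, so it is unknown at test time and summed out when predicting.

```python
# Sketch only: Gaussian mixture over subjects is an assumption standing
# in for the paper's three-node Bayesian networks.
import numpy as np
from scipy.stats import multivariate_normal

def fit(X, y, s, n_emotions, n_subjects):
    """X: (n, d) EEG features; y: emotion labels; s: subject ids."""
    prior = np.ones((n_emotions, n_subjects))          # smoothed P(emotion, subject)
    models = {}
    for e in range(n_emotions):
        for k in range(n_subjects):
            idx = (y == e) & (s == k)
            prior[e, k] += idx.sum()
            if idx.sum() > 1:                          # need >= 2 samples for a covariance
                models[e, k] = multivariate_normal(
                    X[idx].mean(0),
                    np.cov(X[idx].T) + 1e-3 * np.eye(X.shape[1]))
    return prior / prior.sum(), models

def predict(prior, models, x):
    """Subject is privileged: unobserved at test time, so sum it out."""
    scores = np.array([sum(prior[e, k] * models[e, k].pdf(x)
                           for k in range(prior.shape[1]) if (e, k) in models)
                       for e in range(prior.shape[0])])
    return int(scores.argmax())
```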


IEEE Transactions on Autonomous Mental Development | 2015

Emotion Recognition with the Help of Privileged Information

Shangfei Wang; Yachen Zhu; Lihua Yue; Qiang Ji


International Conference on Multimedia and Expo | 2014

Emotion recognition from users' EEG signals with the help of stimulus videos

Yachen Zhu; Shangfei Wang; Qiang Ji


Affective Computing and Intelligent Interaction | 2013

Emotional Influence on SSVEP Based BCI

Yachen Zhu; Xilan Tian; Guobing Wu; Gilles Gasso; Shangfei Wang; Stéphane Canu


International Conference on Multimedia Retrieval | 2015

Expression Recognition from Visible Images with the Help of Thermal Images

Xiaoxiao Shi; Shangfei Wang; Yachen Zhu

Collaboration


Dive into Yachen Zhu's collaborations.

Top Co-Authors

Shangfei Wang, University of Science and Technology of China
Qiang Ji, Rensselaer Polytechnic Institute
Guobing Wu, University of Science and Technology of China
Lihua Yue, University of Science and Technology of China
Menghua He, University of Science and Technology of China
Shan He, University of Science and Technology of China
Shan Wu, University of Science and Technology of China
Xiaoping Chen, University of Science and Technology of China
Xiaoxiao Shi, University of Science and Technology of China
Yue Liu, University of Science and Technology of China