Publication


Featured research published by Xunbing Shen.


Affective Computing and Intelligent Interaction | 2011

The machine knows what you are hiding: an automatic micro-expression recognition system

Qi Wu; Xunbing Shen; Xiaolan Fu

Micro-expressions are one of the most important behavioral clues for detecting lies and dangerous demeanor. However, it is difficult for humans to detect micro-expressions. In this paper, a new approach for automatic micro-expression recognition is presented. The system is fully automatic and operates frame by frame. It automatically locates the face and extracts features using Gabor filters. GentleSVM is then employed to identify micro-expressions. For spotting, the system obtained 95.83% accuracy. For recognition, the system showed 85.42% accuracy, which was higher than the performance of trained human subjects. To further improve performance, future research should focus on a more representative training set, a more sophisticated testing bed, and an accurate image alignment method.
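
The pipeline described above (frame-by-frame face localization, Gabor feature extraction, classification) can be illustrated with a short sketch. The snippet below is not the authors' implementation: it substitutes a plain RBF SVM from scikit-learn for GentleSVM (which additionally performs GentleBoost-based feature selection), and the Haar cascade detector, filter-bank parameters, and 48x48 face size are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): frame-by-frame face localization,
# Gabor feature extraction, and an SVM standing in for GentleSVM.
import cv2
import numpy as np
from sklearn.svm import SVC

FACE_SIZE = (48, 48)  # assumed working resolution
# Haar cascade shipped with the opencv-python package (assumed detector choice)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def gabor_bank(n_orient=8, scales=(4, 8, 16)):
    """Build a small bank of Gabor kernels (orientations x wavelengths)."""
    kernels = []
    for lam in scales:
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            kernels.append(cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                              lambd=lam, gamma=0.5, psi=0))
    return kernels

KERNELS = gabor_bank()

def frame_features(frame):
    """Locate the largest face in one frame and return its Gabor feature vector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE).astype(np.float32)
    # Mean magnitude of each filter response as a compact descriptor.
    return np.array([np.abs(cv2.filter2D(face, cv2.CV_32F, k)).mean()
                     for k in KERNELS])

def train_classifier(X, y):
    """Stand-in for GentleSVM: a plain RBF SVM on the Gabor features."""
    return SVC(kernel="rbf").fit(X, y)
```

In use, each video frame would be passed through frame_features, and the trained classifier's per-frame predictions would serve both spotting (expression vs. neutral) and recognition (expression category).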


Journal of Zhejiang University-Science B | 2012

Effects of the duration of expressions on the recognition of microexpressions

Xunbing Shen; Qi Wu; Xiaolan Fu

Objective: The purpose of this study was to investigate the effects of the duration of expressions on the recognition of microexpressions, which are closely related to deception. Methods: In two experiments, participants were briefly (from 20 to 300 ms) shown one of six basic expressions and then were asked to identify the expression. Results: The results showed that the participants' performance in recognizing microexpressions increased with the duration of the expressions, reaching a turning point at 200 ms before levelling off. The results also indicated that practice could improve the participants' performance. Conclusions: The results of this study suggest that the proper upper limit of the duration of microexpressions might be around 1/5 of a second and confirm that the ability to recognize microexpressions can be enhanced with practice.


Experimental Aging Research | 2016

Exploring the Cognitive Processes Causing the Age-Related Categorization Deficit in the Recognition of Facial Expressions

Min-Fang Zhao; Hubert D. Zimmer; Xunbing Shen; Wenfeng Chen; Xiaolan Fu

Background/Study Context: Elderly people do not categorize emotional facial expressions as accurately as younger people do, particularly for negative emotions. Although age-related impairments in decoding emotions from facial expressions are well documented, the causes of this deficit are poorly understood. This study examined the potential mechanisms that account for this age-related categorization deficit by assessing its dependence on presentation time. Methods: Thirty young (19–27 years old) and 31 older (68–78 years old) Chinese adults were asked to categorize the six basic emotions in facial expressions, each presented for 120, 200, 600, or 1000 ms, before and after exposure to a neutral facial expression. Results: Shortened presentation times caused an age-related deficit in the recognition of happy faces, whereas no deficit was observed at longer exposure times. An age-related deficit was observed for all negative emotions but was not exacerbated by shorter presentation times. Conclusion: Age-related deficits in the categorization of positive and negative emotions are caused by different mechanisms. Because negative emotions are perceptually similar, they impose high categorization demands. Elderly people may need more evidence in favor of the target emotion than younger people, and they make mistakes if this surplus of evidence is missing. In contrast, perceptually distinct happy faces were easily identified, and elderly people failed only when the presentation time was too short for their slower perceptual processing.


Frontiers in Psychology | 2016

Electrophysiological Evidence Reveals Differences between the Recognition of Microexpressions and Macroexpressions

Xunbing Shen; Qi Wu; Ke Zhao; Xiaolan Fu

Microexpressions are fleeting facial expressions that are important for judging people's true emotions. Little is known about the neural mechanisms underlying the recognition of microexpressions (with durations of less than 200 ms) and macroexpressions (with durations of greater than 200 ms). We used an affective priming paradigm, in which a picture of a facial expression is the prime and an emotional word is the target, together with electroencephalography (EEG) and event-related potentials (ERPs), to examine the neural activities associated with recognizing microexpressions and macroexpressions. The results showed significant main effects of duration and valence for the N170/vertex positive potential. The main effect of congruence for the N400 was also significant. Further, sLORETA showed that the brain regions responsible for these significant differences included the inferior temporal gyrus and widespread regions of the frontal lobe. Furthermore, the results suggested that the left hemisphere was more involved than the right hemisphere in processing microexpressions. The main effect of duration for the event-related spectral perturbation (ERSP) was significant, and theta oscillations (4 to 8 Hz) increased when recognizing expressions with a duration of 40 ms compared with 300 ms. Thus, the EEG/ERP neural mechanisms for recognizing microexpressions differ from those for recognizing macroexpressions.
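
As a rough illustration of the time-frequency part of such an analysis, the sketch below computes baseline-corrected theta-band (4-8 Hz) ERSP from epoched EEG using MNE-Python Morlet wavelets. It is an assumed pipeline, not the authors' (who report EEGLAB-style ERSP and sLORETA source localization); the epoch object, condition names, baseline window, and wavelet settings are all illustrative.

```python
# Minimal sketch (assumed pipeline, not the authors' code): theta-band (4-8 Hz)
# event-related spectral perturbation for micro- vs. macro-expression primes.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

freqs = np.arange(4.0, 30.0, 1.0)   # frequencies of interest (assumed range)
n_cycles = freqs / 2.0              # wavelet cycles per frequency

def theta_ersp(epochs, tmin=0.0, tmax=0.5):
    """Average baseline-corrected theta (4-8 Hz) power in a post-prime window."""
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                       return_itc=False, decim=2)
    power.apply_baseline(baseline=(-0.3, 0.0), mode="logratio")
    f_mask = (power.freqs >= 4.0) & (power.freqs <= 8.0)
    t_mask = (power.times >= tmin) & (power.times <= tmax)
    return power.data[:, f_mask][:, :, t_mask].mean()

# Hypothetical condition names for the two prime durations:
# theta_40 = theta_ersp(epochs["micro_40ms"])
# theta_300 = theta_ersp(epochs["macro_300ms"])
```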


International Conference on Natural Computation | 2010

Do different emotional valences have the same effects on spatial attention?

Xunbing Shen; Xiaolan Fu; Yuming Xuan

Emotional stimuli have priority in processing relative to neutral stimuli. However, it is still unclear whether different emotions have similar or distinct influences on attention. We conducted three experiments to answer this question, using three emotional valences: positive, negative, and neutral. Pictures of money, a snake, a lamp, and the letter x were used as stimuli in Experiment 1. In Experiment 2A, schematic emotional faces (angry, smiling, and neutral) were used as experimental stimuli to control stimulus complexity. In Experiment 2B, the stimuli were three line drawings selected from the Chinese Version of the Abbreviated PAD Emotion Scales, corresponding to anger, joy, and neutral emotion, respectively. We employed the inhibition of return (IOR) paradigm (an effect on spatial attention whereby people are slow to react to stimuli that appear at recently attended locations; cf. Posner & Cohen, 1984), which used exogenous cues and included 20% catch trials. Seventy-four university students participated in the experiments. We found that participants needed more time to process negative emotional pictures (Experiments 1, 2A, and 2B), and that the IOR effect could occur at an ISI (interstimulus interval) as short as 50 ms (Experiment 1). Meanwhile, the data demonstrated that IOR occurred at the 50 ms ISI only when the schematic face was angry, and RTs for angry schematic faces were significantly longer than RTs for the other two faces (Experiment 2A). We further found that expectancy might play a role in explaining these results (Experiment 3). In all three experiments, we consistently found a U-shaped relationship between RT and ISI, irrespective of cue validity and emotional valence. These results show that different emotional valences have distinct influences on attention; specifically, positive and neutral stimuli are processed more rapidly than negative stimuli.
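
The IOR measure itself is simply a reaction-time difference between cued and uncued target locations. The sketch below shows one way such an effect could be tabulated per ISI and valence with pandas; it is illustrative only, and the trial-table column names ('valence', 'isi_ms', 'cued', 'rt_ms', 'catch') are hypothetical.

```python
# Minimal sketch (illustrative, not the authors' analysis script): computing the
# cueing effect (IOR when positive) as RT(cued) - RT(uncued) per ISI and valence.
import pandas as pd

def ior_effects(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean RT difference between cued and uncued locations for each ISI x valence cell."""
    data = trials[~trials["catch"]]                      # drop catch trials (no target shown)
    mean_rt = (data.groupby(["valence", "isi_ms", "cued"])["rt_ms"]
                   .mean()
                   .unstack("cued"))                     # columns: False (uncued), True (cued)
    mean_rt["ior_ms"] = mean_rt[True] - mean_rt[False]   # positive value = inhibition of return
    return mean_rt

# ior_effects(trials_df) would show, for example, whether IOR already emerges at a
# 50 ms ISI and whether it differs for angry versus neutral or happy stimuli.
```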


Fuzzy Systems and Knowledge Discovery | 2015

Recognizing fleeting facial expressions with different viewpoints

Xunbing Shen; Wen-Jing Yan; Xiaolan Fu

Most research on facial expression recognition has used static, front-view, and long-lasting expression stimuli. Little research exists on the recognition of fleeting expressions from different viewpoints. To investigate how duration and viewpoint together influence expression recognition, we presented expressions from two different viewpoints (three-quarter and profile views) to participants transiently. The duration of the expressions was 20, 40, 80, 120, 160, 200, 240, or 280 ms. In Experiment 1, we used static facial expressions; in Experiment 2, we added dynamic information by placing two neutral expressions before and after the emotional expression. The results showed an interaction effect between viewpoint and duration on expression recognition. Furthermore, we found that happiness was the easiest expression to recognize, even under fleeting presentation and side-view conditions. This study provides human performance data that can inform automatic expression recognition under conditions of short duration and varied viewpoints.


International Conference on Man-Machine-Environment System Engineering | 2017

The Effects of the Micro-Expression Training on Empathy in Patients with Schizophrenia

Xueling Zhang; Lei Chen; Zhibing Zhong; Huajie Sui; Xunbing Shen

Objective: Studies have shown that patients with schizophrenia have impaired empathy, which is important during social interaction. Improving the empathy of schizophrenia patients is therefore of practical significance. This study used a micro-expression training tool to improve empathy in patients with schizophrenia. Methods: The experimental group consisted of 24 patients (10 females), who completed micro-expression training four times, once a week, with each session lasting 1–1.5 h; the control group consisted of 22 patients (10 females), who merely watched teaching videos about facial expressions four times, with each session lasting half an hour; and a third group, acting as a baseline, consisted of 22 patients (10 females), who were not given any intervention. Empathy scores at pre-training and post-training were measured with the Interpersonal Reactivity Index (IRI). Results: The empathy of the experimental group and the control group improved, which suggests that micro-expression recognition training can improve empathy in patients with schizophrenia.


International Conference on Audio, Language and Image Processing | 2016

The effects of language similarity on bilinguals' speech production

Zhanling Cui; Xunbing Shen

The authors carried out three experiments exploring the influence of language similarity on language selection mechanisms. In Experiment 1, participants performed a language-switching task between two dissimilar but highly proficient languages (Tibetan and Mandarin), in which they had to name pictures quickly and accurately in the cued language using the picture–word interference paradigm. In Experiments 2 and 3, participants performed the same task except that they switched between a more proficient language (Tibetan or Mandarin) and a dissimilar, less proficient language (English). The results showed no asymmetrical switching cost between Tibetan and Mandarin, but an asymmetrical switching cost between the non-fluent and proficient languages; meanwhile, language similarity affected speech production for non-proficient bilinguals. The results suggest that language similarity may play a role in the lexical selection mechanisms used by highly proficient bilinguals.


Biomedical Circuits and Systems Conference | 2016

The time-frequency characteristics of EEG activities while recognizing microexpressions

Xunbing Shen; Huajie Sui

Microexpressions (with durations of less than 200 ms) are important for judging people's true emotions. Little is known regarding the time-frequency characteristics of EEG activities while recognizing microexpressions. We examined the neural activities associated with recognizing microexpressions and macroexpressions (with durations of greater than 200 ms). The event-related spectral perturbation (ERSP) values were entered into a repeated-measures ANOVA; the results showed that the main effect of duration was significant, and that theta oscillations (4 Hz to 8 Hz) increased when recognizing expressions with a duration of 40 ms compared with 300 ms. These results indicate that the recognition of microexpressions relies on different brain mechanisms than the recognition of macroexpressions.
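
A minimal sketch of the statistical step described here, assuming per-subject theta ERSP values have already been extracted into a long-format table (the column names are hypothetical), could use a repeated-measures ANOVA from statsmodels:

```python
# Minimal sketch (assumed analysis, not the authors' script): a repeated-measures
# ANOVA on per-subject theta ERSP values with prime duration (40 ms vs. 300 ms)
# as a within-subject factor.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per subject x duration condition,
# with columns 'subject', 'duration' ('40ms' or '300ms'), and 'theta_ersp'.
def duration_anova(ersp_long: pd.DataFrame):
    return AnovaRM(ersp_long, depvar="theta_ersp", subject="subject",
                   within=["duration"]).fit()

# print(duration_anova(df).anova_table) would report the F test for the
# main effect of duration on theta power.
```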


PLOS ONE | 2012

I Undervalue You but I Need You: The Dissociation of Attitude and Memory Toward In-Group Members

Ke Zhao; Qi Wu; Xunbing Shen; Yuming Xuan; Xiaolan Fu

Collaboration


Dive into Xunbing Shen's collaboration.

Top Co-Authors

Xiaolan Fu, Chinese Academy of Sciences
Qi Wu, Hunan Normal University
Huajie Sui, Jiangxi University of Traditional Chinese Medicine
Ke Zhao, Chinese Academy of Sciences
Lei Chen, Jiangxi University of Traditional Chinese Medicine
Yuming Xuan, Chinese Academy of Sciences
Hai Yang, Jiangxi University of Traditional Chinese Medicine
Min-Fang Zhao, Chinese Academy of Sciences
Wen-Jing Yan, Chinese Academy of Sciences
Wenfeng Chen, Chinese Academy of Sciences