IEEE Sensors Journal | 2021

Facial Expression and EEG Fusion for Investigating Continuous Emotions of Deaf Subjects


Abstract


Emotion recognition has received increasing attention in human-computer interaction (HCI) and psychological assessment. Compared with single-modal emotion recognition, the multimodal paradigm performs better because it introduces complementary information. However, current research mainly focuses on hearing subjects, although deaf subjects also need to understand emotional changes in real life. In this paper, we propose a multimodal continuous emotion recognition method for deaf subjects based on facial expressions and electroencephalograph (EEG) signals. Twelve emotional movie clips were selected as stimuli and annotated by ten postgraduates majoring in psychology. The EEG signals and facial expressions of deaf subjects were collected while they watched the stimulus clips. Differential entropy (DE) features of the EEG were extracted by time-frequency analysis, and six facial features were extracted from facial landmarks. Long short-term memory (LSTM) networks were used to perform feature-level fusion and capture the temporal dynamics of emotions. The results show that EEG signals outperform facial expressions in capturing the dynamic emotions of deaf subjects in continuous emotion recognition. The multimodal fusion compensates for the limitations of a single modality and achieves better performance. Finally, the neural activities of deaf subjects reveal that the prefrontal region may be strongly related to negative emotion processing, while the lateral temporal region may be strongly related to positive emotion processing.
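To make the pipeline concrete, the sketch below illustrates the two building blocks named in the abstract: DE feature extraction from band-limited EEG under a Gaussian assumption, and feature-level fusion of EEG and facial features with an LSTM. The band boundaries, feature dimensions, and layer sizes are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

# Assumed frequency bands (Hz); the paper's exact band definitions may differ.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg, fs):
    """Differential entropy per channel and band.

    eeg: (channels, samples) array; fs: sampling rate in Hz.
    For a band-limited signal assumed Gaussian, DE = 0.5 * log(2*pi*e*var).
    Returns an array of shape (channels, n_bands).
    """
    feats = np.zeros((eeg.shape[0], len(BANDS)))
    for i, (lo, hi) in enumerate(BANDS.values()):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        feats[:, i] = 0.5 * np.log(2 * np.pi * np.e * np.var(filtered, axis=1))
    return feats

class FusionLSTM(nn.Module):
    """Feature-level fusion sketch: concatenate per-window EEG DE features
    and facial-landmark features, then model temporal dynamics with an LSTM
    that regresses a continuous emotion label per time step.
    Dimensions are hypothetical placeholders."""
    def __init__(self, eeg_dim=310, face_dim=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(eeg_dim + face_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, eeg_seq, face_seq):
        # eeg_seq: (batch, time, eeg_dim), face_seq: (batch, time, face_dim)
        x = torch.cat([eeg_seq, face_seq], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out)  # (batch, time, 1) continuous prediction
```

Concatenating the two modalities before the recurrent layer is one straightforward way to realize feature-level fusion; the LSTM then carries emotional context across windows, which is what distinguishes continuous emotion recognition from per-clip classification.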

Volume 21
Pages 16894-16903
DOI 10.1109/JSEN.2021.3078087
Language English
Journal IEEE Sensors Journal
