IEEE Sensors Journal | 2021

Emotion Recognition From EEG Signals of Hearing-Impaired People Using Stacking Ensemble Learning Framework Based on a Novel Brain Network


Abstract


Emotion recognition based on electroencephalography (EEG) signals has become an interesting research topic in neuroscience, psychology, neural engineering, and computer science. However, existing studies focus mainly on normal or depressed subjects, and there are few reports on hearing-impaired subjects. In this work, we collected the EEG signals of 15 hearing-impaired subjects to categorize three types of emotion (positive, neutral, and negative). To study the differences in functional connectivity between normal and hearing-impaired subjects under different emotional states, a novel brain-network stacking ensemble learning framework is proposed. The phase-locking value (PLV) is used to calculate the correlation between EEG channels, and a brain network is then constructed using double thresholds. The spatial features of the brain network are extracted from the perspectives of local differentiation and global integration, and the stacking ensemble learning framework is used to classify the fused features. To evaluate the proposed model, extensive experiments were carried out on the SEED dataset; the results show that the proposed method outperforms state-of-the-art models, with an average classification accuracy of 0.955 (std: 0.052). On the hearing-impaired data, the average classification accuracy is 0.984 (std: 0.005). Finally, we investigated the activation patterns to reveal the brain regions and inter-channel relations that are important for emotion recognition.
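The abstract's connectivity pipeline (PLV between channel pairs, then a thresholded brain network) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the standard Hilbert-transform definition of PLV is assumed, and `double_threshold_adjacency` reflects one plausible reading of "double thresholds" (keep edges whose PLV falls within a lower and upper bound); the paper's exact scheme may differ.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length 1-D signals.

    Instantaneous phase is taken from the analytic signal (Hilbert
    transform); PLV is the magnitude of the mean phase-difference vector.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

def plv_matrix(eeg):
    """Pairwise PLV across channels; eeg has shape (n_channels, n_samples)."""
    n = eeg.shape[0]
    mat = np.eye(n)  # PLV of a channel with itself is 1
    for i in range(n):
        for j in range(i + 1, n):
            mat[i, j] = mat[j, i] = plv(eeg[i], eeg[j])
    return mat

def double_threshold_adjacency(plv_mat, low, high):
    """Binary adjacency matrix: keep edges with low <= PLV <= high.

    (Hypothetical interpretation of the paper's double-threshold rule.)
    """
    adj = ((plv_mat >= low) & (plv_mat <= high)).astype(int)
    np.fill_diagonal(adj, 0)  # no self-loops in the brain network
    return adj
```

Graph-theoretic features such as node degree or clustering coefficient (the "local differentiation" and "global integration" measures the abstract mentions) can then be computed from the resulting adjacency matrix.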

Volume 21
Pages 23245-23255
DOI 10.1109/JSEN.2021.3108471
Language English
Journal IEEE Sensors Journal
