Journal of Physics: Conference Series | 2021

Speech Emotion Recognition algorithm based on deep learning with fusion of temporal and spatial features

Abstract


In recent years, human-computer interaction systems have gradually entered our daily lives. As one of the key technologies in such systems, Speech Emotion Recognition (SER) can accurately identify emotions and help machines better understand users' intentions, improving the quality of human-computer interaction; it has therefore received considerable attention from researchers worldwide. Following the successful application of deep learning to image recognition and speech recognition, scholars have begun to apply it to SER and have proposed many deep-learning-based SER algorithms. In this paper, we studied these algorithms in depth and found that they suffer from problems such as overly simple feature extraction, low utilization of hand-crafted features, high model complexity, and low accuracy on specific emotions. For data processing, we quadrupled the RAVDESS dataset using additive white Gaussian noise (AWGN), for a total of 5760 audio samples. For the network structure, we built two parallel convolutional neural networks (CNNs) to extract spatial features and a Transformer encoder network to extract temporal features, classifying each sample into one of 8 emotion classes. By combining the CNN's strength in spatial feature representation with the Transformer encoder's strength in sequence modeling, we obtained an accuracy of 80.46% on the hold-out test set of the RAVDESS dataset.
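The augmentation step lends itself to a short sketch. Below is a minimal Python example of the AWGN quadrupling described above, assuming 16 kHz mono audio loaded with librosa; the abstract does not state the noise levels, so the three SNR values and the function names are illustrative, not the paper's.

    # Sketch of the AWGN augmentation described in the abstract: each of the
    # 1440 RAVDESS clips gets three noisy copies, quadrupling the set to 5760.
    # The target SNR values are illustrative assumptions; the paper does not state them.
    import numpy as np
    import librosa

    def add_awgn(signal: np.ndarray, snr_db: float) -> np.ndarray:
        """Add white Gaussian noise at the requested signal-to-noise ratio (dB)."""
        signal_power = np.mean(signal ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
        return signal + noise

    def augment_clip(path: str, snrs=(20.0, 15.0, 10.0)) -> list[np.ndarray]:
        """Return the original clip plus one AWGN copy per assumed SNR value."""
        audio, _sr = librosa.load(path, sr=16000, mono=True)
        return [audio] + [add_awgn(audio, snr) for snr in snrs]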
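The fusion architecture can likewise be sketched. The following PyTorch code is a schematic of the described design: two parallel CNN branches extract spatial features from a mel-spectrogram, a Transformer encoder extracts temporal features from the frame sequence, and the concatenated features feed an 8-way classifier. All layer widths, kernel sizes, and head counts are assumptions, since the abstract gives no hyperparameters.

    # Schematic sketch of the fusion model the abstract describes; every layer
    # size below is an illustrative assumption, not the paper's configuration.
    import torch
    import torch.nn as nn

    class CNNBranch(nn.Module):
        """One spatial-feature branch over a (batch, 1, n_mels, frames) spectrogram."""
        def __init__(self, kernel_size: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size, padding=kernel_size // 2),
                nn.BatchNorm2d(16),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size, padding=kernel_size // 2),
                nn.BatchNorm2d(32),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # pool spatial dims to a 32-dim vector
            )

        def forward(self, x):
            return self.net(x).flatten(1)  # (batch, 32)

    class SERFusionModel(nn.Module):
        def __init__(self, n_mels: int = 128, d_model: int = 64, n_classes: int = 8):
            super().__init__()
            self.branch_a = CNNBranch(kernel_size=3)   # two parallel CNNs with
            self.branch_b = CNNBranch(kernel_size=7)   # different receptive fields
            self.proj = nn.Linear(n_mels, d_model)     # frame -> Transformer input
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.classifier = nn.Linear(32 + 32 + d_model, n_classes)

        def forward(self, spec):                       # spec: (batch, 1, n_mels, frames)
            spatial = torch.cat([self.branch_a(spec), self.branch_b(spec)], dim=1)
            frames = spec.squeeze(1).transpose(1, 2)   # (batch, frames, n_mels)
            temporal = self.encoder(self.proj(frames)).mean(dim=1)
            return self.classifier(torch.cat([spatial, temporal], dim=1))

As a shape check, SERFusionModel()(torch.randn(4, 1, 128, 300)) returns logits of shape (4, 8), one score per emotion class.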

Volume 1861
DOI 10.1088/1742-6596/1861/1/012064
