Comput. Electr. Eng. | 2021

A new proposed statistical feature extraction method in speech emotion recognition


Abstract


Feature extraction is the most important step in pattern recognition systems, and researchers have focused extensively on this field. This work aims to design and implement a novel feature extraction method that can extract features to recognize different emotions. To that end, a real-time, gender- and speaker-independent unimodal speech emotion recognition (SER) framework was designed and implemented using the newly proposed statistical features. This work's contribution to feature extraction lies in how the statistical features are extracted: many multiples of the standard deviation (SD) on either side of the mean are used, rather than the conventional 2 SDs on either side of the mean used in prior work. The SD multiples used to study the variance of the feature distribution around the mean are 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.5 and 4. The datasets used in this work were the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) with eight emotions, the Berlin dataset (Emo-DB) with seven emotions and the Surrey Audio-Visual Expressed Emotion dataset (SAVEE) with seven emotions. Compared with state-of-the-art unimodal SER approaches, the classification accuracies achieved in this work were 86.1%, 96.3% and 91.7% for the RAVDESS, Emo-DB and SAVEE datasets, respectively.
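The abstract only lists the SD multiples used; it does not specify exactly which statistic is computed inside each mean ± k·SD band. As a minimal sketch of the idea, the hypothetical function below computes, for each multiple k in the listed set, the fraction of a feature's samples that fall within k SDs of its mean, giving a 14-dimensional descriptor of how the values spread around the mean. The function name and the choice of "fraction of samples" as the per-band statistic are assumptions, not the authors' exact method.

```python
import numpy as np

# SD multiples listed in the abstract.
SD_DEGREES = [0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.5, 4]

def band_features(x, degrees=SD_DEGREES):
    """Illustrative (assumed) statistical descriptor: for each multiple k,
    the fraction of samples of x lying within mean +/- k*SD.

    The result is non-decreasing in k and approaches 1 for large k,
    characterising how the feature's values are distributed around its mean.
    """
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    if sd == 0.0:  # constant signal: every sample is at the mean
        return np.ones(len(degrees))
    return np.array([np.mean(np.abs(x - mu) <= k * sd) for k in degrees])
```

Applied to each low-level acoustic feature track (e.g. pitch or energy over frames), such a vector captures finer-grained distribution shape than a single 2-SD threshold would.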

Volume 93
Pages 107172
DOI 10.1016/J.COMPELECENG.2021.107172
Language English
Journal Comput. Electr. Eng.
