2021 International Russian Automation Conference (RusAutoCon)

Touching the Limits of a Dataset in Video-Based Facial Expression Recognition


Abstract


In this paper, we examine the issue that video-based facial emotion recognition algorithms show excellent performance on some benchmarks but much worse accuracy in practical applications. For example, the typical error rate of contemporary deep neural networks on the RAVDESS dataset is less than 5%. We argue that such results are obtained only when the dataset is split incorrectly, so that the same persons appear in both the training and test sets. We claim that it is fairer to use an actor-based split, in which the persons in the training and test sets are disjoint. It is experimentally demonstrated that a near state-of-the-art neural network model pre-trained on the AffectNet dataset achieves 99% accuracy with the conventional split of the RAVDESS dataset. However, when the dataset is split by actors, so that the training and test sets contain only distinct persons, the accuracy drops by 20-30%.
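As a minimal sketch of the actor-based (subject-independent) split described above, the snippet below groups RAVDESS clips by actor so that no person appears in both the training and test sets. It assumes the standard RAVDESS file naming (e.g. 02-01-06-01-02-01-12.mp4, where the third hyphen-separated field is the emotion and the last field is the actor ID) and uses scikit-learn's GroupShuffleSplit; the dataset root path is hypothetical, and this is an illustration rather than the authors' actual implementation.

```python
# Sketch of an actor-disjoint split for RAVDESS (assumptions noted above).
from pathlib import Path
from sklearn.model_selection import GroupShuffleSplit

video_files = sorted(Path("RAVDESS").rglob("*.mp4"))       # hypothetical dataset root
labels = [int(f.stem.split("-")[2]) for f in video_files]  # 3rd field = emotion label
actors = [int(f.stem.split("-")[6]) for f in video_files]  # 7th field = actor ID (01-24)

# Group-aware split: each actor falls entirely into train or entirely into test,
# unlike a conventional random split that lets the same person appear in both sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(video_files, labels, groups=actors))

# Sanity check: no actor is shared between the two subsets.
assert not {actors[i] for i in train_idx} & {actors[i] for i in test_idx}
```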

Pages 633-638
DOI 10.1109/RusAutoCon52004.2021.9537388
Language English
Journal 2021 International Russian Automation Conference (RusAutoCon)
