Journal of King Saud University - Computer and Information Sciences | 2021

Emotion recognition from facial images with simultaneous occlusion, pose and illumination variations using meta-learning

 
 

Abstract


Automatic facial emotion recognition under real-world conditions such as partial occlusion, varying head pose, and varying illumination is challenging for the machine learning community. The main reason is the lack of sufficient samples with the aforementioned conditions in the baseline datasets, which makes it difficult to train a well-performing machine learning or deep learning model. To overcome this challenge, we adopt the concept of meta-learning. Meta-learning using prototypical networks (metric-based meta-learning) has been shown to be well suited to few-shot problems without severe overfitting. We leverage the quick adaptation power of prototypical networks for emotion recognition when such diverse samples are scarce. We use the CMU Multi-PIE dataset, which contains images with partial occlusions, varying head poses, and varying illumination levels, for training and evaluating the model. To test the adaptability of the system to intra-class and inter-dataset variations, AffectNet face database images are used. The proposed method, named ERMOPI (Emotion Recognition using Meta-learning across Occlusion, Pose and Illumination), performs emotion recognition from facial expressions in still images using a meta-learning approach and is robust to partial occlusions, varying head poses, and varying illumination levels, which is the novelty of this work. The key benefit is that it uses fewer training samples than existing work on emotion recognition while achieving results comparable to state-of-the-art approaches. The proposed method achieved 90% accuracy on CMU Multi-PIE database images and 68% accuracy on AffectNet database images.
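The abstract refers to metric-based meta-learning with prototypical networks. As an illustration only, the following is a minimal sketch of the standard prototypical-network classification step (Snell et al., 2017), assuming a PyTorch embedding backbone has already mapped support and query face images to feature vectors; the function name, tensor shapes, and variable names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def prototypical_classify(support_embeddings, support_labels,
                          query_embeddings, n_classes):
    """Classify query embeddings by distance to class prototypes.

    Each emotion class prototype is the mean of its support embeddings;
    queries receive a posterior via a softmax over negative squared
    Euclidean distances, and the nearest prototype gives the label.
    """
    # One prototype per emotion class: mean of that class's support set.
    prototypes = torch.stack([
        support_embeddings[support_labels == c].mean(dim=0)
        for c in range(n_classes)
    ])                                                      # (n_classes, dim)

    # Squared Euclidean distance from every query to every prototype.
    dists = torch.cdist(query_embeddings, prototypes) ** 2  # (n_query, n_classes)

    # Softmax over negative distances yields class log-probabilities.
    log_probs = F.log_softmax(-dists, dim=1)
    return log_probs.argmax(dim=1), log_probs

# Example episode with hypothetical sizes: 7 emotion classes,
# 5 support and 3 query embeddings per class, 64-d features.
if __name__ == "__main__":
    n_classes, dim = 7, 64
    support = torch.randn(n_classes * 5, dim)
    labels = torch.arange(n_classes).repeat_interleave(5)
    queries = torch.randn(n_classes * 3, dim)
    preds, _ = prototypical_classify(support, labels, queries, n_classes)
    print(preds.shape)  # torch.Size([21])
```

In episodic training, this same distance-based classification is applied to each sampled few-shot task, so the embedding network learns features that generalize from a handful of occluded, posed, or differently lit examples per class.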

DOI 10.1016/j.jksuci.2021.06.012
Language English
Journal Journal of King Saud University - Computer and Information Sciences
