Biomed. Signal Process. Control. | 2021

MVFNet: A multi-view fusion network for pain intensity assessment in unconstrained environment

Abstract


Pain is an indication of physical discomfort, and monitoring it is crucial for medical diagnosis and treatment. In recent years, several techniques have been proposed for pain assessment from face images. Although existing approaches perform satisfactorily on constrained frontal faces, they may perform poorly in a natural, unconstrained hospital environment owing to low illumination, large head-pose rotation, and occlusion, all of which are common in such settings. Therefore, a novel fusion approach is proposed to construct discriminative features for pain severity assessment. In this work, decision-level fusion of three distinct feature views, i.e., data-driven RGB features, entropy-based texture features, and complementary features learned jointly from RGB and texture data, is used to improve the generalization of the proposed pain assessment system. To this end, the proposed system employs three CNNs: a VGG CNN based on cross-dataset transfer learning (VGG-TL), an Entropy Texture Network (ETNet), and a Dual Stream CNN (DSCNN). Further, to alleviate overfitting, various augmentation techniques are applied. The proposed approach is assessed extensively on a self-generated dataset of 10 patients recorded in an unconstrained hospital environment. The experimental results demonstrate that decision-level fusion of these multi-view features substantially outperforms a model trained on generic RGB data alone, and that the proposed model achieves an F1-score of 94.0% for pain severity assessment. In addition, to evaluate the generalization of the proposed method, we also report competitive results on the UNBC-McMaster dataset.
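To make the decision-level fusion described above concrete, the PyTorch sketch below averages the softmax scores of three placeholder branches standing in for VGG-TL (RGB), ETNet (entropy texture), and DSCNN (both modalities). This is a minimal illustration, not the paper's implementation: the tiny branch architectures, the number of pain-severity classes (NUM_CLASSES = 4), the channel-concatenation input to the dual-stream branch, and the score-averaging fusion rule are all assumptions, since the abstract does not specify them.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # hypothetical number of pain-severity levels

class SmallBranch(nn.Module):
    """Placeholder for one stream (VGG-TL, ETNet, or DSCNN in the paper).
    The real branches are full CNNs; this is a toy stand-in."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, NUM_CLASSES)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # per-branch class logits

class DecisionLevelFusion(nn.Module):
    """Fuse the three streams by averaging their softmax scores
    (one plausible decision-level rule; assumed, not confirmed)."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = SmallBranch(3)      # stands in for VGG-TL
        self.texture_branch = SmallBranch(1)  # stands in for ETNet
        self.dual_branch = SmallBranch(4)     # stands in for DSCNN

    def forward(self, rgb, texture):
        p1 = F.softmax(self.rgb_branch(rgb), dim=1)
        p2 = F.softmax(self.texture_branch(texture), dim=1)
        # Simplification: feed the dual-stream branch the channel-wise
        # concatenation of RGB and texture maps.
        p3 = F.softmax(self.dual_branch(torch.cat([rgb, texture], dim=1)), dim=1)
        return (p1 + p2 + p3) / 3  # fused class probabilities

model = DecisionLevelFusion()
rgb = torch.randn(2, 3, 64, 64)      # dummy RGB face crops
texture = torch.randn(2, 1, 64, 64)  # dummy entropy-texture maps
print(model(rgb, texture).shape)     # -> torch.Size([2, 4])

Here the texture input would be, e.g., a local-entropy map computed from the grayscale face; that preprocessing step is not shown. Averaging softmax scores is the simplest decision-level combiner, and weighted or majority-vote variants would slot into the same structure.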

Volume 67
Pages 102537
DOI 10.1016/J.BSPC.2021.102537
Language English
Journal Biomed. Signal Process. Control.
