IEEE Signal Processing Letters | 2021

MIND-Net: A Deep Mutual Information Distillation Network for Realistic Low-Resolution Face Recognition


Abstract


Realistic low-resolution (LR) face images refer to those captured by real-world surveillance cameras at extreme standoff distances; they are therefore inherently low in resolution and poor in quality. Owing to the severe scarcity of labeled data, a high-capacity deep convolutional neural network (CNN) can hardly be trained to confront the realistic LR face recognition (LRFR) challenge. In this letter, we introduce a dual-stream mutual information distillation network (MIND-Net), in which the non-identity-specific mutual information (MI), characterized by generic face features coexistent in realistic and synthetic LR face images, is distilled to render a resolution-invariant embedding space for LRFR. For a thorough analysis, we quantify the degree of MI distillation in terms of a normalized MI index. Our experimental results on realistic LR face datasets substantiate that the MIND-Net instances assembled from pre-learned CNNs stand out from the baselines and other state-of-the-art methods by a notable margin.
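To make the normalized MI index mentioned above concrete, the following is a minimal sketch of how such an index could be estimated between two feature streams (e.g., embeddings of realistic LR faces and their synthetic LR counterparts). It is not the paper's exact formulation: the k-means discretization, cluster count, and feature dimensions are illustrative assumptions.

```python
# Hedged sketch: estimate a normalized mutual information (NMI) index between
# two embedding streams by discretizing each with k-means and scoring the
# agreement of the resulting assignments. Not the authors' exact procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score


def nmi_index(feats_a: np.ndarray, feats_b: np.ndarray, n_clusters: int = 32) -> float:
    """Estimate NMI between two embedding sets of the same samples."""
    labels_a = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats_a)
    labels_b = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats_b)
    return normalized_mutual_info_score(labels_a, labels_b)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    realistic_lr = rng.normal(size=(500, 128))                        # placeholder embeddings
    synthetic_lr = realistic_lr + 0.1 * rng.normal(size=(500, 128))   # correlated second stream
    print(f"NMI index: {nmi_index(realistic_lr, synthetic_lr):.3f}")
```

A higher index under such a scheme would indicate that the two streams share more structure, which is the intuition behind measuring the degree of MI distillation.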

Volume 28
Pages 354-358
DOI 10.1109/LSP.2021.3053480
Language English
Journal IEEE Signal Processing Letters

Full Text