Journal of Physics: Conference Series | 2021

Blur Regional Features Based Infrared And Visible Image Fusion Using An Improved C3Net Model


Abstract


To ameliorate the drawback of conventional deep-learning-based image fusion methods, in which useful information obtained through the middle layers is lost, an unsupervised deep learning framework based on Cascaded Convolutional Coding Networks (C3Net) is proposed for the fusion of infrared and visible images. A Blur Regional Features (BRF) scheme is also applied during the fusion stage to preserve the consistency of regions. Firstly, redundant and complementary features of the infrared and visible images are extracted by the coding layer, with the output of each convolutional layer connected to the input of the next layer in a cascading manner. Then, according to the characteristics of the redundant and complementary features, different BRF-based fusion strategies are designed for each to obtain fused feature maps. Finally, the fused image is reconstructed by the decoding layer. Furthermore, the objective function of the training model is designed as a multitask loss function combining Mean Square Error, Information Entropy, and Structural Similarity, so as to reduce the loss of original image information. Experimental results comparing the C3Net fusion method with state-of-the-art fusion methods show better overall performance in both objective evaluation and subjective visual quality.
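The multitask loss described in the abstract can be sketched as below. This is a minimal illustrative implementation, not the paper's code: the weighting coefficients, the simplified single-window SSIM, and the way the three terms are combined against both source images are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def mse_loss(fused, source):
    """Pixel-wise Mean Square Error between the fused and a source image."""
    return float(np.mean((fused - source) ** 2))

def entropy(img, bins=256):
    """Shannon Information Entropy of the image's intensity histogram
    (images assumed normalized to [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Simplified global Structural Similarity: one window over the whole
    image; the paper likely uses a sliding-window SSIM instead."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def multitask_loss(fused, ir, vis, w_mse=1.0, w_ent=0.1, w_ssim=1.0):
    """Combined training objective: MSE and (1 - SSIM) against both source
    images, plus a negative-entropy term so that minimizing the loss
    maximizes the information content of the fused result.
    The weights w_* are illustrative assumptions."""
    l_mse = mse_loss(fused, ir) + mse_loss(fused, vis)
    l_ent = -entropy(fused)
    l_ssim = (1 - ssim(fused, ir)) + (1 - ssim(fused, vis))
    return w_mse * l_mse + w_ent * l_ent + w_ssim * l_ssim
```

In an actual training loop these terms would be computed with differentiable operations (e.g. a framework's SSIM and histogram approximations) so that gradients can flow back through the decoder; the NumPy version above only demonstrates the structure of the objective.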

Volume 1820
DOI 10.1088/1742-6596/1820/1/012169
Language English
Journal Journal of Physics: Conference Series
