IEEE Transactions on Computational Imaging | 2021

GAN-FM: Infrared and Visible Image Fusion Using GAN With Full-Scale Skip Connection and Dual Markovian Discriminators


Abstract


A good infrared and visible image fusion result should not only maintain significant contrast for distinguishing targets from the background, but also contain rich scene textures that suit human visual perception. However, previous fusion methods usually do not fully exploit the source information, so their fused results sacrifice either the salience of thermal targets or the sharpness of textures. To address this challenge, we propose a novel Generative Adversarial Network with Full-scale skip connections and dual Markovian discriminators (GAN-FM) to fully preserve the effective information in infrared and visible images. First, a full-scale skip-connected generator is designed to extract and fuse deep features at different scales; it promotes the direct transmission of shallow high-contrast features to deep layers and thereby preserves thermal radiation targets at the semantic level, so the fused image maintains significant contrast. Second, we propose two Markovian discriminators that play adversarial games with the generator, estimating the probability distributions of the infrared and visible modalities simultaneously. Unlike a conventional global discriminator, a Markovian discriminator tries to distinguish each patch of the input image, so the network's attention is restricted to local regions and the fused results are forced to contain more details. In addition, we propose an effective joint gradient loss to ensure the harmonious coexistence of contrast and texture, preventing the background texture pollution caused by edge diffusion from high-contrast target regions. Extensive qualitative and quantitative experiments demonstrate that GAN-FM outperforms state-of-the-art methods in preserving significant contrast and rich textures. Moreover, applying the fused images generated by our method to object detection and image segmentation effectively improves the performance of both tasks.
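To make the two key components described above concrete, the sketch below illustrates a patch-level (PatchGAN-style) Markovian discriminator and a joint gradient loss in PyTorch. It is a minimal illustration only: the class and function names, channel widths, and the max-of-source-gradients formulation are assumptions for exposition and are not the paper's released code; the authors' exact definitions may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MarkovianDiscriminator(nn.Module):
    """PatchGAN-style discriminator (illustrative): the output is a score map
    whose receptive field covers only a local patch of the input, so the
    adversarial signal is computed patch by patch rather than globally."""

    def __init__(self, in_channels=1, base_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels * 2, base_channels * 4, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels * 4, 1, 4, stride=1, padding=1),  # patch score map
        )

    def forward(self, x):
        return self.net(x)


def joint_gradient_loss(fused, infrared, visible):
    """Illustrative joint gradient loss (assumed formulation): penalize the
    difference between the fused image's gradient magnitude and the
    element-wise maximum of the source gradient magnitudes."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    def grad_mag(img):
        gx = F.conv2d(img, kx.to(img.device), padding=1)
        gy = F.conv2d(img, ky.to(img.device), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    joint = torch.maximum(grad_mag(infrared), grad_mag(visible))
    return F.l1_loss(grad_mag(fused), joint)
```

In such a setup, one discriminator would receive the fused image versus infrared patches and the other the fused image versus visible patches, while the generator is trained against both patch-level score maps together with the joint gradient term.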

Volume 7
Pages 1134-1147
DOI 10.1109/TCI.2021.3119954
Language English
Journal IEEE Transactions on Computational Imaging

Full Text