IEEE Transactions on Instrumentation and Measurement | 2021

UFA-FUSE: A Novel Deep Supervised and Hybrid Model for Multifocus Image Fusion


Abstract


Traditional and deep learning-based fusion methods generate an intermediate decision map and obtain the fused image through a series of postprocessing procedures. However, the fusion results produced by these methods tend to lose source image details or introduce artifacts. Inspired by deep-learning-based image reconstruction techniques, we propose a multifocus image fusion network that requires no postprocessing and addresses these problems in an end-to-end, supervised manner. To train the fusion model sufficiently, we generate a large-scale multifocus image data set with ground-truth fusion images. Moreover, to obtain a more informative fused image, we design a novel fusion strategy based on unity fusion attention, which combines a channel attention module and a spatial attention module. Specifically, the proposed approach comprises three key components: feature extraction, feature fusion, and image reconstruction. We first use seven convolutional blocks to extract features from the source images. The extracted features are then fused by the proposed strategy in the feature fusion layer. Finally, the fused features are reconstructed into the output image by four convolutional blocks. Experimental results demonstrate that the proposed approach achieves remarkable fusion performance and superior time efficiency compared with 19 state-of-the-art fusion methods.
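To make the fusion strategy described above concrete, the following is a minimal numerical sketch of attention-guided feature fusion: channel attention from global average pooling and spatial attention from channel pooling are combined into a per-source activity measure, and the two sources' feature maps are blended with softmax-normalized weights. This is an illustrative simplification under assumed pooling/activation choices, not the paper's exact UFA-Fuse architecture; the function names (`channel_attention`, `spatial_attention`, `unity_fusion`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); global average pooling gives one weight per channel
    gap = feat.mean(axis=(1, 2))            # (C,)
    return sigmoid(gap)[:, None, None]      # (C, 1, 1), broadcastable

def spatial_attention(feat):
    # pool across channels to obtain a spatial saliency map
    pooled = feat.mean(axis=0, keepdims=True)   # (1, H, W)
    return sigmoid(pooled)                      # (1, H, W)

def unity_fusion(f1, f2):
    # combine channel and spatial attention into one activity map per source,
    # then fuse the two feature maps with softmax-normalized weights
    a1 = channel_attention(f1) * spatial_attention(f1)
    a2 = channel_attention(f2) * spatial_attention(f2)
    w1 = np.exp(a1) / (np.exp(a1) + np.exp(a2))  # weight for source 1, in (0, 1)
    return w1 * f1 + (1.0 - w1) * f2
```

Because the weights lie in (0, 1) and sum to 1, the fused feature map is an elementwise convex combination of the two inputs, which preserves the dynamic range of the source features.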

Volume 70
Pages 1-17
DOI 10.1109/TIM.2021.3072124
Language English
Journal IEEE Transactions on Instrumentation and Measurement

Full Text