Image fusion based on generative adversarial network consistent with perception

Fu, Yu; Wu, Xiao-Jun; Durrani, Tariq (2021) Image fusion based on generative adversarial network consistent with perception. Information Fusion, 72, pp. 110-125. ISSN 1566-2535. https://doi.org/10.1016/j.inffus.2021.02.019

Accepted Author Manuscript
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0

Abstract

Deep learning is a rapidly developing approach in the field of infrared and visible image fusion. In this context, dense blocks in deep networks significantly improve the utilization of shallow-layer information, and Generative Adversarial Networks (GANs) further improve the fusion performance on the two source images. We propose a new method based on dense blocks and GANs, in which the input visible-light image is injected directly into every layer of the network. Instead of a mean-squared-error loss, we use structural similarity (SSIM) and gradient loss functions, which are more consistent with human perception. After adversarial training between the generator and the discriminator, a trained end-to-end fusion network, the generator, is obtained. Our experiments show that the fused images produced by our approach achieve good scores on multiple evaluation metrics. Furthermore, our fused images exhibit better visual quality across multiple sets of comparisons and are more satisfying to human visual perception.
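The abstract does not reproduce the exact loss formulation. As a rough, hedged sketch (not the authors' exact definition), the PyTorch snippet below shows one common way an SSIM term and a Sobel-based gradient term can be combined into a perception-oriented content loss. The window size, the gradient operator, the weights w_ssim and w_grad, and the choice to penalize the fused image against both source images are illustrative assumptions.

import torch
import torch.nn.functional as F

def gradient_loss(fused, source):
    # Sobel kernels for horizontal and vertical gradients
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    def grad(img):
        gx = F.conv2d(img, kx.to(img.device), padding=1)
        gy = F.conv2d(img, ky.to(img.device), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    # Penalize differences between the gradient maps of fused and source
    return F.l1_loss(grad(fused), grad(source))

def ssim_loss(fused, source, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    # Local means and (co)variances via average pooling over a sliding window
    mu_f = F.avg_pool2d(fused, win, 1, win // 2)
    mu_s = F.avg_pool2d(source, win, 1, win // 2)
    var_f = F.avg_pool2d(fused * fused, win, 1, win // 2) - mu_f ** 2
    var_s = F.avg_pool2d(source * source, win, 1, win // 2) - mu_s ** 2
    cov = F.avg_pool2d(fused * source, win, 1, win // 2) - mu_f * mu_s
    ssim = ((2 * mu_f * mu_s + c1) * (2 * cov + c2)) / \
           ((mu_f ** 2 + mu_s ** 2 + c1) * (var_f + var_s + c2))
    # SSIM is a similarity in [-1, 1]; turn it into a loss to minimize
    return 1.0 - ssim.mean()

def perceptual_content_loss(fused, ir, vis, w_ssim=1.0, w_grad=1.0):
    # Illustrative combination: both terms measured against both sources;
    # w_ssim and w_grad are hypothetical placeholder weights.
    l_ssim = ssim_loss(fused, ir) + ssim_loss(fused, vis)
    l_grad = gradient_loss(fused, ir) + gradient_loss(fused, vis)
    return w_ssim * l_ssim + w_grad * l_grad

# Example usage with random single-channel batches in [0, 1]
fused = torch.rand(4, 1, 64, 64)
ir = torch.rand(4, 1, 64, 64)
vis = torch.rand(4, 1, 64, 64)
print(perceptual_content_loss(fused, ir, vis))

In a GAN setting such as the one the paper describes, a content loss of this kind would typically be added to the generator's adversarial loss, so that the generator is driven both to fool the discriminator and to preserve structure and gradients from the source images.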