Multi-evidence and multi-modal fusion network for ground-based cloud recognition

Liu, Shuang and Li, Mei and Zhang, Zhong and Xiao, Baihua and Durrani, Tariq S. (2020) Multi-evidence and multi-modal fusion network for ground-based cloud recognition. Remote Sensing, 12 (3). 464. ISSN 2072-4292 (https://doi.org/10.3390/rs12030464)

Final Published Version (PDF, 1MB). License: Creative Commons Attribution 4.0.

Abstract

In recent years, deep neural networks have drawn much attention in ground-based cloud recognition. However, such approaches focus solely on learning global features from visual information, which leads to incomplete representations of ground-based clouds. In this paper, we propose a novel method named multi-evidence and multi-modal fusion network (MMFN) for ground-based cloud recognition, which learns extended cloud information by fusing heterogeneous features in a unified framework. Specifically, MMFN exploits multiple pieces of evidence, i.e., global and local visual features, from ground-based cloud images using a main network and an attentive network. In the attentive network, local visual features are extracted from attentive maps, which are obtained by refining salient patterns from convolutional activation maps. Meanwhile, the multi-modal network in MMFN learns multi-modal features for ground-based clouds. To fully fuse the multi-modal and multi-evidence visual features, we design two fusion layers in MMFN that incorporate multi-modal features with the global and local visual features, respectively. Furthermore, we release the first multi-modal ground-based cloud dataset, named MGCD, which contains not only ground-based cloud images but also the multi-modal information corresponding to each image. MMFN is evaluated on MGCD and achieves a classification accuracy of 88.63%, which compares favorably with state-of-the-art methods and validates its effectiveness for ground-based cloud recognition.
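The data flow described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch-style rendering of the overall idea, assuming a toy convolutional backbone for the main network, a channel-summed saliency map standing in for the attentive-map refinement, a small MLP over the auxiliary measurements (e.g., temperature, humidity, pressure, wind speed in MGCD) for the multi-modal network, and concatenation-based fusion layers. Every layer size, the number of classes, and the modal input dimension are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the MMFN idea: global + local (attentive) visual
# evidence, a multi-modal branch, and two fusion layers. Hypothetical
# dimensions throughout; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MMFNSketch(nn.Module):
    def __init__(self, num_classes=7, modal_dim=4, feat_dim=128):
        super().__init__()
        # Main network: a small conv stack standing in for the paper's CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Multi-modal network: an MLP over the measurement vector.
        self.modal_net = nn.Sequential(
            nn.Linear(modal_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Two fusion layers: multi-modal features are incorporated with
        # the global and the local visual features, respectively.
        self.fuse_global = nn.Linear(2 * feat_dim, feat_dim)
        self.fuse_local = nn.Linear(2 * feat_dim, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, image, modal_vec):
        acts = self.backbone(image)              # conv activation maps
        global_feat = acts.mean(dim=(2, 3))      # global average pooling

        # Attentive network: refine salient patterns from the activation
        # maps into an attentive map, then pool local visual features.
        saliency = acts.sum(dim=1, keepdim=True)
        attn = torch.sigmoid(
            saliency - saliency.mean(dim=(2, 3), keepdim=True)
        )
        local_feat = (acts * attn).sum(dim=(2, 3)) / \
            attn.sum(dim=(2, 3)).clamp(min=1e-6)

        modal_feat = self.modal_net(modal_vec)

        g = F.relu(self.fuse_global(
            torch.cat([global_feat, modal_feat], dim=1)))
        l = F.relu(self.fuse_local(
            torch.cat([local_feat, modal_feat], dim=1)))
        return self.classifier(torch.cat([g, l], dim=1))


if __name__ == "__main__":
    # Random tensors stand in for a batch of cloud images and their
    # corresponding multi-modal measurements.
    model = MMFNSketch()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
    print(logits.shape)  # torch.Size([2, 7])
```

The two separate fusion layers mirror the abstract's description: rather than fusing all features once, the multi-modal vector is paired with each kind of visual evidence (global and local) before the fused representations are concatenated for classification.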