Building extraction from high-resolution aerial imagery using a generative adversarial network with spatial and channel attention mechanisms

Pan, Xuran and Yang, Fan and Gao, Lianru and Chen, Zhengchao and Zhang, Bing and Fan, Hairui and Ren, Jinchang (2019) Building extraction from high-resolution aerial imagery using a generative adversarial network with spatial and channel attention mechanisms. Remote Sensing, 11 (8). 917. ISSN 2072-4292 (https://doi.org/10.3390/rs11080917)

Text. Filename: Pan_etal_RS_2019_Building_extraction_from_high_resolution_aerial_imagery_using_a_generative_adversarial_network.pdf
Final Published Version
License: Creative Commons Attribution 4.0


Abstract

Segmentation of high-resolution remote sensing images is an important challenge with wide practical applications. Increasing spatial resolution provides fine details for image segmentation but also introduces segmentation ambiguities. In this paper, we propose a generative adversarial network with spatial and channel attention mechanisms (GAN-SCA) for the robust segmentation of buildings in remote sensing images. The segmentation network (generator) of the proposed framework combines the well-known semantic segmentation architecture U-Net with spatial and channel attention mechanisms (SCA). The SCA modules enable the segmentation network to selectively emphasize the most useful features at specific positions and channels, yielding results closer to the ground truth. The discriminator is an adversarial network with channel attention mechanisms that distinguishes the outputs of the generator from the ground truth maps. The segmentation network and adversarial network are trained in an alternating fashion on the Inria aerial image labeling dataset and the Massachusetts buildings dataset. Experimental results show that the proposed GAN-SCA achieves higher scores (overall accuracy and intersection over union of 96.61% and 77.75%, respectively, on the Inria aerial image labeling dataset, and an F1-measure of 96.36% on the Massachusetts buildings dataset) and outperforms several state-of-the-art approaches.
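To illustrate the general idea behind the SCA modules described above, the following is a minimal NumPy sketch of squeeze-and-excitation-style channel attention and a simple spatial gate. The random projection weights and the additive average/max spatial gate are placeholder assumptions standing in for the paper's learned layers, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, reduction=4, rng=None):
    """Channel attention sketch: squeeze (global average pool),
    excite (bottleneck MLP + sigmoid), then rescale channels.
    feat: (C, H, W). Weights are random placeholders for learned layers."""
    rng = np.random.default_rng(0) if rng is None else rng
    c = feat.shape[0]
    squeeze = feat.mean(axis=(1, 2))                      # (C,)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # reduction FC
    w2 = rng.standard_normal((c, c // reduction)) * 0.1   # expansion FC
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # per-channel gate
    return feat * excite[:, None, None]

def spatial_attention(feat):
    """Spatial attention sketch: build a per-position gate from
    channel-wise average and max statistics (stand-in for a learned conv)."""
    avg = feat.mean(axis=0, keepdims=True)  # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)    # (1, H, W)
    gate = sigmoid(avg + mx)                # values in (0, 1)
    return feat * gate

# Apply both gates to a toy feature map, as an SCA block would.
feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 16, 16)
```

In the paper's framework these gates are inserted into the U-Net generator (and channel attention into the discriminator); the sketch only shows the reweighting mechanics, with learned convolutions replaced by fixed statistics.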

ORCID iDs

Pan, Xuran, Yang, Fan, Gao, Lianru, Chen, Zhengchao, Zhang, Bing, Fan, Hairui and Ren, Jinchang ORCID: https://orcid.org/0000-0001-6116-3194