A deep convolutional generative adversarial networks (DCGANs)-based semi-supervised method for object recognition in synthetic aperture radar (SAR) images
Gao, Fei and Yang, Yue and Wang, Jun and Sun, Jinping and Yang, Erfu and Zhou, Huiyu (2018) A deep convolutional generative adversarial networks (DCGANs)-based semi-supervised method for object recognition in synthetic aperture radar (SAR) images. Remote Sensing, 10 (6). 846. ISSN 2072-4292 (https://doi.org/10.3390/rs10060846)
Text: Gao_etal_RS_2018_A_deep_convolutional_generative_adversarial_networks_based_semi_supervised_method_for_object_recognition.pdf
Final Published Version. Download (3MB)
Abstract
Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most established recognition methods are supervised and therefore depend strongly on image labels, yet obtaining labels for radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator used in DCGANs and train the two discriminators jointly. In this process, we introduce a noisy-data learning theory to reduce the negative impact of incorrectly labeled samples on network performance. We replace the last layer of the classic discriminator with a standard softmax function that outputs a vector of class probabilities, so that the network can recognize multiple object classes, and we modify the loss function to adapt to the revised network structure. In our model, the two discriminators share the same generator, and we average the two discriminators' outputs when computing the generator's loss, which improves the training stability of DCGANs to some extent. We also select the higher-quality generated images for training in order to further improve network performance. Our method achieves state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we demonstrate that using the generated images to train the networks improves recognition accuracy when only a small number of labeled samples is available.
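The sketch below illustrates the two ideas the abstract highlights: a DCGAN-style discriminator whose final layer is replaced by a softmax over multiple target classes, and a generator loss averaged over two such discriminators. It is a minimal, hypothetical example only: PyTorch is assumed as the framework, the layer widths, 64x64 single-channel input size, the extra "fake" class (a common semi-supervised GAN convention), and the way the generator is pushed toward a real class label are all illustrative assumptions rather than the paper's exact architecture or loss.

# Hypothetical sketch (PyTorch assumed); layer sizes, the K+1 "fake" class and the
# generator objective are illustrative choices, not the paper's exact formulation.
import torch
import torch.nn as nn

NUM_CLASSES = 10          # e.g. the 10 MSTAR target classes (assumption)
FAKE_CLASS = NUM_CLASSES  # extra class index reserved for generated images

class Discriminator(nn.Module):
    """DCGAN-style discriminator whose last layer is a (K+1)-way softmax classifier."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),                      # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),  # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True), # 16 -> 8
        )
        self.classifier = nn.Linear(256 * 8 * 8, num_classes + 1)  # K real classes + 1 fake

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # class logits; softmax/cross-entropy applied in the loss

class Generator(nn.Module):
    """DCGAN-style generator mapping a latent vector to a 64x64 SAR-like image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 8, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 1 -> 8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16 -> 32
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),                                 # 32 -> 64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def generator_loss(d1, d2, fake_images):
    """Average the losses from the two discriminators that share this generator.
    Here the generator is simply pushed to make both discriminators assign its
    samples to an arbitrary real class instead of the fake class (an illustrative
    simplification of a semi-supervised GAN generator objective)."""
    ce = nn.CrossEntropyLoss()
    target = torch.zeros(fake_images.size(0), dtype=torch.long)  # pretend class 0 is real
    return 0.5 * (ce(d1(fake_images), target) + ce(d2(fake_images), target))

if __name__ == "__main__":
    G, D1, D2 = Generator(), Discriminator(), Discriminator()
    fake = G(torch.randn(4, 100))
    print(fake.shape)                    # torch.Size([4, 1, 64, 64])
    print(generator_loss(D1, D2, fake))  # scalar loss averaged over both discriminators

Averaging the two discriminators' losses in generator_loss mirrors the abstract's description of a single generator shared by both discriminators; the noisy-data handling and the selection of higher-quality generated images described in the paper are not reproduced in this sketch.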
ORCID iDs
Gao, Fei; Yang, Yue; Wang, Jun; Sun, Jinping; Yang, Erfu (ORCID: https://orcid.org/0000-0003-1813-5950) and Zhou, Huiyu
Item type: Article
ID code: 64446
Dates: 29 May 2018 (Published); 25 May 2018 (Accepted)
Subjects: Technology > Electrical engineering. Electronics. Nuclear engineering; Technology > Engineering (General). Civil engineering (General) > Engineering design
Department: Faculty of Engineering > Design, Manufacture and Engineering Management
Depositing user: Pure Administrator
Date deposited: 14 Jun 2018 08:29
Last modified: 11 Nov 2024 12:01
URI: https://strathprints.strath.ac.uk/id/eprint/64446