Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

Zabalza, Jaime and Ren, Jinchang and Zheng, Jiangbin and Zhao, Huimin and Qing, Chunmei and Yang, Zhijing and Du, Peijun and Marshall, Stephen (2016) Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing, 185. pp. 1-10. ISSN 0925-2312 (https://doi.org/10.1016/j.neucom.2015.11.044)

Text. Filename: Zabalza_etal_Neurocomputing_2016_Novel_segmented_stacked_autoencoder_for_effective_dimensionality_reduction.pdf
Accepted Author Manuscript
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0

Abstract

Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. Because the hidden nodes in an SAE must deal simultaneously with hundreds of features from the hypercube as inputs, the complexity of the process increases, which limits both the quality of the abstraction and the overall performance. A segmented SAE (S-SAE) is therefore proposed, in which the original features are partitioned into smaller data segments that are separately processed by different, smaller SAEs. This results in reduced complexity yet improved efficacy of data abstraction and accuracy of data classification.
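
To make the segmentation idea concrete, the sketch below splits each pixel's spectral vector into contiguous segments, reduces each segment with its own small autoencoder, and concatenates the per-segment encodings into the final feature vector. It is a minimal illustration written in PyTorch under stated assumptions: the layer sizes, number of segments, activation functions, and training settings are illustrative choices, and a single hidden layer is used per segment rather than the stacked (multi-layer) autoencoders of the paper.

```python
# Minimal sketch of a segmented autoencoder pipeline (illustrative, not the
# authors' implementation): split the spectrum into segments, train one small
# autoencoder per segment, then concatenate the encoded outputs.
import torch
import torch.nn as nn

class SmallAE(nn.Module):
    """One small autoencoder applied to a single spectral segment."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_segmented_ae(pixels, n_segments=4, hidden_dim=5, epochs=100, lr=1e-2):
    """pixels: (n_samples, n_bands) tensor of spectral vectors scaled to [0, 1]."""
    segments = torch.chunk(pixels, n_segments, dim=1)      # split the spectrum
    models = [SmallAE(seg.shape[1], hidden_dim) for seg in segments]
    for model, seg in zip(models, segments):                # one AE per segment
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            recon, _ = model(seg)
            loss = loss_fn(recon, seg)                      # reconstruction loss
            loss.backward()
            opt.step()
    return models, segments

def extract_features(models, segments):
    """Concatenate per-segment encodings into the reduced feature vector."""
    with torch.no_grad():
        codes = [m.encoder(seg) for m, seg in zip(models, segments)]
    return torch.cat(codes, dim=1)       # (n_samples, n_segments * hidden_dim)

if __name__ == "__main__":
    # Synthetic data standing in for hyperspectral pixel spectra.
    pixels = torch.rand(200, 100)                  # 200 pixels, 100 bands (assumed)
    models, segments = train_segmented_ae(pixels)
    features = extract_features(models, segments)
    print(features.shape)                          # torch.Size([200, 20])
```

The reduced feature matrix produced this way would then feed a downstream classifier; the per-segment reduction keeps each autoencoder's input dimensionality small, which is the source of the reduced complexity described in the abstract.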

ORCID iDs

Zabalza, Jaime (ORCID: https://orcid.org/0000-0002-0634-1725), Ren, Jinchang (ORCID: https://orcid.org/0000-0001-6116-3194), Zheng, Jiangbin, Zhao, Huimin, Qing, Chunmei, Yang, Zhijing, Du, Peijun and Marshall, Stephen (ORCID: https://orcid.org/0000-0001-7079-5628)