Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

Zabalza, Jaime and Ren, Jinchang and Zheng, Jiangbin and Zhao, Huimin and Qing, Chunmei and Yang, Zhijing and Du, Peijun and Marshall, Stephen (2016) Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing, 185. pp. 1-10. ISSN 0925-2312

Text: Zabalza_etal_Neurocomputing_2016_Novel_segmented_stacked_autoencoder_for_effective_dimensionality_reduction.pdf
Accepted Author Manuscript
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0


    Abstract

    Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, the complexity of the process increases, leading to limited abstraction and performance. As such, a segmented SAE (S-SAE) is proposed, in which the original features are partitioned into smaller data segments that are separately processed by different, smaller SAEs. This results in reduced complexity yet improved efficacy of data abstraction and accuracy of data classification.
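    The segmentation idea described above can be sketched as follows. This is a minimal, illustrative NumPy implementation, not the paper's exact method: it uses single-layer linear autoencoders with tied weights trained by plain gradient descent, and the segment count, hidden size, learning rate, and toy data dimensions are all assumptions chosen for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def train_autoencoder(X, n_hidden, epochs=300, lr=0.05):
        """Train a single-layer linear autoencoder with tied weights
        (encoder W, decoder W.T) by batch gradient descent on the
        reconstruction loss ||X W W^T - X||^2. Illustrative only."""
        n_features = X.shape[1]
        W = rng.normal(scale=0.1, size=(n_features, n_hidden))
        for _ in range(epochs):
            H = X @ W              # encode: hidden representation
            E = H @ W.T - X        # reconstruction error
            grad = 2.0 * (X.T @ E @ W + E.T @ X @ W) / len(X)
            W -= lr * grad
        return W

    def s_sae_features(X, n_segments=4, n_hidden=3):
        """S-SAE idea in miniature: split the spectral bands into
        contiguous segments, train one small autoencoder per segment,
        and concatenate the per-segment hidden codes as the reduced
        feature vector."""
        segments = np.array_split(np.arange(X.shape[1]), n_segments)
        codes = []
        for idx in segments:
            W = train_autoencoder(X[:, idx], n_hidden)
            codes.append(X[:, idx] @ W)   # hidden code for this segment
        return np.hstack(codes)

    # Toy "hyperspectral" data: 100 pixels, each with 40 spectral bands.
    X = rng.normal(size=(100, 40))
    Z = s_sae_features(X)
    print(Z.shape)   # (100, 12): n_segments * n_hidden features per pixel
    ```

    The key point is that each small autoencoder sees only 10 input features rather than all 40, which mirrors how S-SAE reduces the burden on hidden nodes relative to feeding the full hypercube spectrum into a single SAE.
    
    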