Keys to accurate feature extraction using residual spiking neural networks

Vicente-Sola, Alex and Manna, Davide L and Kirkland, Paul and Di Caterina, Gaetano and Bihl, Trevor (2022) Keys to accurate feature extraction using residual spiking neural networks. Neuromorphic Computing and Engineering, 2 (4). 044001. ISSN 2634-4386

Text. Filename: Vicente_Sola_etal_NCE_2022_Keys_to_accurate_feature_extraction_using_residual_spiking_neural_networks.pdf
Final Published Version (2MB)
License: Creative Commons Attribution 4.0


Spiking neural networks (SNNs) have become an interesting alternative to conventional artificial neural networks (ANNs) thanks to their temporal processing capabilities and energy-efficient implementations in neuromorphic hardware. However, the challenges involved in training SNNs have limited their performance in terms of accuracy, and thus their applications. Improving learning algorithms and neural architectures for more accurate feature extraction is therefore one of the current priorities in SNN research. In this paper we present a study of the key components of modern spiking architectures. We design a spiking version of the successful residual network architecture and provide an in-depth study of the possible implementations of spiking residual connections. This study shows how the optimal residual connection implementation may vary depending on the use case. Additionally, we empirically compare different techniques taken from the best-performing networks on image classification datasets. Our results provide a state-of-the-art guide to SNN design, which enables informed choices when building an optimal visual feature extractor. Finally, our network outperforms previous SNN architectures on the CIFAR-10 (94.14%) and CIFAR-100 (74.65%) datasets and matches the state of the art on DVS-CIFAR10 (72.98%), with fewer parameters than the previous state of the art and without the need for ANN-SNN conversion.
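To illustrate the design choice the abstract refers to, the sketch below contrasts two generic ways a residual shortcut can be wired into a spiking block: adding the shortcut to the membrane potential before the spiking activation (so the output remains binary) versus adding spike trains after the activation (so outputs can exceed one spike). This is a minimal toy example with a linear layer and a Heaviside activation; the function names and layer choice are illustrative assumptions, not the architecture evaluated in the paper.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Heaviside spiking activation: emit a spike (1.0) where the
    membrane potential reaches the threshold, else 0.0."""
    return (v >= threshold).astype(np.float32)

def residual_block_membrane(x, w):
    """Variant A (illustrative): add the shortcut to the membrane
    potential *before* the spiking activation. Output stays binary."""
    return spike(x @ w + x)

def residual_block_spiking(x, w):
    """Variant B (illustrative): add the shortcut spike train *after*
    the spiking activation. Output values can reach 2 (two summed spikes)."""
    return spike(x @ w) + x

rng = np.random.default_rng(0)
x = spike(rng.normal(size=(4, 8)))      # toy binary input spike pattern
w = rng.normal(scale=0.5, size=(8, 8))  # toy weight matrix

a = residual_block_membrane(x, w)
b = residual_block_spiking(x, w)
assert set(np.unique(a)) <= {0.0, 1.0}  # Variant A output is binary
assert float(b.max()) <= 2.0            # Variant B output is at most 2
```

Which variant is preferable depends on the use case, as the study above argues: keeping the output binary preserves the event-driven semantics of the network, while summing spike trains preserves the identity mapping of the shortcut at the cost of non-binary activations.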