Perception understanding action: adding understanding to the perception action cycle with spiking segmentation

Kirkland, Paul and Di Caterina, Gaetano and Soraghan, John and Matich, George (2020) Perception understanding action: adding understanding to the perception action cycle with spiking segmentation. Frontiers in Neurorobotics, 14. 568319. ISSN 1662-5218 (https://doi.org/10.3389/fnbot.2020.568319)

Final Published Version: Kirkland_etal_FN_2020_adding_understanding_to_the_perception_action_cycle_with_spiking_segmentation.pdf (PDF, 7MB). License: Creative Commons Attribution 4.0.

Abstract

Traditionally, the Perception Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low latency reactive system within a low Size, Weight and Power (SWaP) package. However, within complex scenarios, this method can lack contextual understanding of the scene, such as object recognition-based tracking or system attention. Object detection, identification and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which Convolutional Neural Networks (CNNs) have shown significant success, although such networks often have large computational overheads and power requirements, which are not ideal for smaller robotics tasks. Furthermore, cloud computing and massively parallel processing, as in Graphics Processing Units (GPUs), fall outside the specification of many tasks due to their respective latency and SWaP constraints. In response to this, Spiking Convolutional Neural Networks (SCNNs) aim to provide the feature extraction benefits of CNNs while maintaining low latency and power overheads thanks to their asynchronous, spiking, event-based processing. A novel Neuromorphic Perception Understanding Action (PUA) system is presented that combines the feature extraction benefits of CNNs with the low latency processing of SCNNs. The PUA utilizes a Neuromorphic Vision Sensor for Perception, which facilitates asynchronous processing within a Spiking fully Convolutional Neural Network (SpikeCNN) to provide semantic segmentation and Understanding of the scene. The output is fed to a spiking control system providing Actions. With this approach, the aim is to bring features of deep learning into the lower levels of autonomous robotics, while maintaining a biologically plausible spike-timing-dependent plasticity (STDP) rule throughout the learned encoding part of the network. The network is shown to provide a more robust and predictable management of spiking activity with an improved thresholding response. The reported experiments show that this system can deliver robust results of over 96% accuracy and 81% Intersection over Union, ensuring such a system can be successfully used within object recognition, classification and tracking problems. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise track updates with minimal latency.
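The abstract names two mechanisms that the SpikeCNN relies on: an STDP rule for learning the encoding layers, and neurons that fire once a membrane potential crosses a threshold. As a rough, hedged illustration of those general mechanisms only (not the paper's formulation; every constant, function name and parameter value below is an assumption made for the example), a classic pair-based STDP update and a leaky integrate-and-fire (LIF) step can be sketched in Python as follows:

    import numpy as np

    # Illustrative constants (assumed, not taken from the paper).
    A_PLUS = 0.01      # potentiation amplitude
    A_MINUS = 0.012    # depression amplitude
    TAU_STDP = 20.0    # STDP time constant, ms

    def stdp_dw(t_pre, t_post):
        """Weight change for a single pre/post spike pair.

        Pre-before-post (dt > 0) strengthens the synapse; the reverse
        ordering weakens it, each decaying exponentially with |dt|.
        """
        dt = t_post - t_pre
        if dt >= 0:
            return A_PLUS * np.exp(-dt / TAU_STDP)
        return -A_MINUS * np.exp(dt / TAU_STDP)

    def lif_step(v, i_in, v_thresh=1.0, tau_m=10.0, dt=1.0):
        """One Euler step of a LIF neuron; returns (new_v, spiked).

        The membrane potential v leaks toward 0, integrates the input
        current i_in, and resets after crossing the firing threshold.
        """
        v = v + dt * (-v / tau_m + i_in)
        if v >= v_thresh:
            return 0.0, True   # reset on spike
        return v, False

    # Example: a pre-spike at t=5 ms followed by a post-spike at t=8 ms
    # yields a positive weight change (potentiation).
    print(stdp_dw(5.0, 8.0))   # ~ +0.0086

In an event-driven pipeline like the one described, updates of this kind would be applied only when spikes actually occur, which is what keeps the latency and power overheads low relative to frame-based CNN processing.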

ORCID iDs

Kirkland, Paul: https://orcid.org/0000-0001-5905-6816; Di Caterina, Gaetano: https://orcid.org/0000-0002-7256-0897; Soraghan, John: https://orcid.org/0000-0003-4418-7391; Matich, George.