Dense convolutional networks for efficient video analysis

Jin, Tian and He, Zhihao and Basu, Amlan and Soraghan, John and Di Caterina, Gaetano and Petropoulakis, Lykourgos; (2019) Dense convolutional networks for efficient video analysis. In: 2019 The 5th International Conference on Control, Automation and Robotics (ICCAR 2019). IEEE, CHN, pp. 550-554. ISBN 9781728133263 (https://doi.org/10.1109/ICCAR.2019.8813408)

Abstract

Over the past few years, various Convolutional Neural Network (CNN) based models have exhibited human-like performance on a range of image processing problems. Video understanding, action classification, and gesture recognition have become a new stage for CNNs. The typical approach to video analysis uses a 2D CNN to extract a feature map from each frame and a 3D CNN or LSTM to merge the spatiotemporal information; some approaches add optical flow on a separate branch, followed by post-hoc fusion. Performance is normally proportional to model complexity, so as accuracy keeps improving, the problem has evolved from accuracy alone to model size, computing speed, and model availability. In this paper, we present a lightweight network architecture framework to learn spatiotemporal features from video. Our architecture merges long-term content into any network feature map, keeping the model as small and as fast as possible while maintaining accuracy. The accuracy achieved is 91.4%, along with an appreciable speed of 69.3 fps.
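
The baseline pipeline the abstract contrasts against (a 2D CNN extracting per-frame features, merged over time by an LSTM) can be illustrated with a minimal PyTorch sketch. This is not the authors' network; the module names (FrameEncoder, CNNLSTMClassifier) and all dimensions are hypothetical, chosen only to show the structure described.

# Minimal sketch (not the authors' code) of the typical 2D-CNN + LSTM
# video pipeline described in the abstract. All names are hypothetical.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Tiny 2D CNN applied independently to each video frame."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling: one vector per frame
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):              # x: (batch, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class CNNLSTMClassifier(nn.Module):
    """2D CNN per frame, then an LSTM to merge spatiotemporal information."""
    def __init__(self, num_classes=10, feat_dim=128, hidden=256):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clip):           # clip: (batch, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)   # h[-1] summarises the whole clip
        return self.head(h[-1])

logits = CNNLSTMClassifier()(torch.randn(2, 16, 3, 112, 112))  # 2 clips, 16 frames
print(logits.shape)                    # torch.Size([2, 10])

Note that every frame passes through the full 2D backbone before temporal fusion, which is the cost the paper's lightweight architecture aims to reduce.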

ORCID iDs

Jin, Tian; He, Zhihao; Basu, Amlan (ORCID: https://orcid.org/0000-0002-0180-8090); Soraghan, John (ORCID: https://orcid.org/0000-0003-4418-7391); Di Caterina, Gaetano (ORCID: https://orcid.org/0000-0002-7256-0897); Petropoulakis, Lykourgos (ORCID: https://orcid.org/0000-0003-3230-9670).