Dense convolutional networks for efficient video analysis

Jin, Tian and He, Zhihao and Basu, Amlan and Soraghan, John and Di Caterina, Gaetano and Petropoulakis, Lykourgos; (2019) Dense convolutional networks for efficient video analysis. In: 2019 The 5th International Conference on Control, Automation and Robotics (ICCAR 2019). IEEE, CHN. (In Press)

Accepted Author Manuscript


    Over the past few years, various Convolutional Neural Network (CNN) based models have exhibited near human-level performance on a range of image processing problems. Video understanding, action classification and gesture recognition have become a new arena for CNNs. The typical approach to video analysis uses a 2D CNN to extract a feature map from each frame and then merges spatiotemporal information through a 3D CNN or LSTM; some approaches add optical flow in a second branch followed by post-hoc fusion. Performance is normally proportional to model complexity, so as accuracy keeps improving, the problem has evolved from accuracy alone to model size, computing speed and model availability. In this paper, we present a lightweight network architecture framework for learning spatiotemporal features from video. Our architecture merges long-term content into any network feature map, keeping the model as small and as fast as possible while maintaining accuracy. The accuracy achieved is 91.4%, together with an appreciable speed of 69.3 fps.
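    The two-stage pipeline described in the abstract (per-frame 2D feature extraction followed by temporal merging) can be sketched roughly as below. This is only a schematic illustration of the general approach, not the paper's actual architecture; `extract_frame_features` and `merge_temporal` are hypothetical stand-ins (a spatial average in place of a 2D CNN, a temporal average in place of a 3D CNN or LSTM).

    ```python
    import numpy as np

    def extract_frame_features(frame):
        # Stand-in for a 2D CNN backbone: collapse the spatial
        # dimensions (H, W) to a per-frame channel descriptor.
        return frame.mean(axis=(0, 1))

    def merge_temporal(per_frame_features):
        # Stand-in for 3D-CNN/LSTM fusion: average the per-frame
        # descriptors across the time axis into one clip descriptor.
        return np.stack(per_frame_features).mean(axis=0)

    # A dummy clip: 16 frames of 112x112 RGB.
    video = np.random.rand(16, 112, 112, 3)
    per_frame = [extract_frame_features(f) for f in video]
    clip_descriptor = merge_temporal(per_frame)  # shape (3,)
    ```

    A real pipeline would replace both stand-ins with learned layers, but the data flow (frames -> per-frame features -> fused spatiotemporal descriptor) is the same.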
