A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos

Wang, Zheng and Ren, Jinchang and Zhang, Dong and Sun, Meijun and Jiang, Jianmin (2018) A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos. Neurocomputing, 287. pp. 68-83. ISSN 0925-2312 (https://doi.org/10.1016/j.neucom.2018.01.076)

Text. Filename: Wang_etal_Neurocomputing_2018_A_deep_learning_based_feature_hybrid_framework_for_spatiotemporal.pdf
Accepted Author Manuscript
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0


Abstract

Although research on saliency and visual attention detection has been active in recent years, most existing work focuses on still images rather than video-based saliency. In this paper, a deep-learning-based hybrid spatiotemporal saliency feature extraction framework is proposed for saliency detection in video footage. The deep learning model is used to extract high-level features from raw video data, which are then integrated with other high-level features. The deep learning network has been found considerably more effective at extracting hidden features than conventional handcrafted methodologies. This work demonstrates the effectiveness of using hybrid high-level features for saliency detection in video. Rather than operating on a single static image, the proposed deep learning model takes several consecutive frames as input, so that both spatial and temporal characteristics are considered when computing saliency maps. The efficacy of the proposed hybrid feature framework is evaluated on five databases of complex scenes with human-gaze annotations. Experimental results show that the proposed model outperforms five other state-of-the-art video saliency detection approaches. In addition, the proposed framework proves useful for other video-content-based applications such as video highlight detection; as a result, a large movie-clip dataset with labeled video highlights has been generated.
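
The full architecture and feature-integration scheme are specified in the paper itself; purely as an illustration of the multi-frame input idea described in the abstract, the following PyTorch sketch shows a small network that consumes a short stack of consecutive frames and emits a per-pixel saliency map. Every layer choice, size, and the use of 3D convolutions here are assumptions for demonstration, not the authors' design, and the hybrid integration with other high-level features is omitted.

import torch
import torch.nn as nn

class SpatiotemporalSaliencyNet(nn.Module):
    """Toy network (not the paper's model): 3D convolutions mix
    information across a short clip of consecutive frames, then a
    2D head predicts a single-frame saliency map."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        # 3D convs operate jointly over the spatial and temporal axes.
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Collapse the temporal axis, then predict one saliency channel.
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip)
        x = x.mean(dim=2)                     # average over the temporal axis
        return torch.sigmoid(self.head(x))    # per-pixel saliency in [0, 1]

# Usage: a batch of two 5-frame RGB clips at 64x64 resolution.
clip = torch.randn(2, 3, 5, 64, 64)
saliency = SpatiotemporalSaliencyNet()(clip)
print(saliency.shape)  # torch.Size([2, 1, 64, 64])

The key point the sketch captures is that the input is a clip rather than a single image, so temporal structure (e.g. motion) can influence the predicted saliency map.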