
Ranking highlight level of movie clips: a template based adaptive kernel SVM method

Wang, Zheng and Ren, Gaojun and Sun, Meijun and Ren, Jinchang and Jin, Jesse S. (2015) Ranking highlight level of movie clips: a template based adaptive kernel SVM method. Journal of Visual Languages and Computing, 27. pp. 49-59. ISSN 1045-926X

Wang_etal_JVLC_2015_Ranking_highlight_level_of_movie_clips_a_template_based_adaptive.pdf - Accepted Author Manuscript (1MB). License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0.

Abstract

This paper looks into a new direction in movie clip analysis: model-based ranking of highlight level. A movie clip, containing a short story, is composed of several continuous shots and is therefore much simpler than the whole movie. As a result, clip-based analysis provides a feasible way to analyse and interpret movies. In this paper, clip-based ranking of highlight level is proposed, which avoids the challenging problem of detecting and recognizing events within clips. Owing to the lack of publicly available datasets, we first construct a database of movie clips in which each clip is associated with a manually derived highlight level as ground truth. From each clip a number of effective visual cues are then extracted. To bridge the gap between low-level features and highlight-level semantics, a holistic highlight ranking model is introduced. According to the distance between testing clips and selected templates, an appropriate kernel function for the support vector machine (SVM) is adaptively selected. Promising results are reported in automatic ranking of movie highlight levels.
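The core idea of the adaptive kernel SVM step can be illustrated with a minimal sketch. The rule below — a simple nearest-template distance threshold choosing between a linear and an RBF kernel — is an assumption for illustration only, not the paper's exact selection criterion; the function name `select_kernel`, the threshold value, and the toy feature vectors are all hypothetical.

```python
import numpy as np

def select_kernel(clip_features, templates, threshold=1.0):
    """Hypothetical adaptive kernel selection (illustrative only,
    not the paper's exact rule): pick an SVM kernel based on the
    clip's Euclidean distance to the nearest template."""
    # Distance from the clip's feature vector to every template
    dists = np.linalg.norm(templates - clip_features, axis=1)
    nearest = dists.min()
    # Near a template: a simple linear kernel may suffice;
    # far from all templates: fall back to a more flexible RBF kernel.
    kernel = "linear" if nearest < threshold else "rbf"
    return kernel, nearest

# Toy example: two template feature vectors and one test clip
templates = np.array([[0.2, 0.5], [0.8, 0.1]])
clip = np.array([0.25, 0.45])
kernel, dist = select_kernel(clip, templates)
print(kernel)  # "linear", since this clip lies close to the first template
```

The selected kernel name would then be passed to an SVM trainer (e.g. scikit-learn's `SVC(kernel=...)`) to rank highlight levels.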