Cognitive fusion of thermal and visible imagery for effective detection and tracking of pedestrians in videos
Yan, Yijun and Ren, Jinchang and Zhao, Huimin and Sun, Genyun and Wang, Zheng and Zheng, Jiangbin and Marshall, Stephen and Soraghan, John (2017) Cognitive fusion of thermal and visible imagery for effective detection and tracking of pedestrians in videos. Cognitive Computation. ISSN 1866-9964 (https://doi.org/10.1007/s12559-017-9529-6)
Abstract
Background: In this paper, we present an efficient framework to cognitively detect and track salient objects from videos. In general, the colored visible image in red-green-blue (RGB) offers better distinguishability in human visual perception, yet it suffers from illumination noise and shadows. In contrast, the thermal image is less sensitive to these noise effects, though its distinguishability varies with environmental settings. Cognitive fusion of these two modalities therefore provides an effective solution to this problem. Methods: First, a background model is extracted, followed by a two-stage background subtraction for foreground detection in the visible and thermal images. To handle occlusion or overlap, knowledge-based forward and backward tracking is employed to identify separate objects even when foreground detection fails. Results: To evaluate the proposed method, the publicly available color-thermal benchmark dataset OTCBVS is employed. For foreground detection, objective and subjective analyses against several state-of-the-art methods have been carried out on our manually segmented ground truth. For object tracking, comprehensive qualitative experiments have also been conducted on all video sequences. Conclusions: The promising results show that the proposed fusion-based approach can successfully detect and track multiple human objects in most scenes, regardless of illumination changes or occlusion.
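The core fusion idea described in the abstract — detect foreground independently in each modality, then combine the results — can be illustrated with a minimal sketch. Note this is an illustrative assumption, not the paper's actual two-stage algorithm: the simple per-pixel threshold, the fixed background model, and the logical-OR fusion rule here are all placeholders for the knowledge-based pipeline the authors describe.

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    # Naive background subtraction (illustrative only): a pixel is
    # foreground if it deviates from the background model by more
    # than `threshold` intensity levels.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def fuse_masks(mask_visible, mask_thermal):
    # Simplest possible fusion rule (an assumption, not the paper's
    # method): flag a pixel as foreground if either modality does.
    return mask_visible | mask_thermal

# Toy 8x8 grayscale frames: flat background plus a bright "pedestrian".
bg = np.full((8, 8), 50, dtype=np.uint8)
visible = bg.copy()
visible[2:6, 3:5] = 200   # object seen in the visible channel
thermal = bg.copy()
thermal[2:7, 3:5] = 220   # warmer and slightly larger in the thermal channel

fused = fuse_masks(foreground_mask(visible, bg),
                   foreground_mask(thermal, bg))
print(int(fused.sum()))   # count of fused foreground pixels
```

Here the thermal mask recovers rows the visible mask misses, so the fused detection covers the full object — the complementarity the abstract argues for.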
ORCID iDs
Yan, Yijun ORCID: https://orcid.org/0000-0003-0224-0078, Ren, Jinchang ORCID: https://orcid.org/0000-0001-6116-3194, Zhao, Huimin, Sun, Genyun, Wang, Zheng, Zheng, Jiangbin, Marshall, Stephen ORCID: https://orcid.org/0000-0001-7079-5628 and Soraghan, John ORCID: https://orcid.org/0000-0003-4418-7391
Item type: Article
ID code: 62462
Dates: Published: 4 December 2017; Published Online: 4 December 2017; Accepted: 29 October 2017
Subjects: Science > Mathematics > Electronic computers. Computer science
Department: Faculty of Engineering > Electronic and Electrical Engineering; Technology and Innovation Centre > Sensors and Asset Management; Strategic Research Themes > Measurement Science and Enabling Technologies
Depositing user: Pure Administrator
Date deposited: 29 Nov 2017 13:45
Last modified: 11 Nov 2024 11:49
URI: https://strathprints.strath.ac.uk/id/eprint/62462