
Depth estimation and implementation on the DM6437 for panning surveillance cameras

Asif, M. and Soraghan, J.J. (2009) Depth estimation and implementation on the DM6437 for panning surveillance cameras. In: 16th International Conference on Digital Signal Processing, 2009-07-05 - 2009-07-07.

Full text not available in this repository.

Abstract

Real-time, efficient video analytics (VA) for surveillance systems requires 'in-camera' or 'at the camera edge' decision-making. Video analytics must operate on good-resolution footage in order to register small but important details that may otherwise be lost during the subsequent compression process, especially in streams that exploit temporal redundancy. This paper presents a novel approach to estimating the 3D location coordinates of selected control points from panning-camera footage. Two frames captured at different panning angles are treated as images taken from two coplanar viewpoints separated by a translational distance between their optical centers. The only information required to simulate the panned images as coplanar is the panning angle; using the panning-angle transformation matrix, the two image planes are computed. The approach developed here helps extract motion descriptors, including but not limited to motion activity and the direction of motion activity. The performance of the algorithm is evaluated on several test sequences, and the paper also discusses implementation issues on the TI DaVinci platform in detail.
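
As a rough illustration of the idea summarised in the abstract (not the authors' actual implementation, whose details appear only in the full paper), the sketch below warps a frame taken at a known panning angle onto the reference frame's image plane using the homography induced by the rotation, then estimates depth for matched control points from the residual horizontal disparity via the standard coplanar-stereo relation Z = f·B/d. The camera intrinsics, panning angle, effective baseline, point correspondences, and all function names are assumed, illustrative values.

```python
# Hypothetical sketch: depth from a panning camera using the known panning angle.
# Assumes a pinhole camera with known intrinsics; all numbers are illustrative.
import numpy as np

def rotation_homography(K, pan_angle_rad):
    """Homography induced by a pure rotation about the camera's vertical (y) axis.
    Warping the panned frame with H makes its image plane (approximately)
    coplanar with the reference frame's plane, as described in the abstract."""
    c, s = np.cos(pan_angle_rad), np.sin(pan_angle_rad)
    R_y = np.array([[  c, 0.0,   s],
                    [0.0, 1.0, 0.0],
                    [ -s, 0.0,   c]])
    return K @ R_y @ np.linalg.inv(K)

def rectify_points(H, pts):
    """Apply the homography to pixel coordinates (N x 2 array)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]               # back to pixel coordinates

def depth_from_disparity(x_ref, x_rect, focal_px, baseline_m):
    """Coplanar-stereo relation Z = f * B / d for each matched control point."""
    disparity = x_ref[:, 0] - x_rect[:, 0]              # horizontal shift in pixels
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    return focal_px * baseline_m / disparity

if __name__ == "__main__":
    # Illustrative intrinsics and geometry (not taken from the paper).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    pan_angle = np.deg2rad(5.0)   # panning angle between the two frames
    baseline = 0.05               # assumed translation between optical centers (m)

    # Hypothetical matched control points in the reference and panned frames.
    pts_ref = np.array([[350.0, 240.0], [400.0, 200.0]])
    pts_pan = np.array([[310.0, 240.0], [365.0, 200.0]])

    H = rotation_homography(K, pan_angle)
    pts_rect = rectify_points(H, pts_pan)
    print(depth_from_disparity(pts_ref, pts_rect, K[0, 0], baseline))
```

In the paper the effective baseline arises from the camera's pan geometry (the rotation axis being offset from the optical center); here it is simply an assumed constant to keep the sketch self-contained.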