Objective grading of facial paralysis using local binary patterns in video processing

He, Shu and Soraghan, J.J. and O'Reilly, Brian F. (2008) Objective grading of facial paralysis using local binary patterns in video processing. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008. IEEE, pp. 4805-4808. ISBN 978-1-4244-1814-5

Full text not available in this repository.

Abstract

This paper presents a novel framework for the objective measurement of facial paralysis in biomedical videos. Motion information in the horizontal and vertical directions and appearance features on the apex frames are extracted using Local Binary Patterns (LBP) in the spatio-temporal domain of each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine micro-patterns and large-scale patterns into a single feature vector, which increases robustness and reduces the effects of noise while retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide a quantitative evaluation of facial paralysis on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments on 197 subject videos, which demonstrate its accuracy and efficiency.
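
The symmetry measure at the core of the method can be sketched with standard libraries. The snippet below is a minimal illustration, not the authors' implementation: it computes uniform-LBP histograms for the left and mirrored right halves of a single face image (via scikit-image), compares them with the Resistor-Average Distance built from the two directed Kullback-Leibler divergences, and passes the per-region distances to an SVM (scikit-learn) standing in for the House-Brackmann grading stage. All function names, parameters (P=8, R=1, four facial bands), and the synthetic training data are assumptions for illustration; the paper's spatio-temporal LBP over video volumes, block schemes, and multi-resolution extension are not reproduced here.

```python
# Sketch of the symmetry measure: uniform-LBP histograms per half-face band,
# compared by the Resistor-Average Distance (RAD), then graded with an SVM.
# Requires scikit-image and scikit-learn; all names/values are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(region, P=8, R=1):
    """Normalized uniform-LBP histogram of a grayscale region (P+2 bins)."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist + 1e-10  # avoid zero bins inside the KL divergence

def kl_divergence(p, q):
    """Directed Kullback-Leibler divergence D(p || q)."""
    return float(np.sum(p * np.log(p / q)))

def resistor_average_distance(p, q):
    """RAD combines the two directed KL divergences like parallel resistors:
    1/RAD = 1/D(p||q) + 1/D(q||p)."""
    d_pq, d_qp = kl_divergence(p, q), kl_divergence(q, p)
    return (d_pq * d_qp) / (d_pq + d_qp + 1e-12)

def symmetry_features(face, n_bands=4):
    """One RAD value per horizontal band, left half vs. mirrored right half."""
    h, w = face.shape
    left = face[:, : w // 2]
    right = np.fliplr(face[:, w - w // 2 :])  # mirror to align anatomically
    feats = []
    for b in range(n_bands):
        rows = slice(b * h // n_bands, (b + 1) * h // n_bands)
        feats.append(resistor_average_distance(
            lbp_histogram(left[rows]), lbp_histogram(right[rows])))
    return np.array(feats)

# Toy usage: synthetic faces and random H-B grades (1-6), purely illustrative.
rng = np.random.default_rng(0)
faces = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(30)]
X = np.array([symmetry_features(f) for f in faces])
y = rng.integers(1, 7, size=30)          # stand-in House-Brackmann labels
grader = SVC(kernel="rbf").fit(X, y)     # SVM stage of the pipeline
print(grader.predict(X[:3]))
```

In the paper the feature vector also carries motion information from LBP computed over the video volume; the sketch uses only static appearance on one frame to keep the RAD-plus-SVM structure visible.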