Quantitative analysis of facial paralysis using local binary patterns in biomedical videos

He, Shu and Soraghan, J.J. and O'Reilly, Brian and Xing, D. (2009) Quantitative analysis of facial paralysis using local binary patterns in biomedical videos. IEEE Transactions on Biomedical Engineering, 56 (7). pp. 1864-1870. ISSN 0018-9294

Full text not available in this repository.

Abstract

Facial paralysis is the loss of voluntary muscle movement on one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for the objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on local binary patterns (LBPs) in the temporal-spatial domain of each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. A support vector machine is applied to provide a quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrate its accuracy and efficiency.
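To illustrate the asymmetry measure described in the abstract, the following Python sketch (not the authors' implementation) computes a uniform-LBP histogram for a single facial region and the resistor-average distance between two such histograms. It assumes scikit-image's local_binary_pattern and scikit-learn's SVC; the temporal block processing, the multiresolution LBP extension, and the H-B training data from the paper are omitted, and the function names below are hypothetical.

import numpy as np
from skimage.feature import local_binary_pattern  # uniform LBP operator
from sklearn.svm import SVC

def lbp_histogram(gray_region, P=8, R=1):
    """Uniform-LBP histogram of one grayscale facial region (single frame)."""
    codes = local_binary_pattern(gray_region, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform codes plus one bin for non-uniform patterns
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist + 1e-12  # avoid zero bins before taking logarithms

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete histograms."""
    return float(np.sum(p * np.log(p / q)))

def resistor_average_distance(p, q):
    """RAD(p, q): the two KL divergences combined like parallel resistors,
    1/RAD = 1/D(p||q) + 1/D(q||p)."""
    d_pq, d_qp = kl_divergence(p, q), kl_divergence(q, p)
    if d_pq + d_qp == 0.0:  # identical histograms
        return 0.0
    return (d_pq * d_qp) / (d_pq + d_qp)

# Hypothetical usage: compare LBP features of mirrored left/right half-faces,
# then feed per-region RAD scores to an SVM graded on the H-B scale.
# left_half, right_half = ...  # aligned grayscale regions from the apex frame
# rad = resistor_average_distance(lbp_histogram(left_half),
#                                 lbp_histogram(np.fliplr(right_half)))
# clf = SVC()  # would be trained on vectors of RAD scores with H-B labels

In this sketch a larger RAD indicates greater asymmetry between the two sides of the face, which is the quantity the classifier maps onto the H-B grades.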