Sleep apnea detection via depth video and audio feature learning

Yang, Cheng and Cheung, Gene and Stankovic, Vladimir and Chan, Kevin and Ono, Nobutaka (2017) Sleep apnea detection via depth video and audio feature learning. IEEE Transactions on Multimedia, 19 (4). pp. 822-835. ISSN 1520-9210 (https://doi.org/10.1109/TMM.2016.2626969)

Accepted Author Manuscript (PDF): Yang_etal_TM_2016_sleep_apnea_detection_via_depth_video.pdf


Abstract

Obstructive sleep apnea, characterized by repetitive obstruction of the upper airway during sleep, is a common sleep disorder that can significantly compromise sleep quality and, more generally, quality of life. The obstructive respiratory events can be detected by attended in-laboratory or unattended ambulatory sleep studies. Such studies require many attachments to the patient’s body to track respiratory and physiological changes, which can be uncomfortable and may themselves compromise the patient’s sleep quality. In this paper, we propose to record depth video and audio of a sleeping patient using a Microsoft Kinect camera, and to extract relevant features that correlate with obstructive respiratory events scored manually by a scientific officer from data collected by the Philips Alice6 LDxS system commonly used in sleep clinics. Specifically, we first propose an alternating-frame video recording scheme, in which a different 8 of the 11 available bits in the captured depth images are extracted at different instants for H.264 video encoding. At the decoder, the three uncoded bits in each frame are recovered via block-based search. Next, we perform temporal denoising on the decoded depth video using a motion-vector graph smoothness prior, so that undesirable flickering is removed without blurring sharp edges. Given the denoised depth video, we track the patient’s chest and abdominal movements using a dual-ellipse model. Finally, we extract ellipse-model features via a wavelet packet transform (WPT), extract audio features via non-negative matrix factorization (NMF), and feed both to a classifier to detect respiratory events. Experimental results show, first, that our depth video compression scheme outperforms a competitor that records only the 8 most significant bits; second, that our graph-based temporal denoising scheme reduces flickering without over-smoothing; and third, that classifiers trained on our depth video and audio features detect the manually scored respiratory events with high accuracy.
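To make the alternating-frame recording scheme concrete, here is a minimal Python sketch of one plausible bit assignment: even frames keep the 8 most significant of the 11 depth bits and odd frames keep the 8 least significant, with the 3 missing bits filled in at the decoder from a reference frame. The exact bit windows, and the co-located fill-in standing in for the paper's block-based search, are illustrative assumptions.

```python
import numpy as np

def pack_8_of_11(depth, even_frame):
    """Keep 8 of the 11 depth bits; which 8 depends on frame parity.
    Assumed split: even frames keep bits 10..3, odd frames bits 7..0."""
    depth = depth.astype(np.uint16) & 0x7FF          # 11 valid depth bits
    if even_frame:
        return (depth >> 3).astype(np.uint8)         # bits 10..3 (MSBs)
    return (depth & 0xFF).astype(np.uint8)           # bits 7..0 (LSBs)

def unpack_with_reference(coded, even_frame, ref_full):
    """Restore the 3 uncoded bits by borrowing them from a reference frame.
    (The paper recovers them via block-based search; this sketch simply
    takes the co-located bits as a stand-in.)"""
    coded = coded.astype(np.uint16)
    ref = ref_full.astype(np.uint16) & 0x7FF
    if even_frame:                                   # missing bits 2..0
        return (coded << 3) | (ref & 0x7)
    return (ref & 0x700) | coded                     # missing bits 10..8
```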
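The temporal denoising step can be read as regularized least squares with a graph smoothness prior: minimize ||x - y||^2 + lam * x^T L x, whose closed-form minimizer is x = (I + lam*L)^{-1} y. The sketch below applies this to a single pixel's depth trajectory over a simple temporal path graph; the paper instead builds the graph from motion vectors, and lam here is a tuning assumption.

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian of a path over n consecutive frames."""
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    return L

def denoise_trajectory(y, lam=2.0):
    """Solve (I + lam*L) x = y, smoothing a noisy depth trajectory y."""
    n = len(y)
    return np.linalg.solve(np.eye(n) + lam * path_laplacian(n), y)

# Example: flickering depth readings for one pixel over 8 frames.
noisy = np.array([800, 812, 798, 805, 820, 799, 808, 802], dtype=float)
print(denoise_trajectory(noisy))
```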
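For the dual-ellipse model, a plausible building block is per-frame least-squares ellipse fitting on segmented chest and abdomen point sets, producing parameter traces to track breathing motion. The segmentation and the use of OpenCV's fitEllipse are assumptions here; the paper defines its own dual-ellipse tracking.

```python
import cv2
import numpy as np

def fit_region_ellipse(points):
    """points: (N, 2) float32 pixel coordinates of one region boundary
    (N >= 5).  Returns center, axis lengths, and rotation angle."""
    (cx, cy), (major, minor), angle = cv2.fitEllipse(points.astype(np.float32))
    return cx, cy, major, minor, angle

# Example with synthetic boundary points on an ellipse-like contour.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.stack([100 + 40 * np.cos(theta), 80 + 25 * np.sin(theta)], axis=1)
print(fit_region_ellipse(pts))
```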
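The WPT feature extraction might look like the following sketch: decompose a breathing-related trace (e.g., an ellipse parameter over time) into wavelet packet subbands with PyWavelets and take per-subband energies as features. The wavelet, decomposition level, and choice of trace are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt  # PyWavelets

def wpt_energy_features(trace, wavelet="db4", level=4):
    """Energy of each wavelet packet subband at the given level."""
    wp = pywt.WaveletPacket(data=trace, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

# Example: a synthetic 30 s breathing trace sampled at 30 fps.
t = np.arange(900) / 30.0
trace = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)
print(wpt_energy_features(trace).shape)  # (16,) subband energies
```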
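Similarly, a hedged sketch of the NMF audio features: factor a nonnegative magnitude spectrogram V ≈ WH, where W holds spectral templates and H their activations over time, and use H as classifier input. The spectrogram shape, component count, and initialization below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((257, 400)))  # stand-in magnitude spectrogram

model = NMF(n_components=8, init="nndsvda", max_iter=400, random_state=0)
W = model.fit_transform(V)   # (257, 8) spectral basis
H = model.components_        # (8, 400) per-frame activations
print(W.shape, H.shape)
```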