Sleep apnea detection via depth video and audio feature learning
Yang, Cheng and Cheung, Gene and Stankovic, Vladimir and Chan, Kevin and Ono, Nobutaka (2017) Sleep apnea detection via depth video and audio feature learning. IEEE Transactions on Multimedia, 19 (4). pp. 822-835. ISSN 1520-9210 (https://doi.org/10.1109/TMM.2016.2626969)
Accepted Author Manuscript: Yang_etal_TM_2016_sleep_apnea_detection_via_depth_video.pdf (1MB)
Abstract
Obstructive sleep apnea, characterized by repetitive obstruction of the upper airway during sleep, is a common sleep disorder that can significantly compromise sleep quality and quality of life in general. The obstructive respiratory events can be detected by attended in-laboratory or unattended ambulatory sleep studies. Such studies require many attachments to a patient's body to track respiratory and physiological changes, which can be uncomfortable and can themselves compromise the patient's sleep quality. In this paper, we propose to record depth video and audio of a patient during sleep using a Microsoft Kinect camera, and to extract relevant features that correlate with obstructive respiratory events scored manually by a scientific officer from data collected by the Philips Alice6 LDxS system commonly used in sleep clinics. Specifically, we first propose an alternating-frame video recording scheme, in which a different 8 of the 11 available bits in the captured depth images are extracted at different time instants for H.264 video encoding. At the decoder, the 3 uncoded bits in each frame are recovered via a block-based search. Next, we perform temporal denoising on the decoded depth video using a motion-vector graph smoothness prior, so that undesirable flickering is removed without blurring sharp edges. Given the denoised depth video, we track the patient's chest and abdominal movements using a dual-ellipse model. Finally, we extract ellipse-model features via a wavelet packet transform (WPT), extract audio features via non-negative matrix factorization (NMF), and feed them to a classifier to detect respiratory events. Experimental results show, first, that our depth video compression scheme outperforms a competitor that records only the 8 most significant bits. Second, our graph-based temporal denoising scheme reduces the flickering effect without over-smoothing. Third, using the extracted depth video and audio features, our trained classifiers detect, with high accuracy, the respiratory events scored manually from the Alice6 LDxS data.
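To make the alternating-frame recording scheme concrete, below is a minimal Python sketch of the encoder-side bit extraction. It assumes alternation between the 8 most significant and the 8 least significant of the 11 depth bits; the paper's actual bit-selection pattern, and the block-based decoder-side recovery of the 3 missing bits, are not reproduced here.

    import numpy as np

    def pack_depth_frame(depth11, frame_idx):
        """Extract 8 of the 11 depth bits for 8-bit H.264 encoding.

        Illustrative rule only: even frames keep the 8 most
        significant bits (bits 10..3), odd frames the 8 least
        significant bits (bits 7..0), so that over time every
        bit plane is observed.
        """
        depth11 = depth11.astype(np.uint16) & 0x7FF  # keep 11 bits
        if frame_idx % 2 == 0:
            return (depth11 >> 3).astype(np.uint8)   # bits 10..3
        return (depth11 & 0xFF).astype(np.uint8)     # bits 7..0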
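A graph smoothness prior of this kind leads to a standard quadratic graph-regularized denoising problem. The sketch below is a generic graph-Laplacian denoiser, not the paper's exact formulation; it assumes a precomputed Laplacian L whose edges connect motion-compensated pixels across frames.

    import numpy as np
    from scipy.sparse import identity, csr_matrix
    from scipy.sparse.linalg import spsolve

    def graph_denoise(y, L, lam=1.0):
        """Solve min_x ||x - y||^2 + lam * x^T L x, i.e. (I + lam*L) x = y.

        y   : observed (flickering) pixel intensities, any shape
        L   : graph Laplacian over the pixels (e.g., edges linking
              motion-compensated pixels in adjacent frames)
        lam : smoothness weight (illustrative default)
        """
        n = y.size
        A = identity(n, format='csr') + lam * csr_matrix(L)
        return spsolve(A, y.ravel()).reshape(y.shape)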
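For the ellipse-model features, a wavelet packet transform decomposes each breathing-signal window into frequency subbands whose energies can serve as features. A minimal sketch using PyWavelets, with 'db4' and a 4-level tree as illustrative choices (the paper's wavelet and decomposition depth may differ):

    import numpy as np
    import pywt

    def wpt_energy_features(window, wavelet='db4', level=4):
        """Subband energies of a wavelet packet decomposition.

        Returns one energy value per leaf node of the level-`level`
        tree (2**level values), ordered by frequency.
        """
        wp = pywt.WaveletPacket(data=window, wavelet=wavelet,
                                mode='symmetric', maxlevel=level)
        leaves = wp.get_level(level, order='freq')
        return np.array([np.sum(np.square(n.data)) for n in leaves])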
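On the audio side, NMF factorizes a magnitude spectrogram into spectral templates and temporal activations; the activations can then be fed to the classifier as per-frame features. A sketch using scikit-learn, with the component count as an illustrative parameter:

    import numpy as np
    from sklearn.decomposition import NMF

    def nmf_audio_features(mag_spec, n_components=8):
        """Factorize a magnitude spectrogram S (freq x time) as S ~ W @ H.

        W holds spectral templates (freq x n_components); the
        activations H (n_components x time) are the audio features.
        """
        model = NMF(n_components=n_components, init='nndsvda',
                    max_iter=500)
        W = model.fit_transform(mag_spec)  # spectral templates
        H = model.components_              # temporal activations
        return W, H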
ORCID iDs
Yang, Cheng (ORCID: https://orcid.org/0000-0002-3540-1598); Cheung, Gene; Stankovic, Vladimir (ORCID: https://orcid.org/0000-0002-1075-2420); Chan, Kevin; Ono, Nobutaka
Item type: Article
ID code: 58223
Dates:
  22 October 2016: Accepted
  9 November 2016: Published Online
  1 April 2017: Published
Notes: (c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Subjects: Technology > Electrical engineering. Electronics. Nuclear engineering
Department: Faculty of Engineering > Electronic and Electrical Engineering
Depositing user: Pure Administrator
Date deposited: 24 Oct 2016 15:23
Last modified: 17 Nov 2024 09:45
URI: https://strathprints.strath.ac.uk/id/eprint/58223