Lip2Speech: lightweight multi-speaker speech reconstruction with Gabor features
Dong, Zhongping and Xu, Yan and Abel, Andrew and Wang, Dong (2024) Lip2Speech: lightweight multi-speaker speech reconstruction with Gabor features. Applied Sciences, 14 (2). 798. ISSN 2076-3417 (https://doi.org/10.3390/app14020798)
Abstract
In environments characterised by noise or the absence of audio signals, visual cues, notably facial and lip movements, serve as valuable substitutes for missing or corrupted speech signals. In these scenarios, speech reconstruction can potentially generate speech from visual data. Recent advancements in this domain have predominantly relied on end-to-end deep learning models, such as Convolutional Neural Networks (CNNs) or Generative Adversarial Networks (GANs). However, these models are encumbered by their intricate and opaque architectures, coupled with their lack of speaker independence. Consequently, achieving multi-speaker speech reconstruction without supplementary information is challenging. This research introduces an innovative Gabor-based speech reconstruction system tailored for lightweight and efficient multi-speaker speech restoration. Using our Gabor feature extraction technique, we propose two novel models: GaborCNN2Speech and GaborFea2Speech. These models employ a rapid Gabor feature extraction method to derive low-dimensional mouth region features, encompassing filtered Gabor mouth images and low-dimensional Gabor features as visual inputs. An encoded spectrogram serves as the audio target, and a Long Short-Term Memory (LSTM)-based model is harnessed to generate coherent speech output. Through comprehensive experiments conducted on the GRID corpus, our proposed Gabor-based models have showcased superior performance in sentence and vocabulary reconstruction when compared to traditional end-to-end CNN models. These models stand out for their lightweight design and rapid processing capabilities. Notably, the GaborFea2Speech model presented in this study achieves robust multi-speaker speech reconstruction without necessitating supplementary information, thereby marking a significant milestone in the field of speech reconstruction.
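To make the idea of low-dimensional Gabor features from the mouth region concrete, the sketch below shows a generic Gabor filter bank applied to a grayscale mouth image, with each filter response pooled to a small feature vector. This is a minimal illustration of the general technique, not the authors' exact extraction pipeline; the function names, kernel parameters, and pooling choices (mean and standard deviation per orientation) are assumptions for the example.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Build a real-valued Gabor kernel: cosine carrier times a
    Gaussian envelope, oriented at angle theta (radians).
    Parameter values here are illustrative, not from the paper."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate the coordinate frame by the filter orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd)
    return envelope * carrier

def gabor_features(mouth_img, n_orientations=4):
    """Filter a grayscale mouth-region image at several orientations
    and pool each response to (mean, std), giving a low-dimensional
    feature vector of length 2 * n_orientations."""
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = gabor_kernel(theta=theta)
        # Circular convolution via FFT, zero-padding the kernel
        # to the image shape for simplicity
        resp = np.real(np.fft.ifft2(np.fft.fft2(mouth_img) *
                                    np.fft.fft2(kern, mouth_img.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# Example: a random 32x64 "mouth" image yields an 8-dimensional vector
mouth = np.random.default_rng(0).random((32, 64))
vec = gabor_features(mouth)
print(vec.shape)  # (8,)
```

In the paper's models, per-frame features like these would then be fed to an LSTM that predicts an encoded spectrogram; the pooling step is what keeps the visual input low-dimensional compared with raw pixels.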
ORCID iDs
Dong, Zhongping; Xu, Yan; Abel, Andrew (ORCID: https://orcid.org/0000-0002-3631-8753); and Wang, Dong
Item type: Article
ID code: 88003
Dates: Published 17 January 2024; Accepted 7 December 2023
Subjects: Science > Mathematics > Electronic computers. Computer science
Department: Faculty of Science > Computer and Information Sciences
Depositing user: Pure Administrator
Date deposited: 30 Jan 2024 16:06
Last modified: 11 Nov 2024 14:11
URI: https://strathprints.strath.ac.uk/id/eprint/88003