Automatic audiovisual synchronisation for ultrasound tongue imaging

Eshky, Aciel and Cleland, Joanne and Ribeiro, Manuel Sam and Sugden, Eleanor and Richmond, Korin and Renals, Steve (2021) Automatic audiovisual synchronisation for ultrasound tongue imaging. Speech Communication, 132. pp. 83-95. ISSN 0167-6393 (https://doi.org/10.1016/j.specom.2021.05.008)

Text. Filename: Eshky_etal_SC_2021_Automatic_audiovisual_synchronisation_for_ultrasound_tongue_imaging.pdf
Accepted Author Manuscript
License: Creative Commons Attribution-NoDerivatives 4.0

Download (1MB)

Abstract

Ultrasound tongue imaging is used to visualise the intra-oral articulators during speech production. It is utilised in a range of applications, including speech and language therapy and phonetics research. Ultrasound and speech audio are recorded simultaneously, and in order to make correct use of this data, the two modalities must be synchronised. Synchronisation is achieved using specialised hardware at recording time, but this approach can fail in practice, resulting in data of limited usability. In this paper, we address the problem of automatically synchronising ultrasound and audio after data collection. We first investigate the tolerance of expert ultrasound users to synchronisation errors in order to find the thresholds for error detection. We use these thresholds to define accuracy scoring boundaries for evaluating our system. We then describe our approach for automatic synchronisation, which is driven by a self-supervised neural network that exploits the correlation between the two signals to synchronise them. We train our model on data from multiple domains with different speaker characteristics, different equipment, and different recording environments, and achieve an accuracy greater than 92.4% on held-out in-domain data. Finally, we introduce a novel resource, the Cleft dataset, which we gathered with a new clinical subgroup and for which hardware synchronisation proved unreliable. We apply our model to this out-of-domain data and evaluate its performance subjectively with expert users. Results show that users prefer our model's output over the original hardware output 79.3% of the time. Our results demonstrate the strength of our approach and its ability to generalise to data from new domains.
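
The abstract describes the synchronisation model only at a high level: a self-supervised network that exploits the correlation between the audio and ultrasound streams. The sketch below is one illustrative reading of that idea, not the authors' implementation. The class name TwoStreamSyncModel, the layer sizes, the flattened ultrasound feature dimension, the margin-based contrastive loss, and the brute-force offset search are all assumptions introduced here for the sake of a concrete example.

# Illustrative sketch (not the paper's code) of a two-stream, self-supervised
# synchronisation model. All names, dimensions, and hyperparameters below are
# assumptions; the audio and ultrasound features are assumed to have already
# been resampled to a common frame rate.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamSyncModel(nn.Module):
    """Embed short audio and ultrasound windows into a shared space, so that
    in-sync pairs score higher than out-of-sync pairs."""

    def __init__(self, audio_dim=80, ultra_dim=4096, embed_dim=128):
        super().__init__()
        # audio_dim: e.g. mel-filterbank size; ultra_dim: flattened ultrasound
        # frame size. Both are placeholder values.
        self.audio_net = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.ultra_net = nn.Sequential(
            nn.Linear(ultra_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, audio_window, ultra_window):
        a = F.normalize(self.audio_net(audio_window), dim=-1)
        u = F.normalize(self.ultra_net(ultra_window), dim=-1)
        return a, u


def contrastive_loss(a, u, margin=0.5):
    """Self-supervised objective: matched (in-sync) pairs lie on the diagonal
    of the similarity matrix; all other pairings in the batch are negatives."""
    sim = a @ u.t()                      # (batch, batch) cosine similarities
    pos = sim.diag()                     # similarities of matched pairs
    neg = sim - torch.eye(len(sim), device=sim.device) * 1e9  # mask diagonal
    return F.relu(margin - pos.unsqueeze(1) + neg).mean()


@torch.no_grad()
def estimate_offset(model, audio_feats, ultra_feats, max_shift=45):
    """Slide one stream against the other and return the frame shift with the
    highest average embedding similarity (the estimated synchronisation offset)."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        if shift >= 0:
            a, u = audio_feats[shift:], ultra_feats[:len(ultra_feats) - shift]
        else:
            a, u = audio_feats[:shift], ultra_feats[-shift:]
        n = min(len(a), len(u))
        if n == 0:
            continue
        ea, eu = model(a[:n], u[:n])
        score = (ea * eu).sum(dim=-1).mean().item()
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

In a setup like this, the estimated frame shift would be converted to a time offset using the ultrasound frame rate and applied to re-align the two recordings. The paper itself should be consulted for the actual architecture, input features, and candidate-offset handling.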

ORCID iDs

Eshky, Aciel; Cleland, Joanne (ORCID: https://orcid.org/0000-0002-0660-1646); Ribeiro, Manuel Sam; Sugden, Eleanor (ORCID: https://orcid.org/0000-0001-5722-3035); Richmond, Korin; Renals, Steve