Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors

Ribeiro, Manuel Sam and Cleland, Joanne and Eshky, Aciel and Richmond, Korin and Renals, Steve (2021) Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors. Speech Communication, 128. pp. 24-34. ISSN 0167-6393 (https://doi.org/10.1016/j.specom.2021.02.001)

Text. Filename: Ribeiro_etal_SC_2021_Exploiting_ultrasound_tongue_imaging_for_the_automatic_detection.pdf
Accepted Author Manuscript
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0


Abstract

Speech sound disorders are a common communication impairment in childhood. Because speech disorders can negatively affect the lives and the development of children, clinical intervention is often recommended. To help with diagnosis and treatment, clinicians use instrumented methods such as spectrograms or ultrasound tongue imaging to analyse speech articulations. Analysis with these methods can be laborious for clinicians, so there is growing interest in automating it. In this paper, we investigate the contribution of ultrasound tongue imaging to the automatic detection of speech articulation errors. Our systems are trained on typically developing child speech and augmented with a database of adult speech using audio and ultrasound. Evaluation on typically developing speech indicates that pre-training on adult speech and jointly using ultrasound and audio gives the best results, with an accuracy of 86.9%. To evaluate on disordered speech, we collect pronunciation scores from experienced speech and language therapists, focusing on cases of velar fronting and gliding of /r/. The scores show good inter-annotator agreement for velar fronting, but not for gliding errors. For automatic velar fronting error detection, the best results are obtained when jointly using ultrasound and audio. The best system correctly detects 86.6% of the errors identified by experienced clinicians. Of all the segments identified as errors by the best system, 73.2% match errors identified by clinicians. Results on automatic gliding detection are harder to interpret due to poor inter-annotator agreement, but appear promising. Overall findings suggest that automatic detection of speech articulation errors has the potential to be integrated into ultrasound intervention software for automatically quantifying progress during speech therapy.
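
The two error-detection figures quoted above are, in effect, a recall and a precision measured against clinician annotations: 86.6% of clinician-identified errors are detected by the best system (recall), and 73.2% of the segments the system flags match clinician judgements (precision). As a minimal illustration only (this sketch is not taken from the paper; the function name and toy labels are hypothetical), the Python snippet below computes these two scores from per-segment binary labels.

from typing import Sequence, Tuple

def detection_scores(clinician: Sequence[int], system: Sequence[int]) -> Tuple[float, float]:
    """Recall and precision of system error detections against clinician labels (1 = error)."""
    assert len(clinician) == len(system), "one label per evaluated segment"
    true_positives = sum(1 for c, s in zip(clinician, system) if c == 1 and s == 1)
    clinician_errors = sum(clinician)  # errors identified by clinicians
    system_errors = sum(system)        # segments flagged as errors by the system
    recall = true_positives / clinician_errors if clinician_errors else 0.0
    precision = true_positives / system_errors if system_errors else 0.0
    return recall, precision

# Hypothetical toy labels; the paper reports 86.6% recall and 73.2% precision
# for the best velar fronting detector.
clinician_labels = [1, 1, 0, 0, 1, 0, 1, 0]
system_labels    = [1, 0, 0, 1, 1, 0, 1, 0]
recall, precision = detection_scores(clinician_labels, system_labels)
print(f"recall = {recall:.3f}, precision = {precision:.3f}")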

ORCID iDs

Ribeiro, Manuel Sam, Cleland, Joanne (ORCID: https://orcid.org/0000-0002-0660-1646), Eshky, Aciel, Richmond, Korin and Renals, Steve