Visualising speech: identification of atypical tongue-shape patterns in the speech of children with cleft lip and palate using ultrasound technology

Lloyd, Susan and Cleland, Joanne and Crampin, Lisa and Campbell, Linsay and Zharkova, Natalia and Palo, Juha-Pertti (2018) Visualising speech: identification of atypical tongue-shape patterns in the speech of children with cleft lip and palate using ultrasound technology. In: Craniofacial Society of Great Britain and Ireland, 2018-04-19 - 2018-04-20.

    Abstract

    Previous research by Gibbon (2004) shows that at least eight distinct error types can be identified in the speech of people with cleft lip and palate (CLP) using electropalatography (EPG), a technique which measures tongue-palate contact. However, EPG is expensive and logistically difficult. In contrast, ultrasound is cheaper and arguably better equipped to image posterior articulations (such as pharyngeals), which are common in CLP speech. A key aim of this project is to determine whether the eight error types made visible with EPG in CLP speech, as described by Gibbon (2004), can also be identified with ultrasound. This paper presents the first results from a larger study developing a qualitative and quantitative ultrasound speech assessment protocol. Data from the first 20 children with CLP, aged 3 to 18, will be presented. The data are spoken materials from the CLEFTNET protocol. We will present a recording format, compatible with CAPS-A, for recording initial observations from the live ultrasound (e.g. double articulations, pharyngeal stops). Two Speech and Language Therapists analysed the data independently to identify error types. Results suggest that all of the error types, for example fronted placement and double articulations, can be identified using ultrasound, but that this is challenging in real time. Ongoing work involves quantitative analysis of error types using articulatory measures.