Visualising speech: using ultrasound visual biofeedback to diagnose and treat speech disorders in children with cleft lip and palate

Cleland, Joanne and Crampin, Lisa and Wrench, A.A. and Zharkova, Natalia and Lloyd, Susan (2017) Visualising speech: using ultrasound visual biofeedback to diagnose and treat speech disorders in children with cleft lip and palate. In: Royal College of Speech and Language Therapists Conference 2017, 2017-09-27 - 2017-09-28.

Full text not available in this repository.

Abstract

BACKGROUND: Children with cleft lip and palate (CLP) often continue to have problems producing clear speech long after the clefts have been surgically repaired, leading to educational and social disadvantage. Speech is of key importance in CLP from both a quality of life and a surgical outcome perspective, yet assessment relies on subjective perceptual methods, with speech and language therapists (SLTs) listening to speech and transcribing errors. This is problematic because perception-based phonetic transcription is well known to be highly unreliable (Howard & Lohmander, 2011), especially in CLP, where the range of error types is arguably far greater than for other speech sound disorders. Moreover, CLP speech is known to be vulnerable to imperceptible error types, such as double articulations, which can only be understood with instrumental techniques such as ultrasound tongue imaging (UTI). Incorrect transcription of these errors can result in misdiagnosis and subsequent inappropriate intervention, which can lead to speech errors becoming deeply ingrained.

Aims: This study will develop a technical solution for improving diagnosis of speech disorders in children with CLP.

Participants: 40 children with CLP, aged 3 to 15.

Methods: We will use UTI to both qualitatively and quantitatively identify errors in the speech of children with CLP. Data will consist of materials from the CLEFTNET protocol: spontaneous counting, 10 repetitions of all consonants in /aCa/, sentences from GOS.SP.ASS. 98 (Sell, Harding & Grunwell, 1998), and 5 minimal sets contrasting common substitutions (e.g. "a ship, a sip, a chip"). Ultrasound data will be collected using a Sonospeech high-speed cineloop system at 80 fps over a 150-degree field of view. The ultrasound probe will be placed under the chin using a stabilising headset.
Analysis: Consonants will be annotated using Articulate Assistant Advanced (AAA) software (Articulate Instruments, 2012), after which we will systematically analyse the data to identify each of Gibbon's eight error types (Gibbon, 2004) using measures by Zharkova (2013, 2016) and Dawson, Tiede and Whalen (2016).

Conclusions: Data collection is ongoing. This poster will present the protocol for the study and some preliminary data demonstrating cleft-type speech characteristics which can be identified using ultrasound.
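The quantitative measures cited above compare tongue-surface contours extracted from ultrasound frames. As an illustration only (not the authors' actual analysis pipeline), one widely used comparison of this kind is the mean nearest-neighbour distance between two tongue curves, as in Zharkova's work; the sketch below assumes each contour has been exported from AAA as a list of (x, y) points, and the function name is our own:

```python
import numpy as np

def mean_nearest_neighbour_distance(curve_a, curve_b):
    """Symmetric mean nearest-neighbour distance between two tongue contours.

    For each point on one curve, find the Euclidean distance to the closest
    point on the other curve; average these in both directions. A value of 0
    means the two point sets coincide; larger values indicate greater
    difference in tongue shape/position.
    """
    a = np.asarray(curve_a, dtype=float)  # shape (n, 2): (x, y) points
    b = np.asarray(curve_b, dtype=float)  # shape (m, 2)
    # Pairwise distance matrix between every point on a and every point on b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # Average the nearest-neighbour distances in both directions
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0
```

For example, comparing a contour with a copy of itself shifted 1 mm vertically yields a distance of 1.0 (in the same units as the input coordinates). In practice such measures are applied to contours from matched frames, e.g. the /aCa/ repetitions in the protocol above.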