The organization of a neurocomputational control model for articulatory speech synthesis

Kroger, Bernd J., Lowit, Anja and Schnitker, Ralph (2008) The organization of a neurocomputational control model for articulatory speech synthesis. In: Avouris, N., Bourbakis, N., Esposito, A. and Hatzilygeroudis, I. (eds.) Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction. Lecture Notes in Computer Science, 5042. Springer-Verlag, Berlin, pp. 121-135. ISBN 978-3-540-70871-1. (http://dx.doi.org/10.1007/978-3-540-70872-8_9)


Abstract

This paper outlines the organization of a computational control model for articulatory speech synthesis. The model rests on general principles of neurophysiology and cognitive psychology: it comprises neural control circuits, neural maps, and mappings of the kind hypothesized to exist in the human brain, and it is trained by learning mechanisms similar to those at work during human speech acquisition. The task of the control module is to generate articulatory data for controlling an articulatory-acoustic speech synthesizer. The result is a complete 'BIONIC' (i.e. BIOlogically motivated and techNICally realized) speech synthesizer, capable of generating linguistic, sensory, and motor neural representations of sounds, syllables, and words; of generating articulatory speech movements from neuromuscular activation; and subsequently of generating acoustic speech signals by controlling an articulatory-acoustic vocal tract model. The module developed thus far can produce single sounds (vowels and consonants), simple CV- and VC-syllables, and first sample words. In addition, the paper briefly discusses processes of human-human interaction occurring during speech acquisition (mother-child or carer-child interactions).
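
To make the babbling-style acquisition idea concrete, the following is a minimal sketch of how a self-organizing phonetic map linking motor and auditory representations could be trained by random motor exploration and then queried for production. Everything here is an illustrative assumption, not the paper's actual model: the map size, the motor and auditory dimensions, the stand-in forward_model standing in for the articulatory-acoustic vocal tract model, and the simple winner-take-all learning rule are all hypothetical choices made for the sketch.

    # Illustrative sketch only: a babbling-trained map associating motor
    # commands with the auditory features they produce. All names,
    # dimensions, and the forward model are assumptions, not the
    # implementation described in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    N_NODES = 64       # nodes in the phonetic map (assumed size)
    MOTOR_DIM = 4      # e.g. tongue height/position, lip rounding, jaw (assumed)
    AUDITORY_DIM = 2   # e.g. two formant-like features (assumed)

    def forward_model(motor):
        """Stand-in for the articulatory-acoustic vocal tract model:
        maps motor parameters to auditory features (purely illustrative)."""
        W = np.array([[0.9, -0.3, 0.1, 0.0],
                      [0.2,  0.8, 0.0, -0.4]])
        return np.tanh(W @ motor)

    # Each map node stores an associated motor and auditory vector.
    motor_map = rng.uniform(-1, 1, (N_NODES, MOTOR_DIM))
    auditory_map = np.array([forward_model(m) for m in motor_map])

    # Babbling phase: random motor exploration with winner-take-all updates.
    STEPS = 2000
    for step in range(STEPS):
        motor = rng.uniform(-1, 1, MOTOR_DIM)       # random articulation
        auditory = forward_model(motor)             # the "heard" result
        winner = np.argmin(np.linalg.norm(auditory_map - auditory, axis=1))
        lr = 0.5 * (1 - step / STEPS)               # decaying learning rate
        motor_map[winner] += lr * (motor - motor_map[winner])
        auditory_map[winner] += lr * (auditory - auditory_map[winner])

    # Production phase: given an auditory target (e.g. a heard vowel),
    # recall the best-matching node's motor program for the synthesizer.
    target = forward_model(np.array([0.5, -0.2, 0.7, 0.1]))
    best = np.argmin(np.linalg.norm(auditory_map - target, axis=1))
    print("motor command for target:", motor_map[best])

The two phases mirror the acquisition-then-production organization the abstract describes: exploration builds the motor-to-auditory association, after which auditory targets can be mapped back to motor commands that drive the vocal tract model.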