Automatic articulatory resynthesis from EMA data
Speaker: Ingmar Steiner
Institution: Saarland University & University of Edinburgh
Abstract:
One step towards data-driven, high-level control of an articulatory synthesizer for TTS applications is the resynthesis of a corpus of electromagnetic articulography (EMA) data. By aligning the articulatory gestures (transformed into control point trajectories in the synthesizer's vocal tract model) so that the original motion-captured speech is closely matched, we obtain a training set suitable for HMM-based synthesis of control trajectories for unseen utterances. This talk presents intermediate results in automatic EMA-based articulatory resynthesis and outlines the planned acquisition of new articulatory data to adapt the vocal tract model to a new speaker.