Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis

Ingmar Steiner (University College Dublin & Trinity College Dublin) ingmar.steiner@ucd.ie
Korin Richmond (University of Edinburgh) korin@cstr.ed.ac.uk
Slim Ouni (Université de Lorraine, LORIA, UMR 7503) slim.ouni@loria.fr

1 Introduction

The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture techniques, the animation of the intraoral articulators (viz. the tongue, jaw, and velum) typically makes use of simple rules or viseme morphing, in stark contrast to the otherwise high quality of facial modeling. Using appropriate speech production data could significantly improve the quality of articulatory animation for AV synthesis.

2 Articulatory animation

To complement a purely data-driven AV synthesizer employing bimodal unit-selection [Musti et al. 2011], we have implemented a framework for articulatory animation [Steiner and Ouni 2012] using motion capture of the hidden articulators obtained through electromagnetic articulography (EMA) [Hoole and Zierdt 2010]. One component of this framework compiles an animated 3D model of the tongue and teeth as an asset usable by downstream components or an external 3D graphics engine. This is achieved by rigging static meshes with a pseudo-skeletal armature, which is in turn driven by the EMA data through inverse kinematics (IK). Subjectively, we find the resulting animation to be both plausible and convincing.
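To illustrate how EMA coil positions can drive a pseudo-skeletal rig through inverse kinematics, the following sketch bends a single kinematic chain toward one coil position using cyclic coordinate descent (CCD). This is a minimal, hypothetical example; the joint layout, coil data, and solver are assumptions for illustration, not a description of the solver used in the actual framework [Steiner and Ouni 2012].

```python
# Minimal CCD inverse-kinematics sketch: a chain of tongue "bones" is bent so
# that its tip follows one EMA coil position. Joint layout and coil data are
# hypothetical; the real rig, solver, and coordinate frames may differ.
import numpy as np

def rodrigues(v, axis, angle):
    """Rotate vector v about a unit axis by the given angle (Rodrigues' formula)."""
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def ccd_ik(joints, target, iterations=20, tol=1e-4):
    """Bend the joint chain (array of 3D points, root first) toward the target."""
    joints = joints.copy()
    for _ in range(iterations):
        # Work from the joint nearest the tip back toward the root.
        for j in range(len(joints) - 2, -1, -1):
            to_tip = joints[-1] - joints[j]
            to_target = target - joints[j]
            n_tip, n_tgt = np.linalg.norm(to_tip), np.linalg.norm(to_target)
            if n_tip < 1e-9 or n_tgt < 1e-9:
                continue
            axis = np.cross(to_tip, to_target)
            if np.linalg.norm(axis) < 1e-9:
                continue
            axis /= np.linalg.norm(axis)
            cosang = np.clip(np.dot(to_tip, to_target) / (n_tip * n_tgt), -1.0, 1.0)
            angle = np.arccos(cosang)
            # Rotate all joints below j about joint j.
            for k in range(j + 1, len(joints)):
                joints[k] = joints[j] + rodrigues(joints[k] - joints[j], axis, angle)
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

# Hypothetical tongue chain (root at the tongue back) and one EMA tongue-tip coil.
chain = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0], [30.0, 0.0, 0.0]])
tongue_tip_coil = np.array([25.0, 8.0, 3.0])  # one frame of (x, y, z) in mm
posed = ccd_ik(chain, tongue_tip_coil)
print("tip error (mm):", np.linalg.norm(posed[-1] - tongue_tip_coil))
```

In practice one such solve would run per coil and per animation frame, with the deformed mesh following the posed armature via its skinning weights.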

However, this has not yet been formally evaluated, and so the motivation for the present paper is to conduct an objective analysis.

3 Multimodal speech production data

The mngu0 articulatory corpus (freely available for research purposes from http://mngu0.org/) contains a large set of 3D EMA data [Richmond et al. 2011] from a male speaker of British English, as well as volumetric magnetic resonance imaging (MRI) scans of that speaker's vocal tract during sustained speech production [Steiner et al. 2012]. Using the articulatory animation framework, static meshes of dental cast scans and the tongue (extracted from the MRI subset of the mngu0 corpus) can be animated using motion capture data from the EMA subset, providing a means to evaluate the synthesized animation on the generated model (Figure 1).
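Combining the EMA and MRI subsets presupposes that the motion capture data and the static meshes share a common coordinate frame. The sketch below shows one common way such a registration can be done, an ordinary rigid (Kabsch) alignment between corresponding landmark points; the landmark choice and coordinates are invented for illustration and do not describe the registration actually used in the framework.

```python
# Rigid (Kabsch) alignment sketch: estimate rotation R and translation t mapping
# EMA-space landmarks onto mesh-space landmarks. All coordinates are synthetic;
# the framework's actual registration procedure may differ.
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical corresponding landmarks, e.g. upper incisor and two reference coils.
ema_landmarks  = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [0.0, 25.0, 0.0]])
mesh_landmarks = np.array([[5.0, 2.0, 1.0], [5.0, 32.0, 1.0], [-20.0, 2.0, 1.0]])
R, t = rigid_align(ema_landmarks, mesh_landmarks)
aligned = ema_landmarks @ R.T + t
print("per-landmark residual (mm):", np.linalg.norm(aligned - mesh_landmarks, axis=1))
```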

4 Evaluation

In order to analyze the degree to which the animated articulators match the shape and movements captured by the natural speech production data, several approaches are described.

• The positions and orientations of the IK targets are dumped to data files in a format compatible with that of the 3D articulograph. This allows visualization and direct comparison of the animation with the original EMA data, using external analysis software.

• The distances of the EMA-controlled IK targets to the surfaces of the animated articulators should ideally remain close to zero during deformation. Likewise, there should be collision with a reconstructed palate surface, but no penetration (a simplified sketch of this check appears below).

• A tongue mesh extracted from a volumetric MRI scan in the mngu0 data, when deformed to a pose corresponding to a given phoneme, should assume a shape closely resembling the vocal tract configuration in the corresponding volumetric scan.

Figure 1: Animated articulatory model in bind pose, with and without maxilla; EMA coils rendered as spheres.
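A minimal sketch of the second criterion above: for each animation frame, measure how far each EMA-controlled IK target lies from the deformed tongue surface, and flag palate penetration. For brevity the surface distance is approximated by the nearest mesh vertex and the palate by a plane; the framework's actual tests presumably operate on the full triangle meshes and the reconstructed palate surface, so the helper names, thresholds, and synthetic data here are assumptions.

```python
# Sketch of the surface-distance and palate-penetration checks. The tongue surface
# is approximated by its vertex cloud and the palate by a plane; both are
# simplifications of mesh-based tests, and all data here is synthetic.
import numpy as np

def coil_to_surface_distance(coil, surface_vertices):
    """Approximate distance from an EMA coil / IK target to the deformed surface."""
    return np.min(np.linalg.norm(surface_vertices - coil, axis=1))

def penetrates_palate(coil, palate_point, palate_normal, margin=0.0):
    """True if the coil lies beyond the palate plane (i.e. inside the hard palate)."""
    return np.dot(coil - palate_point, palate_normal) > margin

# Synthetic example: one frame with two tongue coils and a crude tongue surface.
tongue_vertices = np.random.default_rng(0).normal(size=(500, 3)) * 5.0
coils = np.array([[0.5, 0.2, 0.1], [1.0, 12.0, 0.0]])   # second coil placed too high
palate_point = np.array([0.0, 10.0, 0.0])
palate_normal = np.array([0.0, 1.0, 0.0])

for i, coil in enumerate(coils):
    d = coil_to_surface_distance(coil, tongue_vertices)
    hit = penetrates_palate(coil, palate_point, palate_normal)
    print(f"coil {i}: surface distance {d:.2f} mm, palate penetration: {hit}")
```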

These evaluation approaches are implemented as unit and integration tests in the corresponding phases of the model compiler's build lifecycle, automatically producing appropriate reports by which the naturalness of the articulatory animation may be assessed.
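As a loose illustration of the kind of threshold assertion such a test might encode, the following pytest-style sketch checks that the mean coil-to-surface distance stays below a tolerance. The model compiler's build lifecycle is not Python-based, and the data loader, tolerance value, and test framework shown here are assumptions made purely for illustration.

```python
# Illustrative test in the spirit of the evaluation criteria above: assert that the
# mean coil-to-surface distance over all frames stays below a tolerance. The loader,
# threshold, and use of pytest are assumptions; the actual tests run inside the
# model compiler's own build lifecycle.
import numpy as np

MAX_MEAN_DISTANCE_MM = 1.5   # hypothetical tolerance

def load_coil_surface_distances():
    """Stand-in for reading the per-frame distance report produced by the compiler."""
    rng = np.random.default_rng(1)
    return np.abs(rng.normal(0.4, 0.3, size=(2000, 3)))   # frames x coils, in mm

def test_coils_stay_on_surface():
    distances = load_coil_surface_distances()
    assert distances.mean() < MAX_MEAN_DISTANCE_MM, (
        f"mean coil-to-surface distance {distances.mean():.2f} mm exceeds tolerance"
    )

if __name__ == "__main__":
    test_coils_stay_on_surface()
    print("coil-to-surface distance check passed")
```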

References

HOOLE, P., AND ZIERDT, A. 2010. Five-dimensional articulography. In Speech Motor Control: New developments in basic and applied research, B. Maassen and P. van Lieshout, Eds. Oxford University Press, 331–349.

MUSTI, U., COLOTTE, V., TOUTIOS, A., AND OUNI, S. 2011. Introducing visual target cost within an acoustic-visual unit-selection speech synthesizer. In Proc. 10th International Conference on Auditory-Visual Speech Processing (AVSP), 49–55.

RICHMOND, K., HOOLE, P., AND KING, S. 2011. Announcing the electromagnetic articulography (day 1) subset of the mngu0 articulatory corpus. In Proc. Interspeech, 1505–1508.

STEINER, I., AND OUNI, S. 2012. Artimate: an articulatory animation framework for audiovisual speech synthesis. In Proc. ISCA Workshop on Innovation and Applications in Speech Technology.

STEINER, I., RICHMOND, K., MARSHALL, I., AND GRAY, C. D. 2012. The magnetic resonance imaging subset of the mngu0 articulatory corpus. Journal of the Acoustical Society of America 131, 2 (Feb.), 106–111.
