Synthesizing American Sign Language Spatially Inflected Verbs from Motion-Capture Data

Pengfei Lu and Matt Huenerfauth


Summary

People who are deaf or hard-of-hearing and who have lower levels of written-language literacy can benefit from computer-synthesized animations of sign language, which present information in a more accessible form. This paper introduces a novel method for modeling and synthesizing American Sign Language (ASL) animations based on motion-capture data collected from native signers. The technique synthesizes animations of ASL signs whose performance varies with the context of the sentence; in particular, this paper focuses on inflecting verb signs, whose performance is affected by the arrangement of locations in 3D space that represent the entities under discussion. Mathematical models of hand location were trained on motion-capture recordings of a human signer producing inflected verb signs. In an evaluation study with 12 native signers, the ASL animations synthesized from the model were judged to be of similar quality to animations produced by a human animator. This animation technique is applicable to other ASL signs and to other sign languages used internationally: it can increase the repertoire of sign language animation generation systems or partially automate the work of humans using sign language animation scripting tools.
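
The summary does not specify the mathematical form of the hand-location models. As an illustration only, the Python sketch below (not the authors' actual model; the data, variable names, and parameters are placeholders) fits a simple least-squares mapping from the 3D signing-space locations of two referents to the dominant hand's position at a verb keyframe, which is the general shape of the problem described.

# Minimal sketch, assuming a linear model of hand location as a function of
# referent placement in signing space. This is NOT the paper's model; the
# "training data" below is random placeholder data standing in for
# motion-capture recordings of a native signer.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each example pairs the 3D locations assigned to
# the subject and object referents with an observed hand position (e.g., the
# end point of the verb's movement path), all in meters.
n_examples = 50
subject_locs = rng.uniform(-0.5, 0.5, size=(n_examples, 3))
object_locs = rng.uniform(-0.5, 0.5, size=(n_examples, 3))
# Placeholder "observations": the hand ends near the object referent, offset
# slightly toward the signer, with a little measurement noise.
hand_positions = object_locs + np.array([0.0, 0.0, 0.15]) \
    + rng.normal(0.0, 0.01, size=(n_examples, 3))

# Design matrix: subject location, object location, and a bias term.
X = np.hstack([subject_locs, object_locs, np.ones((n_examples, 1))])

# Solve X @ W ~= hand_positions in the least-squares sense.
W, *_ = np.linalg.lstsq(X, hand_positions, rcond=None)

# Predict a hand position for a new arrangement of referents in signing space.
new_subject = np.array([-0.3, 0.1, 0.2])
new_object = np.array([0.4, 0.0, 0.3])
query = np.concatenate([new_subject, new_object, [1.0]])
predicted_hand = query @ W
print("Predicted hand position:", predicted_hand)

A learned mapping of this kind, trained once from recorded performances, can then be queried for any arrangement of referent locations, which is what allows the synthesized verb animation to vary with sentence context.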

Categories

Main Topic: AVATAR ANIMATION

Keywords

Realistic animation of manual and bodily gestures
Realism and acceptability of signing avatars
Use of corpora to inform animation
