Adding Sign Language Animation to the ECHOES Multimodal Technology Enhanced Learning Environment

Elaine Farrow and Oliver Lemon


Summary

ECHOES (Foster et al., 2010) is a multimodal technology-enhanced learning (TEL) environment which uses a child-like virtual human avatar to interact with young children (aged 5-7) in a series of social games. It provides opportunities for both typically developing (TD) children and those with autistic spectrum disorders (ASD) to practise social interaction and communication skills such as turn-taking and collaboration in a safe and predictable environment.

The virtual character is driven by an intelligent agent based on the FAtiMA architecture (Dias and Paiva, 2005), capable of planning, goal-directed behaviour and emotional affect. The system uses recordings of a real child's voice in conjunction with facial expressions and purposefully directed gaze to interact with the child, encourage joint attention, and create emergent narratives. The avatar can also perform a small set of manual gestures in Makaton ("your turn", "my turn", and "all done").
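As a minimal sketch (in Python) of how such a multimodal act might be represented, the example below bundles a voice recording, a Makaton gesture, a gaze target and a facial expression into a single unit that a planning agent could schedule. The class and field names are illustrative assumptions of our own and do not correspond to the actual ECHOES or FAtiMA code.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MultimodalAct:
        """One communicative act: a voice clip plus co-timed nonverbal behaviour.
        All names here are illustrative and do not reflect the ECHOES or FAtiMA APIs."""
        voice_clip: str                 # e.g. "audio/your_turn.wav"
        makaton_gesture: Optional[str]  # e.g. "YOUR_TURN", "MY_TURN", "ALL_DONE"
        gaze_target: Optional[str]      # e.g. "child", or an on-screen object
        facial_expression: str = "neutral"

    # A turn-taking prompt as the avatar might produce it:
    your_turn = MultimodalAct(
        voice_clip="audio/your_turn.wav",
        makaton_gesture="YOUR_TURN",
        gaze_target="child",
        facial_expression="smile",
    )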

We propose to translate the voice-recorded phrases into both British Sign Language (BSL) and Sign Supported English (SSE) and to integrate them with the other aspects of the character's communication, including gaze and pointing. This will allow the system to be used by deaf children, both TD and those with ASD, providing us with insights into this under-researched population and an indication of the potential of technology-enhanced learning to aid their development.
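To indicate how the proposed signed output might sit alongside the existing recordings, the sketch below maps each recorded phrase to a gloss sequence in either BSL or SSE, which could then be scheduled with the avatar's gaze and pointing behaviours. The lexicon, glosses, example phrases and function name are illustrative assumptions only; BSL grammar is simplified here, and in practice SSE would be signed alongside the voice clip while BSL would replace it.

    from typing import Dict, List

    # Hypothetical lexicon: recorded phrases mapped to sign glosses.
    # BSL has its own grammar and sign order; SSE follows English word order.
    SIGN_LEXICON: Dict[str, Dict[str, List[str]]] = {
        "your turn": {"BSL": ["YOUR", "TURN"], "SSE": ["YOUR", "TURN"]},
        "all done":  {"BSL": ["FINISH"],       "SSE": ["ALL", "FINISH"]},
    }

    def sign_sequence(phrase: str, mode: str = "BSL") -> List[str]:
        """Return the gloss sequence to animate for a recorded phrase.
        For SSE the voice clip would play in parallel; for BSL the signed
        sequence replaces the spoken English."""
        return SIGN_LEXICON[phrase][mode]

    # Example: the BSL rendering of "all done", to be timed with a gaze
    # shift towards the child.
    print(sign_sequence("all done", mode="BSL"))  # ['FINISH']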

Categories

Main Topic: AVATAR ANIMATION

Keywords

Requirements for signing avatar technology
Realistic animation of manual and bodily gestures
Realism and acceptability of signing avatars

