Abstract
The development of embodied conversational agents (ECAs) involves a wide range of cutting-edge technologies extending from multimodal perception to reasoning to synthesis. While each is important to a successful outcome, it is the synthesis that has the most immediate impact on the observer. The specific appearance and voice of an ECA can be decisive factors in meeting its social objectives. In light of this, we have developed an extensively customizable system for synthesizing a virtual talking 3D head. Rather than requiring explicit integration into a codebase, our software runs as a service that can be controlled by any external client, which substantially simplifies its deployment into new applications. We have explored the benefits of this approach across several internal research projects and student exercises as part of a university topic on ECAs.
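The service-based design described above means a client never links against the talking-head code; it simply sends control messages to a running process. As a minimal sketch of that pattern, the snippet below drives a head service over a plain-text socket protocol. The command names (`SPEAK`, `EXPRESSION`), port, and wire format are illustrative assumptions for this sketch, not the actual Head X protocol.

```python
import socket


def encode_command(name: str, *args: str) -> bytes:
    """Serialize a command and its arguments as one newline-terminated line.

    The space-separated, line-oriented format here is an assumed convention,
    chosen only to illustrate the external-client control model.
    """
    return " ".join([name, *args]).encode("utf-8") + b"\n"


def send_command(host: str, port: int, name: str, *args: str) -> None:
    """Open a connection to the head service and send a single command."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(encode_command(name, *args))


# Example usage against a hypothetical local service:
# send_command("localhost", 4000, "SPEAK", "Hello, world")
# send_command("localhost", 4000, "EXPRESSION", "smile")
```

Because the client only needs a socket and an agreed message format, it can be written in any language, which is what makes this style of deployment attractive for research prototypes and student projects.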
| Original language | English |
|---|---|
| Pages | 486-495 |
| Number of pages | 10 |
| Publication status | Published - 1 Dec 2010 |
| Event | 23rd Australasian Joint Conference on Artificial Intelligence - Duration: 8 Dec 2010 → … |
Keywords
- audiovisual speech synthesis
- embodied conversational agents
- software library