Generating Embodied Information Presentations

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review


    Abstract

    The output modalities available for information presentation by embodied, human-like agents include both language and various nonverbal cues such as pointing and gesturing. These human nonverbal modalities can be used to emphasize, extend or even replace the language output produced by the agent. To deal with the interdependence between language and nonverbal signals, their production processes should be integrated. In this chapter, we discuss the issues involved in extending a natural language generation system with the generation of nonverbal signals. We sketch a general architecture for embodied language generation, discussing the interaction between the production of nonverbal signals and language generation, and the different factors influencing the choice between the available modalities. As an example, we describe the generation of route descriptions by an embodied agent in a 3D environment.
    Original language: Undefined
    Title of host publication: Multimodal Intelligent Information Presentation
    Editors: O. Stock, M. Zancanaro
    Place of Publication: Dordrecht
    Publisher: Kluwer Academic Publishers
    Pages: 47-69
    Number of pages: 23
    ISBN (Print): 978-1-4020-3049-9
    DOIs: https://doi.org/10.1007/1-4020-3051-7_3
    Publication status: Published - 2005

    Publication series

    Name: Text, Speech and Language Technology
    Publisher: Kluwer Academic Publishers
    Number: 27

    Keywords

    • EWI-1808
    • METIS-221067
    • IR-49078
    • HMI-MI: MULTIMODAL INTERACTIONS

    Cite this

    Theune, M., Heylen, D. K. J., & Nijholt, A. (2005). Generating Embodied Information Presentations. In O. Stock, & M. Zancanaro (Eds.), Multimodal Intelligent Information Presentation (pp. 47-69). (Text, Speech and Language Technology; No. 27). Dordrecht: Kluwer Academic Publishers. https://doi.org/10.1007/1-4020-3051-7_3