Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information

H. van Welbergen, Antinus Nijholt, Dennis Reidsma, Jakob Zwiers; J. Hendler, D. Goren-Bar, O. Mayora-Ibarra (Editors)

    Research output: Contribution to journal › Article › Academic › peer-review

    16 Citations (Scopus)
    57 Downloads (Pure)

    Abstract

    Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support. Multimedia presentation of this captured information requires careful attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.[1] In that research, our starting point was the human presenter or meeting participant. Here, the starting point is a semiautonomous virtual presenter that performs in a virtual-reality environment (see Figure 1). The presenter’s audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it. In this article, we focus on models and associated algorithms that steer the virtual presenter’s presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.
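
    The approach the abstract describes centers on a multimodal script: parallel, time-aligned channels for speech, gestures, body movements, and sheet (slide) changes. The following minimal Python sketch illustrates one way such a script could be structured. It is an illustrative assumption, not the authors' actual script format; the names ScriptEvent, PresentationScript, and events_at are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ScriptEvent:
        """One entry on a channel: what to perform, and when (in seconds)."""
        start: float
        end: float
        content: str  # e.g. text to speak, a gesture name, or a sheet id

    @dataclass
    class PresentationScript:
        """Hypothetical multimodal script with parallel, time-aligned channels."""
        speech: list = field(default_factory=list)
        gestures: list = field(default_factory=list)
        movements: list = field(default_factory=list)
        sheets: list = field(default_factory=list)  # sheet (slide) changes

        def events_at(self, t: float) -> dict:
            """Return the events active on each channel at time t, so a
            player can drive all modalities from a single clock."""
            channels = {"speech": self.speech, "gestures": self.gestures,
                        "movements": self.movements, "sheets": self.sheets}
            return {name: [e for e in evs if e.start <= t < e.end]
                    for name, evs in channels.items()}

    # Example: the presenter points at a sheet while explaining it.
    script = PresentationScript(
        speech=[ScriptEvent(0.0, 4.0, "This figure shows the architecture.")],
        gestures=[ScriptEvent(1.0, 3.0, "point_at_sheet")],
        sheets=[ScriptEvent(0.0, 30.0, "sheet-01")],
    )
    print(script.events_at(2.0))  # speech and gesture overlap at t = 2 s

    Keeping each modality on its own channel lets a player query every channel at a common clock time, which is the essence of the synchronization the abstract describes.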
    Original language: Undefined
    Pages (from-to): 47-53
    Number of pages: 7
    Journal: IEEE Intelligent Systems
    Volume: 21
    Issue number: 10/5
    DOIs: 10.1109/mis.2006.101
    Publication status: Published - Sep 2006

    Keywords

    • multimodal generation
    • presenting
    • virtual reality
    • EWI-8372
    • Embodied Conversational Agents
    • IR-57706
    • METIS-237683

    Cite this

    van Welbergen, H.; Hendler, J. (Editor); Goren-Bar, D. (Editor); Nijholt, Antinus; Reidsma, Dennis; Mayora-Ibarra, O. (Editor); Zwiers, Jakob. / Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information. In: IEEE Intelligent Systems. 2006; Vol. 21, No. 10/5. pp. 47-53.
    @article{a516633197764368ad94ba71b4766636,
    title = "Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information",
    abstract = "Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support.Multimedia presentation of this captured information requires a lot of attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.1 In that research, our starting point was the human presenter or meeting participant. Here, it’s a semiautonomous virtual presenter that performs in a virtual- reality environment (see figure 1). The presenter’s audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it. In this article, we focus on models and associated algorithms that steer the virtual presenter’s presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script has also a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.",
    keywords = "multimodal generation, presenting, virtual reality, EWI-8372, Embodied Conversational Agents, IR-57706, METIS-237683",
    author = "{van Welbergen}, H. and J. Hendler and D. Goren-Bar and Antinus Nijholt and Dennis Reidsma and O. Mayora-Ibarra and Jakob Zwiers",
    note = "10.1109/mis.2006.101",
    year = "2006",
    month = "9",
    doi = "10.1109/mis.2006.101",
    language = "Undefined",
    volume = "21",
    pages = "47--53",
    journal = "IEEE intelligent systems",
    issn = "1541-1672",
    publisher = "IEEE",
    number = "10/5",

    }

    Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information. / van Welbergen, H.; Hendler, J. (Editor); Goren-Bar, D. (Editor); Nijholt, Antinus; Reidsma, Dennis; Mayora-Ibarra, O. (Editor); Zwiers, Jakob.

    In: IEEE Intelligent Systems, Vol. 21, No. 10/5, 09.2006, pp. 47-53.

    Research output: Contribution to journal › Article › Academic › peer-review

    TY - JOUR

    T1 - Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information

    AU - van Welbergen, H.

    AU - Nijholt, Antinus

    AU - Reidsma, Dennis

    AU - Zwiers, Jakob

    A2 - Hendler, J.

    A2 - Goren-Bar, D.

    A2 - Mayora-Ibarra, O.

    N1 - 10.1109/mis.2006.101

    PY - 2006/9

    Y1 - 2006/9

    N2 - Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support. Multimedia presentation of this captured information requires careful attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.[1] In that research, our starting point was the human presenter or meeting participant. Here, the starting point is a semiautonomous virtual presenter that performs in a virtual-reality environment (see Figure 1). The presenter’s audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it. In this article, we focus on models and associated algorithms that steer the virtual presenter’s presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.

    AB - Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, for online remote participation, or for offline access to meetings or lectures. Capturing relevant information from meetings or lectures is necessary to provide this kind of support. Multimedia presentation of this captured information requires careful attention. Our previous research has looked at including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology that translates captured meeting activities into a virtual-reality version that lets us add and manipulate information.[1] In that research, our starting point was the human presenter or meeting participant. Here, the starting point is a semiautonomous virtual presenter that performs in a virtual-reality environment (see Figure 1). The presenter’s audience might consist of humans, humans represented by embodied virtual agents, and autonomous agents that are visiting the virtual lecture room or have roles in it. In this article, we focus on models and associated algorithms that steer the virtual presenter’s presentation animations. In our approach, we generate the presentations from a script describing the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. This channel can also present material other than sheets, such as annotated paintings or movies.

    KW - multimodal generation

    KW - presenting

    KW - virtual reality

    KW - EWI-8372

    KW - Embodied Conversational Agents

    KW - IR-57706

    KW - METIS-237683

    U2 - 10.1109/mis.2006.101

    DO - 10.1109/mis.2006.101

    M3 - Article

    VL - 21

    SP - 47

    EP - 53

    JO - IEEE Intelligent Systems

    JF - IEEE Intelligent Systems

    SN - 1541-1672

    IS - 10/5

    ER -