Development of Multimodal Interfaces: Active Listening and Synchrony

Anna Esposito, Nick Campbell (Editor), Carl Vogel, Amir Hussain, Antinus Nijholt (Editor)

    Research output: Book/Report › Book editing › Academic

    Abstract

    This volume brings together, through a peer-review process, the advanced research results obtained by the European COST Action 2102: Cross-Modal Analysis of Verbal and Nonverbal Communication, first discussed at the Second COST 2102 International Training School on “Development of Multimodal Interfaces: Active Listening and Synchrony,” held in Dublin, Ireland, March 23–27, 2009. The school was sponsored by COST (European Cooperation in the Field of Scientific and Technical Research, www.cost.esf.org) in the domain of Information and Communication Technologies (ICT) to disseminate the advances of the research activities developed within COST Action 2102: “Cross-Modal Analysis of Verbal and Nonverbal Communication” (cost2102.cs.stir.ac.uk). In its third year, COST Action 2102 brought together about 60 European and 6 overseas scientific laboratories whose aim is to develop interactive dialogue systems and intelligent virtual avatars, graphically embodied in a 2D and/or 3D interactive virtual world, capable of interacting intelligently with the environment, other avatars, and particularly with human users. The main focus of the school was the development of multimodal interfaces. Traditional approaches to multimodal interface design tend to assume a “ping-pong” or “push-to-talk” approach to speech interaction, wherein either the system or the human interlocutor is active at any one time. This is contrary to many recent findings in conversation and discourse analysis, where the definition of a “turn” or even an “utterance” proves to be very complex. People do not “take turns” to talk in a typical conversational interaction; rather, they each contribute actively to the joint emergence of a “common understanding.” The sub-theme of the school, “Synchrony and Active Listening,” was selected to identify contributions that actively support the ongoing research into the dynamics of human spoken interaction, the production of multimodal conversation data, and the subsequent analysis and modelling of interaction dynamics, with the dual goal of appropriately designing multimodal interfaces and providing new approaches and developmental paradigms.
    Original language: Undefined
    Place of Publication: Heidelberg
    Publisher: Springer
    Number of pages: 446
    ISBN (Print): 978-3-642-12396-2
    DOIs: https://doi.org/10.1007/978-3-642-12397-9
    Publication status: Published - 27 Mar 2010

    Publication series

    Name: Lecture Notes in Computer Science
    Publisher: Springer Verlag
    Volume: 5967
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Keywords

    • Active listening
    • Multi-modal interaction
    • METIS-270732
    • EWI-17439
    • IR-70724
    • HMI-MI: MULTIMODAL INTERACTIONS
    • Synchrony
    • Cross-modality

    Cite this

    Esposito, A., Campbell, N. (Ed.), Vogel, C., Hussain, A., & Nijholt, A. (Ed.) (2010). Development of Multimodal Interfaces: Active Listening and Synchrony. (Lecture Notes in Computer Science; Vol. 5967). Heidelberg: Springer. https://doi.org/10.1007/978-3-642-12397-9