Supporting Engagement and Floor Control in Hybrid Meetings

Hendrikus J.A. op den Akker, D.H.W. Hofs, G.H.W. Hondorp, Harm op den Akker, Jakob Zwiers, Antinus Nijholt

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    10 Citations (Scopus)

    Abstract

    Remote participants in hybrid meetings often find it hard to follow what is going on in the (physical) meeting room they are connected with. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to see how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus of attention, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant’s attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.
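    The keyword-spotting mechanism the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: all names here (`Fragment`, `KeywordSpotter`, `alert_remote_participant`) are hypothetical, and it assumes the online speech recognizer delivers time-stamped transcript fragments that can be matched against a remote participant's interest profile.

    ```python
    # Hypothetical sketch of the keyword-spotting idea: when a topic from a
    # remote participant's interest profile appears in the live transcript,
    # the meeting assistant raises an attention cue that points back at the
    # transcribed fragment so the participant can catch up.
    from dataclasses import dataclass


    @dataclass
    class Fragment:
        """One transcribed speech fragment from the online speech recognizer."""
        speaker: str
        text: str
        start_time: float  # seconds from meeting start


    class KeywordSpotter:
        def __init__(self, interests):
            # Lower-cased keywords the remote participant cares about.
            self.interests = {kw.lower() for kw in interests}

        def spot(self, fragment):
            """Return the interest keywords mentioned in this fragment."""
            words = set(fragment.text.lower().split())
            return self.interests & words


    def alert_remote_participant(spotter, fragment):
        """Build an attention cue pointing at the triggering transcript fragment."""
        hits = spotter.spot(fragment)
        if hits:
            return (f"[{fragment.start_time:.0f}s] {fragment.speaker} mentioned "
                    f"{', '.join(sorted(hits))}: \"{fragment.text}\"")
        return None


    spotter = KeywordSpotter(["budget", "deadline"])
    frag = Fragment("Anna", "We should revisit the budget next week", 125.0)
    print(alert_remote_participant(spotter, frag))
    ```

    A real system would match against the recognizer's word lattice rather than a finished string, but the control flow (spot a keyword, surface the fragment to the remote participant) is the same.
    
    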
    Original language: Undefined
    Title of host publication: Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions
    Editors: Anna Esposito, Robert Vich
    Place of Publication: Berlin
    Publisher: Springer
    Pages: 276-290
    Number of pages: 15
    ISBN (Print): 978-3-642-03319-3
    DOIs
    Publication status: Published - 14 Jul 2009

    Publication series

    Name: Lecture Notes in Computer Science
    Publisher: Springer Verlag
    Volume: 5641
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Keywords

    • HMI-MI: MULTIMODAL INTERACTIONS
    • EC Grant Agreement nr.: FP6/0033812
    • METIS-265736
    • IR-67693
    • EWI-14716
