Supporting Engagement and Floor Control in Hybrid Meetings

Hendrikus J.A. op den Akker, D.H.W. Hofs, G.H.W. Hondorp, Harm op den Akker, Jakob Zwiers, Antinus Nijholt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-reviewed

9 Citations (Scopus)

Abstract

Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room to which they are connected. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to explore how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus-of-attention detection, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant's attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.
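The keyword-spotting idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class and function names, the participant "Alice", and the example topics are hypothetical, and a real system would consume fragments from an online speech recognizer rather than plain strings.

```python
# Hypothetical sketch of the abstract's keyword-spotting assistant:
# watch transcribed speech fragments and alert a remote participant
# when one of their topics of interest is mentioned, keeping a pointer
# to the fragment's transcription so they can catch up.

from dataclasses import dataclass, field

@dataclass
class RemoteParticipant:
    name: str
    interests: set                       # keywords to be alerted on
    alerts: list = field(default_factory=list)

def spot_keywords(fragment: str, participant: RemoteParticipant) -> bool:
    """Check one transcribed fragment against the participant's interests.

    On a match, store (matched keywords, fragment transcription) so the
    meeting assistant can point the participant at the relevant passage.
    """
    words = {w.strip(".,?!").lower() for w in fragment.split()}
    hits = words & {kw.lower() for kw in participant.interests}
    if hits:
        participant.alerts.append((sorted(hits), fragment))
        return True
    return False

# Example usage with invented data:
remote = RemoteParticipant("Alice", {"budget", "deadline"})
spot_keywords("We should revisit the budget before Friday.", remote)
print(remote.alerts)
```

In the described system this check would run continuously on the live transcription stream, and the alert would surface in the remote participant's videoconferencing interface.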
Original language: Undefined
Title of host publication: Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions
Editors: Anna Esposito, Robert Vich
Place of Publication: Berlin
Publisher: Springer
Pages: 276-290
Number of pages: 15
ISBN (Print): 978-3-642-03319-3
DOIs: 10.1007/978-3-642-03320-9_26
Publication status: Published - 14 Jul 2009

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Verlag
Volume: 5641
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Keywords

  • HMI-MI: MULTIMODAL INTERACTIONS
  • EC Grant Agreement nr.: FP6/0033812
  • METIS-265736
  • IR-67693
  • EWI-14716

Cite this

op den Akker, H. J. A., Hofs, D. H. W., Hondorp, G. H. W., op den Akker, H., Zwiers, J., & Nijholt, A. (2009). Supporting Engagement and Floor Control in Hybrid Meetings. In A. Esposito, & R. Vich (Eds.), Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions (pp. 276-290). [10.1007/978-3-642-03320-9_26] (Lecture Notes in Computer Science; Vol. 5641). Berlin: Springer. https://doi.org/10.1007/978-3-642-03320-9_26
op den Akker, Hendrikus J.A. ; Hofs, D.H.W. ; Hondorp, G.H.W. ; op den Akker, Harm ; Zwiers, Jakob ; Nijholt, Antinus. / Supporting Engagement and Floor Control in Hybrid Meetings. Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions. editor / Anna Esposito ; Robert Vich. Berlin : Springer, 2009. pp. 276-290 (Lecture Notes in Computer Science).
@inproceedings{5af4b25842f046ae9cb20404a628972f,
title = "Supporting Engagement and Floor Control in Hybrid Meetings",
abstract = "Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room to which they are connected. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to explore how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus-of-attention detection, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant's attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.",
keywords = "HMI-MI: MULTIMODAL INTERACTIONS, EC Grant Agreement nr.: FP6/0033812, METIS-265736, IR-67693, EWI-14716",
author = "{op den Akker}, {Hendrikus J.A.} and D.H.W. Hofs and G.H.W. Hondorp and {op den Akker}, Harm and Jakob Zwiers and Antinus Nijholt",
note = "10.1007/978-3-642-03320-9_26",
year = "2009",
month = "7",
day = "14",
doi = "10.1007/978-3-642-03320-9_26",
language = "Undefined",
isbn = "978-3-642-03319-3",
series = "Lecture Notes in Computer Science",
volume = "5641",
publisher = "Springer",
pages = "276--290",
editor = "Anna Esposito and Robert Vich",
booktitle = "Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions",

}

op den Akker, HJA, Hofs, DHW, Hondorp, GHW, op den Akker, H, Zwiers, J & Nijholt, A 2009, Supporting Engagement and Floor Control in Hybrid Meetings. in A Esposito & R Vich (eds), Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions., 10.1007/978-3-642-03320-9_26, Lecture Notes in Computer Science, vol. 5641, Springer, Berlin, pp. 276-290. https://doi.org/10.1007/978-3-642-03320-9_26

Supporting Engagement and Floor Control in Hybrid Meetings. / op den Akker, Hendrikus J.A.; Hofs, D.H.W.; Hondorp, G.H.W.; op den Akker, Harm; Zwiers, Jakob; Nijholt, Antinus.

Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions. ed. / Anna Esposito; Robert Vich. Berlin : Springer, 2009. p. 276-290 10.1007/978-3-642-03320-9_26 (Lecture Notes in Computer Science; Vol. 5641).


TY - GEN

T1 - Supporting Engagement and Floor Control in Hybrid Meetings

AU - op den Akker, Hendrikus J.A.

AU - Hofs, D.H.W.

AU - Hondorp, G.H.W.

AU - op den Akker, Harm

AU - Zwiers, Jakob

AU - Nijholt, Antinus

N1 - 10.1007/978-3-642-03320-9_26

PY - 2009/7/14

Y1 - 2009/7/14

N2 - Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room to which they are connected. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to explore how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus-of-attention detection, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant's attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.

AB - Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room to which they are connected. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to explore how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus-of-attention detection, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant's attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.

KW - HMI-MI: MULTIMODAL INTERACTIONS

KW - EC Grant Agreement nr.: FP6/0033812

KW - METIS-265736

KW - IR-67693

KW - EWI-14716

U2 - 10.1007/978-3-642-03320-9_26

DO - 10.1007/978-3-642-03320-9_26

M3 - Conference contribution

SN - 978-3-642-03319-3

T3 - Lecture Notes in Computer Science

SP - 276

EP - 290

BT - Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions

A2 - Esposito, Anna

A2 - Vich, Robert

PB - Springer

CY - Berlin

ER -

op den Akker HJA, Hofs DHW, Hondorp GHW, op den Akker H, Zwiers J, Nijholt A. Supporting Engagement and Floor Control in Hybrid Meetings. In Esposito A, Vich R, editors, Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions. Berlin: Springer. 2009. p. 276-290. 10.1007/978-3-642-03320-9_26. (Lecture Notes in Computer Science). https://doi.org/10.1007/978-3-642-03320-9_26