Remote participants in hybrid meetings often find it difficult to follow what is going on in the (physical) meeting room they are connected with. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to study how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus of attention, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant’s attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.
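The keyword-spotting step described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: it assumes the spotter receives transcript fragments from the online speech recognizer and matches them against a per-participant watch list of topics, returning the matching fragment so the remote participant can catch up. All names (`spot_keywords`, `alert_if_relevant`) are invented for this sketch.

```python
# Hypothetical sketch of the keyword-spotting step: match a remote
# participant's topics of interest against incoming ASR transcript
# fragments and, on a hit, return an alert carrying the transcript
# of the fragment so the participant can catch up.

def spot_keywords(fragment, topics):
    """Return the topics of interest mentioned in a transcript fragment."""
    words = fragment.lower().split()
    return [t for t in topics if t.lower() in words]

def alert_if_relevant(fragment, topics):
    """Build an alert (topics hit + transcript) or None if no topic matched."""
    hits = spot_keywords(fragment, topics)
    if hits:
        return {"topics": hits, "transcript": fragment}
    return None
```

A real meeting assistant would run this over a stream of recognizer output and push the alert to the remote client, rather than returning a dictionary.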
- Name: Lecture Notes in Computer Science
- Conference: Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, Prague
- Period: 14/07/09 → …
- HMI-MI: MULTIMODAL INTERACTIONS
- EC Grant Agreement nr.: FP6/0033812