Abstract
This paper describes a multimodal dialogue-driven system, ARCHIVUS, that allows users to access and retrieve the content of recorded and annotated multimodal meetings. We describe (1) the novel approach taken in designing the system, given the relative inapplicability of standard user-requirements elicitation methodologies, (2) the components of ARCHIVUS, and (3) the methodologies that we plan to use to evaluate the system.
| Original language | English |
| --- | --- |
| Title of host publication | Machine Learning for Multimodal Interaction |
| Subtitle of host publication | First International Workshop, MLMI 2004, Martigny, Switzerland, June 21-23, 2004, Revised Selected Papers |
| Editors | Hervé Bourlard, Samy Bengio |
| Place of Publication | Berlin |
| Publisher | Springer |
| Pages | 291-304 |
| Number of pages | 14 |
| ISBN (Electronic) | 978-3-540-30568-2 |
| ISBN (Print) | 978-3-540-24509-4 |
| DOIs | |
| Publication status | Published - 2005 |
| Event | 1st International Workshop on Machine Learning for Multimodal Interaction, MLMI 2004 - Martigny, Switzerland. Duration: 21 Jun 2004 → 23 Jun 2004. Conference number: 1 |
Publication series
| Name | Lecture Notes in Computer Science |
| --- | --- |
| Publisher | Springer |
| Volume | 3361 |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Workshop
| Workshop | 1st International Workshop on Machine Learning for Multimodal Interaction, MLMI 2004 |
| --- | --- |
| Abbreviated title | MLMI |
| Country/Territory | Switzerland |
| City | Martigny |
| Period | 21/06/04 → 23/06/04 |
Keywords
- HMI-MI: MULTIMODAL INTERACTIONS
- HMI-SLT: Speech and Language Technology
- User requirement
- Automatic speech recognition
- Task models
- Input modality
- Multimodal interfaces