ARCHIVUS: A System for Accessing the Content of Recorded Multimodal Meetings

Agnes Lisowska, Martin Rajman, Trung H. Bui

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    5 Citations (Scopus)
    19 Downloads (Pure)

    Abstract

    This paper describes a multimodal dialogue-driven system, ARCHIVUS, that allows users to access and retrieve the content of recorded and annotated multimodal meetings. We describe (1) a novel approach taken in designing the system given the relative inapplicability of standard user-requirements elicitation methodologies, (2) the components of ARCHIVUS, and (3) the methodologies that we plan to use to evaluate the system.
    Original language: English
    Title of host publication: Machine Learning for Multimodal Interaction
    Subtitle of host publication: First International Workshop, MLMI 2004, Martigny, Switzerland, June 21-23, 2004, Revised Selected Papers
    Editors: Hervé Bourlard, Samy Bengio
    Place of publication: Berlin
    Publisher: Springer
    Pages: 291-304
    Number of pages: 14
    ISBN (Electronic): 978-3-540-30568-2
    ISBN (Print): 978-3-540-24509-4
    DOIs
    Publication status: Published - 2005
    Event: 1st International Workshop on Machine Learning for Multimodal Interaction, MLMI 2004 - Martigny, Switzerland
    Duration: 21 Jun 2004 - 23 Jun 2004
    Conference number: 1

    Publication series

    Name: Lecture Notes in Computer Science
    Publisher: Springer
    Volume: 3361
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Workshop

    Workshop: 1st International Workshop on Machine Learning for Multimodal Interaction, MLMI 2004
    Abbreviated title: MLMI
    Country: Switzerland
    City: Martigny
    Period: 21/06/04 - 23/06/04

    Keywords

    • HMI-MI: MULTIMODAL INTERACTIONS
    • HMI-SLT: Speech and Language Technology
    • User requirement
    • Automatic speech recognition
    • Task models
    • Input modality
    • Multimodal interfaces
