Speaker-adaptive multimodal prediction model for listener responses

Iwan de Kok, Dirk Heylen, Louis-Philippe Morency

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    3 Citations (Scopus)
    34 Downloads (Pure)

    Abstract

    The goal of this paper is to analyze and model the variability in speaking styles in dyadic interactions and to build a predictive algorithm for listener responses that is able to adapt to these different styles. The end result of this research will be a virtual human able to automatically respond to a human speaker with appropriate listener responses (e.g., head nods). Our novel speaker-adaptive prediction model is created from a corpus of dyadic interactions in which speaker variability is analyzed to identify a subset of prototypical speaker styles. During a live interaction, our prediction model automatically identifies the closest prototypical speaker style and predicts listener responses based on this 'communicative style'. Central to our approach is the idea of a 'speaker profile', which uniquely identifies each speaker and enables the matching between prototypical speakers and new speakers. The paper shows the merits of our speaker-adaptive listener response prediction model by demonstrating improvement over a state-of-the-art approach that does not adapt to the speaker. Besides the merits of speaker adaptation, our experiments highlight the importance of using multimodal features when comparing speakers to select the closest prototypical speaker style.
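    The matching step described above — representing each speaker as a profile vector and selecting the most similar prototypical style — can be sketched in a few lines. This is a minimal illustration only: the feature names, prototype values, and the use of cosine similarity are assumptions for the example, not details taken from the paper.

    ```python
    import math

    # Hypothetical speaker profiles: fixed-length vectors of multimodal
    # features (e.g., pitch variation, speech rate, gaze-at-listener ratio).
    # Styles and values below are illustrative placeholders.
    PROTOTYPES = {
        "style_A": [0.8, 0.2, 0.6],
        "style_B": [0.1, 0.9, 0.3],
        "style_C": [0.5, 0.5, 0.9],
    }

    def cosine_similarity(u, v):
        """Cosine similarity between two equal-length feature vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def closest_prototype(profile, prototypes=PROTOTYPES):
        """Return the prototypical speaker style most similar to `profile`."""
        return max(prototypes, key=lambda s: cosine_similarity(profile, prototypes[s]))
    ```

    At run time, the live speaker's profile would be computed incrementally from the same multimodal features, and the listener-response predictor associated with the returned style would be used.
    
    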
    Original language: English
    Title of host publication: ICMI '13
    Subtitle of host publication: Proceedings of the 2013 ACM International Conference on Multimodal Interaction, December 9-13, 2013, Sydney, Australia
    Editors: Julien Epps
    Place of Publication: New York
    Publisher: Association for Computing Machinery (ACM)
    Pages: 51-58
    Number of pages: 8
    ISBN (Print): 978-1-4503-2129-7
    DOIs
    Publication status: Published - 9 Dec 2013
    Event: 15th International Conference on Multimodal Interaction, ICMI 2013 - Sydney, Australia
    Duration: 9 Dec 2013 - 13 Dec 2013
    Conference number: 15

    Conference

    Conference: 15th International Conference on Multimodal Interaction, ICMI 2013
    Abbreviated title: ICMI
    Country: Australia
    City: Sydney
    Period: 9/12/13 - 13/12/13

    Keywords

    • EWI-24317
    • METIS-302648
    • IR-89320
    • HMI-IA: Intelligent Agents

