Dimensional Emotion Recognition from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners

Hatice Gunes, Maja Pantic

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    56 Citations (Scopus)

    Abstract

    This paper focuses on dimensional prediction of emotions from spontaneous conversational head gestures. It maps the amount and direction of head motion, and the occurrences of head nods and shakes, onto the arousal, expectation, intensity, power and valence levels of the observed subject; there has been virtually no prior research on this topic. Preliminary experiments show that it is possible to automatically predict emotions along these five dimensions from conversational head gestures. Dimensional and continuous emotion prediction from spontaneous head gestures has been integrated into the SEMAINE project [1], which aims to achieve sustained, emotionally colored interaction between a human user and Sensitive Artificial Listeners.
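
    As an illustration of the mapping described above, the following minimal Python sketch regresses simple head-gesture descriptors onto the five emotion dimensions. The feature set, the synthetic data and the use of support vector regression are assumptions made for illustration only; the abstract does not specify the authors' actual features or learning method.

    # Hypothetical sketch: per-segment head-gesture descriptors -> five emotion dimensions.
    # The features and the regressor choice are assumptions, not the authors' method.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.multioutput import MultiOutputRegressor

    DIMENSIONS = ["arousal", "expectation", "intensity", "power", "valence"]

    # Placeholder data: each row describes one conversational segment by
    # [motion magnitude, horizontal motion, vertical motion, nod count, shake count].
    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = rng.uniform(-1.0, 1.0, (200, len(DIMENSIONS)))  # dimension levels in [-1, 1]

    # One support vector regressor per dimension, wrapped for multi-output prediction.
    model = MultiOutputRegressor(SVR(kernel="rbf"))
    model.fit(X, y)

    prediction = model.predict(X[:1])[0]
    for name, value in zip(DIMENSIONS, prediction):
        print(f"{name}: {value:+.2f}")
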
    Original language: Undefined
    Title of host publication: Proceedings of the 10th International Conference on Intelligent Virtual Agents, IVA 2010
    Editors: Jan Allbeck, Norman Badler, Timothy Bickmore, Catherine Pelachaud, Alla Safonova
    Place of Publication: Berlin
    Publisher: Springer
    Pages: 371-377
    Number of pages: 7
    ISBN (Print): 978-3-642-15892-6
    Publication status: Published - 21 Sep 2010
    Event: 10th International Conference on Intelligent Virtual Agents, IVA 2010 - Philadelphia, United States
    Duration: 20 Sep 2010 - 22 Sep 2010
    Conference number: 10

    Publication series

    Name: Lecture Notes in Computer Science
    Publisher: Springer Verlag
    Volume: 6356

    Conference

    Conference: 10th International Conference on Intelligent Virtual Agents, IVA 2010
    Abbreviated title: IVA
    Country: United States
    City: Philadelphia
    Period: 20/09/10 - 22/09/10

    Keywords

    • IR-75933
    • METIS-275891
    • Spontaneous head movements
    • HMI-MI: MULTIMODAL INTERACTIONS
    • virtual character-human interaction
    • EWI-19479
    • dimensional emotion prediction
    • EC Grant Agreement nr.: FP7/211486
