Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space

Mihalis A. Nicolaou, Hatice Gunes, Maja Pantic

    Research output: Contribution to journal › Article › Academic › peer-review

    294 Citations (Scopus)

    Abstract

    Past research in the analysis of human affect has focused on recognition of prototypic expressions of six basic emotions, based on posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific interpretations of affective displays recorded in naturalistic and real-world settings, and toward multimodal analysis and recognition of human affect. Converging with this shift, this paper presents, to the best of our knowledge, the first approach in the literature that: 1) fuses facial expression, shoulder gesture, and audio cues for dimensional and continuous prediction of emotions in valence and arousal space, 2) compares the performance of two state-of-the-art machine learning techniques applied to the target problem, namely bidirectional Long Short-Term Memory neural networks (BLSTM-NNs) and Support Vector Machines for Regression (SVR), and 3) proposes an output-associative fusion framework that incorporates correlations and covariances between the emotion dimensions. The proposed approach is evaluated on spontaneous SAL data from four subjects using subject-dependent leave-one-sequence-out cross-validation. The experimental results show that: 1) on average, BLSTM-NNs outperform SVR due to their ability to learn past and future context, 2) the proposed output-associative fusion framework outperforms feature-level and model-level fusion by modeling and learning correlations and patterns between the valence and arousal dimensions, and 3) the proposed system can accurately reproduce the valence and arousal ground truth obtained from human coders.
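
    The core idea of the output-associative framework described above is a two-stage regression: independent per-dimension predictors are trained first, and a second stage then predicts each dimension from the intermediate predictions of both dimensions, so cross-dimensional correlations can be learned. The following is a minimal, non-temporal sketch of that idea using scikit-learn SVR (the paper evaluates both SVR and BLSTM-NNs); the synthetic data, variable names, and omission of the temporal context window are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Synthetic stand-in for fused multi-cue features (face, shoulder, audio)
    # and continuous valence/arousal annotations (illustrative only).
    X = rng.standard_normal((500, 20))
    valence = np.tanh(X[:, 0] + 0.5 * X[:, 1])
    arousal = np.tanh(X[:, 2] - 0.3 * X[:, 0])

    # Stage 1: independent regressors, one per emotion dimension.
    val_model = SVR().fit(X, valence)
    aro_model = SVR().fit(X, arousal)
    val_pred = val_model.predict(X)
    aro_pred = aro_model.predict(X)

    # Stage 2 (output-associative step): each dimension's final predictor also
    # sees the intermediate prediction of the other dimension, allowing
    # correlations between valence and arousal to be modeled.
    Z = np.column_stack([val_pred, aro_pred])
    val_oa = SVR().fit(Z, valence)
    aro_oa = SVR().fit(Z, arousal)

    print("final valence predictions:", val_oa.predict(Z)[:3])
    print("final arousal predictions:", aro_oa.predict(Z)[:3])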
    Original language: Undefined
    Pages (from-to): 92-105
    Number of pages: 14
    Journal: IEEE Transactions on Affective Computing
    Volume: 2
    Issue number: 2
    DOIs
    Publication status: Published - Apr 2011

    Keywords

    • HMI-MI: MULTIMODAL INTERACTIONS
    • emotional acoustic signals
    • output-associative fusion
    • continuous affect prediction
    • valence and arousal dimensions
    • shoulder gestures
    • IR-79392
    • EWI-21287
    • Dimensional affect recognition
    • Facial expressions
    • multicue and multimodal fusion
    • METIS-285005
    • EC Grant Agreement nr.: FP7/211486
