Spontaneous vs. posed facial behavior: Automatic analysis of brow actions

M.F. Valstar, Maja Pantic, Z. Ambadar, J.F. Cohn

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    128 Citations (Scopus)

    Abstract

    Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of temporal dynamics of facial expressions for interpretation of the observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles are fundamentally different from each other. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discrimination between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters including the maximal intensity, duration, and order of occurrence. We use Gentle Boost to select the most important of these parameters. The selected parameters are used further to train Relevance Vector Machines to determine per temporal segment of an activated AU whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate.
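    The pipeline outlined in the abstract (mid-level parameters per temporal segment, GentleBoost feature selection, per-segment Relevance Vector Machine classification, and a probabilistic decision over the whole brow action) can be illustrated with the minimal sketch below. This is not the authors' implementation: AdaBoost with decision stumps stands in for GentleBoost, a probability-calibrated SVM stands in for the Relevance Vector Machine, and averaging the per-segment posteriors is an assumed fusion rule.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.svm import SVC

    def select_parameters(X, y, n_keep=3):
        # Rank mid-level parameters (e.g. maximal intensity, duration, order of
        # occurrence per temporal segment) by boosting importance and keep the
        # strongest ones. AdaBoost's default weak learner is a decision stump,
        # used here as a stand-in for GentleBoost.
        booster = AdaBoostClassifier(n_estimators=50).fit(X, y)
        return np.argsort(booster.feature_importances_)[::-1][:n_keep]

    def train_segment_classifier(X, y):
        # Per-temporal-segment classifier giving P(spontaneous | segment features);
        # a probability-calibrated SVM stands in for a Relevance Vector Machine.
        return SVC(probability=True).fit(X, y)

    def classify_brow_action(clf, segments):
        # Fuse the posteriors of the action's temporal segments (onset, apex,
        # offset) into one posed/spontaneous decision; mean fusion is an assumption.
        p = clf.predict_proba(segments)[:, 1]
        return "spontaneous" if p.mean() > 0.5 else "posed"

    # Synthetic usage: three mid-level parameters per temporal segment.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 3))          # columns: max intensity, duration, order
    y = rng.integers(0, 2, size=60)       # 1 = spontaneous, 0 = posed
    keep = select_parameters(X, y)
    clf = train_segment_classifier(X[:, keep], y)
    print(classify_brow_action(clf, X[:3, keep]))   # onset, apex, offset segments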
    Original language: Undefined
    Title of host publication: Proceedings of ACM Int'l Conf. Multimodal Interfaces (ICMI'06)
    Place of publication: New York, NY, USA
    Publisher: Association for Computing Machinery (ACM)
    Pages: 162-170
    Number of pages: 9
    ISBN (Print): 1-59593-541-X
    DOIs
    Publication status: Published - 3 Nov 2006
    Event: 8th International Conference on Multimodal Interfaces, ICMI 2006 - Banff, Canada
    Duration: 2 Nov 2006 - 4 Nov 2006
    Conference number: 8

    Publication series

    Publisher: ACM
    Number: 10

    Conference

    Conference: 8th International Conference on Multimodal Interfaces, ICMI 2006
    Abbreviated title: ICMI
    Country: Canada
    City: Banff
    Period: 2/11/06 - 4/11/06

    Keywords

    • HMI-MI: MULTIMODAL INTERACTIONS
    • METIS-248237
    • EWI-11628
    • IR-62076
