Fully Automatic Recognition of the Temporal Phases of Facial Actions

M.F. Valstar, Maja Pantic

    Research output: Contribution to journal › Article › Academic › peer-review

    212 Citations (Scopus)

    Abstract

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.
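    The last stage of the pipeline described above recognizes, for each AU, its sequence of temporal segments (neutral, onset, apex, offset). A minimal sketch of that idea, assuming a left-to-right hidden Markov model decoded with the Viterbi algorithm: the transition structure (neutral → onset → apex → offset → neutral) and all probabilities below are illustrative assumptions, not the paper's trained GentleBoost/SVM/HMM parameters.

    ```python
    # Hypothetical sketch: decode an AU's temporal segments from per-frame
    # classifier scores with a small HMM. All parameters are made up for
    # illustration; the paper learns these from tracked facial-point data.
    import math

    STATES = ["neutral", "onset", "apex", "offset"]

    # Transitions favour staying in a segment or moving to the next one
    # in the neutral -> onset -> apex -> offset -> neutral cycle.
    TRANS = {
        "neutral": {"neutral": 0.8, "onset": 0.2},
        "onset":   {"onset": 0.7, "apex": 0.3},
        "apex":    {"apex": 0.8, "offset": 0.2},
        "offset":  {"offset": 0.7, "neutral": 0.3},
    }

    def viterbi(frame_scores):
        """frame_scores: list of dicts mapping state -> P(observation | state).

        Returns the most likely state sequence under TRANS and a uniform prior.
        """
        # Log-probability of the best path ending in each state at frame 0.
        best = {s: math.log(1.0 / len(STATES)) + math.log(frame_scores[0].get(s, 1e-9))
                for s in STATES}
        back = []  # backpointers, one dict per frame after the first
        for obs in frame_scores[1:]:
            nxt, ptr = {}, {}
            for s in STATES:
                # Best predecessor for state s (disallowed jumps get ~zero mass).
                score, prev = max(
                    (best[p] + math.log(TRANS[p].get(s, 1e-12)), p) for p in STATES
                )
                nxt[s] = score + math.log(obs.get(s, 1e-9))
                ptr[s] = prev
            back.append(ptr)
            best = nxt
        # Trace the best path backwards from the most likely final state.
        state = max(best, key=best.get)
        path = [state]
        for ptr in reversed(back):
            state = ptr[state]
            path.append(state)
        return list(reversed(path))
    ```

    Feeding in noisy per-frame scores (e.g. from an SVM stage) yields a smoothed segment labelling, since the transition model suppresses implausible jumps such as neutral directly to apex.
    
    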
    Original language: Undefined
    Pages (from-to): 28-43
    Number of pages: 16
    Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
    Volume: 42
    Issue number: 1
    DOIs
    Publication status: Published - Feb 2012

    Keywords

    • particle filtering
    • support vector machine (SVM)
    • spatiotemporal facial behavior analysis
    • EWI-22961
    • IR-84221
    • GentleBoost
    • Facial Expression Analysis
    • METIS-296253
    • HMI-MI: MULTIMODAL INTERACTIONS
