A dynamic texture-based approach to recognition of facial actions and their temporal models

Sander Koelstra, Maja Pantic, Ioannis (Yannis) Patras

Research output: Contribution to journal › Article › Academic › peer-review



    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
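To make the Motion History Image (MHI) representation concrete, the following is a minimal sketch of the standard MHI update rule (not the authors' extended version): pixels where the inter-frame difference exceeds a threshold are stamped with the maximal timestamp value `tau`, while all other pixels decay by one per frame, so recent motion appears brighter than older motion. The function name, threshold, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=10, motion_thresh=0.1):
    """One MHI update step (illustrative sketch of the basic technique).

    Pixels whose inter-frame intensity difference exceeds `motion_thresh`
    are set to `tau`; all other pixels decay by 1, floored at 0.
    """
    motion = np.abs(frame - prev_frame) > motion_thresh
    return np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))

# Toy usage: a bright square moving one pixel right per frame.
h, w, tau = 16, 16, 10
frames = np.zeros((5, h, w))
for t in range(5):
    frames[t, 4:8, 2 + t:6 + t] = 1.0

mhi = np.zeros((h, w))
for t in range(1, 5):
    mhi = update_mhi(mhi, frames[t], frames[t - 1], tau=tau)
# The most recent motion edge holds the value tau; older edges have decayed.
```

In the paper, such a motion representation is then summarized by motion orientation histogram descriptors over spatial and temporal neighborhoods before classification.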
    Pages (from-to): 1940-1954
    Number of pages: 15
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Issue number: 11
    Publication status: Published - Nov 2010


    • dynamic texture
    • EC Grant Agreement nr.: FP7/211486
    • EC Grant Agreement nr.: FP7/216444
    • Motion
    • Facial Expression
    • Facial image analysis
