How to Distinguish Posed from Spontaneous Smiles using Geometric Features

Michel F. Valstar, Hatice Gunes, Maja Pantic

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    159 Citations (Scopus)
    11 Downloads (Pure)


    Automatic distinction between posed and spontaneous expressions is an unsolved problem. Previous studies in the cognitive sciences indicated that automatic separation of posed from spontaneous expressions is possible using the face modality. However, little is known about the information carried by head and shoulder motion. In this work, we propose to (i) distinguish between posed and spontaneous smiles by fusing the head, face, and shoulder modalities, (ii) investigate which modalities carry important information and how the modalities relate to each other, and (iii) determine to what extent the temporal dynamics of these signals contribute to solving the problem. A cylindrical head tracker is used to track head motion, and two particle filtering techniques are used to track facial and shoulder motion. Classification is performed by kernel methods combined with ensemble learning techniques. We investigated two aspects of multimodal fusion: the level of abstraction (i.e., early, mid-level, and late fusion) and the fusion rule used (i.e., sum, product, and weight criteria). Experimental results from 100 videos displaying posed smiles and 102 videos displaying spontaneous smiles are presented. The best results were obtained with late fusion of all modalities, with 94.0% of the videos classified correctly.
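    The fusion rules mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the scores, weights, and threshold below are hypothetical, and the sketch only shows how sum, product, and weighted-sum criteria combine per-modality classifier scores in a late-fusion setting.

    ```python
    # Illustrative late-fusion sketch (assumed setup, not the paper's code).
    # Each score is a hypothetical posterior P(spontaneous | modality) from
    # a per-modality classifier (head, face, shoulders).

    def sum_rule(scores):
        # Sum rule: average the per-modality scores.
        return sum(scores) / len(scores)

    def product_rule(scores):
        # Two-class product rule: compare the product of P(spontaneous)
        # against the product of P(posed) = 1 - P(spontaneous), renormalized.
        p_spont = 1.0
        p_posed = 1.0
        for s in scores:
            p_spont *= s
            p_posed *= 1.0 - s
        return p_spont / (p_spont + p_posed)

    def weighted_rule(scores, weights):
        # Weight criterion: weighted average; in practice the weights
        # would be learned, e.g. from per-modality validation accuracy.
        return sum(w * s for w, s in zip(scores, weights)) / sum(weights)

    # Hypothetical scores and weights for the three modalities.
    scores = [0.70, 0.90, 0.60]
    weights = [0.2, 0.6, 0.2]

    for name, fused in [("sum", sum_rule(scores)),
                        ("product", product_rule(scores)),
                        ("weighted", weighted_rule(scores, weights))]:
        label = "spontaneous" if fused >= 0.5 else "posed"
        print(f"{name}: {fused:.3f} -> {label}")
    ```

    Each rule maps the same per-modality scores to a single fused score; the final label is taken by thresholding (or, equivalently for two classes, by argmax over the fused class scores).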
    Original language: Undefined
    Title of host publication: Proceedings of ACM Int'l Conf. Multimodal Interfaces (ICMI'07)
    Place of publication: New York, NY, USA
    Publisher: Association for Computing Machinery
    Number of pages: 8
    ISBN (Print): 978-1-59593-817-6
    Publication status: Published - 14 Nov 2007
    Event: 9th International Conference on Multimodal Interfaces, ICMI 2007 - Nagoya, Japan
    Duration: 12 Nov 2007 - 15 Nov 2007
    Conference number: 9

    Conference: 9th International Conference on Multimodal Interfaces, ICMI 2007
    Abbreviated title: ICMI


    • METIS-245908
    • IR-64566
    • EWI-11668
