Audiovisual laughter detection based on temporal features

Stavros Petridis, Maja Pantic

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



    Previous research on automatic laughter detection has mainly focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features, and we show that the integration of audio and visual information leads to improved performance over single-modal approaches. Static features are extracted on an audio/video frame basis and then combined with temporal features extracted over a temporal window, describing the evolution of the static features over time. When tested, in a person-independent manner, on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual approach achieves an F1 rate of over 89%.
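    The abstract does not specify the exact temporal descriptors used; as an illustration only, one common way to describe "the evolution of static features over time" is to augment each frame's static feature vector with the local window mean and the local slope (rate of change) of each feature. A minimal sketch of that idea, with the window size and descriptor choice being assumptions rather than the authors' method:

    ```python
    import numpy as np

    def temporal_features(static, window=5):
        """Augment per-frame static features with temporal descriptors.

        For each frame, compute over a centred temporal window:
          * the mean of each static feature, and
          * the slope of a least-squares line fitted to each feature,
        capturing how the static features evolve over time.
        `static` has shape (n_frames, n_features); the result has
        shape (n_frames, 2 * n_features).
        """
        n, d = static.shape
        half = window // 2
        out = np.zeros((n, 2 * d))
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            seg = static[lo:hi]                    # frames inside the window
            out[i, :d] = seg.mean(axis=0)          # window mean per feature
            t = np.arange(lo, hi) - i              # local time axis, centred on i
            out[i, d:] = np.polyfit(t, seg, 1)[0]  # per-feature slope
        return out
    ```

    The slope terms play the role of "delta" features familiar from speech processing: for a feature that rises linearly across the window the slope is its per-frame rate of change, while for a static feature it is near zero.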
    Original language: English
    Title of host publication: BNAIC 2008
    Subtitle of host publication: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference, Enschede/Bad Boekelo, October 30-31, 2008
    Editors: Anton Nijholt, Maja Pantic, Mannes Poel, Hendri Hondorp
    Publisher: University of Twente
    Number of pages: 2
    Publication status: Published - 30 Oct 2008
    Event: 20th Benelux Conference on Artificial Intelligence, BNAIC 2008 - Boekelo, Netherlands
    Duration: 30 Oct 2008 – 31 Oct 2008
    Conference number: 20

    Publication series

    Name: BNAIC: proceedings of the ... Belgium/Netherlands Artificial Intelligence Conference
    Publisher: University of Twente
    ISSN (Print): 1568-7805


    Conference: 20th Benelux Conference on Artificial Intelligence, BNAIC 2008
    Abbreviated title: BNAIC


    • EC Grant Agreement nr.: FP7/211486
    • EC Grant Agreement nr.: FP6/0027787


