Audiovisual laughter detection based on temporal features

Stavros Petridis, Maja Pantic

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    30 Citations (Scopus)
    7 Downloads (Pure)

    Abstract

    Previous research on automatic laughter detection has focused mainly on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features, and we show that integrating the information from the audio and video channels leads to improved performance over single-modal approaches. Static features are extracted on an audio/video frame basis and then combined with temporal features extracted over a temporal window, which describe the evolution of the static features over time. Several different temporal features have been investigated, and it has been shown that adding temporal information improves performance over using static information only. It is common to use a fixed set of temporal features, which implies that all static features exhibit the same behaviour over a temporal window. However, this does not always hold, and we show that when AdaBoost is used as a feature selector, different temporal features are selected for each static feature, i.e., the temporal evolution of each static feature is described by different statistical measures. When tested on 96 audiovisual sequences, depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, in a person-independent way, the proposed audiovisual approach achieves an F1 rate of over 89%.
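    The paper itself does not include code, but the core pipeline described above can be illustrated in a few lines: per-frame static features are summarised over a sliding window by several statistical measures, audio and video temporal features are fused by concatenation, and AdaBoost over decision stumps acts as a feature selector, so different static features can end up described by different statistics. The following Python sketch is illustrative only; the window length, step size, set of statistics, dummy data, and the helper name temporal_features are assumptions, not values or code from the paper.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    def temporal_features(static, win=30, step=10):
        """static: (n_frames, n_static) array of per-frame features.
        Returns (n_windows, n_static * n_stats) temporal features,
        one block of statistics per sliding window."""
        stats = [np.mean, np.std,
                 lambda x, axis: x.max(axis=axis) - x.min(axis=axis)]  # range
        rows = []
        for start in range(0, static.shape[0] - win + 1, step):
            w = static[start:start + win]
            rows.append(np.concatenate([f(w, axis=0) for f in stats]))
        return np.asarray(rows)

    # Toy stand-ins for per-frame audio (e.g. spectral) and video
    # (e.g. facial-point) static features; real features would come
    # from the respective audio/video front ends.
    rng = np.random.default_rng(0)
    audio_static = rng.normal(size=(300, 6))
    video_static = rng.normal(size=(300, 4))

    # Feature-level fusion: concatenate audio and video temporal features.
    X = np.hstack([temporal_features(audio_static),
                   temporal_features(video_static)])
    y = rng.integers(0, 2, size=X.shape[0])  # dummy labels: 1 = laughter, 0 = speech

    # AdaBoost with depth-1 trees: each stump selects one column, i.e. one
    # (static feature, statistic) pair, so the ensemble implicitly chooses
    # a different temporal measure per static feature.
    # Note: the parameter is named `estimator` in scikit-learn >= 1.2.
    clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=50)
    clf.fit(X, y)
    selected = {t.tree_.feature[0] for t in clf.estimators_
                if t.tree_.feature[0] >= 0}
    print("columns (feature-statistic pairs) used by the stumps:", sorted(selected))

    Inspecting which columns the stumps split on shows, per static feature, which statistic (mean, standard deviation, range, ...) carried the discriminative information, which is the behaviour the abstract contrasts with using a fixed set of temporal features for all static features.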
    Original language: English
    Title of host publication: ICMI '08
    Subtitle of host publication: Proceedings of the 10th International Conference on Multimodal Interfaces
    Place of publication: New York
    Publisher: Association for Computing Machinery
    Pages: 37-44
    Number of pages: 8
    ISBN (Print): 978-1-60558-198-9
    Publication status: Published - Oct 2008
    Event: 10th International Conference on Multimodal Interfaces, ICMI 2008 - Chania, Crete, Greece
    Duration: 20 Oct 2008 - 22 Oct 2008
    Conference number: 10

    Conference

    Conference: 10th International Conference on Multimodal Interfaces, ICMI 2008
    Abbreviated title: ICMI
    Country/Territory: Greece
    City: Chania, Crete
    Period: 20/10/08 - 22/10/08

    Keywords

    • EC Grant Agreement nr.: FP7/211486
    • EC Grant Agreement nr.: FP6/0027787
    • HMI-MI: MULTIMODAL INTERACTIONS
    • Audiovisual data processing
    • Laughter detection
    • Computing methodologies
    • Non-linguistic information processing
