Audiovisual Discrimination between Laughter and Speech

Stavros Petridis, Maja Pantic

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    45 Citations (Scopus)
    73 Downloads (Pure)

    Abstract

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating the information from audio and video leads to improved reliability of the audiovisual approach in comparison to single-modal approaches. We also investigated the level at which audiovisual information should be fused for the best performance. When tested on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual feature-level approach achieved an 86.9% recall rate with 76.7% precision.
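    The abstract contrasts feature-level fusion (concatenating audio and video features before a single classifier) with single-modal and decision-level alternatives, and reports recall and precision. The sketch below illustrates these two fusion schemes and the two metrics on toy data; all function names, weights, and labels are illustrative assumptions, not the authors' implementation or features.

    ```python
    def fuse_features(audio_vec, video_vec):
        """Feature-level fusion: concatenate the per-modality feature
        vectors into one joint vector before classification."""
        return audio_vec + video_vec

    def fuse_decisions(audio_score, video_score, w_audio=0.5):
        """Decision-level fusion: each modality is classified separately
        and the scores are combined afterwards (here, a weighted sum)."""
        return w_audio * audio_score + (1.0 - w_audio) * video_score

    def precision_recall(y_true, y_pred, positive="laughter"):
        """Precision = TP/(TP+FP); recall = TP/(TP+FN) for the positive class."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return precision, recall

    # Toy episode labels: laughter vs speech (illustrative only).
    truth = ["laughter", "laughter", "speech", "speech", "laughter"]
    preds = ["laughter", "speech", "laughter", "speech", "laughter"]
    p, r = precision_recall(truth, preds)
    joint = fuse_features([0.1, 0.2], [0.3])
    ```

    The key design difference is where the modalities meet: feature-level fusion lets one classifier model cross-modal correlations, while decision-level fusion keeps the modalities independent until their scores are combined.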
    Original language: Undefined
    Title of host publication: ICASSP 2008 International Conference on Acoustics, Speech and Signal Processing
    Place of Publication: Los Alamitos
    Publisher: IEEE Computer Society Press
    Pages: 5117-5120
    Number of pages: 4
    ISBN (Print): 978-1-4244-1483-3
    DOIs
    Publication status: Published - Apr 2008
    Event: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2008 - Caesars Palace, Las Vegas, United States
    Duration: 30 Mar 2008 – 4 Apr 2008

    Publication series

    Name
    Publisher: IEEE Computer Society Press
    Number: 2008/16200
    ISSN (Print): 1520-6149

    Conference

    Conference: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2008
    Abbreviated title: ICASSP
    Country: United States
    City: Las Vegas
    Period: 30/03/08 – 4/04/08

    Keywords

    • HMI-MI: MULTIMODAL INTERACTIONS
    • EC Grant Agreement nr.: FP7/211486
    • EWI-14794
    • Audiovisual data processing
    • METIS-255075
    • Nonlinguistic Information Processing
    • laughter detection
    • EC Grant Agreement nr.: FP6/0027787
    • IR-65263
    • Data Fusion
