TY - GEN
T1 - Is this joke really funny? Judging the mirth by audiovisual laughter analysis
AU - Petridis, S.
AU - Pantic, Maja
PY - 2009
Y1 - 2009
AB - This paper presents the results of an empirical study suggesting that, while laughter is a very good indicator of amusement, the kind of laughter (unvoiced laughter vs. voiced laughter) is correlated with the mirth of the laughter and could potentially be used to judge the actual hilarity of the stimulus joke. For this study, an automated method for audiovisual analysis of laughter episodes exhibited while watching movie clips or observing the behaviour of a conversational agent has been developed. The audio and visual features, based on spectral properties of the acoustic signal and facial expressions respectively, have been integrated using feature-level fusion, resulting in a multimodal approach to distinguishing voiced laughter from unvoiced laughter and speech. The classification accuracy of such a system tested on spontaneous laughter episodes is 74%. Finally, preliminary results are presented which provide evidence that unvoiced laughter can be interpreted as less gleeful than voiced laughter and that, consequently, the detection of these two types of laughter can be used to label multimedia content as mildly funny or very funny, respectively.
KW - METIS-264325
KW - IR-69560
KW - Implicit content based indexing
KW - EWI-17211
KW - audiovisual laughter detection
KW - HMI-MI: MULTIMODAL INTERACTIONS
KW - HMI-HF: Human Factors
KW - EC Grant Agreement nr.: FP7/211486
DO - 10.1109/ICME.2009.5202774
M3 - Conference contribution
SN - 978-1-4244-4291-1
SP - 1444
EP - 1447
BT - IEEE International Conference on Multimedia and Expo (ICME'09)
PB - IEEE
CY - Los Alamitos
T2 - IEEE International Conference on Multimedia and Expo, ICME 2009
Y2 - 28 June 2009 through 3 July 2009
ER -