Abstract
In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech, with the aim of comparing the added value of annotating felt emotion versus annotating perceived emotion. Using speech material from the TNO-GAMING corpus (a corpus of audiovisual recordings of people playing video games), speech-based affect recognizers were developed that predict scalar Arousal and Valence values. Two types of recognizers were developed in parallel: one trained with felt-emotion annotations (generated by the gamers themselves) and one trained with perceived/observed-emotion annotations (generated by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
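The core comparison in the abstract can be illustrated with a minimal sketch: train one regressor on felt (self-reported) labels and one on observed labels over the same acoustic features, then compare test error. Everything below is hypothetical — the feature matrix, the label noise levels, and the ridge-regression model are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acoustic features (e.g. pitch/energy/MFCC statistics) for N utterances.
N, D = 200, 6
X = rng.normal(size=(N, D))

# Synthetic "true" arousal; felt labels are modeled as noisier than observed
# labels, mimicking the finding that observed emotion is easier to predict.
w_true = rng.normal(size=D)
arousal = X @ w_true
felt = arousal + rng.normal(scale=2.0, size=N)      # self-reported, noisy
observed = arousal + rng.normal(scale=0.5, size=N)  # averaged observer ratings

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def rmse(X, y, w):
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

# Train on the first 150 utterances, test on the rest.
tr, te = slice(0, 150), slice(150, None)
w_felt = ridge_fit(X[tr], felt[tr])
w_obs = ridge_fit(X[tr], observed[tr])
print("test RMSE, felt labels:    ", rmse(X[te], felt[te], w_felt))
print("test RMSE, observed labels:", rmse(X[te], observed[te], w_obs))
```

Under these assumed noise levels the observed-label model attains lower test error, which is the qualitative pattern the abstract reports; the real experiments used speech-derived features and human annotations rather than synthetic data.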
Original language | English |
---|---|
Title of host publication | Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) |
Publisher | International Speech Communication Association |
Pages | 2027-2030 |
Number of pages | 4 |
Publication status | Published - 2009 |
Event | 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009 - Brighton, United Kingdom
Duration | 6 Sept 2009 → 10 Sept 2009
Conference number | 10
Internet address | http://www.interspeech2010.jpn.org/
Publication series
Name | Publications Speech Processing Group, Brno University of Technology |
---|---|
Publisher | International Speech Communication Association |
ISSN (Print) | 1990-9772 |
Conference
Conference | 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009 |
---|---|
Abbreviated title | INTERSPEECH |
Country/Territory | United Kingdom |
City | Brighton |
Period | 6/09/09 → 10/09/09 |
Keywords
- IR-68948
- EWI-17024
- METIS-264250