This paper focuses on the automatic segmentation of spontaneous data using continuous dimensional labels from multiple coders. It introduces efficient algorithms that aim at (i) producing ground truth by maximizing inter-coder agreement, (ii) eliciting the frames or samples that capture the transition to and from an emotional state, and (iii) automatically segmenting spontaneous audio-visual data for use by machine learning techniques that cannot handle unsegmented sequences. As a proof of concept, the algorithms introduced are tested on data annotated in arousal-valence space; however, they can be applied straightforwardly to data annotated in other continuous emotional spaces, such as power and expectation.
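To make the segmentation idea concrete, the sketch below shows one simple way to cut a continuous annotation trace (e.g. a valence signal in [-1, 1]) into segments at the transitions into and out of an emotional state. This is an illustrative toy, not the paper's algorithm: the threshold-crossing rule, the `threshold` and `min_len` parameters, and the function name are all assumptions made for this example.

```python
def segment_by_threshold(trace, threshold=0.2, min_len=3):
    """Return (start, end) index pairs of contiguous runs where the
    continuous annotation departs from the neutral band.

    A frame is 'emotional' when |value| > threshold; runs shorter than
    min_len frames are discarded as noise. This is a toy stand-in for
    transition detection, not the method proposed in the paper.
    """
    segments = []
    start = None
    for i, v in enumerate(trace):
        active = abs(v) > threshold
        if active and start is None:
            start = i                      # transition into an emotional state
        elif not active and start is not None:
            if i - start >= min_len:
                segments.append((start, i))  # transition back to neutral
            start = None
    if start is not None and len(trace) - start >= min_len:
        segments.append((start, len(trace)))  # trace ends mid-segment
    return segments

# Hypothetical single-coder valence trace: one positive and one negative episode.
valence = [0.0, 0.1, 0.5, 0.6, 0.7, 0.3, 0.1, 0.0, -0.4, -0.5, -0.6, -0.1]
print(segment_by_threshold(valence))  # → [(2, 6), (8, 11)]
```

With multiple coders, the same idea would be applied to an agreement-weighted combination of their traces rather than a single signal.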
| Publisher | German Research Center for AI (DFKI) |
| Workshop | Workshop on Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality |
| Period | 18/05/10 → 18/05/10 |
| Other | 18 May 2010 |
- EC Grant Agreement nr.: FP7/211486
- HMI-MI: MULTIMODAL INTERACTIONS