Abstract
Fusing multiple continuous expert annotations is a crucial problem in machine learning and computer vision, particularly when dealing with uncertain and subjective tasks related to affective behavior. Inspired by the concept of inferring shared and individual latent spaces in Probabilistic Canonical Correlation Analysis (PCCA), we propose a novel generative model that discovers temporal dependencies in the shared/individual spaces (Dynamic Probabilistic CCA, DPCCA). To accommodate the temporal lags that are prominent amongst continuous annotations, we further introduce a latent warping process, leading to the DPCCA with Time Warpings (DPCTW) model. Finally, we propose two supervised variants of DPCCA/DPCTW which incorporate inputs (i.e., visual or audio features), both in a generative (SG-DPCCA) and a discriminative manner (SD-DPCCA). We show that the resulting family of models (i) can be used as a unifying framework for solving the problems of temporal alignment and fusion of multiple annotations in time, (ii) can automatically rank and filter annotations based on latent posteriors or other model statistics, and (iii) by incorporating dynamics, annotation-specific biases, noise estimation, time warping and supervision, outperforms state-of-the-art methods both for the aggregation of multiple, yet imperfect, expert annotations and for the alignment of affective behavior.
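To illustrate the shared/individual decomposition that underlies the PCCA-style fusion described above, the following is a minimal sketch, not the authors' implementation: a static linear-Gaussian toy model in which each annotator's trace is a loading on a shared latent signal plus an annotator-specific bias and noise, and the fused estimate is the posterior mean of the shared latent. All parameter values and variable names are hypothetical and chosen only for demonstration; the actual DPCCA/DPCTW models additionally include latent dynamics and time warping.

```python
# Illustrative sketch (hypothetical parameters): shared-plus-individual
# decomposition of multiple continuous annotations, in the spirit of PCCA.
import numpy as np

rng = np.random.default_rng(0)
T, n_annotators = 200, 3

# Shared latent signal (stand-in for the "true" continuous affect trace).
z_shared = np.sin(np.linspace(0, 4 * np.pi, T))

# Each annotation = loading * shared signal + individual bias + Gaussian noise.
loadings = np.array([1.0, 0.8, 1.2])     # annotator-specific loadings
biases = np.array([0.3, -0.2, 0.1])      # annotator-specific bias components
noise_std = np.array([0.1, 0.3, 0.2])    # annotator-specific noise levels
annotations = np.stack([
    loadings[i] * z_shared + biases[i] + noise_std[i] * rng.standard_normal(T)
    for i in range(n_annotators)
])

# With parameters known and a standard-normal prior on the shared latent,
# its posterior mean at each time step is a precision-weighted combination
# of the bias-corrected annotations.
precisions = (loadings / noise_std) ** 2
posterior_var = 1.0 / (1.0 + precisions.sum())
posterior_mean = posterior_var * np.sum(
    (loadings / noise_std ** 2)[:, None] * (annotations - biases[:, None]),
    axis=0,
)

print("RMSE of best single annotation:",
      min(np.sqrt(np.mean((a - z_shared) ** 2)) for a in annotations))
print("RMSE of fused posterior mean:  ",
      np.sqrt(np.mean((posterior_mean - z_shared) ** 2)))
```

In this toy setting the fused posterior mean recovers the shared signal more accurately than any single annotation, which conveys the intuition behind modeling annotation-specific biases and noise; the paper's models estimate these quantities from data rather than assuming them known.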
| Original language | Undefined |
| --- | --- |
| Pages (from-to) | 1299-1311 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 36 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Jul 2014 |
Keywords
- EWI-25808
- HMI-HF: Human Factors
- affect analysis
- Temporal alignment
- dimensional emotion
- IR-95241
- EC Grant Agreement nr.: FP7/2007-2013
- Component Analysis
- Fusion of continuous annotations
- METIS-309940
- EC Grant Agreement nr.: FP7/288235