Abstract
Prediction plays a key role in recent computational models of the brain, and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings, we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train predictive models that capture the spatiotemporal relationship between audio and visual features by learning the audio-to-visual and visual-to-audio feature mappings for each class. Similarly, we train predictive models that capture the temporal evolution of the audio and visual features by learning the past-to-future feature mapping for each class. During classification, each class-specific regression model produces a prediction of the expected audio/visual features, and the prediction errors are combined per class. The set of class-specific regressors that best describes the audiovisual feature relationship, i.e., yields the lowest prediction error, is chosen to label the input frame. We perform cross-database experiments on the AMI, SAL and MAHNOB databases to classify laughter and speech, and subject-independent experiments on the AVIC database to classify laughter, hesitation and consent. In virtually all cases, prediction-based audiovisual fusion outperforms the two most commonly used fusion approaches, decision-level and feature-level fusion.
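As a rough illustration of the classification rule described above, the sketch below implements only the cross-modal part, under assumed choices: ridge regression stands in for the class-specific regressors, and the two prediction errors are combined by a plain sum of per-frame L2 errors. The class name `PredictionBasedFusion` and these choices are illustrative assumptions; the paper's actual regressors, error normalisation and the past-to-future temporal models are not reproduced here.

```python
# Minimal sketch of prediction-based audiovisual fusion (cross-modal part only).
# Assumptions: ridge regression as the per-class regressor, summed L2 prediction
# errors as the combination rule; not the authors' exact method.
import numpy as np
from sklearn.linear_model import Ridge

class PredictionBasedFusion:
    def __init__(self, classes):
        self.classes = list(classes)
        self.a2v = {}  # per-class audio-to-visual regressor
        self.v2a = {}  # per-class visual-to-audio regressor

    def fit(self, audio_feats, visual_feats, labels):
        """audio_feats, visual_feats: (n_frames, dim) arrays; labels: (n_frames,) array."""
        for c in self.classes:
            idx = labels == c
            self.a2v[c] = Ridge(alpha=1.0).fit(audio_feats[idx], visual_feats[idx])
            self.v2a[c] = Ridge(alpha=1.0).fit(visual_feats[idx], audio_feats[idx])

    def predict(self, audio_feats, visual_feats):
        """Label each frame with the class whose regressors give the lowest combined error."""
        errors = []
        for c in self.classes:
            e_v = np.linalg.norm(self.a2v[c].predict(audio_feats) - visual_feats, axis=1)
            e_a = np.linalg.norm(self.v2a[c].predict(visual_feats) - audio_feats, axis=1)
            errors.append(e_v + e_a)  # combine the two cross-modal prediction errors
        best = np.argmin(np.stack(errors, axis=1), axis=1)
        return np.array(self.classes)[best]
```

Each frame is thus assigned to the class whose regressors best explain the observed audio-visual relationship, mirroring the lowest-prediction-error rule stated in the abstract.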
Original language | English |
---|---|
Pages (from-to) | 45-58 |
Number of pages | 14 |
Journal | IEEE Transactions on Affective Computing |
Volume | 7 |
Issue number | 1 |
DOIs | |
Publication status | Published - Feb 2016 |
Keywords
- HMI-HF: Human Factors
- EC Grant Agreement nr.: FP7/611153
- Prediction-based Fusion
- Audio-visual Fusion
- Nonlinguistic Vocalisation Classification