Communicative Signals and Social Contextual Factors in Multimodal Affect Recognition

  • Michel-Pierre Jansen (Speaker)

Activity: Talk or presentation (Oral presentation)

Description

One research branch in Affective Computing focuses on using multimodal ‘emotional’ expressions (e.g. facial expressions or non-verbal vocalisations) to automatically detect the emotions and affect experienced by a person. The field is increasingly interested in using contextual factors to better interpret emotional expressions, rather than relying solely on the expressions themselves. We are interested in expressions that occur in a social context. In our research we plan to investigate how we can a) utilise communicative signals displayed during interactions to recognise the social contextual factors that influence emotion expression, and in turn b) predict what these emotion expressions are most likely communicating given that context. To this end, we formulate three main research questions: I) How do communicative signals such as emotion expressions co-ordinate behaviours and knowledge between interlocutors in interactive settings? II) Can we use behavioural cues during interactions to detect social contextual factors relevant for interpreting affect? III) Can we use social contextual factors and communicative signals to predict which emotion experience is linked to an emotion expression?
Period: 1 Oct 2019 – 30 Oct 2019
Event title: 21st ACM International Conference on Multimodal Interaction, ICMI 2019
Event type: Conference
Conference number: 21
Location: Suzhou, China
Degree of Recognition: International