Abstract
Interview data is multimodal: it consists of speech sounds, facial expressions, and gestures, captured in a particular situation, and it carries both textual information and emotion. This workshop shows how a multidisciplinary approach can exploit the full potential of interview data. The workshop first gives a systematic overview of the research fields working with interview data. It then presents the speech technology currently available for transcribing and annotating such data, including automatic speech recognition, speaker diarization, and emotion detection. Finally, scholars who work with interview data and tools can present their work and discover how to make use of existing technology.
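To make the technologies named in the abstract concrete, the sketch below shows how an interview recording might be transcribed and segmented by speaker with off-the-shelf tools. This is an illustrative example only: the workshop does not prescribe specific software, and the choice of openai-whisper and pyannote.audio, as well as the file name `interview.wav` and the token placeholder, are assumptions.

```python
# Illustrative sketch only: the workshop does not prescribe these tools.
# Assumes openai-whisper and pyannote.audio are installed and that
# "interview.wav" is a local interview recording (hypothetical file).
import whisper
from pyannote.audio import Pipeline

# Automatic speech recognition: transcribe the interview audio.
asr = whisper.load_model("base")
result = asr.transcribe("interview.wav")
print(result["text"])

# Speaker diarization: determine who spoke when in the same recording.
# Pretrained pyannote pipelines are gated and need a Hugging Face token.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="YOUR_HF_TOKEN",  # hypothetical placeholder
)
diarization = pipeline("interview.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")
```

Aligning the diarization turns with the ASR output then yields a speaker-attributed transcript, which is the usual starting point for the annotation tasks the workshop covers.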
Original language | English |
---|---|
Title of host publication | ICMI '20 |
Subtitle of host publication | Proceedings of the 2020 International Conference on Multimodal Interaction |
Place of Publication | New York, NY |
Publisher | Association for Computing Machinery |
Pages | 886-887 |
Number of pages | 2 |
ISBN (Electronic) | 978-1-4503-7581-8 |
DOIs | |
Publication status | Published - 22 Oct 2020 |
Event | 22nd ACM International Conference on Multimodal Interaction, ICMI 2020 - Virtual, Online, Netherlands. Duration: 25 Oct 2020 → 29 Oct 2020. Conference number: 22. http://icmi.acm.org/2020/ |
Conference
Conference | 22nd ACM International Conference on Multimodal Interaction, ICMI 2020 |
---|---|
Abbreviated title | ICMI |
Country/Territory | Netherlands |
City | Virtual, Online |
Period | 25/10/20 → 29/10/20 |
Internet address | http://icmi.acm.org/2020/ |
Keywords
- Annotation
- Emotion detection
- Interview data
- NLP
- Speech processing
- Transcription