Listeners in face-to-face interactions do not merely attend to the communicative signals emitted by the speaker; they also send out signals of their own in the modalities available to them: facial expressions, gestures, head movements and speech. These signals, operating in the so-called back-channel, mostly function as feedback on the speaker's actions: they provide information on how the signals are being received, propel the interaction forward, mark understanding, or offer insight into the attitudes and emotions that the speech gives rise to.
To generate appropriate behaviours for a conversational agent in response to the speech of a human interlocutor, we need a better understanding of the kinds of behaviours displayed, their timing, their determinants, and their effects. A major challenge in generating responsive behaviours, however, is real-time interpretation, since responses in the back-channel are generally very fast. The common solution to this problem has been to rely on surface-level cues. We discuss ongoing work on a sensitive artificial listening agent that tries to accomplish this attentive listening behaviour.
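To illustrate what relying on surface-level cues can mean in practice, the sketch below implements one well-known style of rule: trigger a listener response when a pause follows a region of low pitch in the speaker's voice. The class name, frame size, and thresholds are all hypothetical choices for illustration, not the agent described in the paper; a real system would tune such parameters on corpus data.

```python
class BackchannelDetector:
    """Toy rule-based back-channel trigger driven by surface-level
    prosodic cues: a sufficiently long pause after low-pitched speech.
    Thresholds are illustrative placeholders, not empirically tuned."""

    def __init__(self, low_pitch_hz: float = 110.0,
                 min_pause_ms: float = 200.0) -> None:
        self.low_pitch_hz = low_pitch_hz    # ceiling for "low pitch"
        self.min_pause_ms = min_pause_ms    # pause length that opens a slot
        self.last_voiced_pitch = 0.0        # pitch of most recent voiced frame
        self.silence_ms = 0.0               # running length of current pause

    def update(self, pitch_hz: float, frame_ms: float = 10.0) -> bool:
        """Feed one analysis frame (pitch in Hz, 0.0 = unvoiced).

        Returns True when the cues suggest a response opportunity:
        the speaker has paused for at least min_pause_ms right after
        a low-pitched stretch of speech."""
        if pitch_hz > 0.0:
            # Voiced frame: remember its pitch and reset the pause clock.
            self.last_voiced_pitch = pitch_hz
            self.silence_ms = 0.0
            return False
        # Unvoiced frame: extend the pause and test the rule.
        self.silence_ms += frame_ms
        return (self.silence_ms >= self.min_pause_ms
                and 0.0 < self.last_voiced_pitch <= self.low_pitch_hz)
```

Because each frame is a constant-time update on a few scalars, a rule of this shape can keep up with the very fast timing that back-channel responses demand, which is precisely why surface-level cues are attractive despite their shallowness.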
Name: CTIT Workshop Proceedings
Workshop: Workshop on Multimodal Output Generation, MOG 2007
Period: 25/01/07 → 26/01/07
- HMI-IA: Intelligent Agents
- Listener responses
- Head movements