Searching for Prototypical Facial Feedback Signals

Dirk K.J. Heylen, E. Bevacqua, M. Tellier, C. Pelachaud

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

    16 Citations (Scopus)
    22 Downloads (Pure)


    Embodied conversational agents should be able to provide feedback on what a human interlocutor is saying. We are compiling a list of facial feedback expressions that signal attention and interest, grounding, and attitude. As expressions need to serve many functions at the same time and most of the component signals are ambiguous, it is important to get a better idea of the many-to-many mappings between displays and functions. We asked people to label several dynamic expressions as a probe into this semantic space. We compare simple signals and combined signals in order to find out whether a combination of signals can have a meaning of its own, i.e. whether the meaning attached to the combination differs from the meanings of the single signals. Results show that in some cases a combination of signals alters the perceived meaning of the backchannel.
    Original language: Undefined
    Title of host publication: Intelligent Virtual Agents
    Editors: C. Pelachaud, J-C. Martin, E. André, G. Chollet, D. Pelé
    Place of publication: Berlin
    Number of pages: 7
    ISBN (Print): 978-3-540-74996-7
    Publication status: Published - 2007

    Publication series

    Name: Lecture Notes in Computer Science
    Publisher: Springer Verlag


    • EWI-11839
    • METIS-245987
    • IR-62149
    • HMI-IA: Intelligent Agents
