Emotion Recognition based on Multimodal Information

Zhihong Zeng, Maja Pantic, Thomas S. Huang

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

    21 Citations (Scopus)


    Here is a conversation between an interviewer and a subject occurring in an Adult Attachment Interview (Roisman, Tsai, & Chiang, 2004). AUs are facial action units defined in Ekman, Friesen, and Hager (2002). The interviewer asked: “Now, let you choose five adjective words to describe your childhood relationship with your mother when you were about five years old, or as far back as you remember.” The subject kept smiling (lip corner raiser, AU12) while listening. After the interviewer finished the question, the subject looked around and lowered her head (AU54) and eyes (AU64). Then she lowered and drew together her eyebrows (AU4), so that deep vertical wrinkles and skin bunching appeared between the eyebrows. Then her left lip raised (left AU10), and she scratched her chin with a finger. After about 50 seconds of silence, the subject raised her head (AU53) and brows (AU1+AU2), and asked with a smile (AU12): “Should I . . . give what I have now?” The interviewer responded with a smile (AU12): “I guess, those will be when you were five years old. Can you remember?” The subject answered, touching her chin with a finger: “Yeap. Ok. Happy (smile, AU6+AU12), content, dependent, (silence, then lowering her voice) what is next (silence, AU4 + left AU10), honest, (silence, AU4), innocent.”
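    The annotated exchange above can be viewed as a timeline of AU events. The sketch below is a hypothetical encoding of that sequence; the AU numbers come from the excerpt, but the event/field layout (`actor`, `aus`, `note`) is an illustrative assumption, not a FACS standard or anything defined in the chapter:

    ```python
    # Hypothetical encoding of the AU events described in the excerpt.
    # AU codes follow Ekman, Friesen, & Hager (2002); the field layout
    # here is an illustrative assumption, not a standard annotation format.
    events = [
        {"actor": "subject", "aus": [12], "note": "smiles while listening"},
        {"actor": "subject", "aus": [54, 64], "note": "lowers head and eyes"},
        {"actor": "subject", "aus": [4], "note": "brows lowered and drawn together"},
        {"actor": "subject", "aus": [10], "side": "left", "note": "left lip raise; scratches chin"},
        {"actor": "subject", "aus": [53, 1, 2, 12], "note": "raises head and brows, asks with a smile"},
        {"actor": "interviewer", "aus": [12], "note": "responds with a smile"},
        {"actor": "subject", "aus": [6, 12], "note": "smiles while saying 'happy'"},
    ]

    # Count how often the smile-related AU12 occurs across the exchange.
    au12_count = sum(1 for e in events if 12 in e["aus"])
    print(au12_count)  # 4
    ```

    Such an event list is one simple way to line up facial cues with the spoken transcript when fusing modalities.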
    Original language: Undefined
    Title of host publication: Affective Information Processing
    Editors: T. Tan, J. Tao
    Place of publication: London
    Number of pages: 26
    ISBN (Print): 978-1-84800-305-7
    Publication status: Published - 2009

    Publication series

    Publisher: Springer Verlag


    • IR-69473
    • METIS-264299
    • EWI-17120
    • HMI-HF: Human Factors
    • EC Grant Agreement nr.: FP6/0027787
    • EC Grant Agreement nr.: FP7/211486
