Implicit Human-Centered Tagging

A. Vinciarelli, N. Suditu, Maja Pantic

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    13 Citations (Scopus)
    130 Downloads (Pure)

    Abstract

    This paper provides a general introduction to the concept of Implicit Human-Centered Tagging (IHCT): the automatic extraction of tags from the nonverbal behavioral feedback of media users. The main idea behind IHCT is that the nonverbal behaviors displayed while interacting with multimedia data (e.g., facial expressions and head nods) provide information useful for improving the tag sets associated with the data. As such behaviors are displayed naturally and spontaneously, no effort is required from the users; this is why the resulting tagging process is said to be “implicit”. Tags obtained through IHCT are expected to be more robust than tags associated with the data explicitly, at least in terms of generality (they make sense to everybody) and statistical reliability (all tags will be sufficiently represented). The paper discusses these issues in detail and provides an overview of pioneering efforts in the field.
    Original language: Undefined
    Title of host publication: IEEE International Conference on Multimedia and Expo (ICME'09)
    Place of publication: Los Alamitos
    Publisher: IEEE
    Pages: 1428-1431
    Number of pages: 4
    ISBN (Print): 978-1-4244-4291-1
    Publication status: Published - 2009
    Event: IEEE International Conference on Multimedia and Expo, ICME 2009 - New York, NY, USA
    Duration: 28 Jun 2009 - 3 Jul 2009

    Publication series

    Publisher: IEEE Computer Society Press
    ISSN (Print): 1945-788X

    Conference

    Conference: IEEE International Conference on Multimedia and Expo, ICME 2009
    Period: 28/06/09 - 03/07/09
    Other: 28 June - 3 July 2009

    Keywords

    • METIS-264322
    • IR-69559
    • Nonverbal Behavior Analysis
    • Implicit Tagging
    • HMI-HF: Human Factors
    • EC Grant Agreement nr.: FP7/231287
    • EC Grant Agreement nr.: FP7/203143
    • HMI-MI: MULTIMODAL INTERACTIONS
    • EWI-17193
