Iterative perceptual learning for social behavior synthesis

I.A. de Kok, Ronald Walter Poppe, Dirk K.J. Heylen

    Research output: Contribution to journal › Article › Academic › peer-review

    1 Citation (Scopus)
    64 Downloads (Pure)


    We introduce Iterative Perceptual Learning (IPL), a novel approach to learning computational models for social behavior synthesis from corpora of human–human interactions. IPL combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized behaviors in the context of a conversation. These ratings are used to refine the machine learning models that predict the social signal timings. As the ratings identify those moments in the conversation where the production of a specific behavior is inappropriate, we regard features extracted at these moments as negative samples for the training of a classifier. This is an advantage over the traditional corpus-based approach, which extracts negative samples at random non-positive moments. We compare IPL with the traditional corpus-based approach on the timing of backchannels for a listener in speaker–listener dialogs. While both models perform similarly in terms of precision and recall scores, the backchannels generated with IPL tend to be rated as more appropriate. We additionally investigate the effect of the amount and variation of available training data on the outcome of the models.
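    The core of the IPL loop described above can be sketched in a few lines: the current model proposes behavior moments, moments rated inappropriate by observers become negative samples, and the model is retrained. The sketch below is a deliberately simplified illustration under our own assumptions — one-dimensional "features", a toy threshold classifier, and a rating oracle standing in for human observers — not the paper's actual models or features.

    ```python
    # Hypothetical sketch of the IPL loop. Features are single numbers, the
    # classifier is a midpoint threshold, and `rate` stands in for human
    # observers judging the appropriateness of a synthesized backchannel.

    def train(positives, negatives):
        # Toy classifier: threshold halfway between the class means.
        mean_pos = sum(positives) / len(positives)
        mean_neg = sum(negatives) / len(negatives)
        threshold = (mean_pos + mean_neg) / 2
        return lambda x: x > threshold

    def iterative_perceptual_learning(positives, candidates, rate, rounds=5):
        """Each round: synthesize behaviors at moments the current model
        predicts, collect those rated inappropriate as new negative
        samples, and retrain. Stops when no prediction is rated
        inappropriate (or after `rounds` iterations)."""
        negatives = [min(candidates)]  # seed with one assumed-negative moment
        model = train(positives, negatives)
        for _ in range(rounds):
            predicted = [m for m in candidates if model(m)]
            # Moments rated inappropriate become negative training samples,
            # instead of sampling negatives at random non-positive moments.
            inappropriate = [m for m in predicted if not rate(m)]
            if not inappropriate:
                break
            negatives.extend(inappropriate)
            model = train(positives, negatives)
        return model
    ```

    In this toy setting the refinement is visible directly: a candidate moment that the initial model predicts but observers reject is folded into the negative set, pushing the decision threshold past it on the next round.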
    Original language: Undefined
    Pages (from-to): 231-241
    Number of pages: 11
    Journal: Journal on multimodal user interfaces
    Issue number: 3
    Publication status: Published - Sept 2014


    • EWI-25062
    • HMI-IA: Intelligent Agents
    • IPL
    • Active learning
    • IR-92421
    • Backchannels
    • METIS-309581
