Learning preferences for Referring Expression Generation: Effects of domain, language and algorithm

Ruud Koolen, Emiel Krahmer, Mariët Theune

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    11 Citations (Scopus)
    26 Downloads (Pure)


    One important subtask of Referring Expression Generation (REG) algorithms is to select the attributes in a definite description for a given object. In this paper, we study how much training data is required for algorithms to do this properly. We compare two REG algorithms in terms of their performance: the classic Incremental Algorithm and the more recent Graph algorithm. Both rely on a notion of preferred attributes that can be learned from human descriptions. In our experiments, preferences are learned from training sets that vary in size, in two domains and two languages. The results show that depending on the algorithm and the complexity of the domain, training on a handful of descriptions can already lead to a performance that is not significantly different from training on a much larger data set.
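    To illustrate the kind of attribute selection the abstract describes, here is a minimal sketch of Incremental Algorithm-style selection (Dale & Reiter), where a learned preference order over attributes stands in for the "preferred attributes" the paper estimates from human descriptions. The domain objects and the preference order below are illustrative assumptions, not data from the paper.

    ```python
    def incremental_select(target, distractors, preference_order):
        """Select attributes for `target` that rule out all `distractors`,
        trying attributes in the (learned) preference order."""
        description = {}
        remaining = list(distractors)
        for attr in preference_order:
            value = target.get(attr)
            if value is None:
                continue
            # An attribute is included only if it rules out at least
            # one of the remaining distractors.
            if any(d.get(attr) != value for d in remaining):
                description[attr] = value
                remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:
                break
        return description

    # Hypothetical furniture-style domain, loosely modelled on REG corpora:
    target = {"type": "chair", "colour": "red", "size": "large"}
    distractors = [
        {"type": "chair", "colour": "blue", "size": "large"},
        {"type": "desk", "colour": "red", "size": "small"},
    ]
    # A preference order as it might be learned from training descriptions:
    print(incremental_select(target, distractors, ["colour", "type", "size"]))
    # → {'colour': 'red', 'type': 'chair'}
    ```

    With more (or less) training data, the learned preference order changes, which is exactly the effect the paper measures across domains, languages and algorithms.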
    Original language: Undefined
    Title of host publication: Proceedings of the Seventh International Natural Language Generation Conference (INLG 2012)
    Place of publication: Stroudsburg, PA, USA
    Publisher: Association for Computational Linguistics (ACL)
    Number of pages: 9
    ISBN (Print): 978-1-937284-23-7
    Publication status: Published - 30 May 2012
    Event: Seventh International Natural Language Generation Conference, INLG 2012 - Starved Rock, IL, USA
    Duration: 30 May 2012 - 1 Jun 2012

    Publication series

    Publisher: The Association for Computational Linguistics


    Conference: Seventh International Natural Language Generation Conference, INLG 2012
    Other: 30 May - 1 June 2012


    • EWI-22510
    • METIS-293188
    • IR-83408
