Relying on topic subsets for system ranking estimation

C. Hauff, Djoerd Hiemstra, Franciska M.G. de Jong, Leif Azzopardi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

10 Citations (Scopus)

Abstract

Ranking a number of retrieval systems according to their retrieval effectiveness without relying on costly relevance judgments was first explored by Soboroff et al. [6]. Over the years, a number of alternative approaches have been proposed. We perform a comprehensive analysis of system ranking estimation approaches on a wide variety of TREC test collections and topic sets. Our analysis reveals that the performance of such approaches is highly dependent upon the topic or topic subset used for estimation. We hypothesize that the performance of system ranking estimation approaches can be improved by selecting the "right" subset of topics, and we show that using topic subsets improves performance by 32% on average, with a maximum improvement of up to 70% in some cases.
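The underlying idea can be illustrated with a minimal sketch (not the authors' code): given hypothetical per-topic effectiveness scores for a set of systems, rank the systems using only a subset of topics and measure how closely that ranking agrees with the ranking obtained over all topics, using Kendall's tau as the agreement measure. All data, sizes, and the subset-search strategy below are illustrative assumptions and do not implement the estimation methods evaluated in the paper.

```python
# Minimal sketch, assuming hypothetical per-topic scores (e.g. average
# precision) for each system; illustrates how a topic subset can induce a
# system ranking that is compared to the full-topic ranking via Kendall's tau.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_systems, n_topics = 20, 50
# Hypothetical scores: one row per system, one column per topic.
scores = rng.random((n_systems, n_topics))

def system_ranking(scores, topic_ids):
    """Mean score per system over the given topics (higher = better)."""
    return scores[:, topic_ids].mean(axis=1)

full_ranking = system_ranking(scores, np.arange(n_topics))

# Sample random topic subsets and keep the one whose induced ranking agrees
# best with the full-topic ranking (agreement measured by Kendall's tau).
best_tau, best_subset = -1.0, None
for _ in range(200):
    subset = rng.choice(n_topics, size=10, replace=False)
    tau, _ = kendalltau(full_ranking, system_ranking(scores, subset))
    if tau > best_tau:
        best_tau, best_subset = tau, subset

print(f"Best Kendall's tau over sampled subsets: {best_tau:.3f}")
```

In the paper's setting the reference ranking comes from relevance judgments and the estimated ranking from judgment-free methods; this sketch only shows the subset-versus-full-set comparison mechanics.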
Original language: Undefined
Title of host publication: Proceeding of the 18th ACM conference on Information and knowledge management
Place of Publication: New York
Publisher: Association for Computing Machinery (ACM)
Pages: 1859-1862
Number of pages: 4
ISBN (Print): 978-1-60558-512-3
DOIs
Publication status: Published - 2009
Event: Proceeding of the 18th ACM conference on Information and knowledge management, Hong Kong, China
Duration: 1 Jan 2009 → …

Publication series

Name
Publisher: ACM
Conference

Conference: Proceeding of the 18th ACM conference on Information and knowledge management, Hong Kong, China
City: New York
Period: 1/01/09 → …

Keywords

  • METIS-265255
  • IR-69482
  • EWI-17152
  • Information Retrieval
  • Query performance prediction
