TY - CPAPER
T1 - A Case for Automatic System Evaluation
AU - Hauff, Claudia
AU - Hiemstra, Djoerd
AU - Azzopardi, Leif
AU - de Jong, Franciska M.G.
PY - 2010/04
AB - Ranking a set of retrieval systems according to their retrieval effectiveness without relying on relevance judgments was first explored by Soboroff et al. [13]. Over the years, a number of alternative approaches have been proposed, all of which have been evaluated on early TREC test collections. In this work, we perform a wider analysis of system ranking estimation methods on sixteen TREC data sets, which cover more tasks and corpora than previously considered. Our analysis reveals that the performance of system ranking estimation approaches varies across topics. This observation motivates the hypothesis that the performance of such methods can be improved by selecting the “right” subset of topics from a topic set. We show that using topic subsets improves the performance of automatic system ranking methods by 26% on average, with a maximum of 60%. We also observe that the commonly experienced problem of underestimating the performance of the best systems is data set dependent and not inherent to system ranking estimation. These findings support the case for automatic system evaluation and motivate further research.
KW - automatic system evaluation
KW - Information Retrieval
KW - Query performance prediction
DO - 10.1007/978-3-642-12275-0_16
M3 - Conference contribution
SN - 978-3-642-12274-3
T3 - Lecture Notes in Computer Science
SP - 153
EP - 165
BT - Advances in Information Retrieval: Proceedings of the 32nd European Conference on IR Research
PB - Springer
CY - London
T2 - 32nd European Conference on IR Research
Y2 - 28 March 2010 through 31 March 2010
ER -