Abstract
In information retrieval (IR), research aiming to reduce the cost of retrieval system evaluations has been conducted along two lines: (i) the evaluation of IR systems with reduced amounts of manual relevance assessments, and (ii) the fully automatic evaluation of IR systems, foregoing the need for manual assessments altogether. The proposed methods in both areas are commonly evaluated by comparing their performance estimates for a set of systems to a ground truth (provided, for instance, by evaluating the set of systems according to mean average precision). In contrast, in this poster we compare an automatic system evaluation approach directly to two evaluations based on incomplete manual relevance assessments. For the particular case of TREC's Million Query track, we show that the automatic evaluation leads to results that are highly correlated with those achieved by approaches relying on incomplete manual judgments.
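
The comparison described in the abstract is, in essence, a rank-correlation computation over per-system scores produced by two different evaluations; Kendall's tau is a common choice for this in IR evaluation. The sketch below is a minimal illustration of that idea using SciPy's `kendalltau`; the run names and scores are invented placeholders, not the paper's actual procedure or data.

```python
# Illustrative sketch only (not from the paper): rank the same set of systems by two
# evaluations and measure how similar the resulting orderings are with Kendall's tau.
from scipy.stats import kendalltau

# Hypothetical per-system effectiveness estimates over the same set of runs.
automatic_scores = {"runA": 0.31, "runB": 0.27, "runC": 0.42, "runD": 0.19}
incomplete_judgment_scores = {"runA": 0.29, "runB": 0.25, "runC": 0.40, "runD": 0.22}

systems = sorted(automatic_scores)                    # fix a common system order
x = [automatic_scores[s] for s in systems]            # automatic evaluation
y = [incomplete_judgment_scores[s] for s in systems]  # evaluation from incomplete judgments

tau, p_value = kendalltau(x, y)                       # rank correlation of the two rankings
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
```

A high tau indicates that the two evaluations order the systems in nearly the same way, which is the sense in which the automatic and incomplete-judgment evaluations are compared here.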
Original language | Undefined
---|---
Title of host publication | Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Place of Publication | New York
Publisher | Association for Computing Machinery
Pages | 863-864
Number of pages | 2
ISBN (Print) | 978-1-4503-0153-4
DOIs |
Publication status | Published - Jul 2010
Event | 33rd Annual International ACM/SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010, Geneva, Switzerland; Duration: 19 Jul 2010 → 23 Jul 2010; Conference number: 33
Publication series
Name |
---|---
Publisher | ACM
Conference
Conference | 33rd Annual International ACM/SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010
---|---
Abbreviated title | SIGIR
Country/Territory | Switzerland
City | Geneva
Period | 19/07/10 → 23/07/10
Keywords
- IR-72484
- METIS-270945
- CR-H.3
- Evaluation
- EWI-18226
- Information Retrieval