On the Evaluation of Snippet Selection for WebCLEF

A. Overwijk, D. Nguyen, Dong-Phuong Nguyen, C. Hauff, Rudolf Berend Trieschnigg, Djoerd Hiemstra, Franciska M.G. de Jong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

WebCLEF aims to support an expert user who is writing a survey article on a specific topic, with a clear goal and audience, by generating a ranked list of relevant snippets. This paper focuses on the evaluation methodology of WebCLEF. We show that the evaluation method and test set used for WebCLEF 2007 cannot be used to evaluate new systems, and we give recommendations on how to improve the evaluation.
Original language: English
Title of host publication: Evaluating Systems for Multilingual and Multimodal Information Access
Subtitle of host publication: 9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008, Aarhus, Denmark, September 17-19, 2008, Revised Selected Papers
Editors: Carol Peters, Thomas Deselaers, Nicola Ferro, Julio Gonzalo
Place of publication: Berlin, Heidelberg
Publisher: Springer
Pages: 794-797
Number of pages: 4
ISBN (Electronic): 978-3-642-04447-2
ISBN (Print): 978-3-642-04446-5
DOIs
Publication status: Published - 2009
Event: 9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008, Aarhus, Denmark
Duration: 17 Sep 2008 - 19 Sep 2008
Conference number: 9

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 5706
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Workshop

Workshop: 9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008
Abbreviated title: CLEF
Country/Territory: Denmark
City: Aarhus
Period: 17/09/08 - 19/09/08

Keywords

  • Measurement
  • Performance
  • Experimentation
