This paper describes our experiments for the TREC 2008 Relevance Feedback track. We experiment with varying amounts of feedback, including negative relevance feedback, implemented as massive weighted query expansion. Parsimonious query expansion using only relevant documents, combined with Jelinek-Mercer smoothing, performs best on this relevance feedback track dataset. Adding blind feedback improves results further, except when the blind feedback set is the same size as the explicit feedback set. For a small number of topics we also apply topical feedback, which turns out to benefit mainly early precision.
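Jelinek-Mercer smoothing, mentioned above, linearly interpolates a document's maximum-likelihood term probability with the collection (background) probability. The sketch below is illustrative only; the function name `jelinek_mercer` and the interpolation weight `lam` are assumptions, not the paper's actual implementation or parameter setting.

```python
def jelinek_mercer(term, doc_tokens, collection_tokens, lam=0.5):
    """Jelinek-Mercer smoothed term probability:
    P(w|d) = (1 - lam) * P_ml(w|d) + lam * P(w|C).

    `lam` is a hypothetical interpolation weight; the paper does not
    specify its value here.
    """
    # Maximum-likelihood estimate from the document.
    p_doc = doc_tokens.count(term) / len(doc_tokens)
    # Background estimate from the whole collection.
    p_coll = collection_tokens.count(term) / len(collection_tokens)
    return (1 - lam) * p_doc + lam * p_coll
```

With `lam = 0.5`, a term occurring in 2 of 3 document tokens and 1 of 4 collection tokens gets probability 0.5 * 2/3 + 0.5 * 1/4 ≈ 0.458.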
Number of pages: 7
Publication status: Published - Nov 2008
Event: Seventeenth Text REtrieval Conference, TREC-17 2008, Gaithersburg, United States
Duration: 18 Nov 2008 - 21 Nov 2008
Conference number: 17