Abstract
Relevance feedback is a common approach for enriching a query with terms drawn from a set of explicitly or implicitly judged documents in order to improve retrieval performance. Although relevance feedback has been shown to improve overall retrieval performance on average, for some topics employing certain relevant documents can decrease the average precision of the initial run. This is mostly because a feedback document may be only partially relevant and contain off-topic terms; adding these terms to the query as expansion terms degrades retrieval performance. Relevant documents that hurt retrieval performance after feedback are called "poison pills". In this paper, we discuss the effect of poison pills on relevance feedback and present significant words language models as an approach for estimating the feedback model that tackles this problem.
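The sketch below illustrates the intuition behind estimating such a feedback model: prefer expansion terms that are supported by most of the feedback documents, and down-weight terms that are either frequent across the whole collection (too general) or concentrated in a single feedback document (too document-specific, i.e. the off-topic terms of a partially relevant "poison pill"). This is an illustrative approximation under assumed inputs (`feedback_docs`, `collection_tf`, `collection_len` are hypothetical names), not the authors' exact significant words language model estimation.

```python
from collections import Counter
from math import log


def feedback_terms(feedback_docs, collection_tf, collection_len, top_k=10, mu=0.5):
    """Score candidate expansion terms from a set of feedback documents.

    feedback_docs  : list of tokenized documents (lists of terms)
    collection_tf  : dict mapping term -> collection term frequency
    collection_len : total number of term occurrences in the collection
    """
    # Per-document unigram language models (maximum likelihood estimates).
    doc_models = []
    for doc in feedback_docs:
        tf = Counter(doc)
        total = sum(tf.values())
        doc_models.append({t: c / total for t, c in tf.items()})

    vocab = set().union(*doc_models)
    scores = {}
    for t in vocab:
        probs = [m.get(t, 0.0) for m in doc_models]
        avg_p = sum(probs) / len(probs)                   # average weight in feedback docs
        support = sum(p > 0 for p in probs) / len(probs)  # fraction of docs containing t
        p_coll = collection_tf.get(t, 0) / collection_len # collection ("general") model
        # Favor terms shared by many feedback documents; penalize terms that are
        # no more likely in the feedback set than in the collection at large.
        scores[t] = support * avg_p * log((avg_p + 1e-9) / (mu * p_coll + 1e-9))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

The `support` factor is what guards against poison pills in this sketch: a term that occurs heavily in only one partially relevant document receives low support and is unlikely to be selected as an expansion term.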
Original language | Undefined |
---|---|
Number of pages | 1 |
Publication status | Published - Nov 2016 |
Event | 15th Dutch-Belgian Information Retrieval Workshop, DIR 2016 - Delft, Netherlands |
Duration | 25 Nov 2016 → 25 Nov 2016 |
Conference number | 15 |
Workshop
Workshop | 15th Dutch-Belgian Information Retrieval Workshop, DIR 2016 |
---|---|
Abbreviated title | DIR 2016 |
Country/Territory | Netherlands |
City | Delft |
Period | 25/11/16 → 25/11/16 |
Other | November 25, 2016 |
Keywords
- IR-104509
- EWI-27812