Abstract
In real-world search settings, learning-to-rank (LtR) models are trained and tuned repeatedly on large amounts of data, consuming significant time and computing resources and raising efficiency and sustainability concerns. One way to address these concerns is to reduce the size of training datasets. Dataset sampling and dataset distillation are two classes of methods introduced to enable a significant reduction in dataset size while achieving performance comparable to training on the complete data. In this work, we perform a comparative analysis of dataset distillation and sampling methods in the context of LtR. We evaluate gradient matching and distribution matching dataset distillation approaches, which have been shown to be effective in computer vision, and show how these algorithms can be adjusted for the LtR task. Our empirical analysis, using three LtR datasets, indicates that, in contrast to previous findings in computer vision, the selected distillation methods do not outperform random sampling. Our code and experimental settings are released alongside the paper.
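To illustrate the kind of adaptation the abstract refers to, below is a minimal, hypothetical sketch of gradient matching dataset distillation applied to a pointwise LtR setup in PyTorch. The feature dimensionality, the small feed-forward scorer, the pointwise MSE surrogate, and all hyperparameters (`NUM_FEATURES`, `SYNTH_SIZE`, `OUTER_STEPS`, etc.) are illustrative assumptions, not the authors' implementation or released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative (hypothetical) settings; not taken from the paper.
NUM_FEATURES = 136   # per-document feature count, e.g. MSLR-WEB style (assumption)
SYNTH_SIZE = 64      # number of synthetic query-document pairs to learn
REAL_BATCH = 256     # real examples sampled per outer step
OUTER_STEPS = 200    # outer optimisation steps over the synthetic set
LR_SYNTH = 0.1       # learning rate for the synthetic data


class Scorer(nn.Module):
    """Small feed-forward ranker producing one relevance score per document."""
    def __init__(self, num_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def pointwise_loss(scores, labels):
    # Pointwise MSE on graded relevance labels: one simple LtR surrogate.
    return F.mse_loss(scores, labels)


def gradient_match_loss(grads_real, grads_synth):
    # Layer-wise cosine distance between real-data and synthetic-data gradients.
    loss = 0.0
    for g_r, g_s in zip(grads_real, grads_synth):
        loss = loss + (1.0 - F.cosine_similarity(g_r.flatten(), g_s.flatten(), dim=0))
    return loss


def distill(real_x, real_y):
    """Learn a small synthetic feature/label set whose gradients mimic the real data's."""
    synth_x = torch.randn(SYNTH_SIZE, NUM_FEATURES, requires_grad=True)
    synth_y = torch.rand(SYNTH_SIZE, requires_grad=True)   # soft relevance labels
    opt = torch.optim.SGD([synth_x, synth_y], lr=LR_SYNTH)

    for _ in range(OUTER_STEPS):
        model = Scorer(NUM_FEATURES)                 # fresh randomly initialised ranker
        params = [p for p in model.parameters() if p.requires_grad]

        # Gradients of the ranking loss on a real mini-batch (treated as targets).
        idx = torch.randint(0, real_x.size(0), (REAL_BATCH,))
        grads_real = torch.autograd.grad(
            pointwise_loss(model(real_x[idx]), real_y[idx]), params)
        grads_real = [g.detach() for g in grads_real]

        # Gradients of the same loss on the synthetic data (kept differentiable).
        grads_synth = torch.autograd.grad(
            pointwise_loss(model(synth_x), synth_y), params, create_graph=True)

        opt.zero_grad()
        gradient_match_loss(grads_real, grads_synth).backward()
        opt.step()

    return synth_x.detach(), synth_y.detach()
```

Distribution matching, the other family evaluated in the paper, would replace the gradient comparison with a distance between statistics (e.g. mean representations) of real and synthetic documents; a listwise adaptation would additionally group synthetic items by query and use a listwise surrogate over per-query scores. Both choices here are sketched assumptions rather than the paper's method.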
Original language | English |
---|---|
Title of host publication | ICTIR 2024 - Proceedings of the 2024 ACM SIGIR International Conference on the Theory of Information Retrieval |
Publisher | Association for Computing Machinery |
Pages | 51-60 |
Number of pages | 10 |
ISBN (Electronic) | 9798400706813 |
DOIs | |
Publication status | Published - 5 Aug 2024 |
Event | 10th ACM SIGIR International Conference on the Theory of Information Retrieval, ICTIR 2024 - Washington, United States. Duration: 13 Jul 2024 → 13 Jul 2024. Conference number: 10 |
Conference
Conference | 10th ACM SIGIR International Conference on the Theory of Information Retrieval, ICTIR 2024 |
---|---|
Abbreviated title | ICTIR 2024 |
Country/Territory | United States |
City | Washington |
Period | 13/07/24 → 13/07/24 |
Other | Co-located with ACM SIGIR 2024 |
Keywords
- dataset distillation
- learning-to-rank
- sampling