Embed First, Then Predict

Shenghui Wang, Rob Koopman

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Automatic subject prediction is a desirable feature for modern digital library systems, as manual indexing can no longer cope with the rapid growth of digital collections. It is also desirable to be able to identify a small set of entities (e.g., authors, citations, bibliographic records) which are most relevant to a query. This becomes more difficult as the amount of data increases dramatically. Data sparsity and model scalability are the major challenges in solving this type of extreme multi-label classification problem automatically. In this paper, we propose to address this problem in two steps: first, we embed different types of entities into the same semantic space, where similarity can be computed easily; second, we propose a novel non-parametric method to identify the most relevant entities beyond direct semantic similarities. We show how effectively this approach predicts even very specialised subjects, which are associated with few documents in the training set and are therefore more problematic for a classifier.
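The two-step idea in the abstract — embed documents and subjects into one semantic space, then predict by proximity rather than by a trained per-label classifier — can be illustrated with a minimal sketch. This is not the authors' implementation; the vectors, labels, and the simple cosine-similarity ranking below are illustrative assumptions standing in for the paper's embedding and non-parametric ranking methods.

```python
import numpy as np

def cosine_sims(query_vec, matrix):
    # Cosine similarity between one query vector and each row of a matrix.
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

def predict_subjects(doc_vec, subject_vecs, subject_labels, k=3):
    """Rank subject embeddings by proximity to a document embedding
    and return the top-k (label, similarity) pairs."""
    sims = cosine_sims(doc_vec, subject_vecs)
    top = np.argsort(-sims)[:k]
    return [(subject_labels[i], float(sims[i])) for i in top]

# Toy shared space: four subject vectors and one document vector
# (hypothetical 2-d embeddings, chosen only for illustration).
subjects = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
labels = ["physics", "biology", "biophysics", "history"]
doc = np.array([0.6, 0.8])

print(predict_subjects(doc, subjects, labels, k=2))
```

Because prediction is just a nearest-neighbour lookup in the shared space, a rarely used subject needs only its own embedding to be predictable, which is why such an approach can handle specialised subjects with few training documents.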

Original language: English
Pages (from-to): 364-370
Number of pages: 7
Journal: Knowledge Organization
Volume: 46
Issue number: 5
DOIs
Publication status: Published - 2019
Externally published: Yes

Keywords

  • Documents
  • Embedding
  • Entities
  • Subjects
