Unsupervised Representation Learning in Deep Reinforcement Learning: A Review

  • Nicolò Botteghi*
  • Mannes Poel
  • Christoph Brune

*Corresponding author for this work

Research output: Contribution to journal › Review article › Academic › peer-review

7 Citations (Scopus)
27 Downloads (Pure)

Abstract

This review article addresses the problem of learning abstract representations of measurement data in the context of deep reinforcement learning (DRL). While the data are often ambiguous, high-dimensional, and difficult to interpret, many dynamical systems can be effectively described by a low-dimensional set of state variables. Discovering these state variables from the data is crucial for 1) improving the data efficiency, robustness, and generalization of DRL methods; 2) tackling the curse of dimensionality; and 3) bringing interpretability and insight into black-box DRL. This review provides a comprehensive overview of unsupervised representation learning in DRL: it describes the main deep learning (DL) tools used for learning representations of the world, provides a systematic view of the methods and principles, summarizes applications, benchmarks, and evaluation strategies, and discusses open challenges and future directions.
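The core idea the abstract describes — high-dimensional observations generated by a low-dimensional set of state variables, recoverable without labels — can be illustrated with a minimal sketch. The example below is a hypothetical toy (not taken from the article): synthetic observations are produced from a hidden 2-dimensional state, and a linear autoencoder trained by gradient descent learns a compact latent code whose reconstructions improve over training.

```python
import numpy as np

# Toy illustration (assumption, not the article's method): observations X are
# high-dimensional "renderings" of a low-dimensional hidden state S.
rng = np.random.default_rng(0)
n, state_dim, obs_dim, latent_dim = 512, 2, 32, 2

S = rng.normal(size=(n, state_dim))                 # unobserved state variables
M = rng.normal(size=(state_dim, obs_dim))           # fixed observation map
X = S @ M + 0.01 * rng.normal(size=(n, obs_dim))    # noisy high-dim observations

# Linear autoencoder: encoder W_e compresses, decoder W_d reconstructs.
W_e = 0.1 * rng.normal(size=(obs_dim, latent_dim))
W_d = 0.1 * rng.normal(size=(latent_dim, obs_dim))
lr = 1e-3

def recon_error(X, W_e, W_d):
    """Mean squared reconstruction error of the autoencoder."""
    return float(np.mean((X @ W_e @ W_d - X) ** 2))

err_before = recon_error(X, W_e, W_d)
for _ in range(2000):
    Z = X @ W_e                  # latent codes: the learned representation
    R = Z @ W_d                  # reconstructions
    G = 2.0 / n * (R - X)        # gradient of the squared error (up to a constant)
    W_d -= lr * Z.T @ G          # decoder update
    W_e -= lr * X.T @ (G @ W_d.T)  # encoder update (chain rule through W_d)
err_after = recon_error(X, W_e, W_d)

print(err_before, err_after)     # error drops as the latent code captures the state
```

The unsupervised signal here is reconstruction alone; the DRL methods surveyed in the review add further structure (dynamics, rewards, contrastive objectives) on top of such latent codes.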

Original language: English
Pages (from-to): 26-68
Number of pages: 43
Journal: IEEE Control Systems
Volume: 45
Issue number: 2
DOIs
Publication status: Published - 2025

Keywords

  • 2025 OA procedure

