Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics

Nicolò Botteghi, Mannes Poel, Beril Sirmacek, Christoph Brune

Research output: Working paper › Preprint › Academic


Abstract

Deep Reinforcement Learning has shown its ability to solve complicated problems directly from high-dimensional observations. However, in end-to-end settings, Reinforcement Learning algorithms are not sample-efficient and require long training times and large quantities of data. In this work, we propose a framework for sample-efficient Reinforcement Learning that takes advantage of state and action representations to transform a high-dimensional problem into a low-dimensional one. Moreover, we seek the optimal policy mapping latent states to latent actions. Because this policy is learned on abstract representations, we use auxiliary loss functions to enforce that it lifts to the original problem domain. Results show that the novel framework can efficiently learn low-dimensional and interpretable state and action representations and the optimal latent policy.
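
The abstract describes the framework only at a high level; the sketch below illustrates one plausible way such a setup could be wired together. It is an assumed illustration, not the authors' implementation: the module names (StateEncoder, ActionEncoder, LatentDynamics), the network sizes, and the squared-error transition and reward losses are all assumptions about how MDP-homomorphism-style auxiliary objectives might be realised in PyTorch.

```python
import torch
import torch.nn as nn


class StateEncoder(nn.Module):
    """Maps a high-dimensional observation to a low-dimensional latent state."""
    def __init__(self, obs_dim, latent_state_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_state_dim),
        )

    def forward(self, obs):
        return self.net(obs)


class ActionEncoder(nn.Module):
    """Maps a (possibly high-dimensional) action to a latent action."""
    def __init__(self, action_dim, latent_action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_action_dim),
        )

    def forward(self, action):
        return self.net(action)


class LatentDynamics(nn.Module):
    """Predicts the next latent state and the reward from (z_s, z_a).

    Penalising its prediction error is one way to express an
    MDP-homomorphism-style constraint: transitions and rewards in the
    latent space must mirror those of the original MDP, so a policy
    learned on the latents lifts back to the original problem.
    """
    def __init__(self, latent_state_dim, latent_action_dim):
        super().__init__()
        in_dim = latent_state_dim + latent_action_dim
        self.next_state = nn.Linear(in_dim, latent_state_dim)
        self.reward = nn.Linear(in_dim, 1)

    def forward(self, z_s, z_a):
        h = torch.cat([z_s, z_a], dim=-1)
        return self.next_state(h), self.reward(h)


def auxiliary_loss(phi, psi, model, obs, action, reward, next_obs):
    """Auxiliary loss tying latent dynamics to the true transitions/rewards."""
    z_s, z_a = phi(obs), psi(action)
    z_next_pred, r_pred = model(z_s, z_a)
    transition_loss = (z_next_pred - phi(next_obs)).pow(2).mean()
    reward_loss = (r_pred.squeeze(-1) - reward).pow(2).mean()
    return transition_loss + reward_loss
```

In this reading, a latent-space policy (e.g. an actor-critic over z_s producing z_a) would be trained jointly with the encoders, with the auxiliary loss added to the policy objective so that the learned abstraction stays faithful to the original dynamics.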
Original language: English
Publisher: ArXiv.org
Number of pages: 19
Publication status: Published - 4 Jul 2021

Keywords

  • cs.LG
  • cs.AI
