Reinforcement learning helps SLAM: Learning to build maps

N. Botteghi*, B. Sirmacek, M. Poel, C. Brune

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Academic › Peer-reviewed



In this research, we investigate the use of Reinforcement Learning (RL) as an effective and robust solution for exploring unknown indoor environments and reconstructing their maps. We rely on a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three RL-based path planners is assessed not only in the training environments, but also in an a priori unseen environment, in order to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
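The abstract does not give the reward formulas themselves, but the idea of a coverage-maximizing reward can be illustrated with a minimal sketch: reward the agent in proportion to the number of map cells newly observed at each step. The function name, the occupancy-grid encoding, and the normalization below are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

# Assumed occupancy-grid encoding (illustrative, not from the paper):
# UNKNOWN (-1) for unobserved cells, 0 for free, 1 for occupied.
UNKNOWN = -1

def coverage_reward(prev_grid: np.ndarray, curr_grid: np.ndarray) -> float:
    """Reward proportional to the cells newly observed since the last step.

    A cell counts as newly observed if it was UNKNOWN in the previous
    grid and is now classified as free or occupied.
    """
    newly_known = np.sum((prev_grid == UNKNOWN) & (curr_grid != UNKNOWN))
    # Normalize by map size so rewards are comparable across environments.
    return float(newly_known) / prev_grid.size

# Example: a 4x4 map in which the robot uncovers three new free cells.
prev = np.full((4, 4), UNKNOWN)
curr = prev.copy()
curr[0, :3] = 0  # three cells observed as free
print(coverage_reward(prev, curr))  # 3/16 = 0.1875
```

Under such a reward, the return accumulated over an episode grows with the total explored area, which matches the paper's finding that coverage-maximizing planners consistently explore and reconstruct the map.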

Original language: English
Pages (from-to): 329-336
Number of pages: 8
Journal: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
Issue number: B4
Publication status: Published - 6 Aug 2020
Event: XXIVth ISPRS Congress 2020 - Virtual Event, Nice, France
Duration: 4 Jul 2020 - 10 Jul 2020
Conference number: 24


Keywords:

  • Autonomous Exploration
  • Indoor Environments
  • Reinforcement Learning
  • Simultaneous Localization and Mapping

