Reinforcement learning helps SLAM: Learning to build maps

N. Botteghi*, B. Sirmacek, M. Poel, C. Brune

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Academic › peer-review


Abstract

In this research, we investigate the use of Reinforcement Learning (RL) as an effective and robust solution for exploring unknown indoor environments and reconstructing their maps. We rely on a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three RL-based path planners is assessed not only in the training environments, but also in an a priori unseen environment, to test the generalization properties of the policies. The results indicate that RL-based planners trained to maximize the coverage of the map are able to consistently explore and construct the maps of different indoor environments.
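The coverage-maximizing reward mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' implementation: it assumes an occupancy-grid map with the common convention that -1 marks unknown cells, and rewards the agent in proportion to the number of cells that become known after an action.

```python
import numpy as np

def coverage_reward(prev_map, curr_map, scale=1.0):
    """Reward proportional to the number of occupancy-grid cells that
    became known (changed from -1) between two consecutive snapshots.
    The -1/0/1 cell convention is an assumption for illustration."""
    newly_known = np.logical_and(prev_map == -1, curr_map != -1)
    return scale * int(newly_known.sum())

# Toy 3x3 grid: -1 = unknown, 0 = free, 1 = occupied
before = np.array([[-1, -1, -1],
                   [-1,  0, -1],
                   [-1, -1, -1]])
after = np.array([[-1,  0,  0],
                  [-1,  0,  1],
                  [-1, -1, -1]])
print(coverage_reward(before, after))  # 3 cells newly observed
```

In an RL loop, this quantity would be computed at each step from the SLAM map and fed to the agent as the per-step reward, so that policies are driven to uncover unexplored regions.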

Original language: English
Pages (from-to): 329-336
Number of pages: 8
Journal: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
Volume: 43
Issue number: B4
Publication status: Published - 6 Aug 2020
Event: 24th ISPRS Congress 2020 - Virtual Conference
Duration: 31 Aug 2020 - 2 Sep 2020
Conference number: 24

Keywords

  • Autonomous Exploration
  • Indoor Environments
  • Reinforcement Learning
  • Simultaneous Localization and Mapping
