Abstract
In this research, we investigate Reinforcement Learning (RL) as an effective and robust approach to exploring unknown indoor environments and reconstructing their maps. We rely on a Simultaneous Localization and Mapping (SLAM) algorithm for real-time robot localization and mapping. Three different reward functions are compared and tested in environments of growing complexity. The performance of the three RL-based path planners is assessed not only on the training environments but also on a previously unseen environment, to test the generalization properties of the learned policies. The results indicate that RL-based planners trained to maximize map coverage are able to consistently explore and reconstruct the maps of different indoor environments.
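The abstract describes planners trained to maximize map coverage, but does not give the exact reward formulations. As a minimal sketch of the coverage-maximizing idea only, one could reward the agent for each occupancy-grid cell newly observed in a step; the grid encoding and the `unknown` marker value below are assumptions, not details from the paper.

```python
import numpy as np

def coverage_reward(prev_map, curr_map, unknown=-1):
    """Illustrative reward: count of grid cells newly observed this step.

    prev_map, curr_map: 2D occupancy grids where `unknown` marks
    unexplored cells (this encoding is hypothetical, not from the paper).
    """
    newly_seen = np.logical_and(prev_map == unknown, curr_map != unknown)
    return int(newly_seen.sum())

# Toy example: a 3x3 map where two cells become observed in one step.
prev = np.full((3, 3), -1)
curr = prev.copy()
curr[0, 0] = 0   # observed as free
curr[1, 1] = 1   # observed as occupied
print(coverage_reward(prev, curr))  # -> 2
```

Such a per-step reward makes exploration progress directly visible to the learner; actual designs typically also shape it with collision penalties or step costs.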
| Original language | English |
| --- | --- |
| Pages (from-to) | 329-336 |
| Number of pages | 8 |
| Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences |
| Volume | 43 |
| Issue number | B4 |
| DOIs | |
| Publication status | Published - 6 Aug 2020 |
| Event | XXIVth ISPRS Congress 2020, Virtual Event, Nice, France. Duration: 4 Jul 2020 → 10 Jul 2020. Conference number: 24. http://www.isprs2020-nice.com |
Keywords
- Autonomous Exploration
- Indoor Environments
- Reinforcement Learning
- Simultaneous Localization and Mapping