Abstract
Autonomous exploration and mapping is one of the open challenges of robotics and artificial intelligence. Especially when the environment is unknown, choosing the optimal navigation directive is not straightforward. In this paper, we propose a reinforcement learning framework for navigating, exploring, and mapping unknown environments. The reinforcement learning agent is in charge of selecting the commands for steering the mobile robot, while a SLAM algorithm estimates the robot pose and maps the environment. To select optimal actions, the agent is trained to be curious about the world. This concept translates into the introduction of a curiosity-driven reward function that encourages the agent to steer the mobile robot towards unknown and unseen areas of the world and the map. We test our approach in exploration challenges in different indoor environments. The agent trained with the proposed reward function outperforms agents trained with reward functions commonly used in the literature for solving such tasks.
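The abstract does not give the reward formula, but one common way to realize a curiosity-driven exploration reward is to pay the agent in proportion to the map area newly uncovered at each step. The sketch below is an illustrative assumption, not the paper's method: the function name `curiosity_reward`, the occupancy-grid encoding (`-1` for unknown cells), and the normalization by map size are all hypothetical choices.

```python
import numpy as np

# Illustrative sketch (not the paper's actual reward): reward the agent
# for the fraction of occupancy-grid cells that became observed this step.
UNKNOWN = -1  # hypothetical encoding for "not yet observed" cells

def curiosity_reward(prev_map: np.ndarray, new_map: np.ndarray) -> float:
    """Fraction of cells that were unknown before and are observed now."""
    newly_seen = np.logical_and(prev_map == UNKNOWN, new_map != UNKNOWN)
    return float(newly_seen.sum()) / prev_map.size

# Toy example: a 4x4 map, fully unknown, then 4 cells observed as free (0).
prev = np.full((4, 4), UNKNOWN)
new = prev.copy()
new[:2, :2] = 0
reward = curiosity_reward(prev, new)  # 4 of 16 cells newly seen -> 0.25
```

Under this kind of shaping, steering toward already-mapped regions yields zero reward, which pushes the policy toward frontiers of the known map.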
| Original language | English |
|---|---|
| Pages (from-to) | 129-136 |
| Number of pages | 8 |
| Journal | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences |
| Volume | 5 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 17 Jun 2021 |
| Event | 24th ISPRS Congress "Imaging Today, Foreseeing Tomorrow", Commission I 2021, Virtual Event, Nice, France. Duration: 5 Jul 2021 → 9 Jul 2021. Conference number: 24. https://www.isprs2020-nice.com/ |
Keywords
- Indoor Mapping
- Mobile Robotics
- Reinforcement Learning
- Simultaneous Localization and Mapping
- UT-Gold-D