Robotics deep reinforcement learning with loose prior knowledge

Nicolò Botteghi

Research output: PhD Thesis - Research UT, graduation UT


Abstract

Robotics research has progressed tremendously in the last decade, and robots can now be programmed and automated to solve many different tasks: from the simplest everyday jobs, e.g. vacuum cleaning or grass cutting, to complex industrial applications, e.g. car assembly, smart warehousing, or plant inspection. However, many steps have yet to be taken to achieve high degrees of cognitive and motor intelligence and autonomy. These automated solutions require a vast amount of prior knowledge of the task and are often brittle in scenarios where the robot has imperfect and limited perception, inaccurate models of the world, and uncertain motion.

Learning from interaction is the simplest yet most powerful learning approach, one that every living being experiences throughout its life. Most of what humans and animals learn is built on iteratively acting and improving behaviour based on the consequences of the actions taken. When designing artificial brains for autonomous robots, we are inspired by the idea of simply letting the robot's brain autonomously learn how to act, i.e. how to control the robot, in the best way to solve a given task. Unfortunately, artificial brains do not learn as fast as the brains of living beings, and even simple problems require thousands or millions of unsuccessful trials to solve.

In this thesis, we study methods that tackle the perceiving-reasoning-acting chain through learning from interaction, applied to different robotics tasks such as navigation, path planning and exploration. This learning paradigm is commonly known as Reinforcement Learning. In particular, we study how representations of the information perceived by the robot and of the decisions of the artificial mind can be learned, and how these representations can simplify reasoning and acting in Reinforcement Learning. Moreover, we study how to reward the artificial brain to achieve better behaviours, and how this translates to different robotics tasks.
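To make the interaction loop concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy one-dimensional corridor navigation task. It is only an illustration of the act-observe-improve cycle described above, not the deep method developed in the thesis; the task, the constants, and all variable names are invented for this example.

```python
# Illustrative sketch only: tabular Q-learning on a hypothetical 1-D
# corridor, where the agent must walk from cell 0 to the goal cell.
import random

N_STATES = 10                       # corridor cells; the goal is the last cell
ACTIONS = [-1, 1]                   # move left or right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

# Value estimate for every (state, action) pair, initialised to zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(300):
    s = 0                            # start at the left end of the corridor
    for _ in range(500):             # cap episode length
        # Epsilon-greedy action selection with random tie-breaking:
        # mostly exploit the current estimate, sometimes explore.
        if random.random() < EPS or Q[(s, -1)] == Q[(s, 1)]:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # sparse reward at the goal
        # Improve the value estimate from the consequence of the action taken.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next
        if s == N_STATES - 1:        # goal reached, episode ends
            break
```

After a few hundred episodes the greedy action in every cell points towards the goal; deep Reinforcement Learning replaces the table Q with a neural network so that the same cycle scales to high-dimensional perception.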

We show that loose forms of prior knowledge are the ones to employ in robotics, because they neither limit nor constrain the search for the optimal behaviour. Instead, they improve the agent's ability to adapt to different, previously unseen situations (generalisation), its ability to learn quickly from limited experience (sample efficiency), and its ability to behave well in the presence of uncertainties (robustness of the learned behaviours).
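As one concrete illustration of such a loose prior (chosen by us for this example, not taken verbatim from the thesis), potential-based reward shaping (Ng et al., 1999) adds a hint to the reward signal that is guaranteed not to change which behaviour is optimal. The sketch below assumes the corridor task from the previous example; the potential function `phi` and the constants are invented for illustration.

```python
# Illustrative sketch only: potential-based reward shaping adds
# F(s, s') = gamma * phi(s') - phi(s) to the reward. Because this term
# telescopes along any trajectory, the ordering of policies by return,
# and hence the optimal policy, is preserved.
GAMMA = 0.95
GOAL = 9

def phi(s):
    # Loose prior: "closer to the goal is better". Only a hint, not a rule.
    return -abs(GOAL - s)

def shaped_reward(s, s_next, r):
    # Original reward plus the potential difference.
    return r + GAMMA * phi(s_next) - phi(s)
```

Replacing `r` with `shaped_reward(s, s_next, r)` in the learning loop above typically shortens the random-exploration phase, because the agent receives informative feedback long before it first reaches the goal.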
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution:
  • University of Twente
Supervisors/Advisors:
  • Stramigioli, Stefano, Supervisor
  • Brune, Christoph, Supervisor
  • Poel, Mannes, Supervisor
Award date: 6 Oct 2021
Place of publication: Enschede
Print ISBNs: 978-90-365-5216-5
Publication status: Published - 6 Oct 2021
