Reinforcement Learning for Relational MDPs

M. van Otterlo

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


    Abstract

    In this paper we present a new method for reinforcement learning in relational domains. A logical language is employed to abstract over states and actions, thereby decreasing the size of the state-action space significantly. A probabilistic transition model of the abstracted Markov decision process is estimated to speed up learning. We present a theoretical and experimental analysis of our new representation. We also obtain some insights into the problems and opportunities of logical representations for reinforcement learning, against the background of growing interest in the use of abstraction in reinforcement learning.
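    The paper's relational method is not reproduced here, but the core idea the abstract describes — collapsing ground states into abstract states via logical predicates, then learning over the smaller abstract state-action space while estimating an empirical transition model of the abstracted MDP — can be illustrated with a toy sketch. Everything below (the corridor domain, the predicate names `at_goal`/`near_goal`/`far`, and all parameter values) is a hypothetical example, not the paper's domain or algorithm:

    ```python
    import random
    from collections import defaultdict

    # Hypothetical toy domain (not from the paper): a corridor of N cells with
    # a goal at the right end. Ground states are cell indices; a logical-style
    # abstraction collapses them into three abstract states via simple
    # predicates, shrinking the space the learner must cover.
    N, GOAL = 10, 9
    ACTIONS = (-1, +1)  # step left / step right

    def abstract(s):
        """Map a ground state to an abstract state via predicates on s."""
        if s == GOAL:
            return "at_goal"
        if s >= GOAL - 2:
            return "near_goal"
        return "far"

    def step(s, a):
        s2 = min(max(s + a, 0), GOAL)
        return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

    # Tabular Q-learning over the *abstract* state-action space, alongside
    # transition counts that give a maximum-likelihood model of the
    # abstracted MDP (the kind of model usable for model-based speed-ups).
    Q = defaultdict(float)
    counts = defaultdict(int)  # (abstract_s, a, abstract_s2) -> visits
    alpha, gamma, eps = 0.2, 0.95, 0.1
    rng = random.Random(0)

    def policy(z):
        """Epsilon-greedy over abstract state z, breaking ties randomly."""
        if rng.random() < eps:
            return rng.choice(ACTIONS)
        best = max(Q[(z, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if Q[(z, a)] == best])

    for _ in range(300):
        s, done = 0, False
        while not done:
            z = abstract(s)
            a = policy(z)
            s2, r, done = step(s, a)
            z2 = abstract(s2)
            counts[(z, a, z2)] += 1  # empirical abstract transition model
            target = r if done else r + gamma * max(Q[(z2, b)] for b in ACTIONS)
            Q[(z, a)] += alpha * (target - Q[(z, a)])
            s = s2
    ```

    The Q-table here has at most 3 × 2 entries instead of 10 × 2, which is the state-space reduction the abstract refers to; the trade-off is that lumping many ground states into one abstract state can make the abstracted process non-Markovian, one of the "problems of logical representations" the paper analyses.
    
    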
    Original language: English
    Title of host publication: Machine Learning Conference of Belgium and the Netherlands (BeNeLearn'04)
    Editors: A. Nowe, T. Lenaerts, K. Steenhaut
    Place of publication: Brussels
    Publisher: Vrije Universiteit Brussel
    Pages: 138-145
    Number of pages: 8
    Publication status: Published - 8 Jan 2004
    Event: 13th Machine Learning Conference of Belgium and the Netherlands, BeNeLearn 2004 - Brussels, Belgium
    Duration: 8 Jan 2004 - 9 Jan 2004
    Conference number: 13

    Conference

    Conference: 13th Machine Learning Conference of Belgium and the Netherlands, BeNeLearn 2004
    Abbreviated title: Benelearn 2004
    City: Brussels, Belgium
    Period: 8/01/04 - 9/01/04

    Keywords

    • HMI-IA: Intelligent Agents
    • Markov decision process (MDP)
