Bellman goes Relational

Kristian Kersting, M. van Otterlo, Luc De Raedt

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    66 Citations (Scopus)
    108 Downloads (Pure)

    Abstract

    Motivated by the interest in relational reinforcement learning, we introduce a novel relational Bellman update operator called ReBel. It employs a constraint logic programming language to compactly represent Markov decision processes over relational domains. Using ReBel, a novel value iteration algorithm is developed in which abstraction (over states and actions) plays a major role. This framework provides new insights into relational reinforcement learning. Convergence results as well as experiments are presented.
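    As a rough point of reference for the abstract, the sketch below shows ordinary ground-level value iteration over an explicitly enumerated state set; ReBel's contribution is to carry out the analogous Bellman backup over abstract, logically described state-action classes instead of individual ground states. The toy MDP, variable names, and parameters are illustrative assumptions and are not taken from the paper.

    ```python
    # Minimal sketch: ground-level value iteration, the operation that ReBel
    # lifts to abstract relational state/action classes. The toy MDP below is
    # a hypothetical example, not the paper's domain or operator.

    GAMMA = 0.9    # discount factor (assumed)
    THETA = 1e-6   # convergence threshold (assumed)

    # transitions[state][action] = list of (probability, next_state, reward)
    transitions = {
        "s0": {"a": [(1.0, "s1", 0.0)]},
        "s1": {"a": [(0.8, "s2", 1.0), (0.2, "s0", 0.0)]},
        "s2": {"a": [(1.0, "s2", 0.0)]},  # absorbing state
    }

    def value_iteration(transitions, gamma=GAMMA, theta=THETA):
        """Iterate the Bellman optimality backup until the value function converges."""
        V = {s: 0.0 for s in transitions}
        while True:
            delta = 0.0
            for s, actions in transitions.items():
                # Bellman backup: best expected immediate reward plus discounted future value.
                best = max(
                    sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in actions.values()
                )
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < theta:
                return V

    if __name__ == "__main__":
        print(value_iteration(transitions))
    ```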
    Original language: English
    Title of host publication: Proceedings of the International Conference on Machine Learning (ICML'04)
    Editors: R. Greiner, D. Schuurmans
    Place of Publication: New York
    Publisher: University of Alberta
    Pages: 465-472
    Number of pages: 8
    ISBN (Print): 1-58113-838-5
    DOIs: https://doi.org/10.1145/1015330.1015401
    Publication status: Published - 7 Dec 2004
    Event: 21st International Conference on Machine Learning, ICML 2004 - Banff, Canada
    Duration: 4 Jul 2004 - 8 Jul 2004
    Conference number: 21

    Conference

    Conference: 21st International Conference on Machine Learning, ICML 2004
    Abbreviated title: ICML
    Country: Canada
    City: Banff
    Period: 4/07/04 - 8/07/04

    Keywords

    • EC Grant Agreement nr.: FP6/508861
    • HMI-IA: Intelligent Agents


    Cite this

    Kersting, K., van Otterlo, M., & De Raedt, L. (2004). Bellman goes Relational. In R. Greiner, & D. Schuurmans (Eds.), Proceedings of the International Conference on Machine Learning (ICML'04) (pp. 465-472). New York: University of Alberta. https://doi.org/10.1145/1015330.1015401