Omega-Regular Decision Processes

Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, Dominik Wojtczak

Research output: Contribution to journal › Conference article › Academic › peer-review


Abstract

Regular decision processes (RDPs) are a subclass of non-Markovian decision processes where the transition and reward functions are guarded by some regular property of the past (a lookback). While RDPs enable intuitive and succinct representation of non-Markovian decision processes, their expressive power coincides with finite-state Markov decision processes (MDPs). We introduce omega-regular decision processes (ODPs), where the non-Markovian aspect of the transition and reward functions is extended to an ω-regular lookahead over the system evolution. Semantically, these lookaheads can be considered as promises made by the decision maker or the learning agent about her future behavior. In particular, we assume that if the promised lookaheads are not fulfilled, then the decision maker receives the least desirable payoff, overriding any rewards collected by the decision maker. We enable optimization and learning for ODPs under the discounted-reward objective by reducing them to lexicographic optimization and learning over finite MDPs. We present experimental results demonstrating the effectiveness of the proposed reduction.
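To illustrate the flavor of the lexicographic reduction the abstract describes, here is a minimal sketch of lexicographic discounted value iteration on a toy finite MDP. The two-state MDP, the reward components (`r1` standing in for promise satisfaction, `r2` for ordinary reward), and all numbers are invented for illustration and are not taken from the paper; the sketch only shows the generic pattern of optimizing one discounted component first and then optimizing a second component among the first component's optimal actions.

```python
# Toy sketch of lexicographic discounted optimization on a finite MDP.
# All states, actions, and rewards below are hypothetical examples.

GAMMA = 0.9  # discount factor

# trans[s][a] = list of (probability, next_state)
trans = {
    0: {'a': [(1.0, 0)], 'b': [(1.0, 1)]},
    1: {'a': [(1.0, 1)], 'b': [(1.0, 0)]},
}
# primary component: stands in for "the promise is being kept" (here: being in state 1)
r1 = {0: {'a': 0.0, 'b': 0.0}, 1: {'a': 1.0, 'b': 0.0}}
# secondary component: the ordinary reward to optimize among promise-optimal actions
r2 = {0: {'a': 0.5, 'b': 0.0}, 1: {'a': 0.0, 'b': 1.0}}

def value_iteration(reward, allowed, iters=500):
    """Discounted value iteration restricted to the allowed actions."""
    v = {s: 0.0 for s in trans}
    for _ in range(iters):
        v = {s: max(reward[s][a] + GAMMA * sum(p * v[t] for p, t in trans[s][a])
                    for a in allowed[s])
             for s in trans}
    return v

def near_optimal_actions(reward, v, allowed, eps=1e-6):
    """Actions whose Q-value is within eps of the best, per state."""
    out = {}
    for s in trans:
        q = {a: reward[s][a] + GAMMA * sum(p * v[t] for p, t in trans[s][a])
             for a in allowed[s]}
        best = max(q.values())
        out[s] = [a for a in allowed[s] if q[a] >= best - eps]
    return out

all_actions = {s: list(trans[s]) for s in trans}
v1 = value_iteration(r1, all_actions)            # step 1: optimize the primary component
opt1 = near_optimal_actions(r1, v1, all_actions)  # step 2: keep only its optimal actions
v2 = value_iteration(r2, opt1)                    # step 3: optimize the secondary component
```

In this toy instance the primary component forces the agent to reach and stay in state 1 (`b` from state 0, then `a` forever), and the secondary reward is then maximized only over those primary-optimal choices, which is the essence of a lexicographic objective.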

Original language: English
Pages (from-to): 21125-21133
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 19
DOIs
Publication status: Published - 24 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 - 27 Feb 2024
Conference number: 38


