Extending Markov Automata with State and Action Rewards

Dennis Guck, Mark Timmer, Stefan Blom

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic


    Abstract

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based on the duration for which certain conditions hold (state rewards). In addition to introducing the MRA model, we extend the process-algebraic language MAPA to easily specify MRAs. We also provide algorithms for computing the expected reward until reaching one of a given set of goal states, as well as the long-run average reward. We have extended the MAMA tool chain (consisting of the tools SCOOP and IMCA) to implement the reward extension of MAPA and these algorithms.
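    As a rough orientation (an illustrative sketch in assumed notation, not the paper's own definitions): if an MRA extends a Markov automaton over state space $S$ and action set $A$ with a state-reward function $\rho : S \to \mathbb{R}_{\ge 0}$ and an action-reward function $r : S \times A \to \mathbb{R}_{\ge 0}$, then the two quantities the algorithms target can be written, for a run $(X_t)_{t \ge 0}$ under a fixed scheduler taking actions $a_i$ at times $t_i$, as

    \[ \mathrm{ER}(s, G) \;=\; \mathbb{E}_s\Big[ \int_0^{T_G} \rho(X_t)\, dt \;+\; \sum_{i \,:\, t_i \le T_G} r(X_{t_i}, a_i) \Big], \]

    where $T_G$ is the first hitting time of the goal set $G \subseteq S$, and

    \[ \mathrm{LRA}(s) \;=\; \lim_{t \to \infty} \frac{1}{t}\, \mathbb{E}_s\Big[ \int_0^{t} \rho(X_u)\, du \;+\; \sum_{i \,:\, t_i \le t} r(X_{t_i}, a_i) \Big], \]

    with both quantities optimised over the schedulers resolving the nondeterminism.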
    Original language: Undefined
    Title of host publication: Proceedings of the 12th Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2014)
    Editors: N. Bertrand, L. Bortolussi
    Place of Publication: Rennes
    Publisher: INRIA
    Pages: –
    Number of pages: 4
    ISBN (Print): not assigned
    Publication status: Published - Apr 2014
    Event: 12th International Workshop on Quantitative Aspects of Programming Languages and Systems, QAPL 2014 - Grenoble, France
    Duration: 12 Apr 2014 – 13 Apr 2014
    Conference number: 12

    Conference

    Conference: 12th International Workshop on Quantitative Aspects of Programming Languages and Systems, QAPL 2014
    Abbreviated title: QAPL
    Country/Territory: France
    City: Grenoble
    Period: 12/04/14 – 13/04/14

    Keywords

    • Expected reward
    • Rewards
    • Process Algebra
    • Long-run average
    • Markov Automata
