### Abstract
This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. The models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based on the duration that certain conditions hold (state rewards). In addition to introducing the MRA model, the authors extend the process-algebraic language MAPA to easily specify MRAs, and provide algorithms for computing the expected reward until reaching one of a set of goal states, as well as the long-run average reward. The MAMA tool chain (consisting of the tools SCOOP and IMCA) was extended to implement the reward extension of MAPA and these algorithms.

| Original language | Undefined |
|---|---|
| Title of host publication | Proceedings of the 12th Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2014) |
| Editors | N. Bertrand, L. Bortolussi |
| Place of publication | Rennes |
| Publisher | INRIA |
| Pages | - |
| Number of pages | 4 |
| ISBN (Print) | not assigned |
| Publication status | Published - Apr 2014 |

### Keywords

- EWI-24693
- IR-91069
- Expected reward
- METIS-304085
- Rewards
- Process Algebra
- Long-run average
- Markov Automata

### Cite this

Guck, D., Timmer, M., & Blom, S. (2014). Extending Markov Automata with State and Action Rewards. In N. Bertrand & L. Bortolussi (Eds.), *Proceedings of the 12th Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2014)* (pp. -). Rennes: INRIA.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic

TY - GEN

T1 - Extending Markov Automata with State and Action Rewards

AU - Guck, Dennis

AU - Timmer, Mark

AU - Blom, Stefan

PY - 2014/4

Y1 - 2014/4

N2 - This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based on the duration that certain conditions hold (state rewards). In addition to introducing the MRA model, we extend the process-algebraic language MAPA to easily specify MRAs. Also, we provide algorithms for computing the expected reward until reaching one of a certain set of goal states, as well as the long-run average reward. We extended the MAMA tool chain (consisting of the tools SCOOP and IMCA) to implement the reward extension of MAPA and these algorithms.

AB - This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based on the duration that certain conditions hold (state rewards). In addition to introducing the MRA model, we extend the process-algebraic language MAPA to easily specify MRAs. Also, we provide algorithms for computing the expected reward until reaching one of a certain set of goal states, as well as the long-run average reward. We extended the MAMA tool chain (consisting of the tools SCOOP and IMCA) to implement the reward extension of MAPA and these algorithms.

KW - EWI-24693

KW - IR-91069

KW - Expected reward

KW - METIS-304085

KW - Rewards

KW - Process Algebra

KW - Long-run average

KW - Markov Automata

M3 - Conference contribution

SN - not assigned

SP - -

BT - Proceedings of the 12th Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2014)

A2 - Bertrand, N.

A2 - Bortolussi, L.

PB - INRIA

CY - Rennes

ER -
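The abstract mentions an algorithm for computing the expected reward until reaching a set of goal states. As a rough illustration of that idea, the sketch below runs value iteration on a simplified discrete model (an MDP with action rewards on the embedded transition structure), ignoring the continuous stochastic timing and state rewards of the full Markov Reward Automaton. All names and the data layout are illustrative assumptions, not the paper's actual implementation in SCOOP/IMCA.

```python
# Hedged sketch: maximal expected accumulated reward until reaching a goal
# state, via value iteration on a simplified discrete model. This is NOT the
# MRA algorithm from the paper; it only illustrates the general technique.

def expected_reward(states, goals, transitions, action_reward,
                    eps=1e-9, max_iter=100_000):
    """transitions[s] is a list of actions; each action is a list of
    (probability, successor) pairs. action_reward[(s, i)] is the
    instantaneous reward for taking action i in state s.
    Assumes a goal state is reachable under every policy, so the
    expected total reward is finite and the iteration converges."""
    v = {s: 0.0 for s in states}           # goal states stay at 0
    for _ in range(max_iter):
        delta = 0.0
        for s in states:
            if s in goals or not transitions[s]:
                continue
            best = max(
                action_reward[(s, i)] + sum(p * v[t] for p, t in succ)
                for i, succ in enumerate(transitions[s])
            )
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            break
    return v


# Tiny illustrative model: s0 pays 1 and moves to s1 or goal with equal
# probability; s1 pays 2 and moves to goal. Expected reward from s0 is
# 1 + 0.5 * 2 = 2.0.
states = ["s0", "s1", "goal"]
goals = {"goal"}
transitions = {
    "s0": [[(0.5, "s1"), (0.5, "goal")]],
    "s1": [[(1.0, "goal")]],
    "goal": [],
}
action_reward = {("s0", 0): 1.0, ("s1", 0): 2.0}
v = expected_reward(states, goals, transitions, action_reward)
print(v["s0"])  # 2.0
```

For the actual MRA semantics, rewards accumulated in Markovian states also depend on the expected sojourn time, which this discrete sketch deliberately omits.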