Abstract

Costs and rewards are important ingredients for cyberphysical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected accumulative reward until a goal is reached; the expected accumulative reward until a certain time bound; and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
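For intuition, a minimal sketch of how the two reward flavours combine along a timed path, using standard Markov-reward-automaton notation (the symbols ρ for the state-reward rate, r for the action reward, and t_i for the sojourn time are assumptions here, not taken from the report): state rewards accrue proportionally to the time spent in a state, while action rewards are added instantaneously at each transition.

$$
\mathrm{rew}(\pi) \;=\; \sum_{i=0}^{n-1} \bigl( \rho(s_i)\cdot t_i + r(s_i, a_i) \bigr)
\qquad \text{for a path } \pi = s_0 \xrightarrow{a_0,\,t_0} s_1 \xrightarrow{a_1,\,t_1} \cdots \xrightarrow{a_{n-1},\,t_{n-1}} s_n .
$$

The three objectives treated in the report then optimise this quantity under different horizons: until a goal state is reached, until a given time bound, or averaged over the long run.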
| Original language | English |
|---|---|
| Place of Publication | Enschede |
| Publisher | Centre for Telematics and Information Technology (CTIT) |
| Number of pages | 35 |
| Publication status | Published - 2014 |
Publication series
| Name | CTIT Technical Report Series |
|---|---|
| Publisher | University of Twente, Centre for Telematics and Information Technology (CTIT) |
| No. | TR-CTIT-14-06 |
| ISSN (Print) | 1381-3625 |
Keywords
- EC Grant Agreement nr.: FP7/295261
- MAPA
- Process Algebra
- Quantitative analysis
- EC Grant Agreement nr.: FP7/318490
- Rewards
- Markov Automata
- EWI-24800
- METIS-304117
- IR-91270
- EC Grant Agreement nr.: FP7/2007-2013
Cite this
Modelling and analysis of Markov reward automata (extended version). / Guck, Dennis; Timmer, Mark; Hatefi, Hassan; Ruijters, Enno Jozef Johannes; Stoelinga, Mariëlle Ida Antoinette.
Enschede: Centre for Telematics and Information Technology (CTIT), 2014. 35 p. (CTIT Technical Report Series; No. TR-CTIT-14-06).
Research output: Book/Report › Report › Professional
TY - BOOK
T1 - Modelling and analysis of Markov reward automata (extended version)
AU - Guck, Dennis
AU - Timmer, Mark
AU - Hatefi, Hassan
AU - Ruijters, Enno Jozef Johannes
AU - Stoelinga, Mariëlle Ida Antoinette
N1 - A revised version of this technical report has been published as a conference paper in ATVA 2014; see http://eprints.eemcs.utwente.nl/25349.
PY - 2014
Y1 - 2014
N2 - Costs and rewards are important ingredients for cyberphysical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected accumulative reward until a goal is reached; the expected accumulative reward until a certain time bound; and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
KW - EC Grant Agreement nr.: FP7/295261
KW - MAPA
KW - Process Algebra
KW - Quantitative analysis
KW - EC Grant Agreement nr.: FP7/318490
KW - Rewards
KW - Markov Automata
KW - EWI-24800
KW - METIS-304117
KW - IR-91270
KW - EC Grant Agreement nr.: FP7/2007-2013
M3 - Report
T3 - CTIT Technical Report Series
BT - Modelling and analysis of Markov reward automata (extended version)
PB - Centre for Telematics and Information Technology (CTIT)
CY - Enschede
ER -