Abstract
Model checkers for concurrent probabilistic systems have become very popular within the last decade. The study of long-run average behavior has, however, received only scant attention in this area, at least from the implementation perspective. This paper studies the problem of how to efficiently realize an algorithm for computing optimal long-run average reward values for concurrent probabilistic systems. At its core is a variation of Howard and Veinott's algorithm for Markov decision processes, in which symbolic and non-symbolic representations are intertwined in an effective manner: the state space is represented using binary decision diagrams, while the linear equation systems that must be solved for the induced Markov chains in order to improve the current scheduler are handled using an explicit state representation. To keep the latter small, we apply a symbolic bisimulation minimization algorithm to the induced Markov chain. The scheduler improvement step itself is again performed on symbolic data structures. Practical evidence shows that the implementation is effective, and sometimes uses considerably less memory than a fully explicit implementation.
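The evaluate/improve loop at the heart of the algorithm can be illustrated with a plain explicit-state sketch of Howard-style policy iteration for average reward. The small unichain MDP below, and the `evaluate` and `policy_iteration` names, are illustrative assumptions for this sketch only; the paper's actual implementation works on symbolic (BDD-based) data structures and applies bisimulation minimization before solving.

```python
import numpy as np

# Hypothetical 2-state, 2-action unichain MDP (illustrative data only).
# P[a][s][s'] = transition probability, R[a][s] = immediate reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.7, 0.3]],   # action 1
])
R = np.array([
    [1.0, 3.0],   # action 0
    [2.0, 0.5],   # action 1
])

def evaluate(policy):
    """Solve the gain/bias equations of the induced Markov chain:
    g + h(s) = r(s) + sum_s' P(s'|s) h(s'), normalized with h(0) = 0."""
    n = len(policy)
    Pp = np.array([P[policy[s], s] for s in range(n)])  # induced transitions
    Rp = np.array([R[policy[s], s] for s in range(n)])  # induced rewards
    # Linear system A x = Rp with unknowns x = (g, h_1, ..., h_{n-1}).
    A = np.zeros((n, n))
    A[:, 0] = 1.0                                  # column for the gain g
    A[:, 1:] = np.eye(n)[:, 1:] - Pp[:, 1:]        # (I - P) for h_1..h_{n-1}
    x = np.linalg.solve(A, Rp)
    g = x[0]
    h = np.concatenate(([0.0], x[1:]))
    return g, h

def policy_iteration():
    """Alternate scheduler evaluation and greedy scheduler improvement."""
    policy = np.zeros(P.shape[1], dtype=int)
    while True:
        g, h = evaluate(policy)
        # Improvement: per state, pick the action maximizing
        # r(s,a) + sum_s' P(s'|s,a) h(s').
        q = R + P @ h                  # shape (actions, states)
        improved = q.argmax(axis=0)
        if np.array_equal(improved, policy):
            return policy, g           # no change: optimal scheduler found
        policy = improved

pol, gain = policy_iteration()
```

In the symbolic variant described in the abstract, `evaluate` is the explicit-state step (run on the bisimulation-minimized induced chain), while the improvement step corresponding to `argmax` over `q` is carried out on the BDD representation.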
Original language | English |
---|---|
Title of host publication | 2010 Seventh International Conference on the Quantitative Evaluation of Systems |
Place of Publication | Piscataway, NJ |
Publisher | IEEE |
Number of pages | 10 |
ISBN (Print) | 978-1-4244-8082-1 |
DOIs | |
Publication status | Published - 2010 |
Externally published | Yes |
Event | 7th International Conference on the Quantitative Evaluation of Systems, QEST 2010 - Williamsburg, United States |
Duration | 15 Sep 2010 → 18 Sep 2010 |
Conference number | 7 |
Conference
Conference | 7th International Conference on the Quantitative Evaluation of Systems, QEST 2010 |
---|---|
Abbreviated title | QEST 2010 |
Country/Territory | United States |
City | Williamsburg |
Period | 15/09/10 → 18/09/10 |