Abstract
In the design of probabilistic timed systems, requirements concerning behaviour that occurs within a given time or energy budget are of central importance. We observe that model-checking such requirements for probabilistic timed automata can be reduced to checking reward-bounded properties on Markov decision processes. This is traditionally implemented by unfolding the model according to the bound, or by solving a sequence of linear programs. Neither scales well to large models. Using value iteration in place of linear programming achieves scalability but accumulates approximation error. In this paper, we correct the value iteration-based scheme, present two new approaches based on scheduler enumeration and state elimination, and compare the practical performance and scalability of all techniques on a number of case studies from the literature. We show that state elimination can significantly reduce runtime for large models or high bounds.
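The reduction the abstract describes — checking a reward-bounded reachability property on an MDP — is traditionally implemented by unfolding the model over the accumulated reward. As a hedged illustration (the MDP, state names, and function below are hypothetical, not taken from the paper), a minimal sketch of this unfolding-style value iteration for strictly positive action rewards:

```python
# Hypothetical two-state MDP: state 0 is the start, state 1 the goal.
# Each action is (reward, [(probability, successor), ...]).
ACTIONS = {
    0: [
        (1, [(0.5, 1), (0.5, 0)]),  # cheap action: reach goal w.p. 0.5
        (2, [(0.9, 1), (0.1, 0)]),  # costly action: reach goal w.p. 0.9
    ],
    1: [],  # goal is absorbing
}
GOAL = {1}

def max_prob_within(budget):
    """Max probability of reaching GOAL with accumulated reward <= budget.

    Unfolds the model over the remaining budget b = 0..budget; since every
    action reward here is > 0, level b only depends on strictly smaller
    levels, so a single sweep per level suffices (no inner fixed point).
    """
    # v[b][s]: best reachability probability from s with b budget left.
    v = [{s: (1.0 if s in GOAL else 0.0) for s in ACTIONS}
         for _ in range(budget + 1)]
    for b in range(budget + 1):
        for s in ACTIONS:
            if s in GOAL:
                continue
            best = 0.0
            for reward, dist in ACTIONS[s]:
                if reward > b:
                    continue  # action would exceed the remaining budget
                best = max(best, sum(p * v[b - reward][t] for p, t in dist))
            v[b][s] = best
    return v[budget][0]
```

With budget 1 only the cheap action is affordable (probability 0.5); with budget 2 the costly action wins (0.9). The per-level table also makes the abstract's scalability point concrete: the unfolded state space grows linearly in the bound, which is what the paper's state-elimination approach avoids.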
| Original language | English |
| --- | --- |
| Title of host publication | Dependable Software Engineering: Theories, Tools, and Applications |
| Subtitle of host publication | Second International Symposium, SETTA 2016, Beijing, China, November 9-11, 2016, Proceedings |
| Editors | Martin Fränzle, Deepak Kapur |
| Place of Publication | Cham |
| Publisher | Springer |
| Pages | 85-100 |
| Number of pages | 16 |
| ISBN (Electronic) | 978-3-319-47677-3 |
| ISBN (Print) | 978-3-319-47676-6 |
| DOIs | |
| Publication status | Published - 2016 |
| Event | 2nd International Symposium on Dependable Software Engineering: Theories, Tools, and Applications, SETTA 2016 - Beijing, China. Duration: 9 Nov 2016 → 11 Nov 2016. Conference number: 2 |
Publication series
| Name | Lecture Notes in Computer Science |
| --- | --- |
| Publisher | Springer |
| Volume | 9984 |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | 2nd International Symposium on Dependable Software Engineering: Theories, Tools, and Applications, SETTA 2016 |
| --- | --- |
| Abbreviated title | SETTA |
| Country/Territory | China |
| City | Beijing |
| Period | 9/11/16 → 11/11/16 |
Keywords
- Model checking
- Goal state
- Markov decision process (MDP)
- Rate reward
- State elimination