Optimal decision-making under stochastic uncertainty is a core problem tackled in artificial intelligence/machine learning (AI), planning, and verification. Planning and AI methods aim to find good or optimal strategies to maximise rewards or the probability of reaching a goal. Verification approaches focus on calculating the probability or reward, obtaining the strategy as a side effect. In this paper, we connect three strands of work on obtaining strategies implemented in the context of the Modest Toolset: statistical model checking with either lightweight scheduler sampling or deep learning, and probabilistic model checking. We compare their different goals and abilities, and show newly extended experiments on Racetrack benchmarks that highlight the tradeoffs between the methods. We conclude with an outlook on improving the existing approaches and on generalisations to continuous models, and emphasise the need for further tool development to integrate methods that find, evaluate, compare, and explain strategies.
|Title of host publication: Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning
|Subtitle of host publication: 11th International Symposium, ISoLA 2022, Rhodes, Greece, October 22-30, 2022, Proceedings, Part III
|Editors: Tiziana Margaria, Bernhard Steffen
|Published: 17 Oct 2022
|Series: Lecture Notes in Computer Science
- This work was part of the MISSION (Models in Space Systems: Integration, Operation, and Networking) project, funded by the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie Actions grant number 101008233.