Efficient simulation techniques for stochastic model checking

D.P. Reijsbergen

Research output: Thesis › PhD Thesis - Research UT, graduation UT

Abstract

In this thesis, we focus on methods for speeding up computer simulations of stochastic models. We are motivated by real-world applications in which corporations formulate service requirements in terms of an upper bound on a probability of failure. If one wants to check whether a complex system model satisfies such a requirement, computer simulation is often the method of choice. We aim to aid engineers during the design phase, so a question of both practical and mathematical relevance is how the runtime of the simulation can be minimised. We focus on two settings in which a speed-up can be achieved.

First, when the probability of failure is low, as is typical in a highly reliable system, the time before the first failure is observed can be impractically long. Our research involves importance sampling: we simulate under a different probability measure in which failure is more likely, which typically decreases the variance of the estimator. To keep the estimator unbiased, we compensate for the change of measure using the likelihood ratio given by the Radon-Nikodym theorem. If done correctly, the gains can be huge. However, if the new probability measure is ill-suited to the problem setting, the negative consequences can be similarly profound (infinite variance is even possible). In our work, we have extended an importance sampling technique with provably good performance (bounded relative error), previously applicable only in restricted settings, to the broad model class of stochastic (Markovian) Petri nets. We have also proposed methods to alleviate two well-known problems from the rare event simulation literature: the occurrence of so-called high-probability cycles and the applicability to large time horizons. For the first we use a method based on Dijkstra's algorithm; for the second we use renewal theory.

Second, the number of simulation runs needed is often overestimated. As a solution, we use sequential hypothesis testing, which allows us to stop as soon as we can say whether the service requirement is satisfied. This area has seen much recent interest from the model checking community, but some of the techniques used are not always well understood. In our research we have compared the techniques implemented in the most popular model checking tools, identified several common pitfalls, and proposed a method that provably avoids them. In particular, we have proposed a new test for which we bound the probability of an incorrect conclusion using martingale theory.
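To make the change-of-measure idea concrete, here is a minimal, self-contained sketch (an illustration, not code from the thesis) that estimates the toy rare-event probability P(Z > 4) for a standard normal Z by sampling from the shifted measure N(4, 1), under which the event is likely, and reweighting each sample with the likelihood ratio; all names and parameter values are illustrative.

```python
import math
import random

def is_estimate(threshold=4.0, runs=10_000, seed=1):
    """Estimate p = P(Z > threshold) for Z ~ N(0, 1) by sampling
    from the shifted measure Q = N(threshold, 1) and reweighting
    each sample by the Radon-Nikodym derivative dP/dQ, which keeps
    the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        z = rng.gauss(threshold, 1.0)  # sample under Q = N(threshold, 1)
        if z > threshold:              # indicator of the rare event
            # dP/dQ = exp(-z^2/2) / exp(-(z - threshold)^2/2)
            total += math.exp(-z * z / 2 + (z - threshold) ** 2 / 2)
    return total / runs

# For threshold 4, the true value is about 3.17e-5; a naive
# Monte Carlo run of the same length would rarely see the event at all.
estimate = is_estimate()
```

Because every sample is drawn where the event actually occurs, the weighted estimator concentrates sharply around the true probability, whereas naive simulation would need millions of runs to observe even a handful of failures.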
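The sequential-testing idea can likewise be sketched with Wald's classical sequential probability ratio test for a Bernoulli failure probability; this is a textbook illustration, not the new martingale-based test proposed in the thesis, and all parameter values are hypothetical.

```python
import math
import random

def sprt_bernoulli(sample, p0=0.01, p1=0.02, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for H0: p = p0
    versus H1: p = p1.  Samples are drawn one at a time and the
    test stops as soon as the log-likelihood ratio crosses a
    decision boundary, so the number of runs adapts to the data
    instead of being fixed (and possibly overestimated) in advance."""
    upper = math.log((1 - beta) / alpha)  # crossing it accepts H1
    lower = math.log(beta / (1 - alpha))  # crossing it accepts H0
    llr, n = 0.0, 0
    while lower < llr < upper:
        x = sample()                      # one simulation run: 1 = failure
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    return ("reject H0" if llr >= upper else "accept H0"), n

# Example: the true failure probability is 0.005, well below p0 = 0.01,
# so the test should typically stop with "accept H0".
rng = random.Random(42)
decision, runs = sprt_bernoulli(lambda: rng.random() < 0.005)
```

Wald's thresholds bound the two error probabilities by roughly alpha and beta; the thesis's contribution lies in tests whose error bounds are proved via martingale theory rather than this classical argument.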
Language: Undefined
Awarding Institution
  • University of Twente
Supervisors/Advisors
  • Haverkort, Boudewijn R.H.M., Supervisor
  • Boucherie, Richardus J., Supervisor
  • de Boer, Pieter-Tjerk, Advisor
  • Scheinhardt, Willem R.W., Advisor
Award date: 6 Dec 2013
Place of Publication: Enschede
Publisher: Universiteit Twente
Print ISBNs: 978-90-365-3586-1
DOIs: 10.3990/1.9789036535861
State: Published - 6 Dec 2013

Keywords

  • EWI-23973
  • METIS-299026
  • IR-88033
  • Importance sampling
  • Dependable systems
  • Rare event simulation
  • Statistical Model Checking

Cite this

Reijsbergen, D. P. (2013). Efficient simulation techniques for stochastic model checking (175 p.). Enschede: Universiteit Twente. DOI: 10.3990/1.9789036535861