Model-based testing of probabilistic systems

Marcus Gerhold, Mariëlle Stoelinga

Research output: Contribution to journal › Article

  • 1 citation

Abstract

This work presents an executable model-based testing framework for probabilistic systems with non-determinism. We provide algorithms to automatically generate, execute and evaluate test cases from a probabilistic requirements specification. The framework connects input/output conformance theory with hypothesis testing: our algorithms handle functional correctness, while statistical methods assess whether the frequencies observed during the test process correspond to the probabilities specified in the requirements. At the core of our work lies the conformance relation for probabilistic input/output conformance, enabling us to pin down exactly when an implementation should pass a test case. We establish the correctness of our framework alongside this relation as soundness and completeness: soundness states that a correct implementation indeed passes a test suite, while completeness states that the framework is powerful enough to discover every deviation from a specification up to arbitrary precision, given a sufficiently large sample size. The underlying models are probabilistic automata that allow invisible internal progress. We incorporate divergent systems into our framework by formulating four rules that every well-formed system must adhere to. This enables us to treat divergence as the absence of output, or quiescence, a well-studied formalism in model-based testing. Lastly, we illustrate the application of our framework on three case studies.
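
The statistical heart of the framework, checking whether observed output frequencies match the probabilities a specification prescribes, can be illustrated with a small hypothesis-testing sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' algorithm: it assumes a specification that assigns a probability to each expected output and a tally of how often each output was observed across repeated test executions; the names statistical_verdict, spec_probs and observed_counts are hypothetical.

# Minimal sketch of a statistical verdict: compare observed output
# frequencies against specified probabilities with a chi-squared
# goodness-of-fit test. Illustrative only; not the paper's algorithm.
from scipy.stats import chisquare

def statistical_verdict(spec_probs, observed_counts, alpha=0.05):
    """Return 'pass' if the observed frequencies are statistically
    consistent with spec_probs at significance level alpha, else 'fail'."""
    outputs = sorted(spec_probs)
    n = sum(observed_counts.get(o, 0) for o in outputs)
    f_obs = [observed_counts.get(o, 0) for o in outputs]
    f_exp = [spec_probs[o] * n for o in outputs]  # expected counts under the spec
    _, p_value = chisquare(f_obs, f_exp)
    # A correct implementation passes with probability >= 1 - alpha (soundness);
    # any fixed deviation is detected for large enough n (completeness).
    return "pass" if p_value >= alpha else "fail"

# Example: a spec prescribing outputs 'a' and 'b' with probability 0.5 each,
# checked against 1000 observed test executions.
print(statistical_verdict({"a": 0.5, "b": 0.5}, {"a": 521, "b": 479}))

Increasing the number of test executions shrinks the deviations such a test can miss, which mirrors the completeness guarantee stated in the abstract.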
Language: English
Pages: 77-106
Journal: Formal aspects of computing
Volume: 30
Issue number: 1
DOI: 10.1007/s00165-017-0440-4
State: Published - 22 Jan 2018

Cite this

@article{60553a84f58144a1a294b49bdd8f41a6,
title = "Model-based testing of probabilistic systems",
author = "Marcus Gerhold and Mari{\"e}lle Stoelinga",
year = "2018",
month = "1",
day = "22",
doi = "10.1007/s00165-017-0440-4",
language = "English",
volume = "30",
pages = "77--106",
journal = "Formal aspects of computing",
issn = "0934-5043",
publisher = "Springer London",
number = "1",

}
