Model-based testing of probabilistic systems

Marcus Gerhold (Corresponding Author), Mariëlle Stoelinga (Corresponding Author)

    Research output: Contribution to journal › Article › Academic › peer-review

    12 Citations (Scopus)
    177 Downloads (Pure)


    This work presents an executable model-based testing framework for probabilistic systems with non-determinism. We provide algorithms to automatically generate, execute and evaluate test cases from a probabilistic requirements specification. The framework connects input/output conformance theory with hypothesis testing: our algorithms handle functional correctness, while statistical methods assess whether the frequencies observed during the test process correspond to the probabilities specified in the requirements. At the core of our work lies the conformance relation for probabilistic input/output conformance, enabling us to pin down exactly when an implementation should pass a test case. We establish the correctness of our framework alongside this relation as soundness and completeness: soundness states that a correct implementation indeed passes a test suite, while completeness states that the framework is powerful enough to discover each deviation from a specification up to arbitrary precision for a sufficiently large sample size. The underlying models are probabilistic automata that allow invisible internal progress. We incorporate divergent systems into our framework by formulating four rules that each well-formed system needs to adhere to. This enables us to treat divergence as the absence of output, or quiescence, which is a well-studied formalism in model-based testing. Lastly, we illustrate the application of our framework on three case studies.
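    The abstract describes checking whether frequencies observed during testing match the probabilities in the specification. As a minimal illustration of that idea (not the paper's actual procedure), the sketch below compares observed output counts against a hypothetical specification using Pearson's chi-squared statistic; the specification, trial counts, and critical value are all assumptions made for the example.

    ```python
    def chi_squared_statistic(observed, spec_probs, n):
        """Pearson chi-squared statistic comparing observed counts
        against the counts expected under the specified probabilities."""
        return sum((observed[o] - n * p) ** 2 / (n * p)
                   for o, p in spec_probs.items())

    # Hypothetical specification: the implementation should answer
    # 'ack' and 'nack' each with probability 0.5.
    spec = {"ack": 0.5, "nack": 0.5}

    # Hypothetical observed frequencies from two runs of 1000 trials.
    close = {"ack": 512, "nack": 488}    # near the specification
    skewed = {"ack": 700, "nack": 300}   # clearly deviating

    # Chi-squared critical value for alpha = 0.05, 1 degree of freedom.
    CRITICAL_95_DF1 = 3.841

    def passes(observed, spec_probs, n):
        """Accept the run if the deviation is statistically insignificant."""
        return chi_squared_statistic(observed, spec_probs, n) <= CRITICAL_95_DF1

    print(passes(close, spec, 1000))   # True: deviation within sampling noise
    print(passes(skewed, spec, 1000))  # False: frequencies contradict the spec
    ```

    A larger sample size sharpens the test, mirroring the completeness claim above: any fixed deviation from the specified probabilities is eventually detected as n grows.
    
    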
    Original language: English
    Pages (from-to): 77-106
    Number of pages: 30
    Journal: Formal aspects of computing
    Issue number: 1
    Publication status: Published - 1 Jan 2018


    • UT-Hybrid-D
