Model-based testing of probabilistic systems

Marcus Gerhold (Corresponding Author), Mariëlle Stoelinga (Corresponding Author)

    Research output: Contribution to journal › Article › Academic › peer-review

    5 Citations (Scopus)
    48 Downloads (Pure)

    Abstract

    This work presents an executable model-based testing framework for probabilistic systems with non-determinism. We provide algorithms to automatically generate, execute and evaluate test cases from a probabilistic requirements specification. The framework connects input/output conformance theory with hypothesis testing: our algorithms handle functional correctness, while statistical methods assess whether the frequencies observed during the test process correspond to the probabilities specified in the requirements. At the core of our work lies the conformance relation for probabilistic input/output conformance, enabling us to pin down exactly when an implementation should pass a test case. We establish the correctness of our framework alongside this relation as soundness and completeness: soundness states that a correct implementation indeed passes a test suite, while completeness states that the framework is powerful enough to discover every deviation from a specification up to arbitrary precision, given a sufficiently large sample size. The underlying models are probabilistic automata that allow invisible internal progress. We incorporate divergent systems into our framework by formulating four rules that every well-formed system must adhere to. This enables us to treat divergence as the absence of output, or quiescence, which is a well-studied formalism in model-based testing. Lastly, we illustrate the application of our framework on three case studies.
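    The abstract describes a statistical check: do the output frequencies observed while testing match the probabilities in the specification? As a hedged illustration only (this is not the authors' algorithm from the paper; the specification, counts, and significance level below are invented for the example), a Pearson chi-squared goodness-of-fit test over observed outputs captures the basic idea:

    ```python
    from collections import Counter

    def chi_squared_statistic(observed_counts, spec_probs):
        """Pearson chi-squared statistic comparing observed output
        frequencies against the probabilities in a specification."""
        n = sum(observed_counts.values())
        stat = 0.0
        for outcome, p in spec_probs.items():
            expected = n * p  # expected count under the specification
            diff = observed_counts.get(outcome, 0) - expected
            stat += diff * diff / expected
        return stat

    # Hypothetical specification: two outputs, each with probability 0.5.
    spec = {"heads": 0.5, "tails": 0.5}

    # Hypothetical observations from 1000 test executions.
    obs = Counter({"heads": 514, "tails": 486})

    stat = chi_squared_statistic(obs, spec)

    # Critical value of the chi-squared distribution for
    # df = 1 at significance level alpha = 0.05.
    CRITICAL_95_DF1 = 3.841
    verdict = "pass" if stat <= CRITICAL_95_DF1 else "fail"
    ```

    With these numbers the statistic is 0.784, below the critical value, so the observed frequencies are consistent with the specified probabilities and the statistical verdict is "pass". Larger sample sizes shrink the deviations the test can miss, mirroring the completeness claim above.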
    Original language: English
    Pages (from-to): 77-106
    Number of pages: 30
    Journal: Formal aspects of computing
    Volume: 30
    Issue number: 1
    DOI: 10.1007/s00165-017-0440-4
    Publication status: Published - 1 Jan 2018

    Keywords

    • UT-Hybrid-D

    Cite this

    @article{60553a84f58144a1a294b49bdd8f41a6,
    title = "Model-based testing of probabilistic systems",
    abstract = "This work presents an executable model-based testing framework for probabilistic systems with non-determinism. We provide algorithms to automatically generate, execute and evaluate test cases from a probabilistic requirements specification. The framework connects input/output conformance theory with hypothesis testing: our algorithms handle functional correctness, while statistical methods assess whether the frequencies observed during the test process correspond to the probabilities specified in the requirements. At the core of our work lies the conformance relation for probabilistic input/output conformance, enabling us to pin down exactly when an implementation should pass a test case. We establish the correctness of our framework alongside this relation as soundness and completeness: soundness states that a correct implementation indeed passes a test suite, while completeness states that the framework is powerful enough to discover every deviation from a specification up to arbitrary precision, given a sufficiently large sample size. The underlying models are probabilistic automata that allow invisible internal progress. We incorporate divergent systems into our framework by formulating four rules that every well-formed system must adhere to. This enables us to treat divergence as the absence of output, or quiescence, which is a well-studied formalism in model-based testing. Lastly, we illustrate the application of our framework on three case studies.",
    keywords = "UT-Hybrid-D",
    author = "Marcus Gerhold and Mari{\"e}lle Stoelinga",
    year = "2018",
    month = "1",
    day = "1",
    doi = "10.1007/s00165-017-0440-4",
    language = "English",
    volume = "30",
    pages = "77--106",
    journal = "Formal aspects of computing",
    issn = "0934-5043",
    publisher = "Springer",
    number = "1",

    }

    Model-based testing of probabilistic systems. / Gerhold, Marcus (Corresponding Author); Stoelinga, Mariëlle (Corresponding Author).

    In: Formal aspects of computing, Vol. 30, No. 1, 01.01.2018, p. 77-106.

    TY - JOUR

    T1 - Model-based testing of probabilistic systems

    AU - Gerhold, Marcus

    AU - Stoelinga, Mariëlle

    PY - 2018/1/1

    Y1 - 2018/1/1

    N2 - This work presents an executable model-based testing framework for probabilistic systems with non-determinism. We provide algorithms to automatically generate, execute and evaluate test cases from a probabilistic requirements specification. The framework connects input/output conformance theory with hypothesis testing: our algorithms handle functional correctness, while statistical methods assess whether the frequencies observed during the test process correspond to the probabilities specified in the requirements. At the core of our work lies the conformance relation for probabilistic input/output conformance, enabling us to pin down exactly when an implementation should pass a test case. We establish the correctness of our framework alongside this relation as soundness and completeness: soundness states that a correct implementation indeed passes a test suite, while completeness states that the framework is powerful enough to discover every deviation from a specification up to arbitrary precision, given a sufficiently large sample size. The underlying models are probabilistic automata that allow invisible internal progress. We incorporate divergent systems into our framework by formulating four rules that every well-formed system must adhere to. This enables us to treat divergence as the absence of output, or quiescence, which is a well-studied formalism in model-based testing. Lastly, we illustrate the application of our framework on three case studies.

    KW - UT-Hybrid-D

    U2 - 10.1007/s00165-017-0440-4

    DO - 10.1007/s00165-017-0440-4

    M3 - Article

    VL - 30

    SP - 77

    EP - 106

    JO - Formal aspects of computing

    JF - Formal aspects of computing

    SN - 0934-5043

    IS - 1

    ER -