Abstract

The overall goal of the work described in this thesis is: "To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context." From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we validate the non-functional requirements with evidence from case studies and responses to a tool-user questionnaire. We describe the overall architecture of our tool and discuss three usage scenarios that are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: the next test step is derived only when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, test derivation uses additional (guidance) information to steer the derivation along specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities. In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the "adapter". In the description of the test derivation engine, we revisit the same three usage scenarios and discuss support for visualization and for dealing with divergence in the model.
In the description of the test execution engine, we discuss three example adapter instances and then generalize them into a general adapter design. We conclude with a description of extensions for the symbolic treatment of data and time.
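The random on-line testing scenario described in the abstract can be sketched in a few lines. The sketch below is a minimal illustration only, not JTorX's actual API: the model, the `Adapter` class, and the in-process simulated SUT are all invented for the example. It shows the two components the abstract names: a derivation step that picks the next transition from the model only when it is needed (a random walk), and an adapter that connects the test loop to the system under test.

```python
import random

# Model: a labeled transition system as {state: [(label, kind, next_state)]},
# where kind is "input" (a stimulus sent to the SUT) or "output" (a response
# expected from the SUT). Hypothetical coffee-machine example.
MODEL = {
    "idle": [("coin?", "input", "paid")],
    "paid": [("coffee!", "output", "idle"),
             ("tea!", "output", "idle")],
}

class Adapter:
    """Connects the test tool to the system under test (SUT).
    Here the 'SUT' is simulated in-process, purely for illustration."""
    def __init__(self):
        self.paid = False

    def stimulate(self, label):
        # Apply an input action to the SUT.
        if label == "coin?":
            self.paid = True

    def observe(self):
        # Read the SUT's next output action (None if it is quiescent).
        if self.paid:
            self.paid = False
            return random.choice(["coffee!", "tea!"])
        return None

def run_random_online_test(model, adapter, steps=20, start="idle"):
    """Random on-line testing: derive each test step from the model on the
    fly (random walk), execute it immediately via the adapter, and check
    every observed output against the model."""
    state = start
    for _ in range(steps):
        label, kind, nxt = random.choice(model[state])
        if kind == "input":
            adapter.stimulate(label)   # send the stimulus to the SUT
            state = nxt
        else:
            actual = adapter.observe()
            expected = {l for (l, k, s) in model[state] if k == "output"}
            if actual not in expected:
                return "fail"          # SUT produced an unexpected output
            # follow the transition of the output actually observed
            state = next(s for (l, k, s) in model[state] if l == actual)
    return "pass"
```

Guided on-line testing would replace the `random.choice` over outgoing transitions with a choice driven by guidance information, and off-line testing would record the derived steps instead of executing them immediately; both reuse the same adapter interface.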
Original language: Undefined
Awarding Institution
  • University of Twente
Supervisors/Advisors
  • van de Pol, Jan Cornelis, Supervisor
  • Rensink, Arend, Supervisor
Date of Award: 18 Sep 2014
Place of Publication: Enschede
Publisher: Centre for Telematics and Information Technology (CTIT)
Print ISBNs: 978-90-365-3707-0
DOIs: 10.3990/1.9789036537070
State: Published - 18 Sep 2014

Keywords

  • EWI-25083
  • TorX
  • JTorX
  • IOCO
  • Model-Based Testing
  • IR-91781
  • METIS-305024

Cite this

Belinfante, A. (2014). JTorX: Exploring Model-Based Testing. Enschede: Centre for Telematics and Information Technology (CTIT). 324 p. DOI: 10.3990/1.9789036537070
@phdthesis{56b5a939cd9040b5a4a7d11b92548959,
title = "JTorX: Exploring Model-Based Testing",
abstract = "The overall goal of the work described in this thesis is: ``To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context.'' From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we validate the non-functional requirements with evidence from case studies and responses to a tool-user questionnaire. We describe the overall architecture of our tool and discuss three usage scenarios that are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: the next test step is derived only when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, test derivation uses additional (guidance) information to steer the derivation along specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities. In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the ``adapter''. In the description of the test derivation engine, we revisit the same three usage scenarios and discuss support for visualization and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances and then generalize them into a general adapter design. We conclude with a description of extensions for the symbolic treatment of data and time.",
keywords = "EWI-25083, TorX, JTorX, IOCO, Model-Based Testing, IR-91781, METIS-305024",
author = "Axel Belinfante",
note = "IPA Dissertation series no. 2014-09",
year = "2014",
month = sep,
doi = "10.3990/1.9789036537070",
isbn = "978-90-365-3707-0",
publisher = "Centre for Telematics and Information Technology (CTIT)",
address = "Netherlands",
school = "University of Twente",

}

Research output: Scientific › PhD Thesis - Research UT, graduation UT