Sound black-box checking in the LearnLib

Jeroen Meijer*, Jaco van de Pol

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review



In black-box checking (BBC), incremental hypotheses about the behavior of a system are learned in the form of finite automata, using information from a given set of requirements specified in Linear-time Temporal Logic (LTL). The LTL formulae are checked on intermediate automata, and potential counterexamples are validated on the actual system. Spurious counterexamples are used by the learner to refine these automata. We improve BBC in two directions. First, we improve the checking of lasso-like counterexamples by assuming a check for state equivalence. This provides a sound method without knowing an upper bound on the number of states in the system. Second, we propose to check the safety portion of an LTL property first, by deriving simple counterexamples using monitors. We extended LearnLib's system-under-learning (SUL) API to make our methods accessible, using LTSmin as the model checker under the hood. We illustrate how LearnLib's most recent active learning algorithms can be used for BBC in practice. Using the RERS 2017 challenge, we provide experimental results on the performance of all of LearnLib's active learning algorithms when applied in a BBC setting. We show that the novel incremental algorithms TTT and ADT perform best. We also provide experiments on the efficiency of various BBC strategies.
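The two improvements described above can be illustrated with a small sketch. This is not LearnLib's API; it is a toy Python model under assumed data structures (a Mealy machine as a dict, a safety monitor as a DFA over outputs, and a state-equivalence predicate `eq_states`). `confirm_lasso` shows how a lasso-shaped ω-counterexample can be validated soundly without a state-count bound, and `shortest_bad_prefix` shows how a monitor yields a simple, finite counterexample for the safety portion of a property.

```python
from collections import deque

def confirm_lasso(sul, eq_states, init, prefix, loop, loop_outputs):
    """Replay a lasso-shaped counterexample prefix.loop^omega on the system.
    Since the system is finite-state, once the state after some loop
    iteration is equivalent to an earlier one, the infinite violating run
    is certified -- no upper bound on the number of states is needed."""
    state = init
    for a in prefix:
        state, _ = sul[(state, a)]
    seen = [state]
    while True:
        for a, want in zip(loop, loop_outputs):
            state, out = sul[(state, a)]
            if out != want:
                return False  # spurious: system diverges from the hypothesis
        if any(eq_states(state, t) for t in seen):
            return True  # a loop state repeats: genuine omega-counterexample
        seen.append(state)

def shortest_bad_prefix(hyp, monitor):
    """BFS over the product of a hypothesis Mealy machine and a safety
    monitor; returns the shortest input word whose outputs drive the
    monitor into its bad state, or None if the safety portion holds."""
    start = (hyp["init"], monitor["init"])
    seen, queue = {start}, deque([(start, [])])
    while queue:
        (h, m), word = queue.popleft()
        if m == monitor["bad"]:
            return word  # simple, finite counterexample candidate
        for a in hyp["inputs"]:
            h2, out = hyp["delta"][(h, a)]
            m2 = monitor["delta"][(m, out)]
            if (h2, m2) not in seen:
                seen.add((h2, m2))
                queue.append(((h2, m2), word + [a]))
    return None

# Toy system/hypothesis: outputs 'err' exactly when 'b' follows an 'a'.
delta = {(0, "a"): (1, "ok"), (0, "b"): (0, "ok"),
         (1, "a"): (1, "ok"), (1, "b"): (0, "err")}
hyp = {"init": 0, "inputs": ["a", "b"], "delta": delta}
# Monitor for the safety property "never output err" (bad state absorbs).
monitor = {"init": 0, "bad": 1,
           "delta": {(0, "ok"): 0, (0, "err"): 1,
                     (1, "ok"): 1, (1, "err"): 1}}

cex = shortest_bad_prefix(hyp, monitor)                   # ['a', 'b']
real = confirm_lasso(delta, lambda s, t: s == t, 0,
                     prefix=["a"], loop=["b", "a"],
                     loop_outputs=["err", "ok"])          # True
```

In a real BBC loop, a spurious counterexample (where `confirm_lasso` returns `False`, or the bad prefix is not reproducible on the system) would be handed back to the active learning algorithm to refine the hypothesis before the next model-checking round.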

Original language: English
Pages (from-to): 267-287
Number of pages: 21
Journal: Innovations in Systems and Software Engineering
Issue number: 3-4
Publication status: Published - 1 Sep 2019


Keywords:

  • Black-box checking
  • Büchi automata
  • Learning-based testing
  • LearnLib
  • LTL
  • LTSmin
  • Model checking
  • Monitors


