Abstract
Modern software systems operate in complex and changing environments and are exposed to multiple sources of uncertainty. Testing
methods should treat uncertainty as a first-class concern, quantifying it to deliver increased confidence
in the level of assurance of the final product. In this paper, we introduce novel model-based exploration strategies that
generate test cases targeting uncertain components of the system under test. Our testing framework leverages Markov Decision
Processes as the modeling formalism of choice. The tester explicitly specifies uncertainty by means of beliefs attached to transition
probabilities. The structural properties of the model and the uncertainty specification are then exploited to drive the test
case generation process. Bayesian inference achieves this objective by updating the initial beliefs with the
evidence collected during testing. The proposed uncertainty-aware test selection strategies have been systematically evaluated
on three realistic benchmarks and nine synthetic systems with up to 10k model transitions. We demonstrate the effectiveness
of the novel strategies using well-established metrics. Results show that they outperform existing testing methods, with gains of up
to 2.65× in the accuracy of the inference process.
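To give a flavor of the belief update the abstract refers to, the following is a minimal Python sketch, not the paper's implementation: it assumes a Dirichlet prior over the outgoing transition probabilities of a single state-action pair of the MDP and updates it conjugately with observations collected by testing. All class and method names are hypothetical.

```python
import numpy as np

# Belief over the outcome distribution of one (state, action) pair,
# modeled as a Dirichlet over the probabilities of its successor states.
# Bayesian updating is conjugate: observed transition counts are simply
# added to the prior concentration parameters.
class TransitionBelief:
    def __init__(self, successors, prior=1.0):
        self.successors = list(successors)            # candidate next states
        self.alpha = np.full(len(successors), prior)  # Dirichlet prior (uniform by default)

    def update(self, observed_next_state):
        """Incorporate one test observation into the belief."""
        self.alpha[self.successors.index(observed_next_state)] += 1.0

    def mean(self):
        """Posterior mean estimate of the transition probabilities."""
        return self.alpha / self.alpha.sum()

    def uncertainty(self):
        """Summed variance of the Dirichlet components; a score of this kind
        could be used to rank (state, action) pairs for further testing."""
        a0 = self.alpha.sum()
        var = self.alpha * (a0 - self.alpha) / (a0**2 * (a0 + 1.0))
        return float(var.sum())


# Example: belief about action 'a' in state 's0' with two possible successors.
belief = TransitionBelief(successors=["s1", "s2"])
for outcome in ["s1", "s1", "s2", "s1"]:   # evidence collected by executing tests
    belief.update(outcome)
print(belief.mean())         # posterior transition estimate, e.g. [0.67, 0.33]
print(belief.uncertainty())  # shrinks as more evidence is gathered
```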