Abstract
Modern software systems are required to operate in highly uncertain and changing environments. They have to control the satisfaction of their requirements at run-time and, possibly, adapt to situations that were not completely addressed at design-time. Software engineering methods and techniques are therefore, more than ever, forced to deal with change and uncertainty (lack of knowledge) explicitly. To tackle the challenge that uncertainty poses to delivering more reliable systems, this paper proposes a novel online Model-based Testing technique that complements classic test case generation based on pseudo-random sampling strategies with an uncertainty-aware sampling strategy. To deal with system uncertainty during testing, the proposed strategy builds on an Inverse Uncertainty Quantification approach driven by the discrepancy between the data measured at run-time (while the system executes) and a Markov Decision Process model describing the behavior of the system under test. To this end, a conformance game approach is adopted in which tests feed a Bayesian inference calibrator that continuously learns from test data to tune the system model and the system itself. A comparative evaluation between the proposed uncertainty-aware sampling policy and classical pseudo-random sampling policies is also presented using the Tele Assistance System running example, showing the differences in the accuracy and efficiency achieved.
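
To give a flavor of the calibration idea, the sketch below shows one plausible (but hypothetical, not the paper's actual algorithm) way a Bayesian inference calibrator could tune an MDP from observed test data: assuming the uncertain parameters are the transition probabilities, a conjugate Dirichlet-multinomial update turns each observed transition into a pseudo-count, and the posterior variance gives an uncertainty score that a sampling strategy could use to decide where to test next. All class and method names here are invented for illustration.

```python
# Hypothetical sketch: Bayesian calibration of MDP transition
# probabilities from test traces (Dirichlet-multinomial conjugacy).
# Illustrates the general idea of a Bayesian inference calibrator;
# it is NOT the technique described in the paper.

from collections import defaultdict

import numpy as np


class MdpCalibrator:
    """Maintains a Dirichlet posterior over next-state distributions
    for each (state, action) pair of the model under test."""

    def __init__(self, n_states: int, prior: float = 1.0):
        self.n_states = n_states
        # Dirichlet concentration parameters, one vector per (state, action).
        self.alpha = defaultdict(lambda: np.full(n_states, prior))

    def observe(self, state: int, action: str, next_state: int) -> None:
        """Conjugate update: one observed transition adds one pseudo-count."""
        self.alpha[(state, action)][next_state] += 1.0

    def transition_probs(self, state: int, action: str) -> np.ndarray:
        """Posterior mean estimate of P(next_state | state, action)."""
        a = self.alpha[(state, action)]
        return a / a.sum()

    def uncertainty(self, state: int, action: str) -> float:
        """Total posterior variance: a crude score an uncertainty-aware
        sampler could use to pick which (state, action) to exercise next."""
        a = self.alpha[(state, action)]
        a0 = a.sum()
        var = a * (a0 - a) / (a0 ** 2 * (a0 + 1.0))
        return float(var.sum())


# Example: feed transitions observed while the system executes.
cal = MdpCalibrator(n_states=3)
for s, act, s_next in [(0, "invoke", 1), (0, "invoke", 1), (0, "invoke", 2)]:
    cal.observe(s, act, s_next)
print(cal.transition_probs(0, "invoke"))  # calibrated model estimate
print(cal.uncertainty(0, "invoke"))       # guides where to sample next
```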
      