Abstract
New tools for combinatorial test generation are proposed every year. However, different generators may perform differently on different models, both in the number of tests produced and in generation time, so choosing which generator to use can be challenging. Classical comparisons between CIT generators consider only the number of tests composing the test suite. Still, especially when the time dedicated to testing activities is limited, generation time can be decisive. Thus, we propose a benchmarking framework that includes 1) a set of generic benchmark models, 2) an interface to easily integrate new generators, and 3) methods to benchmark each generator against the others and to check validity and completeness. We have tested the proposed environment using five different generators (ACTS, CAgen, CASA, Medici, and PICT), comparing the obtained results in terms of the number of test cases, generation time, errors, completeness, and validity. Finally, we propose a CIT competition between combinatorial generators, based on our framework.