University of Twente, The Netherlands
CADP (Construction and Analysis of Distributed Processes)
532 PNML models
State space methods provide an automated way of verifying models.
However, these methods suffer from the state space explosion problem.
Many methods have been developed to cope with this problem. As a
result, a user can choose among many different strategies when
applying state space methods to a model. Because there are so many
different strategies, it is hard for a user to select an appropriate
one. If a poor strategy is selected, the model may become intractable,
or resources may be wasted. Additionally, the necessary user
intervention makes the validation process less automated. Therefore,
it would be useful if the model checker itself could select an
appropriate strategy.
To allow the model checker to predict a suitable strategy, it needs to use information present in the model itself. This research investigates to what extent the characteristics of Petri net models can be used to predict an appropriate strategy. The study considered 532 different Petri net models represented in PNML. The performance of each model, for 20 selected strategies, was determined using LTSmin. Then, to collect the necessary features, each model was first converted from PNML to the NUPN format using the PNML2NUPN tool, and then fed to the CAESAR.BDD tool of CADP. Once this process was finished, the performance data of every model for each strategy was combined with the feature set collected by CAESAR.BDD. This resulted in 20 datasets, each containing the name of a model, the features collected for that model, and the average exploration time under a particular strategy.
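The dataset-assembly step described above can be sketched as follows. The field names and data layout are illustrative assumptions on our part, not the schema actually used in the thesis:

```python
# Sketch: merge per-model features (as extracted by CAESAR.BDD) with
# per-strategy timings (as measured with LTSmin) into one dataset per
# strategy. All field and strategy names below are illustrative.

def build_datasets(features, timings):
    """features: {model_name: {feature_name: value}}
       timings:  {strategy: {model_name: avg_exploration_time}}
       Returns one dataset (a list of rows) per strategy."""
    datasets = {}
    for strategy, per_model in timings.items():
        rows = []
        for model, time in per_model.items():
            if model in features:  # keep only models with extracted features
                rows.append({"model": model, **features[model], "time": time})
        datasets[strategy] = rows
    return datasets

# Toy usage with two models and two hypothetical strategies:
features = {"m1": {"places": 10, "transitions": 4},
            "m2": {"places": 25, "transitions": 9}}
timings = {"bfs": {"m1": 0.5, "m2": 3.2},
           "dfs": {"m1": 0.7, "m2": 2.1}}
datasets = build_datasets(features, timings)
```

Each resulting row pairs one model's features with its average exploration time for one strategy, which is exactly the shape a per-strategy regressor needs for training.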
This data was used to create a regressor for every strategy, which predicts the expected runtime when that strategy is applied to a given model. These regressors were then combined into a single classifier that predicts an appropriate strategy. The classifier was compared to the best single strategy: it predicts a strategy that outperforms this fixed strategy in 17% of the predictions, at an average time loss of 12.95 seconds.
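The way the per-strategy regressors combine into a classifier can be sketched as below: predict the runtime of every strategy and pick the minimum. The trivial mean-predicting regressor is a stand-in; the regression models actually trained in the thesis, and the feature values, are assumptions here:

```python
# Sketch: combine one runtime regressor per strategy into a strategy
# classifier by predicting each strategy's runtime for a given model
# and selecting the strategy with the lowest prediction.

class MeanRegressor:
    """Stand-in regressor: always predicts the mean training runtime."""
    def fit(self, X, y):
        self.mean = sum(y) / len(y)
        return self

    def predict(self, x):
        return self.mean

def choose_strategy(regressors, model_features):
    """Pick the strategy whose regressor predicts the lowest runtime."""
    predictions = {s: r.predict(model_features) for s, r in regressors.items()}
    return min(predictions, key=predictions.get)

# Toy usage: two hypothetical strategies trained on two models.
regressors = {
    "bfs": MeanRegressor().fit([[10], [25]], [0.5, 3.2]),  # mean 1.85
    "dfs": MeanRegressor().fit([[10], [25]], [0.7, 2.1]),  # mean 1.40
}
best = choose_strategy(regressors, [12])
```

The design choice here is that the classifier needs no training of its own: improving any individual regressor directly improves the combined prediction, which is why the conclusions below focus on regressor accuracy.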
The performance of the classifier can be improved by increasing the
accuracy of its component regressors. Since some of the regressors
perform quite well (with scores of 0.95), the limited performance
may stem from a lack of data rather than a lack of features. Therefore,
the first step towards improving this classifier would be to collect
more data, for instance by allowing a greater time limit for the
model checking runs on each PNML model. This data would also be
far more varied, since the longer exploration times of large models
would then be included.
M. Hendriks. "Can Machine Learning Automatically Choose your Best Model
Checking Strategy?" BSc Thesis, EEMCS: Electrical Engineering, Mathematics
and Computer Science, University of Twente, 2019.
Available on-line at: https://essay.utwente.nl/77600/1/Hendriks_BA_EEMCS.pdf
or from our FTP site in PDF or PostScript
University of Twente
Faculty of Electrical Engineering, Mathematics and Computer Science
7522 NB Enschede
Further remarks: This case study, amongst others, is described on the CADP Web site: http://cadp.inria.fr/case-studies