Petri Nets Benchmarks

Introduction

This page reports benchmarks that are available online. If you are aware of a missing benchmark, please report it to us at pnweb@petrinet.net.

Below are the public benchmarks that have been brought to our attention.

The Model Checking Contest Benchmark Suite

This benchmark suite is provided by the people who have been involved in the organisation of the Model Checking Contest since 2011. The models come from a wide variety of authors, who are referenced.

Models in the Model Checking Contest Benchmark Suite

The benchmark proposed for the Model Checking Contest (MCC) consists of P/T nets (1-bounded, k-bounded, and unbounded) and coloured nets (1-bounded and k-bounded per colour, with and without non-equal guards on transitions, with and without Cartesian products on colours, with and without successor/predecessor functions).

For a number of P/T nets, additional information about their concurrent structure is provided in the form of a NUPN (Nested-Unit Petri Net) "tool specific" element in the PNML file. Tools are free to exploit this information. For coloured Petri nets, equivalent P/T nets are provided in most cases.
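
As an illustration, here is a minimal Python sketch that checks whether a PNML file carries such a NUPN element. It only assumes the standard PNML 2009 namespace and a "toolspecific" element with tool="nupn" attached to the net element; the embedded net is purely illustrative and not a model from the suite.

    import xml.etree.ElementTree as ET

    PNML_NS = "{http://www.pnml.org/version-2009/grammar/pnml}"

    # Illustrative PNML fragment (not taken from the benchmark suite): one place,
    # one transition, one arc, plus an empty NUPN "tool specific" element.
    SAMPLE_PNML = """<pnml xmlns="http://www.pnml.org/version-2009/grammar/pnml">
      <net id="n1" type="http://www.pnml.org/version-2009/grammar/ptnet">
        <page id="g1">
          <place id="p1"><initialMarking><text>1</text></initialMarking></place>
          <transition id="t1"/>
          <arc id="a1" source="p1" target="t1"/>
        </page>
        <toolspecific tool="nupn" version="1.1"/>
      </net>
    </pnml>"""

    def has_nupn_info(pnml_text):
        """Return True if the first net carries a NUPN tool-specific element."""
        net = ET.fromstring(pnml_text).find(PNML_NS + "net")
        return any(ts.get("tool") == "nupn"
                   for ts in net.findall(PNML_NS + "toolspecific"))

    print(has_nupn_info(SAMPLE_PNML))   # prints True for the sample above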

Two classes of models are provided:

Models can be downloaded from here. Each model comes with a model form giving a brief description, and with the net itself in PNML format. Some models have parameters that change certain aspects (mostly the size and complexity of the specification, but sometimes to exhibit behavioural variants).

As of 2025, 142 models are available, which amounts to 1855 instances once their parameters are expanded.

Formulas in the Model Checking Contest Benchmark Suite

The models also come with formulas used for some of the MCC competitions. In particular, there are:

Formulas are randomly generated and then consolidated to be as complex as possible. Since new models are added every year and formulas are regenerated every year, some models end up being associated with a large number of formulas.

Access to formulas is provided here:

Formulas are expressed in XML; documentation about this XML representation is available in the submission kit. Many tools participating in the Model Checking Contest include libraries to parse this format, so you may avoid writing a new parser by contacting their developers.
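
For instance, the following Python sketch lists the identifier and description of each property in such a file. It assumes the layout found in recent submission kits (a "property-set" root in the http://mcc.lip6.fr/ namespace containing "property" elements with "id", "description" and "formula" children); the embedded sample and its formula body are illustrative only, so refer to the submission kit for the authoritative schema.

    import xml.etree.ElementTree as ET

    MCC_NS = "{http://mcc.lip6.fr/}"

    # Illustrative property file (not taken from the benchmark suite).
    SAMPLE_PROPERTIES = """<property-set xmlns="http://mcc.lip6.fr/">
      <property>
        <id>SampleModel-PT-01-ReachabilityCardinality-00</id>
        <description>Illustrative reachability property</description>
        <formula>
          <exists-path><finally>
            <integer-le>
              <tokens-count><place>p1</place></tokens-count>
              <integer-constant>2</integer-constant>
            </integer-le>
          </finally></exists-path>
        </formula>
      </property>
    </property-set>"""

    def list_properties(xml_text):
        """Yield (id, description) for every property in the file."""
        root = ET.fromstring(xml_text)
        for prop in root.findall(MCC_NS + "property"):
            yield (prop.findtext(MCC_NS + "id"),
                   prop.findtext(MCC_NS + "description"))

    for pid, description in list_properties(SAMPLE_PROPERTIES):
        print(pid, "-", description)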

Please note that, for each year, a comprehensive report of results is computed in the form of a CSV file: raw-result-analysis.csv. The "majority result" is also reported; in almost every case it is the correct one (only a few counter-examples among hundreds of thousands of formulas). Each estimated result is associated with a rating. Here is direct access to this CSV file for the most recent years:

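A first look at such a file can be taken with a few lines of Python. The sketch below makes no assumption about the column names, since they may vary from one year to the next; it merely detects the delimiter, prints the header, and shows a short preview.

    import csv

    def inspect_results(path, preview=3):
        """Print the column names and a short preview of a results file."""
        with open(path, newline="", encoding="utf-8") as handle:
            dialect = csv.Sniffer().sniff(handle.read(4096))
            handle.seek(0)
            reader = csv.reader(handle, dialect)
            print("columns:", next(reader))
            count = 0
            for row in reader:
                count += 1
                if count <= preview:
                    print(row)
            print("data rows:", count)

    # Example call, using the file name mentioned above:
    # inspect_results("raw-result-analysis.csv")
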
The VLPN Benchmark Suite

This is a collection of 350 Petri nets (called models) to be used as benchmarks in scientific experiments. These models have the following characteristics:

Note that some of the models in the VLPN benchmark suite also appear in the Model Checking Contest Benchmark Suite.

This benchmark suite is provided by the CONVECS team (Inria), and more precisely by the developers of the CADP toolbox.