Experimental Data for "On Confluence of Parallel-Innermost Term Rewriting"

We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures. We propose a simple sufficient criterion for confluence of parallel-innermost rewriting based on non-overlappingness. Our experiments on a large benchmark set indicate the practical usefulness of our criterion. We close with a challenge to the community to develop more powerful dedicated techniques for this problem.

On this web page, we provide the experimental data supporting our claims.


Benchmarks

Here we provide the benchmark collections that we used for our experiments.

The source of the benchmarks for our first experiment is the Termination Problems Data Base (TPDB), version 11.2. We considered the 663 benchmarks for (sequential) runtime complexity of innermost rewriting in the subdirectory Runtime_Complexity_Innermost_Rewriting. Our benchmark files from the TPDB are available here: benchmarks_tpdb.zip

These files are in the (human-readable) format used in the Termination Competition in earlier years.
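For illustration, a small TRS in this format (a standard textbook example for addition on natural numbers, not taken from the benchmark set) looks as follows:

```
(VAR x y)
(RULES
  plus(0, y) -> y
  plus(s(x), y) -> s(plus(x, y))
)
```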

There are some annotations in the files that are not considered for our specific analysis goal.

For our second experiment, we used the 570 unsorted unconditional TRSs of COPS (version of 28 June 2022), the benchmark collection used in the Confluence Competition. The file format is the same as above, except that the above annotations are absent. The TRSs in these files are normally analysed for confluence of full rewriting. In contrast, we analyse them for confluence of parallel-innermost rewriting.

Our benchmark files from COPS are available here: benchmarks_cops.zip


Experimental results

We used the implementation of the non-overlappingness check in the program analysis tool AProVE. Non-overlappingness of a term rewrite system R implies that the parallel-innermost rewrite relation of R is confluent.
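To make the criterion concrete, the following is a minimal illustrative sketch of an overlap check in Python. The term representation and helper names are our own for this sketch and do not reflect AProVE's implementation: variables are strings, and function applications are tuples whose first component is the function symbol. A TRS is overlapping if a non-variable subterm of one left-hand side unifies with another left-hand side (with the rules renamed apart), excluding the trivial overlap of a rule with itself at the root.

```python
def is_var(t):
    return isinstance(t, str)

def rename(t, suffix):
    # Rename all variables in t (variable-disjoint variants of rules).
    if is_var(t):
        return t + suffix
    return (t[0],) + tuple(rename(a, suffix) for a in t[1:])

def subst(t, s):
    # Apply substitution s (a dict from variables to terms) to term t.
    if is_var(t):
        return subst(s[t], s) if t in s else t
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

def occurs(v, t, s):
    t = subst(t, s)
    if is_var(t):
        return t == v
    return any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2):
    # Robinson-style unification; returns a substitution or None.
    s, stack = {}, [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = subst(a, s), subst(b, s)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, s):
                return None
            s[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return s

def nonvar_subterms(t, root=True):
    # Yield all non-variable subterms of t, flagged as root / non-root.
    if is_var(t):
        return
    yield t, root
    for a in t[1:]:
        yield from nonvar_subterms(a, root=False)

def is_overlapping(rules):
    # rules: list of (lhs, rhs) pairs.
    for i, (l1, _) in enumerate(rules):
        for j, (l2, _) in enumerate(rules):
            l2r = rename(l2, "'")  # rename the second rule apart
            for t, at_root in nonvar_subterms(l1):
                if i == j and at_root:
                    continue  # trivial self-overlap at the root
                if unify(t, l2r) is not None:
                    return True
    return False
```

For instance, the two rules for plus above are non-overlapping, whereas a system containing a rule like f(f(x)) -> a overlaps with itself below the root.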

In our first experiment using the TRSs obtained from the TPDB, 447 out of the 663 TRSs in our benchmark collection were found to be non-overlapping. Details of our experiments on the TRSs from the TPDB are available here: confluence_analysis_tpdb.txt

Thus, with a success rate of about 67.4%, we have achieved a good "baseline" for confluence analysis of parallel-innermost rewriting. At the same time, these results also indicate that there is still work to be done to develop more powerful techniques for proving (and disproving!) confluence of parallel-innermost rewriting.

In our second experiment using the TRSs obtained from COPS, only 115 out of the 570 TRSs (about 20.2%) were found to be non-overlapping. Details of our experiments on the TRSs from COPS are available here: confluence_analysis_cops.txt

The significantly lower success rate for this benchmark set is not surprising: COPS collects TRSs that are specifically challenging for confluence analysis tools, whereas the analysed subset of the TPDB contains TRSs that are of interest specifically for runtime complexity analysis.