Tools used for the organization of the SMT competition
- GitHub repository: https://github.com/smtcomp/smtcomp.github.io/
- Documentation: https://smtcomp.github.io/smtcomp/
First, clone the repository:
git clone git@github.com:smtcomp/smtcomp.github.io.git
Finally, install the environment with
make install
Edit the file smtcomp/defs.py, in particular Config.current_year, Logic for adding new logics, and the tracks for new divisions.
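For illustration, such an edit might look like the sketch below; this is only a guess at the shape of defs.py, not its real contents, and only Config and Logic are names taken from the sentence above:

```python
# Hypothetical sketch of the yearly edit in smtcomp/defs.py;
# the real classes are structured differently, so treat this as illustrative.
from enum import Enum

class Config:
    current_year = 2024  # bump to the new competition year

class Logic(str, Enum):
    # ... existing logics ...
    ABV = "ABV"
    UFNRA = "UFNRA"
    # a logic newly introduced this year would be added here, e.g.:
    UFDTLIRA = "UFDTLIRA"  # hypothetical new logic
```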
Download the new benchmarks from Zenodo and unpack the .tar.zst archives (e.g., with tar --zstd -xf); you should get something like the tree below (a small sanity-check sketch follows it):
$DIR/zenodo
├── incremental
│ ├── ABVFPLRA
│ ├── ALIA
│ ...
│ ├── UFNIA
│ └── UFNRA
└── non-incremental
├── ABV
├── ABVFP
├── ABVFPLRA
├── ALIA
├── ANIA
├── AUFBV
...
├── UFFPDTNIRA
├── UFIDL
├── UFLIA
├── UFLRA
├── UFNIA
└── UFNIRA
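As a quick sanity check of the unpacked tree, the benchmarks per logic can be counted; this sketch assumes the benchmarks are .smt2 files below each logic directory, which may not match every archive exactly:

```python
# Hedged sketch: count .smt2 benchmarks per logic in the unpacked Zenodo tree.
# Assumes the layout $DIR/zenodo/{incremental,non-incremental}/<LOGIC>/...
from pathlib import Path

zenodo = Path("zenodo")  # adjust to $DIR/zenodo
for track in ("incremental", "non-incremental"):
    for logic_dir in sorted((zenodo / track).iterdir()):
        if logic_dir.is_dir():
            n = sum(1 for _ in logic_dir.rglob("*.smt2"))
            print(f"{track}/{logic_dir.name}: {n} benchmarks")
```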
Then you can run (very I/O intensive):
smtcomp $DIR/zenodo ./data/
The directory ./data/ is the one present in this repository.
The list of benchmarks and the previous results are stored as JSON, which is human-readable but slow to parse (about 1 minute). So, locally, the tool uses the Feather format. The Feather files are generated with:
smtcomp create-cache ./data/
Working with the Feather files through polars is very fast, so no further intermediate files are needed; the cached tables can be loaded directly, as in the sketch below.
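A minimal sketch of reading the cache with polars (the file name under ./data/ is an assumption, not a documented path):

```python
# Minimal sketch: load a cached Feather (Arrow IPC) file with polars.
# "benchmarks.feather" is an assumed file name; check ./data/ for the real ones.
import polars as pl

df = pl.read_ipc("data/benchmarks.feather")
print(df.schema)          # inspect the available columns
print(df.height, "rows")
```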
Statistics can also be shown, for example for the benchmark selection of the single query track:
smtcomp show-sq-selection-stats ./data/ 0
This outputs:
Statistics on the benchmark selection for single query
┏━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━┓
┃ Logic ┃ trivial ┃ not trivial ┃ old never ran ┃ new ┃ selected ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━┩
│ ABV │ 0 │ 2573 │ 2402 │ 0 │ 2487 │
│ ABVFP │ 0 │ 60 │ 0 │ 0 │ 60 │
│ ABVFPLRA │ 0 │ 77 │ 0 │ 0 │ 77 │
│ ALIA │ 23 │ 1545 │ 1530 │ 0 │ 1537 │
│ ANIA │ 0 │ 56 │ 0 │ 22 │ 78 │
│ AUFBV │ 0 │ 1333 │ 190 │ 0 │ 761 │
│ AUFBVDTLIA │ 115 │ 1434 │ 134 │ 0 │ 784 │
...
The final solvers submitted during SMT-COMP 2023 can be used:
smtcomp convert-csv tests/solvers_divisions_final.csv ../tmp/submissions
The generated files can be visualized using:
smtcomp show ../tmp/submissions/YicesQS.json
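The same files can also be inspected programmatically. Assuming smtcomp/defs.py exposes a pydantic (v2) model for submissions (the name Submission and the field accessed below are assumptions), a sketch would be:

```python
# Hedged sketch: parse a generated submission JSON file.
# Assumes smtcomp.defs defines a pydantic v2 model named Submission;
# adjust the import and field access to the real definitions.
from pathlib import Path
from smtcomp.defs import Submission

raw = Path("../tmp/submissions/YicesQS.json").read_text()
submission = Submission.model_validate_json(raw)
print(submission.name)  # assumed field
```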
The solver archives are downloaded using:
smtcomp download-archive ../tmp/submissions/*.json ../tmp/benchexec/execution
Trivial test benchmarks are generated with:
smtcomp generate-benchmarks ../tmp/benchexec/includes/
The benchexec tasks are generated using:
smtcomp generate-benchexec ../tmp/submissions/*.json ../tmp/benchexec/includes/all.xml ../tmp/benchexec/execution
The benchexec execution environment is generated using:
smtcomp prepare-execution ../tmp/benchexec/execution
Benchexec is started using:
(cd ../tmp/benchexec/execution; benchexec ../includes/all.xml --read-only-dir / --overlay-dir /home --full-access-dir .. --numOfThreads 8 -M 2GB -c 1)
Benchexec in the verifier cloud is started using:
(cd ../tmp/benchexec/execution; PATH_TO_BENCHEXEC/contrib/vcloud-benchmark.py ../includes/all.xml --read-only-dir / --overlay-dir /home --full-access-dir .. --numOfThreads 8 -M 2GB -c 1 --vcloudMaster VCLOUD_MASTER --vcloudClientHeap 500)
Repository initialized with fpgmaas/cookiecutter-poetry.